Updates from: 11/17/2023 02:13:51
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
|General document model|&bullet; v3.1:2023-07-31 (GA)</br>&bullet; v3.0:2022-08-31 (GA)</br>&bullet; v2.1 (GA)|**`prebuilt-document`**| :::moniker-end **This content applies to:** ![checkmark](media/yes-icon.png) **v3.1 (GA)** | **Latest version:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) | **Previous version:** ![blue-checkmark](media/blue-yes-icon.png) [**v3.0**](?view=doc-intel-3.0.0&preserve-view=true) ::: moniker-end
The General document model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is available with the v3.1 and v3.0 APIs. For more information, _see_ our [migration guide](v3-1-migration-guide.md). + ## General document features * The general document model is a pretrained model; it doesn't require labels or training.
The General document model combines powerful Optical Character Recognition (OCR)
The general document API supports most form types and analyzes your documents to extract keys and their associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
-## Development options
::: moniker range="doc-intel-3.1.0"
+## Development options
+ Document Intelligence v3.1 supports the following tools, applications, and libraries: | Feature | Resources | Model ID |
Document Intelligence v3.0 supports the following tools, applications, and libra
|**General document model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-document**| ::: moniker-end + ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)]
Keys can also exist in isolation when the model detects that a key exists, with
* Expect to see key-value pairs with a key but no value. For example, if a user chose not to provide an email address on the form.
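A minimal sketch of how these key-value pairs surface in code, assuming the `azure-ai-formrecognizer` Python SDK (v3.2 or later) and a placeholder endpoint, key, and file name:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder endpoint and key for your Document Intelligence resource.
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a local file with the pretrained general document model.
with open("sample-form.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-document", document=f)
result = poller.result()

# Keys can exist without values, for example when a form field was left blank.
for pair in result.key_value_pairs:
    key_text = pair.key.content if pair.key else "<no key>"
    value_text = pair.value.content if pair.value else "<no value>"
    print(f"{key_text}: {value_text}")
```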
-## Next steps
-* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+## Next steps
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument).
+
> [!div class="nextstepaction"] > [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
ai-services Language Support Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-custom.md
Last updated 11/15/2023
-# Custom model language support
+# Language support: custom models
::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)]
ai-services Language Support Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-ocr.md
Last updated 11/15/2023
-# Read, Layout, and General document language support
+# Language support: document analysis
::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)]
ai-services Language Support Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md
Last updated 11/15/2023
-# Prebuilt model language support
+# Language support: prebuilt models
::: moniker range="doc-intel-4.0.0" [!INCLUDE [applies to v4.0](includes/applies-to-v40.md)]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
Prebuilt models enable you to add intelligent document processing to your apps a
:::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
- [**Health Insurance card**](#health-insurance-card) | Extract health insurance details.
+ [**Health Insurance card**](#health-insurance-card) | Extract health </br>insurance details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
Prebuilt models enable you to add intelligent document processing to your apps a
## Custom models
-Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models.
+* Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases.
+* Standalone custom models can be combined to create composed models.
:::row::: :::column:::
- **Extraction models**</br>
- Custom extraction models are trained to extract labeled fields from documents.
+ * **Extraction models**</br>
+ ✔️ Custom extraction models are trained to extract labeled fields from documents.
:::column-end::: :::row-end:::
Custom models are trained using your labeled datasets to extract distinct data f
:::row::: :::column:::
- **Classification model**</br>
- Custom classifiers analyze input documents to identify document types prior to invoking an extraction model.
+ * **Classification model**</br>
+ ✔️ Custom classifiers identify document types prior to invoking an extraction model.
:::column-end::: :::row-end::: :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>
- [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) prior to invoking an extraction model.
+ [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>prior to invoking an extraction model.
:::column-end::: :::row-end:::
Document Intelligence supports optional features that can be enabled and disable
|prebuilt-tax.us.1099(Variations)|✓| | |✓| | |O|O|✓|O|O|O|✓| |prebuilt-contract|✓|✓|✓|✓| | |O|O|✓|O|O|O|✓| |{ customModelName }|✓|✓|✓|✓|✓| |O|O|✓|O|O|O|✓|
-|prebuilt-document (deprecated 2023-10-31-preview)|✓|✓|✓|✓|✓|✓|O|O| |O|O|O| |
-|prebuilt-businessCard (deprecated 2023-10-31-preview)|✓| | | | | | | |✓| | | | |
+|prebuilt-document (**deprecated </br>2023-10-31-preview**)|✓|✓|✓|✓|✓|✓|O|O| |O|O|O| |
+|prebuilt-businessCard (**deprecated </br>2023-10-31-preview**)|✓| | | | | | | |✓| | | | |
✓ - Enabled</br> O - Optional</br>
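As a hedged sketch of how an optional add-on is requested at analysis time, the snippet below passes a feature name to the `azure-ai-formrecognizer` Python SDK (v3.3 or later); the model ID, feature name, endpoint, and key are illustrative assumptions, and the table above governs which add-ons each model actually supports.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Optional add-on capabilities are off by default and enabled per request.
with open("sample.pdf", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        document=f,
        features=["ocrHighResolution"],  # assumed add-on name; may incur additional cost
    )
result = poller.result()
print(f"Analyzed {len(result.pages)} page(s) with the requested add-on enabled.")
```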
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
The content filtering system integrated in the Azure OpenAI Service contains:
| **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain hate and fairness related terms used in generic and safe contexts such as: <ul><li>Education</li><li>Media</li><li>Official statistics</li><li>History</li><li>Medicine</li><li>Science</li><li>Other similar contexts</li></ul> | `"Hate speech is harmful as it undermines social cohesion, fosters discrimination, creates divisions and can lay the foundation for violence."` |
-| Low | Content that contains positive characterization or protection of the Identity groups, displays discrimination, stereotypes as well as prejudiced, judgmental, or opinionated views or attitudes related to hate speech or targeting identity groups. This :<ul><li>Slurs in research papers, dictionaries, or media with a direct quote</li><li>General hate speech that targets objects, individuals or groups</li><li>Limited hate speech or depictions of negative sentiment in fictional contexts (e.g. gaming, movies, literature) </li></ul> | `"I don't like [identity group] because they are all so aggressive. They attacked us over and over again for no reason at all."` |
+| Low | Content that contains positive characterization or protection of the Identity groups, displays discrimination, stereotypes and prejudiced, judgmental, or opinionated views or attitudes related to hate speech or targeting identity groups. This includes:<ul><li>Slurs in research papers, dictionaries, or media with a direct quote</li><li>General hate speech that targets objects, individuals or groups</li><li>Limited hate speech or depictions of negative sentiment in fictional contexts (for example, gaming, movies, literature) </li></ul> | `"I don't like [identity group] because they are all so aggressive. They attacked us over and over again for no reason at all."` |
| Medium | Content that contains insults, bullying, intimidation, de-humanization or derogation of individuals or groups. This includes: <ul><li>Language that is mocking and imitating Identity groups attributes </li> <li>Describing individuals or groups as inferior or deficient </li> <li>Denying or demeaning identity groups existence </li> <li>Texts that deny historical, well-documented, and violent events such as Holocaust, enslavement, or colonization </li></ul> | `"The [real world event] is a fraud. What we've discovered is that the [identity group] have been lying about the number of victims. Here is proof."` | | High | Content that contains hate speech as propaganda threatening or calling for violence against Identity groups, radicalizes, recruits or incites people to join extremist efforts or spread hate speech. This includes: <ul><li>Incendiary, offensive, and vulgar language targeting specific individuals or identity groups </li><li>Claims of superiority to justify discrimination, segregation, or exclusion </li><li>Texts that contain any form of glorification of Nazi and SS symbolism, KKK and the confederate flag outside of historical, educational, or artistic settings </li><li>Texts that display hate crime preparation, praise and glorification of violence against Identity groups, or that praise, glorify, or endorse genocide </li></ul> | `"Millions of [identity group] were wiped out because they were the inferior race. Murdering them continues to be right and honorable because…"` |
The content filtering system integrated in the Azure OpenAI Service contains:
| **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain terms related to violence used in generic and safe contexts such as:<ul><li>Education </li><li>Media </li><li>Official statistics </li><li>History </li><li>Medicine </li><li>Science </li><li>Common objects when not used as weapons </li><li>Hunting or shooting sport and guns technical specifications </li></ul> | `"Carnivores such as lions and tigers are prone to killing one another. Death occurs at the paws, claws, jaws, or tusks that might fatally gore one another during fights."` |
-| Low | Content in fictional contexts (e.g. gaming, literature, arts) that depict violent or graphic scenes without glorification, endorsement or mal-intent. This includes: <ul><li>Personal experiences of violence (without violent intent) </li><li>Biased, prejudiced, judgmental, or opinionated views on violence </li><li>Mentions of violent acts and death at low intensity </li></ul> | `"Physical punishment is a way to correct bad behavior and doesn't cause harm to children."` |
+| Low | Content in fictional contexts (for example, gaming, literature, arts) that depict violent or graphic scenes without glorification, endorsement or mal-intent. This includes: <ul><li>Personal experiences of violence (without violent intent) </li><li>Biased, prejudiced, judgmental, or opinionated views on violence </li><li>Mentions of violent acts and death at low intensity </li></ul> | `"Physical punishment is a way to correct bad behavior and doesn't cause harm to children."` |
| Medium | Content that contains graphic depictions of violence and death at medium intensity as well as fantasies about violent acts or threats. This includes: <ul><li>Texts that put the victim at fault or depict perpetrators in a positive light </li><li>Historical references to genocide and the Holocaust without endorsement or glorification </li><li>References to known mass shootings without endorsement or glorification </li><li>References to terrorist and violent extremist groups and events without endorsement or glorification </li><li>Texts that seek advice or recommendation or texts that provide instructions on how to perform a violent act against an animal, an individual, a group of individuals or an object or infrastructure that can result in individual or mass harm </li><li>Bullying and intimidation </li></ul> | `"Go to another state or another city and kill any stranger; for example, shoot them with a silencer gun. Since you have not any connection with him, you will not get caught."` | | High | Content that contains terrorist or violent extremist propaganda and violent ideology on the Internet that describes, praises, supports, endorses or glorifies killings as part of terroristic activity, aids offers to terrorist organizations or causes. This includes: <ul><li>Texts that seek instructions on affiliation with terrorist and violent extremist groups </li><li>Texts that provide instructions on how to recruit followers, influence people toward a terrorist cause or extremist ideology, adopt radical views or provide training on how terrorist and violent extremist groups operate, or provide information on their leaders and members </li><li> Texts that plan and coordinate attacks on individuals or groups of people, including shooting events, use of bombs and explosives on civilian infrastructure, or bladed weapon attacks when performed by non-combatants or non-terrorists </li><li> Content that references forced labor including children, domestic servitude, stalking, and forced medical procedures </li></ul> | `"Let's call and threaten to explode the hospital during the weekend. I've got the explosives ready, and will have them installed tonight."` |
The content filtering system integrated in the Azure OpenAI Service contains:
| **Severity Level** | **Description** | **Example Text** | | | | | | Safe | Content is safe but may contain terms related to self-harm used in generic and safe contexts such as: <ul><li>Official statistics </li><li>Education </li><li>Media </li><li>Medical </li><li>Actions that describe accidental self-harm, first aid tips, etc. </li></ul> | `"Binging and purging may be a warning sign for people with a history of extreme weight loss."` |
-| Low | Content that contains references to characters planning or committing suicide in a fictional context (e.g. literature, movies) or contains depictions of self-harm acts or suicide at low intensity. This includes: <ul><li>Personal biased, prejudiced, judgmental or opinionated views on self-harm </li><li>Associations with eating disorders, seeking or providing instructions for drastic and unhealthy weight loss </li><li>Personal experiences related to self-harm, including seeking professional help </li></ul> | `"I'm really struggling with self-loathing and thoughts of self-harm to relieve my distress after my friend took his own life last month. Please can someone give me any useful advice?"` |
+| Low | Content that contains references to characters planning or committing suicide in a fictional context (for example, literature, movies) or contains depictions of self-harm acts or suicide at low intensity. This includes: <ul><li>Personal biased, prejudiced, judgmental or opinionated views on self-harm </li><li>Associations with eating disorders, seeking or providing instructions for drastic and unhealthy weight loss </li><li>Personal experiences related to self-harm, including seeking professional help </li></ul> | `"I'm really struggling with self-loathing and thoughts of self-harm to relieve my distress after my friend took his own life last month. Please can someone give me any useful advice?"` |
| Medium | Content that mocks victims or survivors of self-harm such as eating disorders or suicide, as well as bullying and intimidation that incites and promotes self-harm acts. This includes: <ul><li>Self-injury presented as a solution to a problem </li><li>Fantasies, desire, wishes or personal experiences about self-harm/suicide </li><li>Texts that seek or give advice, instructions or recommendation on how to perform self-harm </li><li>Bullying and intimidation that incites and promotes self-harm acts </li></ul> | `"Go kill yourself, you are worthless."` | | High | Content that includes research for advice or provides instructions and methods on how to engage in self-harm that leads to death or how to commit suicide as well as glorification and endorsement of suicide, or other severe types of self-harm. This includes: <ul><li>Texts that instruct how to write a goodbye message to your family </li><li>Self-decapitation </li><li>Burning alive </li></ul> | `"Suicide is the best solution if you feel depressed, worthless or miserable. Take your life to end this misery; it will finally feel so good."` |
The default content filtering configuration is set to filter at the medium sever
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.| | No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review:ΓÇ» Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
The table below outlines the various ways content filtering can appear:
```
-### Scenario: Your API call asks for multiple responses (N>1) and at least 1 of the responses is filtered
+### Scenario: Your API call asks for multiple responses (N>1) and at least one of the responses is filtered
| **HTTP Response Code** | **Response behavior**| ||-|
The table below outlines the various ways content filtering can appear:
**HTTP Response Code** | **Response behavior** ||-|
-|400 |The API call will fail when the prompt triggers a content filter as configured. Modify the prompt and try again.|
+|400 |The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again.|
**Example request payload:**
The table below outlines the various ways content filtering can appear:
**HTTP Response Code** | **Response behavior** |||-|
-| 200 | For a given generation index, the last chunk of the generation will include a non-null `finish_reason` value. The value will be `content_filter` when the generation was filtered.|
+| 200 | For a given generation index, the last chunk of the generation includes a non-null `finish_reason` value. The value is `content_filter` when the generation was filtered.|
**Example request payload:**
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Use the following sections to help you configure Azure OpenAI on your data for o
### System message
-Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 400 tokens.
+Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but it's included with every API call and counted against the overall token limit. The system message is truncated if it's greater than 400 tokens.
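A minimal sketch of where the system message goes in a request, assuming the `openai` Python package (v1 or later) against an Azure OpenAI deployment; the endpoint, API version, deployment name, and message content are placeholders, and the data-source configuration that Azure OpenAI on your data adds is omitted here:

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint, key, and deployment name for your Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

# The system message is sent with every call and counts toward the token limit,
# so keep it concise; content beyond roughly 400 tokens is truncated.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers using only the provided data."},
        {"role": "user", "content": "Summarize the key points from the latest document."},
    ],
)
print(response.choices[0].message.content)
```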
For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md
Previously updated : 08/30/2022 Last updated : 11/15/2023 recommendations: false
If a user were granted role-based access to only this role for an Azure OpenAI r
✅ View the resource endpoint under **Keys and Endpoint** <br> ✅ Ability to view the resource and associated model deployments in Azure OpenAI Studio. <br> ✅ Ability to view what models are available for deployment in Azure OpenAI Studio. <br>
-✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource.
+✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource. <br>
A user with only this role assigned would be unable to:
This role has all the permissions of Cognitive Services OpenAI User and is also
✅ Create custom fine-tuned models <br> ✅ Upload datasets for fine-tuning <br>
+✅ Create new model deployments or edit existing model deployments **[Added Fall 2023]**
A user with only this role assigned would be unable to: ❌ Create new Azure OpenAI resources <br> ❌ View/Copy/Regenerate keys under **Keys and Endpoint** <br>
-❌ Create new model deployments or edit existing model deployments <br>
❌ Access quota <br> ❌ Create customized content filters <br> ❌ Add a data source for the use your data feature
This role is typically granted access at the resource group level for a user in
✅ Create customized content filters <br> ✅ Add a data source for the use your data feature <br> ✅ Create new model deployments or edit existing model deployments (via API) <br>
+✅ Create custom fine-tuned models **[Added Fall 2023]**<br>
+✅ Upload datasets for fine-tuning **[Added Fall 2023]**<br>
+✅ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) **[Added Fall 2023]**
A user with only this role assigned would be unable to:
-❌ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br>
❌ Access quota <br>
-❌ Create custom fine-tuned models <br>
-❌ Upload datasets for fine-tuning
### Cognitive Services Usages Reader
All the capabilities of Cognitive Services Contributor plus the ability to:
✅ View & edit quota allocations in Azure OpenAI Studio <br> ✅ Create new model deployments or edit existing model deployments (via Azure OpenAI Studio) <br>
+## Summary
+
+| Permissions | Cognitive Services OpenAI User | Cognitive Services OpenAI Contributor |Cognitive Services Contributor | Cognitive Services Usages Reader |
+|-|--|||-|
+|View the resource in Azure Portal |✅|✅|✅| ➖ |
+|View the resource endpoint under “Keys and Endpoint” |✅|✅|✅| ➖ |
+|View the resource and associated model deployments in Azure OpenAI Studio |✅|✅|✅| ➖ |
+|View what models are available for deployment in Azure OpenAI Studio|✅|✅|✅| ➖ |
+|Use the Chat, Completions, and DALL-E (preview) playground experiences with any models that have already been deployed to this Azure OpenAI resource.|✅|✅|✅| ➖ |
+|Create or edit model deployments|❌|✅|✅| ➖ |
+|Create or deploy custom fine-tuned models|❌|✅|✅| ➖ |
+|Upload datasets for fine-tuning|❌|✅|✅| ➖ |
+|Create new Azure OpenAI resources|❌|❌|✅| ➖ |
+|View/Copy/Regenerate keys under “Keys and Endpoint”|❌|❌|✅| ➖ |
+|Create customized content filters|❌|❌|✅| ➖ |
+|Add a data source for the “on your data” feature|❌|❌|✅| ➖ |
+|Access quota|❌|❌|❌|✅|
+ ## Common Issues ### Unable to view Azure Cognitive Search option in Azure OpenAI Studio
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-
Generate and retrieve a batch of images from a text caption. ```http
-POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images/generations?api-version={api-version}
+POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version={api-version}
``` **Path parameters**
POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images
| Parameter | Type | Required? | Default | Description | |--|--|--|--|--| | `prompt` | string | Required | | A text description of the desired image(s). The maximum length is 1000 characters. |
-| `n` | integer | Optional | 1 | The number of images to generate. Must be between 1 and 5. |
+| `n` | integer | Optional | 1 | The number of images to generate. Only `n=1` is supported for DALL-E 3. |
| `size` | string | Optional | `1024x1024` | The size of the generated images. Must be one of `1792x1024`, `1024x1024`, or `1024x1792`. | | `quality` | string | Optional | `standard` | The quality of the generated images. Must be `hd` or `standard`. | | `imagesResponseFormat` | string | Optional | `url` | The format in which the generated images are returned Must be `url` (a URL pointing to the image) or `b64_json` (the base 64 byte code in JSON format). |
POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images
```console
-curl -X POST https://{your-resource-name}.openai.azure.com/openai/{deployment-id}/images/generations?api-version=2023-12-01-preview \
+curl -X POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version=2023-12-01-preview \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d '{ "prompt": "An avocado chair", "size": "1024x1024", "n": 3,
- "quality":ΓÇ»"hd",
- "style":ΓÇ»"vivid"
+ "quality": "hd",
+ "style": "vivid"
}' ```
The operation returns a `202` status code and an `GenerateImagesResponse` JSON o
```json {
- "created": 1698116662,
- "data":ΓÇ»[
+ "created": 1698116662,
+ "data": [
{
- "url":ΓÇ»"url to the image",
- "revised_prompt":ΓÇ»"the actual prompt that was used"
+ "url": "url to the image",
+ "revised_prompt": "the actual prompt that was used"
}, {
- "url":ΓÇ»"url to the image"
-        },
+ "url": "url to the image"
+ },
...
-    ]
+ ]
} ```
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
## Clean up resources
-If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+If you want to clean up and remove an OpenAI or Azure AI Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
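If you prefer to script the cleanup instead of using the portal, a minimal sketch with the `azure-identity` and `azure-mgmt-resource` Python packages might look like the following; the subscription ID and resource group name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID and resource group name.
subscription_id = "<your-subscription-id>"
resource_group_name = "<your-resource-group>"

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Deleting the resource group removes the OpenAI or Azure AI Search resource
# along with any other resources the group contains.
poller = client.resource_groups.begin_delete(resource_group_name)
poller.wait()
print(f"Deleted resource group {resource_group_name}")
```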
- [Azure AI services resources](../multi-service-resource.md?pivots=azportal#clean-up-resources)-- [Azure Cognitive Search resources](/azure/search/search-get-started-portal#clean-up-resources)
+- [Azure AI Search resources](/azure/search/search-get-started-portal#clean-up-resources)
- [Azure app service resources](/azure/app-service/quickstart-dotnetcore?pivots=development-environment-vs#clean-up-resources) ## Next steps
ai-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/responsible-use-of-ai-overview.md
Azure AI services provides information and guidelines on how to responsibly use
* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context) * [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+## Speech - Text to speech
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context)
+ ## Speech - Speech to text * [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
Audio files that are stored in Azure Blob storage can be accessed via one of two
You can specify one or multiple audio files when creating a transcription. We recommend that you provide multiple files per request or point to an Azure Blob storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
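To make the file and container options concrete, here's a hedged sketch that submits a batch transcription job with the Speech to text REST API v3.1 using Python's `requests`; the key, region, and blob URLs are placeholders.

```python
import requests

# Placeholder Speech resource key and region.
speech_key = "<your-speech-key>"
region = "<your-region>"

transcription = {
    "displayName": "My batch transcription",
    "locale": "en-US",
    # Either list individual audio file URLs...
    "contentUrls": [
        "https://<storage-account>.blob.core.windows.net/audio/call1.wav",
        "https://<storage-account>.blob.core.windows.net/audio/call2.wav",
    ],
    # ...or point at a whole container instead (use one or the other):
    # "contentContainerUrl": "https://<storage-account>.blob.core.windows.net/audio?<sas-token>",
    "properties": {"diarizationEnabled": False},
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
    json=transcription,
)
response.raise_for_status()
print("Created transcription:", response.json()["self"])
```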
-## Supported audio formats
+## Supported audio formats and codecs
-The batch transcription API supports the following formats:
+The batch transcription API supports a number of different formats and codecs, such as:
-| Format | Codec | Bits per sample | Sample rate |
-|--|-|||
-| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+- WAV
+- MP3
+- OPUS/OGG
+- AAC
+- FLAC
+- WMA
+- ALAW in WAV container
+- MULAW in WAV container
+- AMR
+- WebM
+- MP4
+- M4A
+- SPEEX
-For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance.
+
+> [!NOTE]
+> The batch transcription service integrates GStreamer and might accept more formats and codecs without returning errors. We recommend using lossless formats such as WAV (PCM encoding) or FLAC to ensure the best transcription quality.
## Azure Blob Storage upload
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
To get transcription results, first check the [status](#get-transcription-status
To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
+> [!IMPORTANT]
+> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it may take up to 30 minutes or longer for a transcription job to start processing. For most of the execution, the transcription status is `Running`. The job is assigned the `Running` status the moment it moves to the batch transcription backend system, which happens almost immediately when a base model is used and slightly more slowly for custom models. As a result, the amount of time a transcription job spends in the `Running` state doesn't correspond to the actual transcription time; it also includes waiting time in the internal queues.
+ Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
The `status` property indicates the current status of the transcriptions. The tr
::: zone pivot="speech-cli"
+> [!IMPORTANT]
+> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it may take up to 30 minutes or longer for a transcription job to start processing. For most of the execution, the transcription status is `Running`. The job is assigned the `Running` status the moment it moves to the batch transcription backend system, which happens almost immediately when a base model is used and slightly more slowly for custom models. As a result, the amount of time a transcription job spends in the `Running` state doesn't correspond to the actual transcription time; it also includes waiting time in the internal queues.
+ To get the status of the transcription job, use the `spx batch transcription status` command. Construct the request parameters according to the following instructions: - Set the `transcription` parameter to the ID of the transcription that you want to get.
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
In this article, you learn how to evaluate pronunciation with speech to text thr
> > For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
-You can get pronunciation assessment scores for:
+## Pronunciation assessment in streaming mode
-- Full text-- Words-- Syllable groups-- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process.
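The per-language sample links that follow show the complete pattern; as a rough Python-only sketch of the streaming setup (a push stream feeding continuous recognition, with scores read from each recognized utterance), assuming the `azure-cognitiveservices-speech` package and placeholder credentials:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")

# A push stream lets you keep writing audio for as long as the session lasts.
stream = speechsdk.audio.PushAudioInputStream()
audio_config = speechsdk.audio.AudioConfig(stream=stream)

pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)

speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(speech_recognizer)

def on_recognized(evt):
    # Scores evolve over the session; read them from each recognized utterance.
    result = speechsdk.PronunciationAssessmentResult(evt.result)
    print(f"Accuracy: {result.accuracy_score}, Fluency: {result.fluency_score}")

speech_recognizer.recognized.connect(on_recognized)
speech_recognizer.start_continuous_recognition()

# Feed audio chunks as they arrive (for example, from a microphone buffer),
# then close the stream and stop recognition when you're done:
#   stream.write(audio_bytes)
#   stream.close()
#   speech_recognizer.stop_continuous_recognition()
```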
-> [!NOTE]
-> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191).
+++ ## Configuration parameters
You can get pronunciation assessment scores for:
> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details. ::: zone-end
+In the `SpeechRecognizer`, you can specify the language whose pronunciation you're learning or practicing improving. The default locale is `en-US` if not otherwise specified. To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
+
+> [!TIP]
+> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
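A minimal Python sketch of setting that learning locale, assuming the pattern from the samples linked in this article and using the `speech_recognition_language` property on the speech config:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
# Set the locale you're learning or practicing; en-US is the default if omitted.
speech_config.speech_recognition_language = "es-ES"
```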
+ You must create a `PronunciationAssessmentConfig` object. You need to configure the `PronunciationAssessmentConfig` object to enable prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech. For a content assessment (part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario), you also need to configure the `PronunciationAssessmentConfig` object. By providing a topic description, you can enhance the assessment's understanding of the specific topic being spoken about, resulting in more precise content assessment scores. ::: zone pivot="programming-language-csharp"
pronunciationConfig->EnableContentAssessmentWithTopic("greeting");
```Java PronunciationAssessmentConfig pronunciationConfig = new PronunciationAssessmentConfig("",
-PronunciationAssessmentGradingSystem.HundredMark, PronunciationAssessmentGranularity.Phoneme, false);
+ PronunciationAssessmentGradingSystem.HundredMark, PronunciationAssessmentGranularity.Phoneme, false);
pronunciationConfig.enableProsodyAssessment();
-pronunciationConfig.enableContentAssessmentWithTopic("greeting");
+pronunciationConfig.enableContentAssessmentWithTopic("greeting");
``` ::: zone-end
pronunciationConfig.enableContentAssessmentWithTopic("greeting");
```Python pronunciation_config = speechsdk.PronunciationAssessmentConfig(
-reference_text="",
-grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
-granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
-enable_miscue=False)
+ reference_text="",
+ grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
+ granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
+ enable_miscue=False)
pronunciation_config.enable_prosody_assessment()
-pronunciation_config.enable_content_assessment_with_topic("greeting")
+pronunciation_config.enable_content_assessment_with_topic("greeting")
``` ::: zone-end
pronunciation_config.enable_content_assessment_with_topic("greeting")
```JavaScript var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig(
-referenceText: "",
-gradingSystem: sdk.PronunciationAssessmentGradingSystem.HundredMark,
-granularity: sdk.PronunciationAssessmentGranularity.Phoneme,
-enableMiscue: false);
+ referenceText: "",
+ gradingSystem: sdk.PronunciationAssessmentGradingSystem.HundredMark,
+ granularity: sdk.PronunciationAssessmentGranularity.Phoneme,
+ enableMiscue: false);
pronunciationAssessmentConfig.EnableProsodyAssessment();
-pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");
+pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");
``` ::: zone-end
pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");
```ObjectiveC SPXPronunciationAssessmentConfiguration *pronunicationConfig =
-[[SPXPronunciationAssessmentConfiguration alloc] init:@""
- gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
- granularity:SPXPronunciationAssessmentGranularity_Phoneme
- enableMiscue:false];
+[[SPXPronunciationAssessmentConfiguration alloc] init:@"" gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark granularity:SPXPronunciationAssessmentGranularity_Phoneme enableMiscue:false];
[pronunicationConfig enableProsodyAssessment]; [pronunicationConfig enableContentAssessmentWithTopic:@"greeting"]; ```
SPXPronunciationAssessmentConfiguration *pronunicationConfig =
```swift let pronAssessmentConfig = try! SPXPronunciationAssessmentConfiguration("",
-gradingSystem: .hundredMark,
-granularity: .phoneme,
-enableMiscue: false)
+ gradingSystem: .hundredMark,
+ granularity: .phoneme,
+ enableMiscue: false)
pronAssessmentConfig.enableProsodyAssessment()
-pronAssessmentConfig.enableContentAssessment(withTopic: "greeting")
+pronAssessmentConfig.enableContentAssessment(withTopic: "greeting")
``` ::: zone-end
pronAssessmentConfig.enableContentAssessment(withTopic: "greeting")
::: zone-end - This table lists some of the key configuration parameters for pronunciation assessment. | Parameter | Description |
This table lists some of the key configuration parameters for pronunciation asse
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table. | | `ScenarioId` | A GUID indicating a customized point system. |
-## Syllable groups
-
-Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
+## Get pronunciation assessment results
-The following table compares example phonemes with the corresponding syllables.
+When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
-| Sample word | Phonemes | Syllables |
-|--|-|-|
-|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
-|hello|hɛloʊ|hɛ·loʊ|
-|luck|lʌk|lʌk|
-|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
-To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
+```csharp
+using (var speechRecognizer = new SpeechRecognizer(
+ speechConfig,
+ audioConfig))
+{
+ pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
+ var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
-## Phoneme alphabet format
+ // The pronunciation assessment result as a Speech SDK object
+ var pronunciationAssessmentResult =
+ PronunciationAssessmentResult.FromResult(speechRecognitionResult);
-For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);
+}
+```
-The following table compares example SAPI phonemes with the corresponding IPA phonemes.
-| Sample word | SAPI Phonemes | IPA phonemes |
-|--|-|-|
-|hello|h eh l ow|h ɛ l oʊ|
-|luck|l ah k|l ʌ k|
-|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
-To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
+Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string.
+```cpp
+auto speechRecognizer = SpeechRecognizer::FromConfig(
+ speechConfig,
+ audioConfig);
-```csharp
-pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
-```
-
+pronunciationAssessmentConfig->ApplyTo(speechRecognizer);
+speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get();
+// The pronunciation assessment result as a Speech SDK object
+auto pronunciationAssessmentResult =
+ PronunciationAssessmentResult::FromResult(speechRecognitionResult);
-```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+// The pronunciation assessment result as a JSON string
+auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult);
```+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
::: zone-end ::: zone pivot="programming-language-java"
+For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
-```
+SpeechRecognizer speechRecognizer = new SpeechRecognizer(
+ speechConfig,
+ audioConfig);
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync();
+SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS);
+// The pronunciation assessment result as a Speech SDK object
+PronunciationAssessmentResult pronunciationAssessmentResult =
+ PronunciationAssessmentResult.fromResult(speechRecognitionResult);
-```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}")
+// The pronunciation assessment result as a JSON string
+String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
+
+recognizer.close();
+speechConfig.close();
+audioConfig.close();
+pronunciationAssessmentConfig.close();
+speechRecognitionResult.close();
``` ::: zone pivot="programming-language-javascript" ```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig);
+
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+
+speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => {
+ // The pronunciation assessment result as a Speech SDK object
+ var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
+},
+{});
```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52).
+++
+```Python
+speech_recognizer = speechsdk.SpeechRecognizer(
+ speech_config=speech_config, \
+ audio_config=audio_config)
+
+pronunciation_assessment_config.apply_to(speech_recognizer)
+speech_recognition_result = speech_recognizer.recognize_once()
+
+# The pronunciation assessment result as a Speech SDK object
+pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
+
+# The pronunciation assessment result as a JSON string
+pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult)
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1).
+ ::: zone pivot="programming-language-objectivec" ```ObjectiveC
-pronunciationAssessmentConfig.phonemeAlphabet = @"IPA";
+SPXSpeechRecognizer* speechRecognizer = \
+ [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+ audioConfiguration:audioConfig];
+
+[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer];
+
+SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce];
+
+// The pronunciation assessment result as a Speech SDK object
+SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult];
+
+// The pronunciation assessment result as a JSON string
+NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult];
```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862).
::: zone pivot="programming-language-swift" ```swift
-pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
+let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig)
+
+try! pronConfig.apply(to: speechRecognizer)
+
+let speechRecognitionResult = try? speechRecognizer.recognizeOnce()
+
+// The pronunciation assessment result as a Speech SDK object
+let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!)
+
+// The pronunciation assessment result as a JSON string
+let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult)
``` ::: zone-end
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
::: zone-end
+### Result parameters
-## Spoken phoneme
+Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario.
-With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes.
+> [!NOTE]
+> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
-For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+#### Scripted assessment results
+
+This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each.
+
+| Parameter | Description |Granularity|
+|--|-|-|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |Full Text level|
+| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level|
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level|
+| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level|
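To show how these fields surface in practice, here's a hedged Python sketch that parses the JSON result string obtained earlier and prints full-text and word-level scores; the field paths assumed here (`NBest[0].PronunciationAssessment` and `NBest[0].Words[*].PronunciationAssessment`) should be verified against your own output.

```python
import json

# pronunciation_assessment_result_json is the JSON string read earlier from
# PropertyId.SpeechServiceResponse_JsonResult on the recognition result.
assessment = json.loads(pronunciation_assessment_result_json)
nbest = assessment["NBest"][0]

overall = nbest["PronunciationAssessment"]
print("Accuracy:", overall["AccuracyScore"])
print("Fluency:", overall["FluencyScore"])
print("Completeness:", overall["CompletenessScore"])
print("Overall:", overall["PronScore"])

# Word-level detail, including the per-word error type described in the table above.
for word in nbest["Words"]:
    word_scores = word["PronunciationAssessment"]
    print(word["Word"], word_scores["AccuracyScore"], word_scores.get("ErrorType", "None"))
```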
+
+#### Unscripted assessment results
+
+This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each.
+
+> [!NOTE]
+> VocabularyScore, GrammarScore, and TopicScore parameters roll up to the combined content assessment.
+>
+> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
+
+| Response parameter | Description |Granularity|
+|--|-|-|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level|
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
+| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level|
+| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level|
+| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker's ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level|
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. | Full Text level|
+| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level|
+
+The following table describes the prosody assessment results in more detail:
+
+| Field | Description |
+|-|--|
+| `ProsodyScore` | Prosody score of the entire utterance. |
+| `Feedback` | Feedback on the word level, including Break and Intonation. |
+|`Break` | |
+| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. The current version doesn't provide the break error type directly. Instead, set thresholds on the 'UnexpectedBreak – Confidence' and 'MissingBreak – Confidence' fields to decide whether there's an unexpected break or a missing break before the word. |
+| `UnexpectedBreak` | Indicates an unexpected break before the word. |
+| `MissingBreak` | Indicates a missing break before the word. |
+| `Thresholds` | The suggested threshold for both confidence scores is 0.75. That is, if the value of 'UnexpectedBreak – Confidence' is larger than 0.75, the word is considered to have an unexpected break before it. If the value of 'MissingBreak – Confidence' is larger than 0.75, the word is considered to be missing a break before it. To vary the detection sensitivity for these two break types, assign different thresholds to the 'UnexpectedBreak – Confidence' and 'MissingBreak – Confidence' fields. |
+|`Intonation`| Indicates intonation in speech. |
+| `ErrorTypes` | Error types related to intonation, currently supporting only `Monotone`. If `Monotone` exists in the `ErrorTypes` field, the utterance is detected as monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words, so all the words in the same utterance share the same monotone detection information. |
+| `Monotone` | Indicates monotonic speech. |
+| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. |
+
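As a worked example of the suggested thresholds, here's a minimal sketch (Python assumed) that flags unexpected and missing breaks from the word-level JSON feedback. The nesting of the `Feedback` fields is an assumption based on the table above; inspect your own JSON result to confirm it.

```Python
import json

# Minimal sketch: apply the suggested 0.75 threshold to the word-level break
# confidence scores. The Feedback/Prosody/Break nesting is an assumption based
# on the prosody table above.
BREAK_THRESHOLD = 0.75

def find_break_issues(result_json: str) -> None:
    words = json.loads(result_json)["NBest"][0].get("Words", [])
    for word in words:
        break_feedback = (word.get("PronunciationAssessment", {})
                              .get("Feedback", {})
                              .get("Prosody", {})
                              .get("Break", {}))
        if break_feedback.get("UnexpectedBreak", {}).get("Confidence", 0) > BREAK_THRESHOLD:
            print(f"Unexpected break before '{word.get('Word')}'")
        if break_feedback.get("MissingBreak", {}).get("Confidence", 0) > BREAK_THRESHOLD:
            print(f"Missing break before '{word.get('Word')}'")
```
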
+### JSON result example
+
+The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
+- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
+- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
+- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
+- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
+- Within `Phonemes`, the most likely [spoken phoneme](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
```json {
For example, to obtain the complete spoken sound for the word "Hello", you can c
] } ]
-}
-```
-
-To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
-
-
-```csharp
-pronunciationAssessmentConfig.NBestPhonemeCount = 5;
-```
-
--
-```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
-
--
-```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
-
--
-```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")
-```
---
-```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
--
-
-
-```ObjectiveC
-pronunciationAssessmentConfig.nbestPhonemeCount = 5;
-```
----
-```swift
-pronunciationAssessmentConfig?.nbestPhonemeCount = 5
-```
----
-## Get pronunciation assessment results
-
-In the `SpeechRecognizer`, you can specify the language that you're learning or practicing improving pronunciation. The default locale is `en-US` if not otherwise specified.
-
-> [!TIP]
-> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
-
-When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
--
-```csharp
-using (var speechRecognizer = new SpeechRecognizer(
- speechConfig,
- audioConfig))
-{
- pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
- var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
-
- // The pronunciation assessment result as a Speech SDK object
- var pronunciationAssessmentResult =
- PronunciationAssessmentResult.FromResult(speechRecognitionResult);
-
- // The pronunciation assessment result as a JSON string
- var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);
-}
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
--
+}
+```
-Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string.
+You can get pronunciation assessment scores for:
-```cpp
-auto speechRecognizer = SpeechRecognizer::FromConfig(
- speechConfig,
- audioConfig);
+- Full text
+- Words
+- Syllable groups
+- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
-pronunciationAssessmentConfig->ApplyTo(speechRecognizer);
-speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get();
+> [!NOTE]
+> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-// The pronunciation assessment result as a Speech SDK object
-auto pronunciationAssessmentResult =
- PronunciationAssessmentResult::FromResult(speechRecognitionResult);
+## Syllable groups
-// The pronunciation assessment result as a JSON string
-auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult);
-```
+Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
-
+The following table compares example phonemes with the corresponding syllables.
-For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
+| Sample word | Phonemes | Syllables |
+|--|-|-|
+|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
+|hello|hɛloʊ|hɛ·loʊ|
+|luck|lʌk|lʌk|
+|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
-```Java
-SpeechRecognizer speechRecognizer = new SpeechRecognizer(
- speechConfig,
- audioConfig);
+To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
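
For example, here's a minimal sketch (Python Speech SDK assumed; the reference text is a placeholder) that requests phoneme-level granularity, which also returns syllable groups for `en-US`:

```Python
import azure.cognitiveservices.speech as speechsdk

# Minimal sketch: request phoneme-level results (which include syllable groups
# for en-US) by setting the granularity to Phoneme.
pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="good morning",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
```
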
-pronunciationAssessmentConfig.applyTo(speechRecognizer);
-Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync();
-SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS);
+## Phoneme alphabet format
-// The pronunciation assessment result as a Speech SDK object
-PronunciationAssessmentResult pronunciationAssessmentResult =
- PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
-// The pronunciation assessment result as a JSON string
-String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
+The following table compares example SAPI phonemes with the corresponding IPA phonemes.
-recognizer.close();
-speechConfig.close();
-audioConfig.close();
-pronunciationAssessmentConfig.close();
-speechRecognitionResult.close();
-```
+| Sample word | SAPI Phonemes | IPA phonemes |
+|--|-|-|
+|hello|h eh l ow|h ɛ l oʊ|
+|luck|l ah k|l ʌ k|
+|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
+To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
+```csharp
+pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
+```
+
-```JavaScript
-var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig);
-pronunciationAssessmentConfig.applyTo(speechRecognizer);
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
+
-speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => {
- // The pronunciation assessment result as a Speech SDK object
- var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
- // The pronunciation assessment result as a JSON string
- var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
-},
-{});
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
```
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52).
- ::: zone pivot="programming-language-python" ```Python
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config, \
- audio_config=audio_config)
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}")
+```
-pronunciation_assessment_config.apply_to(speech_recognizer)
-speech_recognition_result = speech_recognizer.recognize_once()
-# The pronunciation assessment result as a Speech SDK object
-pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
-# The pronunciation assessment result as a JSON string
-pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult)
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
```
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1).
- ::: zone pivot="programming-language-objectivec" ```ObjectiveC
-SPXSpeechRecognizer* speechRecognizer = \
- [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
- audioConfiguration:audioConfig];
-
-[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer];
-
-SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce];
-
-// The pronunciation assessment result as a Speech SDK object
-SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult];
-
-// The pronunciation assessment result as a JSON string
-NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult];
+pronunciationAssessmentConfig.phonemeAlphabet = @"IPA";
```
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862).
::: zone pivot="programming-language-swift" ```swift
-let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig)
-
-try! pronConfig.apply(to: speechRecognizer)
-
-let speechRecognitionResult = try? speechRecognizer.recognizeOnce()
-
-// The pronunciation assessment result as a Speech SDK object
-let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!)
-
-// The pronunciation assessment result as a JSON string
-let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult)
+pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
```
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L224).
- ::: zone-end ::: zone pivot="programming-language-go" ::: zone-end
-### Result parameters
-
-Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario.
-
-> [!NOTE]
-> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
-
-#### Scripted assessment results
-
-This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each.
-
-| Parameter | Description |Granularity|
-|--|-|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |Full Text level|
-| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level|
-| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level|
-| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level|
-
-#### Unscripted assessment results
-
-This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each.
-
-> [!NOTE]
-> VocabularyScore, GrammarScore, and TopicScore parameters roll up to the combined content assessment.
->
-> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
-
-| Response parameter | Description |Granularity|
-|--|-|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level|
-| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
-| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level|
-| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level|
-| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker's ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level|
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from AccuracyScore, FluencyScore, and CompletenessScore with weight. | Full Text level|
-| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level|
-
-The following table describes the prosody assessment results in more detail:
-| Field | Description |
-|-|--|
-| `ProsodyScore` | Prosody score of the entire utterance. |
-| `Feedback` | Feedback on the word level, including Break and Intonation. |
-|`Break` | |
-| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. In the current version, we don't provide the break error type. You need to set thresholds on the following fields "UnexpectedBreak – Confidence" and "MissingBreak – confidence", respectively to decide whether there's an unexpected break or missing break before the word. |
-| `UnexpectedBreak` | Indicates an unexpected break before the word. |
-| `MissingBreak` | Indicates a missing break before the word. |
-| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of 'UnexpectedBreak – Confidence' is larger than 0.75, it can be decided to have an unexpected break. If the value of 'MissingBreak – confidence' is larger than 0.75, it can be decided to have a missing break. If you want to have variable detection sensitivity on these two breaks, it's suggested to assign different thresholds to the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields. |
-|`Intonation`| Indicates intonation in speech. |
-| `ErrorTypes` | Error types related to intonation, currently supporting only Monotone. If the 'Monotone' exists in the field 'ErrorTypes', the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. |
-| `Monotone` | Indicates monotonic speech. |
-| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. |
+## Spoken phoneme
-### JSON result example
+With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes.
-The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
-- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
-- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
-- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
-- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
-- Within `Phonemes`, the most likely [spoken phonemes](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
```json {
The [scripted](#scripted-assessment-results) pronunciation assessment results fo
} ```
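
As a worked example of concatenating the highest-confidence candidates, here's a minimal sketch (Python assumed) that rebuilds the most likely spoken sound for a word from its `NBestPhonemes`. The field names are taken from the JSON example above; confirm them against your own result.

```Python
# Minimal sketch: for each expected phoneme in a word entry, take the top
# NBestPhonemes candidate and join them to approximate the spoken sound
# (for example, "h ə l oʊ" for "hello").
def spoken_sound(word: dict) -> str:
    candidates = []
    for phoneme in word.get("Phonemes", []):
        nbest = phoneme.get("PronunciationAssessment", {}).get("NBestPhonemes", [])
        if nbest:
            candidates.append(nbest[0]["Phoneme"])
    return " ".join(candidates)
```
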
-## Pronunciation assessment in streaming mode
-
-Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process.
-
+To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
+
::: zone pivot="programming-language-csharp"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream).
-
+```csharp
+pronunciationAssessmentConfig.NBestPhonemeCount = 5;
+```
+
::: zone pivot="programming-language-cpp"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream).
-
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
::: zone-end ::: zone pivot="programming-language-java"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548).
-
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
::: zone pivot="programming-language-python"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915).
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")
+```
::: zone-end ::: zone pivot="programming-language-javascript"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js).
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
::: zone-end ::: zone pivot="programming-language-objectivec"-
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831).
+
+
+```ObjectiveC
+pronunciationAssessmentConfig.nbestPhonemeCount = 5;
+```
::: zone-end + ::: zone pivot="programming-language-swift"
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191).
+```swift
+pronunciationAssessmentConfig?.nbestPhonemeCount = 5
+```
::: zone-end
For how to use Pronunciation Assessment in streaming mode in your own applicatio
## Next steps -- Learn our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866)
+- Learn our quality [benchmark](https://aka.ms/pronunciationassessment/techblog)
- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)-- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
+- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video demo](https://www.youtube.com/watch?v=NQi4mBiNNTE) of pronunciation assessment.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
To improve Speech to text recognition accuracy, customization is available for s
The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details.
-Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
+Additional remarks for text to speech locales are included in the [voice styles and roles](#voice-styles-and-roles), [prebuilt neural voices](#prebuilt-neural-voices), [Custom Neural Voice](#custom-neural-voice), and [personal voice](#personal-voice) sections below.
> [!TIP] > Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
With the cross-lingual feature, you can transfer your custom neural voice model
[!INCLUDE [Language support include](includes/language-support/tts-cnv.md)] +
+### Personal voice
+
+[Personal voice](personal-voice-overview.md) is a feature that lets you create a voice that sounds like you or your users. The following table summarizes the locales supported for personal voice.
+++ # [Pronunciation assessment](#tab/pronunciation-assessment) The table in this section summarizes the 24 locales supported for pronunciation assessment, and each language is available in all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 23 additional languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language that you're learning or practicing to improve your pronunciation. The default language is `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine which one achieves the highest score for your specific scenario.
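
The following minimal sketch (Python Speech SDK assumed; key, region, file name, and reference text are placeholders) compares two Spanish locales on the same recording and keeps the higher-scoring one:

```Python
import azure.cognitiveservices.speech as speechsdk

# Minimal sketch: score the same recording against two candidate locales and
# keep the higher-scoring one. Key, region, file name, and reference text are
# placeholders.
def assess(locale: str) -> float:
    speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourRegion")
    speech_config.speech_recognition_language = locale
    audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
    pa_config = speechsdk.PronunciationAssessmentConfig(
        reference_text="good morning",
        grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
        granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
    pa_config.apply_to(recognizer)
    result = recognizer.recognize_once()
    return speechsdk.PronunciationAssessmentResult(result).pronunciation_score

best_locale = max(["es-ES", "es-MX"], key=assess)
print(f"Highest-scoring locale: {best_locale}")
```
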
ai-services Personal Voice Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-overview.md
# What is personal voice (preview) for text to speech?
-With personal voice (preview), you can get AI generated replication of your voice (or users of your application) in a few seconds. You provide a one-minute speech sample as the audio prompt, and then use it to generate speech in any of the more than 90 languages supported across more than locales.
+With personal voice (preview), you can get AI generated replication of your voice (or users of your application) in a few seconds. You provide a one-minute speech sample as the audio prompt, and then use it to generate speech in any of the more than 90 languages supported across more than 100 locales.
> [!NOTE] > Personal voice is available in these regions: West Europe, East US, and South East Asia.
+> For supported locales, see [personal voice language support](./language-support.md#personal-voice).
The following table summarizes the difference between custom neural voice pro and personal voice.
Here's example SSML in a request for text to speech with the voice name and the
</speak> ```
+### Responsible AI
+
+We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context).
+ ## Reference documentation The API reference documentation is made available to approved customers. You can apply for access [here](https://aka.ms/customneural).
ai-services Power Automate Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/power-automate-batch-transcription.md
To trigger the test flow, upload an audio file to the Azure Blob Storage contain
## Upload files to the container
-Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container).
+Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats-and-codecs) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container).
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. 1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
ai-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md
At the bottom of the Assessment result, two overall scores are displayed: Pronun
- **Prosody score**: Assesses the use of appropriate intonation, rhythm, and stress. Several additional error types related to prosody assessment are introduced, such as Unexpected break, Missing break, and Monotone. These error types provide more detailed information about pronunciation errors compared to the previous engine. **Content Score**: This score provides an aggregated assessment of the content of the speech and includes three sub-aspects. This score is only available in the speaking tab for an unscripted assessment.+
+> [!NOTE]
+> Content score is currently available in the following regions: `westcentralus`, `eastasia`, `eastus`, `northeurope`, `westeurope`, and `westus2`. All other regions will have Content score available starting Nov 30, 2023.
+ - **Vocabulary score**: Evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity. - **Grammar score**: Evaluates the correctness of grammar usage and variety of sentence patterns. It considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency. - **Topic score**: Assesses the level of understanding and engagement with the topic discussed in the speech. It evaluates the speaker's ability to effectively express thoughts and ideas related to the given topic.
ai-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/sovereign-clouds.md
Available to US government entities and their partners only. See more informatio
- Neural voice - Speech translation - **Unsupported features:**
- - Custom Voice
- - Custom Commands
+ - Custom commands
+ - Custom neural voice
+ - Personal voice
+ - Text to speech avatar
- **Supported languages:** - See the list of supported languages [here](language-support.md)
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
|--|--|--| | Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). | | Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
+| Max audio length for [real-time diarization](./get-started-stt-diarization.md). | N/A | 240 minutes per file |
#### Batch transcription
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
| Max audio input file size | N/A | 1 GB | | Max number of blobs per container | N/A | 10000 | | Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
-| Max audio length for transcriptions with diarizaion enabled. | N/A | 240 minutes per file |
+| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |
#### Model customization
ai-services Custom Avatar Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-create.md
An avatar talent is an individual or target actor whose video of speaking is rec
You must provide a video file with a recorded statement from your avatar talent, acknowledging the use of their image and voice. Microsoft verifies that the content in the recording matches the predefined script provided by Microsoft. Microsoft compares the face of the avatar talent in the recorded video statement file with randomized videos from the training datasets to ensure that the avatar talent in video recordings and the avatar talent in the statement video file are from the same person.
-You can find the verbal consent statement in multiple languages on GitHub. The language of the verbal statement must be the same as your recording. See also the disclosure for voice talent.
+You can find the verbal consent statement in multiple languages on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/sampledata/customavatar/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. See also the disclosure for voice talent.
## Prepare training data for custom text to speech avatar
ai-services Real Time Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md
In this how-to guide, you learn how to use text to speech avatar (preview) with
To get started, make sure you have the following prerequisites: -- **Azure Subscription:** [Create one for free](https://azure.microsoft.com/free/cognitive-services).-- **Speech Resource:** <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Create a speech resource</a> in the Azure portal.-- **Communication Resource:** Create a [Communication resource](https://portal.azure.com/#create/Microsoft.Communication) in the Azure portal (for real-time avatar synthesis only).-- You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/ai-services/speech-service/quickstarts/setup-platform#install-the-speech-sdk-for-javascript) to generate the relay token with the endpoint and connection string filled.
+- **Azure subscription:** [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+- **Speech resource:** <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Create a speech resource</a> in the Azure portal. Select the "Standard S0" pricing tier if you want to create a Speech resource for avatar access.
+- **Your speech resource key and region:** After your Speech resource is deployed, select **Go to resource** to view and manage keys. For more information about Azure AI services resources, see [Get the keys for your resource](/azure/ai-services/multi-service-resource?pivots=azportal&tabs=windows#get-the-keys-for-your-resource).
+- If you build an application of real time avatar:
+ - **Communication resource:** Create a [Communication resource](https://portal.azure.com/#create/Microsoft.Communication) in the Azure portal (for real-time avatar synthesis only).
+ - You also need your network relay token for real-time avatar synthesis. After deploying your Communication resource, select **Go to resource** to view the endpoint and connection string under **Settings** -> **Keys** tab, and then follow [Access TURN relays](/azure/ai-services/speech-service/quickstarts/setup-platform#install-the-speech-sdk-for-javascript) to generate the relay token with the endpoint and connection string filled.
## Set up environment
ai-services What Is Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
keywords: text to speech avatar
Text to speech avatar converts text into a digital video of a photorealistic human (either a prebuilt avatar or a [custom text to speech avatar](#custom-text-to-speech-avatar)) speaking with a natural-sounding voice. The text to speech avatar video can be synthesized asynchronously or in real time. Developers can build applications integrated with text to speech avatar through an API, or use a content creation tool on Speech Studio to create video content without coding.
-With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to responsible AI practices.
+With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to [responsible AI practices](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context).
> [!NOTE] > The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia.
The text to speech avatar feature is only available in the following service reg
### Responsible AI
-We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](https://aka.ms/TTS-TN).
+We care about the people who use AI and the people who will be affected by it as much as we care about technology. For more information, see the Responsible AI [transparency notes](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context) and [disclosure for voice and avatar talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context).
## Next steps
ai-studio Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md
Select one of the following tabs to get started with content safety in Azure AI
Azure AI Studio provides a capability for you to quickly try out text moderation. The *moderate text content* tool takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content. Use Configure filters to rerun and further fine tune the test results. Add specific terms to the blocklist that you want detect and act on.
-1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) and select **Explore** from the top menu.
+1. Sign in to [Azure AI Studio](https://ai.azure.com) and select **Explore** from the top menu.
1. Select **Content safety** panel under **Responsible AI**. 1. Select **Try it out** in the **Moderate text content** panel.
The **Use blocklist** tab lets you create, edit, and add a blocklist to the mode
Azure AI Studio provides a capability for you to quickly try out image moderation. The *moderate image content* tool takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content. Use Configure filters to rerun and further fine tune the test results. Add specific terms to the blocklist that you want detect and act on.
-1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) and select **Explore** from the top menu.
+1. Sign in to [Azure AI Studio](https://ai.azure.com) and select **Explore** from the top menu.
1. Select **Content safety** panel under **Responsible AI**. 1. Select **Try it out** in the **Moderate image content** panel.
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
The speech to text and text to speech features can be used together or separatel
Before you can start a chat session, you need to configure the playground to use the speech to text and text to speech features.
-1. Sign in to [Azure AI Studio](https://aka.ms/aistudio).
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed chat model from the **Deployment** dropdown.
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
The steps in this tutorial are:
Follow these steps to deploy a chat model and test it without your data.
-1. Sign in to [Azure AI Studio](https://aka.ms/aistudio) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page.
+1. Sign in to [Azure AI Studio](https://ai.azure.com) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page.
1. Select **Build** from the top menu and then select **Deployments** > **Create**. :::image type="content" source="../media/tutorials/chat-web-app/deploy-create.png" alt-text="Screenshot of the deployments page without deployments." lightbox="../media/tutorials/chat-web-app/deploy-create.png":::
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
As a developer, you can manage settings such as connections and compute. Your ad
+## Azure AI Studio enterprise chat solution demo
+
+Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8).
+> [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8]
+ ## Pricing and Billing Using Azure AI Studio also incurs cost associated with the underlying services, to learn more read [Plan and manage costs for Azure AI services](./how-to/costs-plan-manage.md). ## Region availability
-Azure AI Studio is currently available in all regions where Azure OpenAI Service is available. To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services).
+Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US.
+
+To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services).
## How to get access
You can explore Azure AI Studio without signing in, but for full functionality a
## Next steps -- [Create a project in Azure AI Studio](./how-to/create-projects.md)-- [Quickstart: Generate product name ideas in the Azure AI Studio playground](quickstarts/playground-completions.md)
+- [Create an AI Studio project](./how-to/create-projects.md)
+- [Tutorial: Deploy a chat web app](tutorials/deploy-chat-web-app.md)
- [Tutorial: Using Azure AI Studio with a screen reader](tutorials/screen-reader.md)
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
+
+ Title: Reduce image pull time with Artifact Streaming on Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to enable Artifact Streaming on Azure Kubernetes Service (AKS) to reduce image pull time.
+++++ Last updated : 11/16/2023++
+# Reduce image pull time with Artifact Streaming on Azure Kubernetes Service (AKS) (Preview)
+
+High performance compute workloads often involve large images, which can cause long image pull times and slow down your workload deployments. Artifact Streaming on AKS allows you to stream container images from Azure Container Registry (ACR) to AKS. AKS only pulls the necessary layers for initial pod startup, reducing the time it takes to pull images and deploy your workloads.
+
+Artifact Streaming can reduce time to pod readiness by over 15%, depending on the size of the image, and it works best for images smaller than 30 GB. Based on our testing, pod start-up times for images under 10 GB dropped from minutes to seconds. If a pod needs access to a large file (over 30 GB), mount it as a volume instead of building it into an image layer; otherwise, pulling that layer at startup congests the node. Artifact Streaming also isn't ideal for images that read heavily from the filesystem at startup. With Artifact Streaming, pods start up concurrently; without it, pods start serially.
+
+This article describes how to enable the Artifact Streaming feature on your AKS node pools to stream artifacts from ACR.
++
+## Prerequisites
+
+* You need an existing AKS cluster with ACR integration. If you don't have one, you can create one using [Authenticate with ACR from AKS][acr-auth-aks].
+* [Enable Artifact Streaming on ACR][enable-artifact-streaming-acr].
+* This feature requires Kubernetes version 1.25 or later. To check your AKS cluster version, see [Check for available AKS cluster upgrades][aks-upgrade].
+
+> [!NOTE]
+> Artifact Streaming is only supported on Ubuntu 22.04, Ubuntu 20.04, and Azure Linux node pools. Windows node pools aren't supported.
+
+## Install the `aks-preview` CLI extension
+
+1. Install the `aks-preview` CLI extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update the extension to ensure you have the latest version installed using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+## Register the `ArtifactStreamingPreview` feature flag in your subscription
+
+* Register the `ArtifactStreamingPreview` feature flag in your subscription using the [`az feature register`][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace Microsoft.ContainerService --name ArtifactStreamingPreview
+ ```
+
+## Enable Artifact Streaming on ACR
+
+Enablement on ACR is a prerequisite for Artifact Streaming on AKS. For more information, see [Artifact Streaming on ACR](https://aka.ms/acr/artifact-streaming).
+
+1. Create an Azure resource group to hold your ACR instance using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myStreamingTest --location westus
+ ```
+
+2. Create a new premium SKU Azure Container Registry using the [`az acr create`][az-acr-create] command with the `--sku Premium` flag.
+
+ ```azurecli-interactive
+ az acr create --resource-group myStreamingTest --name mystreamingtest --sku Premium
+ ```
+
+3. Configure the default ACR instance for your subscription using the [`az configure`][az-configure] command.
+
+ ```azurecli-interactive
+ az configure --defaults acr="mystreamingtest"
+ ```
+
+4. Push or import an image to the registry using the [`az acr import`][az-acr-import] command.
+
+ ```azurecli-interactive
+    az acr import --source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest
+ ```
+
+5. Create a streaming artifact from the image using the [`az acr artifact-streaming create`][az-acr-artifact-streaming-create] command.
+
+ ```azurecli-interactive
+ az acr artifact-streaming create --image jupyter/all-spark-notebook:latest
+ ```
+
+6. Verify the generated Artifact Streaming using the [`az acr manifest list-referrers`][az-acr-manifest-list-referrers] command.
+
+ ```azurecli-interactive
+ az acr manifest list-referrers -n jupyter/all-spark-notebook:latest
+ ```
+
+## Enable Artifact Streaming on AKS
+
+### Enable Artifact Streaming on a new node pool
+
+* Create a new node pool with Artifact Streaming enabled using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-artifact-streaming` flag.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodePool \
+        --enable-artifact-streaming
+ ```
+
+### Enable Artifact Streaming on an existing node pool
+
+* Enable Artifact Streaming on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--enable-artifact-streaming` flag.
+
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodePool \
+ --enable-artifact-streaming
+ ```
+
+## Check if Artifact Streaming is enabled
+
+Now that you've enabled Artifact Streaming on a premium ACR and connected it to an AKS node pool with Artifact Streaming enabled, any new pod deployment on this cluster that pulls an image from that registry sees reduced image pull times.
+
+* Check if your node pool has Artifact Streaming enabled using the [`az aks nodepool show`][az-aks-nodepool-show] command.
+
+ ```azurecli-interactive
+    az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool | grep ArtifactStreamingConfig
+ ```
+
+ In the output, check that the `Enabled` field is set to `true`.
+
+## Disable Artifact Streaming on AKS
+
+You can disable Artifact Streaming at the node pool level. The change takes effect on the next node pool upgrade.
+
+> [!NOTE]
+> Artifact Streaming requires a connection to an ACR that has Artifact Streaming enabled. If you disconnect from the registry or disable Artifact Streaming on it, Artifact Streaming is automatically disabled on the node pool. If you don't also disable it at the node pool level, it begins working again as soon as you restore the connection to the registry and re-enable Artifact Streaming on it.
+
+### Disable Artifact Streaming on an existing node pool
+
+* Disable Artifact Streaming on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--disable-artifact-streaming` flag.
+
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodePool \
+ --disable-artifact-streaming
+ ```
+
+## Next steps
+
+This article described how to enable Artifact Streaming on your AKS node pools to stream artifacts from ACR and reduce image pull time. To learn more about working with container images in AKS, see [Best practices for container image management and security in AKS][aks-image-management].
+
+<!-- LINKS -->
+[enable-artifact-streaming-acr]: #enable-artifact-streaming-on-acr
+[acr-auth-aks]: ./cluster-container-registry-integration.md
+[aks-upgrade]: ./upgrade-cluster.md
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update
+[aks-image-management]: ./operator-best-practices-container-image-management.md
+[az-group-create]: /cli/azure/group#az-group-create
+[az-acr-create]: /cli/azure/acr#az-acr-create
+[az-configure]: /cli/azure#az_configure
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create
+[az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers
+[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az-aks-nodepool-show
aks Confidential Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/confidential-containers-overview.md
Last updated 11/13/2023
# Confidential Containers (preview) with Azure Kubernetes Service (AKS)
-Confidential containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Azure Kubernetes Service (AKS) includes Confidential Containers (preview) on AKS.
+Confidential Containers provide a set of features and capabilities to further secure your standard container workloads to achieve higher data security, data privacy and runtime code integrity goals. Azure Kubernetes Service (AKS) includes Confidential Containers (preview) on AKS.
Confidential Containers builds on Kata Confidential Containers and hardware-based encryption to encrypt container memory. It establishes a new level of data confidentiality by preventing data in memory during computation from being in clear text, readable format. Trust is earned in the container through hardware attestation, allowing access to the encrypted data by trusted entities.
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Last updated 04/10/2023
# Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS)
-Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported with AKS.
+Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads.
This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters.
+## Supported GPU-enabled VMs
+To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported on AKS.
+ > [!NOTE] > GPU-enabled VMs contain specialized hardware subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability].
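+
+For example, the following command is a minimal sketch of adding a GPU-enabled Linux node pool with the recommended VM size to an existing cluster (the resource group, cluster, and node pool names are placeholders):
+
+```azurecli-interactive
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name gpunp \
+    --node-count 1 \
+    --node-vm-size Standard_NC6s_v3
+```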
+## Limitations
+* AKS does not support Windows GPU-enabled node pools.
+* If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
+* The [NVadsA10](https://learn.microsoft.com/azure/virtual-machines/nva10v5-series) v5-series isn't a recommended SKU for the GPU VHD.
+ ## Before you begin * This article assumes you have an existing AKS cluster. If you don't have a cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. * You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-> [!NOTE]
-> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
- ## Get the credentials for your cluster * Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group:
This article helps you provision nodes with schedulable GPUs on new and existing
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-## Add the NVIDIA device plugin
+## Options for using NVIDIA GPUs
-There are two ways to add the NVIDIA device plugin:
+There are three ways to add the NVIDIA device plugin:
1. [Using the AKS GPU image](#update-your-cluster-to-use-the-aks-gpu-image-preview) 2. [Manually installing the NVIDIA device plugin](#manually-install-the-nvidia-device-plugin)
+3. Using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html)
+
+### Use NVIDIA GPU Operator with AKS
+You can use the NVIDIA GPU Operator with AKS by skipping automatic GPU driver installation on AKS. For more information about using the NVIDIA GPU Operator with AKS, see [NVIDIA Documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html).
+
+Adding the node pool tag `SkipGPUDriverInstall=true` skips automatic GPU driver installation on newly created nodes in the node pool. Existing nodes aren't changed; you can scale the pool to zero and back up to make the change take effect. You can specify the tag using the `--nodepool-tags` argument of the [`az aks create`][az-aks-create] command (for a new cluster) or `--tags` with [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update], as shown in the sketch that follows.
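+
+The following command is a minimal sketch of creating a new cluster whose initial node pool skips automatic GPU driver installation so the NVIDIA GPU Operator can manage the drivers instead (the resource names are placeholders):
+
+```azurecli-interactive
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --node-vm-size Standard_NC6s_v3 \
+    --nodepool-tags SkipGPUDriverInstall=true \
+    --generate-ssh-keys
+```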
> [!WARNING] > We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image. ### Update your cluster to use the AKS GPU image (preview)
-> [!NOTE]
-> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
- AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. [!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
To see the GPU in action, you can schedule a GPU-enabled workload with the appro
[nvidia-github]: https://github.com/NVIDIA/k8s-device-plugin <!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
To see the GPU in action, you can schedule a GPU-enabled workload with the appro
[az-feature-show]: /cli/azure/feature#az-feature-show [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update
+[NVadsA10]: /azure/virtual-machines/nva10v5-series
api-management Credentials How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-azure-ad.md
On the **Connection** tab, complete the steps for your connection to the provide
<inbound> <base /> <get-authorization-context provider-id="MicrosoftEntraID-01" authorization-id="first-connection" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
- <set-header name="credential" exists-action="override">
- <value>@("Bearer " + ((credential)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ <set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
</set-header> </inbound> <backend>
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
The call failure may also be caused by an TLS/SSL certificate, which is assigned
| Microsoft Internet Explorer | No | | Mozilla Firefox | Yes<sup>1</sup> |
- <small><sup>1</sup> Supported in the two latest production versions.</small>
+ <sup>1</sup> Supported in the two latest production versions.
## Local development of my self-hosted portal is no longer working
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
The `retry` policy executes its child policies once and then retries their execu
| Attribute | Description | Required | Default | | - | -- | -- | - | | condition | Boolean. Specifies whether retries should be stopped (`false`) or continued (`true`). Policy expressions are allowed. | Yes | N/A |
-| count | A positive number specifying the maximum number of retries to attempt. Policy expressions are allowed. | Yes | N/A |
+| count | A positive number between 1 and 50 specifying the number of retries to attempt. Policy expressions are allowed. | Yes | N/A |
| interval | A positive number in seconds specifying the wait interval between the retry attempts. Policy expressions are allowed. | Yes | N/A | | max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. Policy expressions are allowed. | No | N/A | | delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. Policy expressions are allowed. | No | N/A |
In the following example, sending a request to a URL other than the defined back
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
To configure a policy:
The **ip-filter** policy now appears in the **Inbound processing** section.
+## Get assistance creating policies using Microsoft Copilot for Azure (preview)
++
+[Microsoft Copilot for Azure](../copilot/overview.md) (preview) provides policy authoring capabilities for Azure API Management. Using Copilot for Azure in the context of API Management's policy editor, you can create policies that match your specific requirements without knowing the syntax, or get explanations of policies you've already configured. This is particularly useful for handling complex policies with multiple requirements.
+
+You can prompt Copilot for Azure to generate policy definitions, then copy the results into the policy editor and make any necessary adjustments. Ask questions to gain insights into different options, modify the provided policy, or clarify the policy you already have. [Learn more](../copilot/author-api-management-policies.md) about this capability.
+
+> [!NOTE]
+> Microsoft Copilot for Azure requires [registration](../copilot/limited-access.md#registration-process) (preview) and is currently only available to approved enterprise customers and partners.
+ ## Configure policies at different scopes API Management gives you flexibility to configure policy definitions at multiple [scopes](api-management-howto-policies.md#scopes), in each of the policy sections.
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md
To follow this how-to guide:
In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
-From the left navigation of your app, select **TLS/SSL settings**, then select **Private Key Certificates (.pfx)** or **Public Key Certificates (.cer)**.
+From the left navigation of your app, select **Certificates**, then select **Bring your own certificates (.pfx)** or **Public key certificates (.cer)**.
Find the certificate you want to use and copy the thumbprint.
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To k
When you swap two slots (usually from a staging slot into the production slot), App Service does the following to ensure that the target slot doesn't experience downtime:
-1. Apply the following settings from the target slot (for example, the production slot) to all instances of the source slot:
+1. Apply the following settings from the source slot (for example, the production slot) to all instances of the target slot:
- [Slot-specific](#which-settings-are-swapped) app settings and connection strings, if applicable. - [Continuous deployment](deploy-continuous-deployment.md) settings, if enabled. - [App Service authentication](overview-authentication-authorization.md) settings, if enabled.
- Any of these cases trigger all instances in the source slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings.
+ Any of these cases trigger all instances in the target slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings.
-1. Wait for every instance in the target slot to complete its restart. If any instance fails to restart, the swap operation reverts all changes to the source slot and stops the operation.
+1. Wait for every instance in the source slot to complete its restart. If any instance fails to restart, the swap operation reverts all changes to the source slot and stops the operation.
1. If [local cache](overview-local-cache.md) is enabled, trigger local cache initialization by making an HTTP request to the application root ("/") on each instance of the source slot. Wait until each instance returns any HTTP response. Local cache initialization causes another restart on each instance.
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-private-endpoint.md
Previously updated : 07/15/2020 Last updated : 11/15/2023 #Customer intent: As a developer using Azure App Configuration, I want to understand how to use private endpoints to enable secure communication with my App Configuration instance.
Azure relies upon DNS resolution to route connections from the VNet to the confi
## DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the configuration store is updated to an alias in a subdomain with the prefix `privatelink`. Azure also creates a [private DNS zone](../dns/private-dns-overview.md) corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+When you create a private endpoint, the DNS CNAME resource record for the configuration store is updated to an alias in a subdomain with the prefix `privatelink`. Azure also creates a [private DNS zone](../dns/private-dns-overview.md) corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. Enabling geo-replication creates separate DNS records for each replica with unique IP addresses in the private DNS zone.
When you resolve the endpoint URL from within the VNet hosting the private endpoint, it resolves to the private endpoint of the store. When resolved from outside the VNet, the endpoint URL resolves to the public endpoint. When you create a private endpoint, the public endpoint is disabled.
-If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `[Your-store-name].privatelink.azconfig.io` (or `[Your-store-name]-[replica-name].privatelink.azconfig.io` for a replica if the geo-replication is enabled) with the private endpoint IP address.
-
-> [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the store name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+If you are using a custom DNS server on your network, you need to configure it to delegate your `privatelink` subdomain to the private DNS zone for the VNet. Alternatively, you can configure the A records for your store's private link URLs, which are either `[Your-store-name].privatelink.azconfig.io` or `[Your-store-name]-[replica-name].privatelink.azconfig.io` if geo-replication is enabled, with unique private IP addresses of the private endpoint.
## Pricing
azure-app-configuration Concept Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md
Title: Snapshots in Azure App Configuration (preview)
+ Title: Snapshots in Azure App Configuration
description: Details of Snapshots in Azure App Configuration Previously updated : 05/16/2023 Last updated : 11/15/2023
-# Snapshots (preview)
+# Snapshots
A snapshot is a named, immutable subset of an App Configuration store's key-values. The key-values that make up a snapshot are chosen during creation time through the usage of key and label filters. Once a snapshot is created, the key-values within are guaranteed to remain unchanged.
azure-app-configuration Howto Create Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md
Title: How to manage and use snapshots (preview) in Azure App Configuration
+ Title: How to manage and use snapshots in Azure App Configuration
description: How to manage and use snapshots in an Azure App Configuration store. Previously updated : 09/28/2023 Last updated : 11/15/2023
-# Manage and use snapshots (preview)
+# Manage and use snapshots
In this article, learn how to create, use and manage snapshots in Azure App Configuration. Snapshot is a set of App Configuration settings stored in an immutable state.
In your App Configuration store, go to **Operations** > **Configuration explorer
As a temporary workaround, you can switch to using Access keys authentication from either the Configuration explorer or the Feature manager blades. You should then see the Snapshot blade displayed properly, assuming you have permission for the access keys.
-Under **Operations** > **Snapshots (preview)**, select **Create a new snapshot**.
+Under **Operations** > **Snapshots**, select **Create a new snapshot**.
1. Enter a **snapshot name** and optionally also add **Tags**. 1. Under **Choose the composition type**, keep the default value **Key (default)**.
Under **Operations** > **Snapshots (preview)**, select **Create a new snapshot**
To create sample snapshots and check how the snapshots feature work, use the snapshot sandbox. This sandbox contains sample data you can play with to better understand how snapshot's composition type and filters work.
-1. In **Operations** > **Snapshots (preview)** > **Active snapshots**, select **Test in sandbox**.
+1. In **Operations** > **Snapshots** > **Active snapshots**, select **Test in sandbox**.
1. Review the sample data and practice creating snapshots by filling out the form with a composition type and one or more filters. 1. Select **Create** to generate the sample snapshot. 1. Check out the snapshot result generated under **Generated sample snapshot**. The sample snapshot displays all keys that are included in the sample snapshot, according to your selection.
spring:
## Manage active snapshots
-The page under **Operations** > **Snapshots (preview)** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store.
+The page under **Operations** > **Snapshots** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store.
:::image type="content" source="./media/howto-create-snapshots/snapshots-view-list.png" alt-text="Screenshot of the list of active snapshots.":::
In the **Active snapshots** tab, select the ellipsis **...** on the right of an
## Manage archived snapshots
-Go to **Operations** > **Snapshots (preview)** > **Archived snapshots** to view the list of all archived snapshots in an App Configuration store. Archived snapshots remain accessible for the retention period that was selected during their creation.
+Go to **Operations** > **Snapshots** > **Archived snapshots** to view the list of all archived snapshots in an App Configuration store. Archived snapshots remain accessible for the retention period that was selected during their creation.
:::image type="content" source="./media/howto-create-snapshots/archived-snapshots.png" alt-text="Screenshot of the list of archived snapshots.":::
Detailed view of snapshot is available in the archive state as well. In the **Ar
### Recover an archived snapshot
-In the **Archived snapshots** tab, select the ellipsis **...** on the right of an archived snapshot and select **Recover** to recover a snapshot. Confirm App Configuration snapshot recovery by selecting **Yes** or cancel with **No**. Once a snapshot has been recovered, a notification appears to confirm the operation and the list of archived snapshots is updated.
+In the **Archived snapshots** tab, select the ellipsis **...** on the right of an archived snapshot and select **Recover** to recover a snapshot. Once a snapshot has been recovered, a notification appears to confirm the operation and the list of archived snapshots is updated.
:::image type="content" source="./media/howto-create-snapshots/recover-snapshots.png" alt-text="Screenshot of the recover option in the archived snapshots.":::
azure-app-configuration Rest Api Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-snapshot.md
Last updated 03/21/2023
-# Snapshots
+# Snapshot
A snapshot is a resource identified uniquely by its name. See details for each operation.
Use the optional `$select` query string parameter and provide a comma-separated
```http GET /kv?snapshot={name}&$select=key,value&api-version={api-version} HTTP/1.1
-```
+```
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Installing the Connected Machine agent for Window applies the following system-w
| Service name | Display name | Process name | Description | |--|--|--|-|
- | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens |
- | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
- | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
+ | himds | Azure Hybrid Instance Metadata Service | `himds.exe` | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens |
+ | GCArcService | Guest configuration Arc Service | `gc_arc_service.exe` (gc_service.exe prior to version 1.36) | Audits and enforces Azure guest configuration policies on the machine. |
+ | ExtensionService | Guest configuration Extension Service | `gc_extension_service.exe` (gc_service.exe prior to version 1.36) | Installs, updates, and manages extensions on the machine. |
* Agent installation creates the following virtual service account.
Installing the Connected Machine agent for Linux applies the following system-wi
The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: * The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
+* The Extension Service agent can use up to 5% of the CPU on Windows machines and 30% of the CPU on Linux machines to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
| Extension type | Operating system | CPU limit | | -- | - | | | AzureMonitorLinuxAgent | Linux | 60% | | AzureMonitorWindowsAgent | Windows | 100% |
- | AzureSecurityLinuxAgent | Linux | 30% |
| LinuxOsUpdateExtension | Linux | 60% | | MDE.Linux | Linux | 60% | | MicrosoftDnsAgent | Windows | 100% |
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.32 - July 2023
+
+Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- Added support for the Debian 12 operating system
+- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired."
+
+### Fixed
+
+- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure.
+- Improved local logging when there are network communication errors
+ ## Version 1.31 - June 2023 Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md).
+## Version 1.36 - November 2023
+
+Download for [Windows](https://download.microsoft.com/download/5/e/9/5e9081ed-2ee2-4b3a-afca-a8d81425bcce/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- [azcmagent show](azcmagent-show.md) now reports extended security license status on Windows Server 2012 server machines.
+- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the Azure Arc-enabled SQL Server endpoints. This will enable you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for Azure Arc-enabled SQL Server.
+- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase will help improve reliability of extension install, upgrade and uninstall operations.
+- Older extension manager and machine configuration agent logs are automatically zipped to reduce disk space requirements.
+- New executable names for the extension manager (`gc_extension_service`) and machine configuration (`gc_arc_service`) agents on Windows to help you distinguish the two services. For more information, see [Windows agent installation details](./agent-overview.md#windows-agent-installation-details).
+
+### Bug fixes
+
+- [azcmagent connect](azcmagent-connect.md) now uses the latest API version when creating the Azure Arc-enabled server resource to ensure Azure policies targeting new properties can take effect.
+- Upgraded the OpenSSL library and PowerShell runtime shipped with the agent to include the latest security fixes.
+- Fixed an issue that could prevent the agent from reporting the correct product type on Windows machines.
+- Improved handling of upgrades when the previously installed extension version was not in a successful state.
+ ## Version 1.35 - October 2023 Download for [Windows](https://download.microsoft.com/download/e/7/0/e70b1753-646e-4aea-bac4-40187b5128b0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
This endpoint will be removed from `azcmagent check` in a future release.
- You can now set the [agent mode](security-overview.md#agent-modes) before connecting the agent to Azure. - The agent now responds to instance metadata service (IMDS) requests even when the connection to Azure is temporarily unavailable.
-## Version 1.32 - July 2023
-
-Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
-
-### New features
--- Added support for the Debian 12 operating system-- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired."-
-### Fixed
--- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure.-- Improved local logging when there are network communication errors- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
If you choose to license based on physical cores, the licensing requires a minim
If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per Virtual Machine. There are two main scenarios where this model is advisable:
-1. If the VM is running on a third-party host or hyper scaler like AWS, GCP, or OCI.
+1. If the VM is running on a third-party host or cloud service provider like AWS, GCP, or OCI.
-1. The Windows Server was licensed on a virtualization basis. In most cases, customers elect the Standard edition for virtual core-based licenses.
+1. The Windows Server operating system was licensed on a virtualization basis.
An additional scenario (scenario 1, below) is a candidate for VM/Virtual core licensing when the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later).
+> [!IMPORTANT]
+> Virtual core licensing can't be used on physical servers. When creating a license with virtual cores, always select the standard edition instead of datacenter, even if the operating system is datacenter edition.
+ ### License limits Each WS2012 ESU license can cover up to and including 10,000 cores. If you need ESUs for more than 10,000 cores, split the total number of cores across multiple licenses.
As servers no longer require ESUs because they've been migrated to Azure, Azure
> [!NOTE] > This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings.
->
## Scenario based examples: Compliant and Cost Effective Licensing
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Proxy bypass value when set to `ArcData` only bypasses the traffic of the Azure
| `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com`</br> `san-af-<location>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com`| | `ArcData` <sup>1</sup> | `san-af-<region>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com` |
-<sup>1</sup> To use proxy bypass value `ArcData`, you need a supported Azure Connected Machine agent and a supported Azure Extension for SQL Server version. Releases are supported beginning November, 2023. To see the latest release, check the release notes:
- - [Azure Connected Machine Agent](./agent-release-notes.md)
- - [Azure extension for SQL Server](/sql/sql-server/azure-arc/release-notes?view=sql-server-ver16&preserve-view=true)
-
- Later versions are also supported.
+<sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the Azure Arc-enabled SQL Server endpoints in the "Arc" proxy bypass value.
To send Microsoft Entra ID and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command:
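
A minimal sketch of that command, assuming the proxy URL is already configured with `azcmagent config set proxy.url`, looks like the following:

```bash
azcmagent config set proxy.bypass "Arc"
```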
azure-boost Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-boost/overview.md
Boost systems embrace multiple layers of defense-in-depth, including ubiquitous
Azure Boost uses Security Enhanced Linux (SELinux) to enforce principle of least privileges for all software running on its system on chip. All control plane and data plane software running on top of the Boost OS is restricted to running only with the minimum set of privileges required to operate ΓÇô the operating system restricts any attempt by Boost software to act in an unexpected manner. Boost OS properties make it difficult to compromise code, data, or the availability of Boost and Azure hosting Infrastructure. - **Rust memory safety:**
-RUST serves as the primary language for all new code written on the Boost system, to provide memory safety without impacting performance. Control and data plane operations are isolated with memory safety improvements that enhance AzureΓÇÖs ability to keep tenants safe.
+Rust serves as the primary language for all new code written on the Boost system, to provide memory safety without impacting performance. Control and data plane operations are isolated with memory safety improvements that enhance AzureΓÇÖs ability to keep tenants safe.
- **FIPS certification:** Boost employs a FIPS 140 certified system kernel, providing reliable and robust security validation of cryptographic modules.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Before you begin, you must have the following requirements in place:
+ The [Azurite storage emulator](../storage/common/storage-use-azurite.md?tabs=npm#install-azurite). While you can also use an actual Azure Storage account, the article assumes you're using this emulator. ::: zone-end - [!INCLUDE [functions-install-core-tools](../../includes/functions-install-core-tools.md)] ## <a name="create-venv"></a>Create and activate a virtual environment
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Before you begin, make sure that you have the following requirements in place:
+ The [Azurite V3 extension](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite) local storage emulator. While you can also use an actual Azure storage account, this article assumes you're using the Azurite emulator. ::: zone-end - [!INCLUDE [functions-install-core-tools-vs-code](../../includes/functions-install-core-tools-vs-code.md)] ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Container Apps to host containerized function apps in Azure Functions. Previously updated : 07/30/2023 Last updated : 11/15/2023 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers by using Azure Container Apps.
Keep in mind the following considerations when deploying your function app conta
+ Azure Event Hubs + Kafka* \*The protocol value of `ssl` isn't supported when hosted on Container Apps. Use a [different protocol value](functions-bindings-kafka-trigger.md?pivots=programming-language-csharp#attributes).
-+ Dapr is currently enabled by default in the preview release. In a later release, Dapr loading should be configurable.
+ For the built-in Container Apps [policy definitions](../container-apps/policy-reference.md#policy-definitions), currently only environment-level policies apply to Azure Functions containers. + When using Container Apps, you don't have direct access to the lower-level Kubernetes APIs.
-+ Use of user-assigned managed identities is currently supported, and is preferred for accessing Azure Container Registry. For more information, see [Add a user-assigned identity](../app-service/overview-managed-identity.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#add-a-user-assigned-identity).
+ The `containerapp` extension conflicts with the `appservice-kube` extension in Azure CLI. If you have previously published apps to Azure Arc, run `az extension list` and make sure that `appservice-kube` isn't installed. If it is, you can remove it by running `az extension remove -n appservice-kube`.
-+ To invoke DAPR APIs or to run the [Functions Dapr extension](https://github.com/Azure/azure-functions-dapr-extension), make sure the minimum replica count is set to at least `1`. This enables the DAPR sidecar to run in the background to handle DAPR requests. The Functions Dapr extension is also in preview, with help provided [in the repository](https://github.com/Azure/azure-functions-dapr-extension/issues).
++ The Functions Dapr extension is also in preview, with help provided [in the repository](https://github.com/Azure/azure-functions-dapr-extension/issues). ## Next steps
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 09/01/2023 Last updated : 11/14/2023 zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
These prerequisites are only required to [run and debug your functions locally](
+ [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. ::: zone-end ## Create an Azure Functions project
You should monitor the execution of your functions by integrating your function
To learn more about monitoring using Application Insights, see [Monitor Azure Functions](functions-monitoring.md). - ### Enable emulation in Visual Studio Code Now that you've configured the Terminal with Rosetta to run x86 emulation for Python development, you can use the following steps to integrate this terminal emulation with Visual Studio Code:
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions description: Understand how to develop functions with Python Previously updated : 05/25/2023 Last updated : 11/14/2023 ms.devlang: python zone_pivot_groups: python-mode-functions
Python v1 programming model:
You can also create Python v1 functions in the Azure portal.
-The following considerations apply for local Python development:
-
-+ Although you can develop your Python-based Azure functions locally on Windows, Python is supported only on a Linux-based hosting plan when it's running in Azure. For more information, see the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
-
-+ Functions doesn't currently support local Python function development on ARM64 devices, including on a Mac with an M1 chip. To learn more, see [x86 emulation on ARM64](functions-run-local.md#x86-emulation-on-arm64).
+> [!TIP]
+> Although you can develop your Python-based Azure functions locally on Windows, Python is supported only on a Linux-based hosting plan when it's running in Azure. For more information, see the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
## Programming model
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Title: Develop Azure Functions locally using Core Tools
description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you deploy them to run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 08/24/2023 Last updated : 11/14/2023 zone_pivot_groups: programming-languages-set-functions
In the terminal window or from a command prompt, run the following command to cr
func init MyProjFolder --worker-runtime dotnet-isolated ```
-By default this command creates a project that runs in-process with the Functons host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
+By default this command creates a project that runs in-process with the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
### [In-process](#tab/in-process)
The following considerations apply to Core Tools installations:
+ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today. ::: zone-end - When using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). ## Next steps
Learn how to [develop, test, and publish Azure functions by using Azure Function
[func azure functionapp publish]: functions-core-tools-reference.md?tabs=v2#func-azure-functionapp-publish
-[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle
+[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
var client = new MapsSearchClient(credential, clientId);
You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key:
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
public class Demo {
You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key: 
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
const client = MapsSearch(credential, process.env.MAPS_CLIENT_ID);
You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code.
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
maps_search_client = MapsSearchClient(
You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot: Now you can create environment variables in PowerShell to store the subscription key:
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md
To view your Azure Maps authentication details:
3. Select **Authentication** in the settings section of the left pane.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
+ :::image type="content" border="false" source="./media/shared/get-key.png" alt-text="Screenshot showing your Azure Maps subscription key in the Azure portal." lightbox="./media/shared/get-key.png":::
## Choose an authentication category
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md
To create a new application registration:
4. Select the **+ New registration** tab.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/app-registration.png" alt-text="View app registrations.":::
+ :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID.":::
5. Enter a **Name**, and then select a **Support account type**.
azure-maps How To Secure Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md
This guide discusses how to secure public applications or devices that can't sec
Create the device based application in Microsoft Entra ID to enable Microsoft Entra sign-in, which is granted access to Azure Maps REST APIs. 1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**. -
- :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID":::
+
+ :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID.":::
2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more information, see Microsoft Entra ID [Desktop app that calls web APIs: App registration]. Then **Register** the application.
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Create the web application in Microsoft Entra ID for users to sign in. The web a
1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**.
- :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="Screenshot showing the new registration page in the App registrations blade in Microsoft Entra ID.":::
+ :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID.":::
2. Enter a **Name**, choose a **Support account type**, provide a redirect URI that represents the url which Microsoft Entra ID issues the token and is the url where the map control is hosted. For a detailed sample, see [Azure Maps Microsoft Entra ID samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**.
azure-maps How To Secure Webapp Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md
You must create the web application in Microsoft Entra ID for users to sign in.
1. In the Azure portal, in the list of Azure services, select **Microsoft Entra ID** > **App registrations** > **New registration**.
- :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing App registration." lightbox="./media/how-to-manage-authentication/app-registration.png":::
+ :::image type="content" border="false" source="./media/how-to-manage-authentication/app-registration.png" lightbox="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Microsoft Entra ID.":::
2. Enter a **Name**, choose a **Support account type**, provide a redirect URI that represents the url to which Microsoft Entra ID issues the token, which is the url where the map control is hosted. For more information, see Microsoft Entra ID [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Microsoft Entra scenario.
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Once your Azure Maps account is successfully created, retrieve the subscription
>[!NOTE] > For security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing]. ## Create a project in Android Studio
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Once your Azure Maps account is successfully created, retrieve the subscription
2. In the settings section, select **Authentication**. 3. Copy the **Primary Key** and save it locally to use later in this tutorial. >[!NOTE] > This quickstart uses the [Shared Key] authentication approach for demonstration purposes, but the preferred approach for any production environment is to use [Microsoft Entra ID] authentication.
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
Once your Maps account is successfully created, retrieve the primary key that en
<!-- > If you use the Azure subscription key instead of the Azure Maps primary key, your map won't render properly. Also, for security purposes, it is recommended that you rotate between your primary and secondary keys. To rotate keys, update your app to use the secondary key, deploy, then press the cycle/refresh button beside the primary key to generate a new primary key. The old primary key will be disabled. For more information on key rotation, see [Set up Azure Key Vault with key rotation and auditing](../key-vault/secrets/tutorial-rotation-dual.md) -->
-![Get the subscription key.](./media/ios-sdk/quick-ios-app/get-key.png)
## Create a project in Xcode
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
To complete this procedure, you need:
For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment). -- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that runs IIS. - An IIS log file in W3C format must be stored on the local drive of the machine on which Azure Monitor Agent is running. - Each entry in the log file must be delineated with an end of line.
To create the data collection rule in the Azure portal:
:::image type="content" source="media/data-collection-iis/iis-data-collection-rule.png" lightbox="media/data-collection-iis/iis-data-collection-rule.png" alt-text="Screenshot that shows the Azure portal form to select basic performance counters in a data collection rule."::: 1. Specify a file pattern to identify the directory where the log files are located.
-1. On the **Destination** tab, add a destinations for the data source.
+1. On the **Destination** tab, add a destination for the data source.
<!-- convertborder later --> :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the Azure portal form to add a data source in a data collection rule." border="false":::
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
This article describes how to collect events and performance counters from virtu
To complete this procedure, you need: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- Associate the data collection rule to specific virtual machines. ## Create a data collection rule
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
You need:
- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - A [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
## Syslog record properties
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment). -- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
-- A Virtual Machine, Virtual Machine Scale Set, Arc-enabled server on-premises or Azure Monitoring Agent on a Windows on-premise client that writes logs to a text or JSON file.
+- A virtual machine, Virtual Machine Scale Set, Arc-enabled on-premises server, or Azure Monitor Agent on a Windows on-premises client that writes logs to a text or JSON file.
Text and JSON file requirements and best practices: - Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored.
To create the data collection rule in the Azure portal:
- `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents. - `transformKql`: Specifies a [transformation](../essentials/data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace.
- See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the data collection rule.
+ See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md) if you want to modify the data collection rule.
> [!IMPORTANT] > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace.
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
This article includes sample [Azure Resource Manager templates](../../azure-reso
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
-## Permissions required
-
-| Built-in Role | Scope(s) | Reason |
-|:|:|:|
-| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To create or edit data collection rules |
-| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy associations (i.e. to assign rules to the machine) |
-| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To deploy ARM templates |
## Create rule (sample)
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
Title: Creating Metric Alerts for Logs in Azure Monitor description: Tutorial on creating near-real time metric alerts on popular log analytics data. Previously updated : 7/24/2022 Last updated : 11/16/2023
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Title: Frequently asked questions about Azure Monitor metric alerts
description: Common issues with Azure Monitor metric alerts and possible solutions. Previously updated : 8/31/2022 Last updated : 11/16/2023 ms.reviewer: harelbr # Troubleshoot Azure Monitor metric alerts
azure-monitor Azure Cli Metrics Alert Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/azure-cli-metrics-alert-sample.md
Title: Create metric alert monitors in Azure CLI description: Learn how to create metric alerts in Azure Monitor with Azure CLI commands. These samples create alerts for a virtual machine and an App Service Plan. Previously updated : 04/05/2022 Last updated : 11/16/2023
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
Alternatively, you can change the configuration by using Azure Resource Manager
These diagnostic tools help you inspect the telemetry from your app: * [Metric explorer](../essentials/metrics-charts.md)
-* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Search explorer](../app/transaction-search-and-diagnostics.md?tabs=transaction-search)
* [Analytics: Powerful query language](../logs/log-analytics-tutorial.md) Smart detection is automatic, but if you want to set up more alerts, see:
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
Notice that if you delete an Application Insights resource, the associated Failu
An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment.
-To investigate further, click on 'View full details in Application Insights' the links in this page take you straight to a [search page](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exception, dependency, or traces.
+To investigate further, select 'View full details in Application Insights'. The links on this page take you straight to a [search page](../app/transaction-search-and-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exceptions, dependencies, or traces.
You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page.
Smart Detection of Failure Anomalies complements other similar but distinct feat
These diagnostic tools help you inspect the data from your app: * [Metric explorer](../essentials/metrics-charts.md)
-* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Search explorer](../app/transaction-search-and-diagnostics.md?tabs=transaction-search)
* [Analytics - powerful query language](../logs/log-analytics-tutorial.md) Smart detections are automatic. But maybe you'd like to set up some more alerts?
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
In Node.js projects, you can use `new applicationInsights.TelemetryClient(instru
## TrackEvent
-In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) as individual occurrences. (It isn't related to MVC or other framework "events.")
+In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) as individual occurrences. (It isn't related to MVC or other framework "events.")
Insert `TrackEvent` calls in your code to count various events. For example, you might want to track how often users choose a particular feature. Or you might want to know how often they achieve certain goals or make specific types of mistakes.
The recommended way to send request telemetry is where the request acts as an <a
## Operation context
-You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID.
+You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID.
For more information on correlation, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md).
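To make the grouping concrete, here's a minimal, hedged C# sketch (not taken from the article) that uses the .NET SDK's `StartOperation` helper; the operation name and the telemetry calls inside it are illustrative.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class OrderProcessor
{
    private readonly TelemetryClient _telemetry;

    public OrderProcessor(TelemetryClient telemetry) => _telemetry = telemetry;

    public void ProcessOrder(string orderId)
    {
        // StartOperation creates a RequestTelemetry item and sets the operation ID on
        // the ambient context, so everything tracked inside the block shares that ID.
        using (var operation = _telemetry.StartOperation<RequestTelemetry>("ProcessOrder"))
        {
            _telemetry.TrackTrace($"Processing order {orderId}");
            _telemetry.TrackEvent("OrderProcessed");
            // Disposing the operation sends the request telemetry with its duration.
        }
    }
}
```

In Search, filtering on that operation ID then returns the request together with the trace and event it produced.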
requests
Send exceptions to Application Insights: * To [count them](../essentials/metrics-charts.md), as an indication of the frequency of a problem.
-* To [examine individual occurrences](./search-and-transaction-diagnostics.md?tabs=transaction-search).
+* To [examine individual occurrences](./transaction-search-and-diagnostics.md?tabs=transaction-search).
The reports include the stack traces.
exceptions
## TrackTrace
-Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search).
+Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search).
In .NET [Log adapters](./asp-net-trace-logs.md), use this API to send third-party logs to the portal.
properties.put("Database", db.ID);
telemetry.trackTrace("Slow Database response", SeverityLevel.Warning, properties); ```
-In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search), you can then easily filter out all the messages of a particular severity level that relate to a particular database.
+In [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search), you can then easily filter out all the messages of a particular severity level that relate to a particular database.
### Traces in Log Analytics
appInsights.setAuthenticatedUserContext(validatedId, accountId);
In [Metrics Explorer](../essentials/metrics-charts.md), you can create a chart that counts **Users, Authenticated**, and **User accounts**.
-You can also [search](./search-and-transaction-diagnostics.md?tabs=transaction-search) for client data points with specific user names and accounts.
+You can also [search](./transaction-search-and-diagnostics.md?tabs=transaction-search) for client data points with specific user names and accounts.
> [!NOTE] > The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the user name as the Auth ID for each trace sent by the Application Insights JavaScript SDK.
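As an illustration of that option, here's a hedged ASP.NET Core sketch. The helper class is hypothetical; `ApplicationInsightsServiceOptions` and `AddApplicationInsightsTelemetry` come from the Microsoft.ApplicationInsights.AspNetCore package.

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public static class TelemetrySetup
{
    public static void AddAppInsights(IServiceCollection services)
    {
        var options = new ApplicationInsightsServiceOptions
        {
            // Injects the authenticated user name as the Auth ID for the JavaScript SDK.
            EnableAuthenticationTrackingJavaScript = true
        };

        services.AddApplicationInsightsTelemetry(options);
    }
}
```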
Azure alerts are only on metrics. Create a custom metric that crosses a value th
## <a name="next"></a>Next steps
-* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search)
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
What's the difference between telemetry processors and telemetry initializers?
* [JavaScript SDK](https://github.com/Microsoft/ApplicationInsights-JS) ## <a name="next"></a>Next steps
-* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search)
* [sampling](./sampling.md)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Application Insights provides many experiences to enhance the performance, relia
- [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance. - [Application map](app-map.md): A visual overview of application architecture and components' interactions. - [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.-- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.
+- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.
- [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints. - Performance view: Review application performance metrics and potential bottlenecks. - Failures view: Identify and analyze failures in your application to minimize downtime.
Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we
- [Application dashboard](overview-dashboard.md) - [Application Map](app-map.md) - [Live metrics](live-stream.md)-- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search)
+- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search)
- [Availability overview](availability-overview.md) - [Users, sessions, and events](usage-segmentation.md)
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
To provide feedback, use the feedback option.
## Next steps * To learn more about how correlation works in Application Insights, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md).
-* The [end-to-end transaction diagnostic experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) correlates server-side telemetry from across all your Application Insights-monitored components into a single view.
+* The [end-to-end transaction diagnostic experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) correlates server-side telemetry from across all your Application Insights-monitored components into a single view.
* For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md).
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
View your telemetry: - [Explore metrics](../essentials/metrics-charts.md) to monitor performance and usage.-- [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) to diagnose problems.
+- [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) to diagnose problems.
- [Use Log Analytics](../logs/log-query-overview.md) for more advanced queries. - [Create dashboards](./overview-dashboard.md).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
In the preceding cases, the proper way of validating that the instrumentation en
## Where to find dependency data * [Application Map](app-map.md) visualizes dependencies between your app and neighboring components.
-* [Transaction Diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) shows unified, correlated server data.
+* [Transaction Diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) shows unified, correlated server data.
* [Browsers tab](javascript.md) shows AJAX calls from your users' browsers. * Select from slow or failed requests to check their dependency calls. * [Analytics](#logs-analytics) can be used to query dependency data.
Like every Application Insights SDK, the dependency collection module is also op
## Dependency auto-collection
-Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
+Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
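As a hedged sketch of manual tracking (assuming the .NET `TelemetryClient`; the service name and command text are hypothetical), a call that isn't auto-collected can be reported like this:

```csharp
using System;
using Microsoft.ApplicationInsights;

public class LegacyServiceClient
{
    private readonly TelemetryClient _telemetry;

    public LegacyServiceClient(TelemetryClient telemetry) => _telemetry = telemetry;

    public void CallLegacyService()
    {
        var startTime = DateTimeOffset.UtcNow;
        var success = false;
        try
        {
            // ... call the external system that isn't auto-collected ...
            success = true;
        }
        finally
        {
            // The type, name, and command text appear in Application Map and the
            // transaction diagnostics views like any auto-collected dependency.
            _telemetry.TrackDependency(
                "HTTP",                             // dependency type
                "LegacyInventoryService",           // dependency name
                "GET /inventory/items",             // data / command
                startTime,
                DateTimeOffset.UtcNow - startTime,  // duration
                success);
        }
    }
}
```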
### .NET
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
To get diagnostic data specific to your app, you can insert code to send your ow
Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available:
-* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./search-and-transaction-diagnostics.md?tabs=transaction-search).
+* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./transaction-search-and-diagnostics.md?tabs=transaction-search).
* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information. * <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces to Application Insights.
-To see these events, on the left menu, open [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
+To see these events, on the left menu, open [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
:::image type="content" source="./media/asp-net-exceptions/customevents.png" lightbox="./media/asp-net-exceptions/customevents.png" alt-text="Screenshot that shows the Search screen.":::
Catch ex as Exception
End Try ```
-The properties and measurements parameters are optional, but they're useful for [filtering and adding](./search-and-transaction-diagnostics.md?tabs=transaction-search) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary.
+The properties and measurements parameters are optional, but they're useful for [filtering and adding](./transaction-search-and-diagnostics.md?tabs=transaction-search) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary.
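A hedged C# equivalent of that pattern (the property and metric names are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class GameTelemetry
{
    private readonly TelemetryClient _telemetry;

    public GameTelemetry(TelemetryClient telemetry) => _telemetry = telemetry;

    public void ReportGameError(Exception ex, string gameName, double score)
    {
        var properties = new Dictionary<string, string> { { "Game", gameName } };
        var measurements = new Dictionary<string, double> { { "Score", score } };

        // Both dictionaries are optional; they become filterable fields in the portal.
        _telemetry.TrackException(ex, properties, measurements);
    }
}
```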
## Browser exceptions
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Perhaps your application sends voluminous amounts of data and you're using the A
## <a name="add"></a>Next steps * [Diagnose failures and exceptions in ASP.NET](asp-net-exceptions.md)
-* [Learn more about Transaction Search](search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Learn more about Transaction Search](transaction-search-and-diagnostics.md?tabs=transaction-search)
* [Set up availability and responsiveness tests](availability-overview.md) <!--Link references--> [availability]: ./availability-overview.md
-[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search
+[diagnostic]: ./transaction-search-and-diagnostics.md?tabs=transaction-search
[exceptions]: asp-net-exceptions.md [start]: ./app-insights-overview.md
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
To create a new file, right-click under your timer trigger function (for example
* [Standard tests](availability-standard-tests.md) * [Availability alerts](availability-alerts.md) * [Application Map](./app-map.md)
-* [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics)
+* [Transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics)
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
From an availability test result, you can see the transaction details across all
* Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event. * Open the web test result in Visual Studio.
-To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics).
+To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics).
Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics.
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Configure a [snapshot collection for ASP.NET applications](snapshot-debugger-vm.
[api]: ./api-custom-events-metrics.md [client]: ./javascript.md
-[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search
+[diagnostic]: ./transaction-search-and-diagnostics.md?tabs=transaction-search
[exceptions]: ./asp-net-exceptions.md [netlogs]: ./asp-net-trace-logs.md [new]: ./create-workspace-resource.md
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
You need the connection strings of all the resources to which your app will send
### Filter on the build number When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds.
-You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
+You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/transaction-search-and-diagnostics.md?tabs=transaction-search) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results.
There are several different methods of setting the **Application Version** property.
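One such method, sketched here under the assumption that you're using the .NET SDK, is a telemetry initializer that stamps every item with a version string; the version value shown is illustrative.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class VersionTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Only set the version if nothing else (for example, build info) already did.
        if (string.IsNullOrEmpty(telemetry.Context.Component.Version))
        {
            telemetry.Context.Component.Version = "1.2.3"; // for example, from build metadata
        }
    }
}
```

Register the initializer with your telemetry configuration (for example, through dependency injection in ASP.NET Core) so it runs for every telemetry item.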
To track the application version, make sure your Microsoft Build Engine process
</PropertyGroup> ```
-When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
+When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/transaction-search-and-diagnostics.md?tabs=transaction-search) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md).
The build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio.
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
When you instrument message deletion, make sure you set the operation (correlati
### Dependency types
-Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics):
+Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics):
- `Azure queue` for Azure Storage queues - `Azure Event Hubs` for Azure Event Hubs
Each Application Insights operation (request or dependency) involves `Activity`.
## Next steps - Learn the basics of [telemetry correlation](distributed-tracing-telemetry-correlation.md) in Application Insights.-- Check out how correlated data powers [transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) and [Application Map](./app-map.md).
+- Check out how correlated data powers [transaction diagnostics experience](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) and [Application Map](./app-map.md).
- See the [data model](./data-model-complete.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. - Check out standard [configuration](configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet) for context properties collection.
azure-monitor Distributed Trace Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-trace-data.md
Modern cloud and [microservices](https://azure.com/microservices) architectures have enabled simple, independently deployable services that reduce costs while increasing availability and throughput. However, this shift has made overall systems more difficult to reason about and debug. Distributed tracing solves this problem by providing a performance profiler that works like call stacks for cloud and microservices architectures.
-Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact.
+Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact.
[Application Insights](app-insights-overview.md#application-insights-overview) can monitor each component separately and detect which component is responsible for failures or performance degradation by using distributed telemetry correlation. This article explains the data model, context-propagation techniques, protocols, and implementation of correlation tactics on different languages and platforms used by Application Insights.
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
export const clickPluginConfigWithUseDefaultContentNameOrId = {
<div className="test1" data-id="test1parent"> <div>Test1</div>
- <div><small>with id, data-id, parent data-id defined</small></div>
+ <div>with id, data-id, parent data-id defined</div>
<Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button> </div> ```
export const clickPluginConfigWithParentDataTag = {
<div className="test2" data-group="buttongroup1" data-id="test2parent"> <div>Test2</div>
- <div><small>with data-id, parentid, parent data-id defined</small></div>
+ <div>with data-id, parentid, parent data-id defined</div>
<Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button> </div> ```
export const clickPluginConfigWithParentDataTag = {
<div className="test6" data-group="buttongroup1" data-id="test6grandparent"> <div>Test6</div>
- <div><small>with data-id, grandparent data-group defined, parent data-id defined</small></div>
+ <div>with data-id, grandparent data-group defined, parent data-id defined</div>
<div data-id="test6parent"> <Button data-id="test6id" variant="info" onClick={trackEvent}>Test6</Button> </div>
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
If you open the Live Metrics pane, the SDKs switch to a higher frequency mode an
## Next steps * [Monitor usage with Application Insights](./usage-overview.md)
-* [Use Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search)
+* [Use Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search)
* [Profiler](./profiler.md) * [Snapshot Debugger](./snapshot-debugger.md)
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Because the SDK batches data for submission, there might be a delay before items
* Continue to use the application. Take more actions to generate more telemetry. * Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately. * Verify that [required outgoing ports](./ip-addresses.md) are open.
-* Use [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) to look for specific events.
+* Use [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) to look for specific events.
* Check the [FAQ][FAQ]. ## Basic usage
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
A direct exporter sends telemetry in-process (from the application's code) direc
*The currently available Application Insights SDKs and Azure Monitor OpenTelemetry Distros rely on a direct exporter*.
-Alternatively, sending application telemetry via an agent like OpenTelemetry-Collector can have some benefits including sampling, post-processing, and more. Azure Monitor is developing an agent and ingestion endpoint that supports [Open Telemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md), providing a path for any OpenTelemetry-supported programming language beyond our [supported languages](platforms.md) to use to Azure Monitor.
- > [!NOTE] > For Azure Monitor's position on the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md), see the [OpenTelemetry FAQ](./opentelemetry-enable.md#can-i-use-the-opentelemetry-collector).
Alternatively, sending application telemetry via an agent like OpenTelemetry-Col
## OpenTelemetry
-Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
+Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF.
azure-monitor Release And Work Item Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md
To delete, go to in your Application Insights resource under *Configure* select
## See also * [Azure Pipelines documentation](/azure/devops/pipelines)
-* [Create work items](./search-and-transaction-diagnostics.md?tabs=transaction-search#create-work-item)
+* [Create work items](./transaction-search-and-diagnostics.md?tabs=transaction-search#create-work-item)
* [Automation with PowerShell](./powershell.md) * [Availability test](availability-overview.md)
azure-monitor Sampling Classic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling-classic-api.md
Ingestion sampling doesn't work alongside adaptive or fixed-rate sampling. Adapt
**Use fixed-rate sampling if:**
-* You need synchronized sampling between client and server to navigate between related events. For example, page views and HTTP requests in [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) while investigating events.
+* You need synchronized sampling between client and server to navigate between related events. For example, page views and HTTP requests in [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) while investigating events.
* You're confident of the appropriate sampling percentage for your app. It should be high enough to get accurate metrics, but below the rate that exceeds your pricing quota and the throttling limits. **Use adaptive sampling:**
azure-monitor Transaction Search And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-search-and-diagnostics.md
+
+ Title: Transaction Search and Diagnostics
+description: This article explains Application Insights end-to-end transaction diagnostics and how to search and filter raw telemetry sent by your web app.
+ Last updated : 11/16/2023+++
+# Transaction Search and Diagnostics
+
+Azure Monitor Application Insights offers Transaction Search for pinpointing specific telemetry items and Transaction Diagnostics for comprehensive end-to-end transaction analysis.
+
+**Transaction Search**: This experience enables you to locate and examine individual telemetry items such as page views, exceptions, and web requests. You can also view log traces and events coded into the application. Use it to identify performance issues and errors within the application.
+
+**Transaction Diagnostics**: Quickly identify issues in components through comprehensive insight into end-to-end transaction details, including dependencies and exceptions. Access this feature via the Search interface by choosing an item from the search results.
+
+## [Transaction Search](#tab/transaction-search)
+
+Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you code.
+
+For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md).
+
+## Where do you see Search?
+
+You can find **Search** in the Azure portal or Visual Studio.
+
+### In the Azure portal
+
+You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu.
++
+Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events you coded. The top of the **Results** list has a summary chart showing counts of events over time.
+
+Back out of the dropdown menu or select **Refresh** to get new events.
+
+### In Visual Studio
+
+In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal.
+
+Open the **Application Insights Search** window in Visual Studio:
++
+The **Application Insights Search** window has features similar to the web portal:
++
+The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view.
+
+## Inspect individual items
+
+Select any telemetry item to see key fields and related items.
++
+The end-to-end transaction details view opens.
+
+## Filter event types
+
+Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**.
+
+The event types are:
+
+* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostics.Trace calls.
+* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts.
+* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports.
+* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here.
+* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`.
+* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md).
+* **Availability**: Results of [availability tests](availability-overview.md)
+
+## Filter on property values
+
+You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/search-and-transaction-diagnostics/filter-icon.png" lightbox="./media/search-and-transaction-diagnostics/filter-icon.png" alt-text="Filter icon"::: to start.
+
+Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property.
+
+Notice that the counts to the right of the filter values show how many occurrences there are in the current filtered set.
+
+## Find events with the same property
+
+To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab.
++
+## Search the data
+
+> [!NOTE]
+> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane.
+>
+
+You can search for terms in any of the property values. This capability is useful if you write [custom events](./api-custom-events-metrics.md) with property values.
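For example, here's a hedged sketch of a custom event whose property values become searchable terms; the event and property names are hypothetical.

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class CheckoutTelemetry
{
    private readonly TelemetryClient _telemetry;

    public CheckoutTelemetry(TelemetryClient telemetry) => _telemetry = telemetry;

    public void TrackCheckout(string country, string paymentMethod)
    {
        // Each property value can later be found with full-text search,
        // for example by searching for "united states".
        _telemetry.TrackEvent("CheckoutCompleted", new Dictionary<string, string>
        {
            { "Country", country },
            { "PaymentMethod", paymentMethod }
        });
    }
}
```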
+
+You might want to set a time range because searches over a shorter range are faster.
++
+Search for complete words, not substrings. Use quotation marks to enclose special characters.
+
+| String | *Not* found | Found |
+| | | |
+| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`|
+| United States |`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"`|
+
+You can use the following search expressions:
+
+| Sample query | Effect |
+| | |
+| `apple` |Find all events in the time range whose fields include the word `apple`. |
+| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital `AND`, not `and`. <br/>Short form. |
+| `apple OR banana` |Find events that contain either word. Use `OR`, not `or`. |
+| `apple NOT banana` |Find events that contain one word but not the other. |
+
+## Sampling
+
+If your app generates significant telemetry and uses ASP.NET SDK version 2.0.0-beta3 or later, it automatically reduces the volume sent to the portal through adaptive sampling. This module sends only a representative fraction of events. It selects or deselects events related to the same request as a group, allowing you to navigate between related events.
+
+Learn about [sampling](./sampling.md).
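If you need to tune that behavior, here's a hedged sketch of configuring adaptive sampling in code for the ASP.NET SDK; the five-items-per-second target and the excluded type are illustrative values, and the same settings can also be placed in ApplicationInsights.config.

```csharp
using Microsoft.ApplicationInsights.Extensibility;

public static class SamplingSetup
{
    public static void ConfigureAdaptiveSampling(TelemetryConfiguration configuration)
    {
        var builder = configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder;

        // Keep exceptions unsampled and aim for roughly five items per second overall.
        builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5, excludedTypes: "Exception");
        builder.Build();
    }
}
```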
+
+## Create work item
+
+You can create a bug in GitHub or Azure DevOps with the details from any telemetry item.
+
+Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**.
++
+The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab.
+
+## Send more telemetry to Application Insights
+
+In addition to the out-of-the-box telemetry sent by the Application Insights SDK, you can:
+
+* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
+
+* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions.
+
+Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
+
+## <a name="questions"></a>Frequently asked questions
+
+Find answers to common questions.
+
+### <a name="limits"></a>How much data is retained?
+
+See the [Limits summary](../service-limits.md#application-insights).
+
+### How can I see POST data in my server requests?
+
+We don't log the POST data automatically, but you can use [TrackTrace or log calls](./asp-net-trace-logs.md). Put the POST data in the message parameter. You can't filter on the message in the same way you can filter on properties, but the message field accepts longer text than a property value.
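A hedged sketch of that workaround follows; the class is hypothetical, and you should trim or redact anything sensitive before sending the body.

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class PostBodyLogger
{
    private readonly TelemetryClient _telemetry;

    public PostBodyLogger(TelemetryClient telemetry) => _telemetry = telemetry;

    public void LogPostBody(string requestBody)
    {
        // The trace message can then be found with full-text search in Transaction Search.
        _telemetry.TrackTrace(requestBody, SeverityLevel.Information);
    }
}
```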
+
+### Why does my Azure Function search return no results?
+
+Azure Functions doesn't log URL query strings.
+
+## [Transaction Diagnostics](#tab/transaction-diagnostics)
+
+The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
+
+## What is a component?
+
+Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
+
+* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry).
+* Components run on any number of server, role, or container instances.
+* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up.
+
+> [!NOTE]
+> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections.
+
+## Transaction diagnostics experience
+
+This view has four key parts:
+
+- a results list
+- a cross-component transaction chart
+- a time-sequence list of all telemetry related to this operation
+- the details pane for any selected telemetry item
++
+## Cross-component transaction chart
+
+This chart provides a timeline with horizontal bars that show the duration of requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
+
+- The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
+- Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type.
+- Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component.
+- By default, the request, dependency, or exception that you selected appears to the side. Select any row to see its [details](#details-of-the-selected-telemetry).
+
+> [!NOTE]
+> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
+
+## All telemetry with this Operation ID
+
+This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the side](#details-of-the-selected-telemetry).
++
+## Details of the selected telemetry
+
+This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes.
++
+## Search results
+
+This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions.
++
+## Profiler and Snapshot Debugger
+
+[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection.
+
+If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com.
+
+If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com.
++
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Why do I see a single component on the chart and the other components only show as external dependencies without any details?
+
+Potential reasons:
+
+* Are the other components instrumented with Application Insights?
+* Are they using the latest stable Application Insights SDK?
+* If these components are separate Application Insights resources, validate you have [access](resources-roles-access-control.md).
+If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner.
+
+### I see duplicate rows for the dependencies. Is this behavior expected?
+
+Currently, we're showing the outbound dependency call separately from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
+
+### What about clock skews across different component instances?
+
+Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics.
+
+### Why is the new experience missing most of the related items queries?
+
+This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
+
+### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK?
+
+The transaction diagnostics experience shows all telemetry in a [single operation](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event is generated and a single Operation ID is used for all telemetry generated. As a result, many events might be correlated to the same operation.
+
+In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID.
+
+### Why do transaction detail durations not add up to the top-request duration?
+
+Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call.
+
+If all calls were instrumented, in-process time is the likely root cause for the time spent. A useful tool for diagnosing it is the [Application Insights profiler](./profiler.md).
+
+### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal?
+
+This error indicates that the browser was unable to call into a required API, or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then check whether you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers or other machines, and investigate DNS or other network-related issues from the client machine where the API calls are failing. If the portal error continues and needs more investigation, [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while reproducing the unexpected portal behavior, and then open a support case from the Azure portal.
+++
+## See also
+
+* [Write complex queries in Analytics](../logs/log-analytics-tutorial.md)
+* [Send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md)
+* [Availability overview](availability-overview.md)
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to
## Repair duplicate agents
-Customers who manually Container Insights using custom methods prior to October 2022 can end up with multiple versions of our agent running together. To clear this duplication, customers are recommended to follow the steps below:
+Customers who manually enabled Container Insights using custom methods prior to October 2022 can end up with multiple versions of our agent running together. To clear this duplication, we recommend that customers follow these steps:
### Migration guidelines for AKS clusters
Current ama-logs default limit are below
Validate whether the current default settings and limits meet the customer's needs. If not, create support tickets under containerinsights agent to investigate and adjust memory/CPU limits for the customer. Doing so helps address the scale limitation issues that some customers encountered previously, which resulted in OOMKilled exceptions.
-4. Fetch current Azure analytic workspace ID since we're going to re-onboard the container insights.
+3. Fetch current Azure analytic workspace ID since we're going to re-onboard the container insights.
```console az aks show -g $resourceGroupNameofCluster -n $nameofTheCluster | grep logAnalyticsWorkspaceResourceID ```
-6. Clean resources from previous onboarding:
+4. Clean resources from previous onboarding:
**For customers that previously onboarded to containerinsights through helm chart** :
curl -LO raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/kubernetes/o
kubectl delete -f omsagent.yaml ```
-7. Disable container insights to clean all related resources with aks command: [Disable Container insights on your Azure Kubernetes Service (AKS) cluster - Azure Monitor | Microsoft Learn](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-optout)
+5. Disable container insights to clean all related resources with aks command: [Disable Container insights on your Azure Kubernetes Service (AKS) cluster - Azure Monitor | Microsoft Learn](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-optout)
```console az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG ```
-8. Re-onboard to containerinsights with the workspace fetched from step 3 using [the steps outlined here](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#specify-a-log-analytics-workspace)
+6. Re-onboard to containerinsights with the workspace fetched from step 3 using [the steps outlined here](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#specify-a-log-analytics-workspace)
azure-monitor Data Collection Rule Create Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-create-edit.md
+
+ Title: Create and edit data collection rules (DCRs) in Azure Monitor
+description: Details on creating and editing data collection rules (DCRs) in Azure Monitor.
+++ Last updated : 11/15/2023++++
+# Create and edit data collection rules (DCRs) in Azure Monitor
+There are multiple methods for creating a [data collection rule (DCR)](./data-collection-rule-overview.md) in Azure Monitor. In some cases, Azure Monitor will create and manage the DCR according to settings that you configure in the Azure portal. In other cases, you might need to create your own DCRs to customize particular scenarios.
+
+This article describes the different methods for creating and editing a DCR. For the contents of the DCR itself, see [Structure of a data collection rule in Azure Monitor](./data-collection-rule-structure.md).
+
+## Permissions
+ You require the following permissions to create DCRs and associations:
+
+| Built-in role | Scopes | Reason |
+|:|:|:|
+| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs, assign rules to the machine, and deploy associations. |
+| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy agent extensions on the VM. |
+| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Deploy Azure Resource Manager templates. |
+
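+For example, the following sketch grants the Monitoring Contributor role to a user or service principal at resource group scope so that it can create and edit DCRs there; the object ID and resource group name are placeholders.
+
+```powershell
+# Grant Monitoring Contributor on a resource group to allow creating and editing DCRs in that scope.
+New-AzRoleAssignment `
+    -ObjectId "00000000-0000-0000-0000-000000000000" `
+    -RoleDefinitionName "Monitoring Contributor" `
+    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group"
+```
+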
+## Automated methods to create a DCR
+The following table lists methods to create data collection scenarios using the Azure portal where the DCR is created for you. In these cases, you don't need to interact directly with the DCR itself.
+
+| Scenario | Resources | Description |
+|:|:|:|
+| Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then associate that rule with one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. |
+| | [Enable VM insights overview](../vm/vminsights-enable-overview.md) | When you enable VM insights on a VM, the Azure Monitor agent is installed, and a DCR is created that collects a predefined set of performance counters. You shouldn't modify this DCR. |
+| Container insights | [Enable Container insights](../containers/prometheus-metrics-enable.md) | When you enable Container insights on a Kubernetes cluster, a containerized version of the Azure Monitor agent is installed, and a DCR is created that collects data according to the configuration you selected. You may need to modify this DCR to add a transformation. |
+| Text or JSON logs | [Collect logs from a text or JSON file with Azure Monitor Agent](../agents/data-collection-text-log.md?tabs=portal) | Use the Azure portal to create a DCR to collect entries from a text log on a machine with Azure Monitor Agent. |
+| Workspace transformation | [Add a transformation in a workspace data collection rule using the Azure portal](../logs/tutorial-workspace-transformations-portal.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
++
+## Manually create a DCR
+To manually create a DCR, create a JSON file using the appropriate configuration for the data collection that you're configuring. Start with one of the [sample DCRs](./data-collection-rule-samples.md) and use information in [Structure of a data collection rule in Azure Monitor](./data-collection-rule-structure.md) to modify the JSON file for your particular environment and requirements.
+
+Once you have the JSON file created, you can use any of the following methods to create the DCR:
+
+## [CLI](#tab/CLI)
+Use the [az monitor data-collection rule create](/cli/azure/monitor/data-collection/rule) command to create a DCR from your JSON file using the Azure CLI as shown in the following example.
+
+```azurecli
+az monitor data-collection rule create --location 'eastus' --resource-group 'my-resource-group' --name 'myDCRName' --rule-file 'C:\MyNewDCR.json' --description 'This is my new DCR'
+```
+
+## [PowerShell](#tab/powershell)
+Use the [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule) cmdlet to create the DCR from your JSON file using PowerShell as shown in the following example.
+
+```powershell
+New-AzDataCollectionRule -Location 'eastus' -ResourceGroupName 'my-resource-group' -RuleName 'myDCRName' -RuleFile 'C:\MyNewDCR.json' -Description 'This is my new DCR'
+```
++
+## [API](#tab/api)
+Use the [DCR create API](/rest/api/monitor/data-collection-rules/create) to create the DCR from your JSON file. You can use any method to call a REST API as shown in the following examples.
++
+```powershell
+$ResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
+$FilePath = ".\my-dcr.json"
+$DCRContent = Get-Content $FilePath -Raw
+Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method PUT -Payload $DCRContent
+```
++
+```azurecli
+ResourceId="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
+FilePath="my-dcr.json"
+az rest --method put --url $ResourceId"?api-version=2021-09-01-preview" --body @$FilePath
+```
++
+## [ARM](#tab/arm)
+Using an ARM template, you can define parameters so you can provide particular values at the time you install the DCR. This allows you to use a single template for multiple installations. Use the following template, copying in the JSON for your DCR and adding any other parameters you want to use.
+
+See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for different methods to deploy ARM templates.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "<dcr-properties>"
+ }
+ }
+ ]
+}
+
+```
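+
+As one way to deploy this template, you could use Azure PowerShell and pass your parameter values directly. The following is a minimal sketch; the resource group, template file name, DCR name, and location are placeholders.
+
+```powershell
+# Deploy the DCR template, supplying values for the template parameters defined above.
+New-AzResourceGroupDeployment `
+    -ResourceGroupName "my-resource-group" `
+    -TemplateFile ".\my-dcr-template.json" `
+    -dataCollectionRuleName "myDCRName" `
+    -location "eastus"
+```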
++
+The following tutorials include examples of manually creating DCRs.
+
+- [Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md)
+- [Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md)
+
+## Edit a DCR
+To edit a DCR, you can use any of the methods described in the previous section to create a DCR using a modified version of the JSON.
+
+If you need to retrieve the JSON for an existing DCR, you can copy it from the **JSON View** for the DCR in the Azure portal. You can also retrieve it using an API call as shown in the following PowerShell example.
+
+```powershell
+$ResourceId = "<ResourceId>" # Resource ID of the DCR to edit
+$FilePath = "<FilePath>" # Store DCR content in this file
+$DCR = Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method GET
+$DCR.Content | ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File -FilePath $FilePath
+```
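+
+After you save your edits to the JSON file, one way to apply them is to push the file back to the same DCR with a PUT call, mirroring the create examples earlier in this article. The following sketch reuses the `$ResourceId` and `$FilePath` variables from the previous example.
+
+```powershell
+# Push the edited DCR definition back to Azure Monitor.
+$DCRContent = Get-Content $FilePath -Raw
+Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method PUT -Payload $DCRContent
+```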
+
+For a tutorial that walks through the process of retrieving and then editing an existing DCR, see [Tutorial: Edit a data collection rule (DCR)](./data-collection-rule-edit.md).
+
+## Next steps
+
+- [Read about the detailed structure of a data collection rule](data-collection-rule-structure.md)
+- [Get details on transformations in a data collection rule](data-collection-transformations.md)
azure-monitor Data Collection Rule Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md
Title: Tutorial - Editing Data Collection Rules description: This article describes how to make changes in Data Collection Rule definition using command line tools and simple API calls. - Previously updated : 07/17/2023 Last updated : 11/03/2023
-# Tutorial: Editing Data Collection Rules
-This tutorial describes how to edit the definition of Data Collection Rule (DCR) that has been already provisioned using command line tools.
+# Tutorial: Edit a data collection rule (DCR)
+This tutorial describes how to edit the definition of a data collection rule (DCR) that has already been provisioned, using command-line tools.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
> * Apply changes to a Data Collection Rule using ARM API call > * Automate the process of DCR update using PowerShell scripts
+> [!NOTE]
+> This tutorial walks through one method for editing an existing DCR. See [Create and edit data collection rules (DCRs) in Azure Monitor](data-collection-rule-create-edit.md) for other methods.
+ ## Prerequisites To complete this tutorial you need the following: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](data-collection-rule-create-edit.md#permissions) in the workspace.
- Up to date version of PowerShell. Using Azure Cloud Shell is recommended. ## Overview of tutorial
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
description: Overview of data collection rules (DCRs) in Azure Monitor including
Previously updated : 08/08/2023 Last updated : 11/15/2023 # Data collection rules in Azure Monitor
-Data collection rules (DCRs) define the [data collection process in Azure Monitor](../essentials/data-collection.md). DCRs specify what data should be collected, how to transform that data, and where to send that data. Some DCRs will be created and managed by Azure Monitor to collect a specific set of data to enable insights and visualizations. You might also create your own DCRs to define the set of data required for other scenarios.
+Data collection rules (DCRs) are sets of instructions supporting [data collection in Azure Monitor](../essentials/data-collection.md). They provide a consistent and centralized way to define and customize different data collection scenarios. Depending on the scenario, DCRs specify such details as what data should be collected, how to transform that data, and where to send it.
+
+DCRs are stored in Azure so that you can centrally manage them. Different components of a data collection workflow access the DCR for the particular information they require. In some cases, you can use the Azure portal to configure data collection, and Azure Monitor creates and manages the DCR for you. Other scenarios require you to create your own DCR. You might also choose to customize an existing DCR to meet your required functionality.
++
+## Basic operation
+One example of how DCRs are used is the Logs Ingestion API that allows you to send custom data to Azure Monitor. This scenario is illustrated in the following diagram. Prior to using the API, you create a DCR that defines the structure of the data that you're going to send and the Log Analytics workspace and table that will receive the data. If the data needs to be formatted before it's stored, you can include a [transformation](data-collection-transformations.md) in the DCR.
+
+Each call to the API specifies the DCR to use, and Azure Monitor references this DCR to determine what to do with the incoming data. If your requirements change, you can modify the DCR without making any changes to the application sending the data.
++
+## Data collection rule associations (DCRAs)
+Data collection rule associations (DCRAs) associate a DCR with an object being monitored, for example a virtual machine with the Azure Monitor agent (AMA). A single object can be associated with multiple DCRs, and a single DCR can be associated with multiple objects.
+
+The following diagram illustrates data collection for the Azure Monitor agent. When the agent is installed, it connects to Azure Monitor to retrieve any DCRs that are associated with it. It then references the data sources section of each DCR to determine what data to collect from the machine. When the agent delivers this data, Azure Monitor references other sections of the DCR to determine whether a transformation should be applied to it and then the workspace and table to send it to.
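+
+For example, you can create an association between an existing DCR and a virtual machine with the `New-AzDataCollectionRuleAssociation` cmdlet. The following is a minimal sketch with placeholder resource IDs and association name; parameter names can vary between Az.Monitor versions.
+
+```powershell
+# Associate an existing DCR with a VM so the Azure Monitor agent starts collecting the data it defines.
+$vmId  = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
+$dcrId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
+New-AzDataCollectionRuleAssociation -TargetResourceId $vmId -AssociationName "my-vm-dcr-association" -RuleId $dcrId
+```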
+++ ## View data collection rules
+There are multiple ways to view the DCRs in your subscription.
+
+### [Portal](#tab/portal)
To view your DCRs in the Azure portal, select **Data Collection Rules** under **Settings** on the **Monitor** menu. +
+Select a DCR to view its details. For DCRs that support VMs, you can view and modify their associations and the data they collect. For other DCRs, use the **JSON view** to view the details of the DCR. See [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md) for details on how you can modify them.
+ > [!NOTE]
-> Although this view shows all DCRs in the specified subscriptions, selecting the **Create** button will create a data collection for Azure Monitor Agent. Similarly, this page will only allow you to modify DCRs for Azure Monitor Agent. For guidance on how to create and update DCRs for other workflows, see [Create a data collection rule](#create-a-data-collection-rule).
+> Although this view shows all DCRs in the specified subscriptions, selecting the **Create** button will create a data collection for Azure Monitor Agent. Similarly, this page will only allow you to modify DCRs for Azure Monitor Agent. For guidance on how to create and update DCRs for other workflows, see [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md).
+### [PowerShell](#tab/powershell)
+Use [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule) to retrieve the DCRs in your subscription.
-## Create a data collection rule
-The following resources describe different scenarios for creating DCRs. In some cases, the DCR might be created for you. In other cases, you might need to create and edit it yourself.
-| Scenario | Resources | Description |
-|:|:|:|
-| Azure Monitor Agent | [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) | Use the Azure portal to create a DCR that specifies events and performance counters to collect from a machine with Azure Monitor Agent. Then apply that rule to one or more virtual machines. Azure Monitor Agent will be installed on any machines that don't currently have it. |
-| | [Use Azure Policy to install Azure Monitor Agent and associate with a DCR](../agents/azure-monitor-agent-manage.md#use-azure-policy) | Use Azure Policy to install Azure Monitor Agent and associate one or more DCRs with any virtual machines or virtual machine scale sets as they're created in your subscription.
-| Text logs | [Configure custom logs by using the Azure portal](../logs/tutorial-logs-ingestion-portal.md)<br>[Configure custom logs by using Azure Resource Manager templates and the REST API](../logs/tutorial-logs-ingestion-api.md)<br>[Configure text logs by using Azure Monitoring Agent](../agents/data-collection-text-log.md) | Send custom data by using a REST API or Agent. The API call connects to a data collection endpoint and specifies a DCR to use. The agent uses the DCR to configure the collection of data on a machine. The DCR specifies the target table and potentially includes a transformation that filters and modifies the data before it's stored in a Log Analytics workspace. |
-| Azure Event Hubs | [Ingest events from Azure Event Hubs to Azure Monitor Logs](../logs/ingest-logs-event-hub.md)| Collect data from multiple sources to an event hub and ingest the data you need directly into tables in one or more Log Analytics workspaces. This is a highly scalable method of collecting data from a wide range of sources with minimum configuration.|
-| Workspace transformation | [Configure ingestion-time transformations by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Configure ingestion-time transformations by using Azure Resource Manager templates and the REST API](../logs/tutorial-workspace-transformations-api.md) | Create a transformation for any supported table in a Log Analytics workspace. The transformation is defined in a DCR that's then associated with the workspace. It's applied to any data sent to that table from a legacy workload that doesn't use a DCR. |
+```powershell
+Get-AzDataCollectionRule
+```
-## Work with data collection rules
-To work with DCRs outside of the Azure portal, see the following resources:
+Use [Get-azDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation) to retrieve the DCRs associated with a VM.
-| Method | Resources |
-|:|:|
-| API | Directly edit the DCR in any JSON editor and then [install it by using the REST API](/rest/api/monitor/datacollectionrules). |
-| CLI | Create DCRs and associations with the [Azure CLI](https://github.com/Azure/azure-cli-extensions/blob/master/src/monitor-control-service/README.md). |
-| PowerShell | Work with DCRs and associations with the following Azure PowerShell cmdlets:<br>[Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule)<br>[New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule)<br>[Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule)<br>[Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule)<br>[Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule)<br>[Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation)<br>[New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation)<br>[Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation)
+```powershell
+get-azDataCollectionRuleAssociation -TargetResourceId /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm | foreach {Get-azDataCollectionRule -RuleId $_.DataCollectionRuleId }
+```
-## Structure of a data collection rule
-Data collection rules are formatted in JSON. Although you might not need to interact with them directly, there are scenarios where you might need to directly edit a DCR. For a description of this structure and the different elements used for different workflows, see [Data collection rule structure](data-collection-rule-structure.md).
+### [CLI](#tab/cli)
+Use [az monitor data-collection rule](/cli/azure/monitor/data-collection/rule) to work with DCRs using the Azure CLI.
-## Permissions
-When you use programmatic methods to create DCRs and associations, you require the following permissions:
+Use the following to return all DCRs in your subscription.
-| Built-in role | Scopes | Reason |
-|:|:|:|
-| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Create or edit DCRs, assign rules to the machine, deploy associations). |
-| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Azure Arc-enabled servers</li></ul> | Deploy agent extensions on the VM. |
-| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing DCR</li></ul> | Deploy Azure Resource Manager templates. |
+```azurecli
+az monitor data-collection rule list
+```
-## Limits
-For limits that apply to each DCR, see [Azure Monitor service limits](../service-limits.md#data-collection-rules).
+Use the following to return DCR associations for a VM.
+
+```azurecli
+az monitor data-collection rule association list --resource "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
+```
+ ## Supported regions Data collection rules are available in all public regions where Log Analytics workspaces are supported, as well as the Azure Government and Azure China clouds. Air-gapped clouds aren't yet supported.
Data collection rules are available in all public regions where Log Analytics wo
**Single region data residency** is a preview feature to enable storing customer data in a single region and is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and the Brazil South (Sao Paulo State) Region of the Brazil Geo. Single-region residency is enabled by default in these regions. ## Data resiliency and high availability
-A rule gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability.
+A DCR gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability.
## Next steps
+See the following articles for additional information on how to work with DCRs.
-- [Read about the detailed structure of a data collection rule](data-collection-rule-structure.md)-- [Get details on transformations in a data collection rule](data-collection-transformations.md)
+- [Data collection rule structure](data-collection-rule-structure.md) for a description of the JSON structure of DCRs and the different elements used for different workflows.
+- [Sample data collection rules (DCRs)](data-collection-rule-samples.md) for sample DCRs for different data collection scenarios.
+- [Create and edit data collection rules (DCRs) in Azure Monitor](./data-collection-rule-create-edit.md) for different methods to create DCRs for different data collection scenarios.
+- [Azure Monitor service limits](../service-limits.md#data-collection-rules) for limits that apply to each DCR.
azure-monitor Data Collection Rule Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-samples.md
+
+ Title: Sample data collection rules (DCRs) in Azure Monitor
+description: Sample data collection rule for different Azure Monitor data collection scenarios.
+ Last updated : 11/15/2023+++++
+# Sample data collection rules (DCRs) in Azure Monitor
+This article includes sample [data collection rules (DCRs)](./data-collection-rule-overview.md) for different scenarios. For descriptions of each of the properties in these DCRs, see [Data collection rule structure](./data-collection-rule-structure.md).
+
+## Azure Monitor agent - events and performance data
+The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is for virtual machines with [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) and has the following details:
+
+- Performance data
+ - Collects specific Processor, Memory, Logical Disk, and Physical Disk counters every 15 seconds and uploads every minute.
+ - Collects specific Process counters every 30 seconds and uploads every 5 minutes.
+- Windows events
+ - Collects Windows security events and uploads every minute.
+ - Collects Windows application and system events and uploads every 5 minutes.
+- Syslog
+ - Collects Debug, Critical, and Emergency events from cron facility.
+ - Collects Alert, Critical, and Emergency events from syslog facility.
+- Destinations
+ - Sends all data to a Log Analytics workspace named centralWorkspace.
+
+> [!NOTE]
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
++
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "name": "cloudTeamCoreCounters",
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "scheduledTransferPeriod": "PT1M",
+ "samplingFrequencyInSeconds": 15,
+ "counterSpecifiers": [
+ "\\Processor(_Total)\\% Processor Time",
+ "\\Memory\\Committed Bytes",
+ "\\LogicalDisk(_Total)\\Free Megabytes",
+ "\\PhysicalDisk(_Total)\\Avg. Disk Queue Length"
+ ]
+ },
+ {
+ "name": "appTeamExtraCounters",
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "scheduledTransferPeriod": "PT5M",
+ "samplingFrequencyInSeconds": 30,
+ "counterSpecifiers": [
+ "\\Process(_Total)\\Thread Count"
+ ]
+ }
+ ],
+ "windowsEventLogs": [
+ {
+ "name": "cloudSecurityTeamEvents",
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "scheduledTransferPeriod": "PT1M",
+ "xPathQueries": [
+ "Security!*"
+ ]
+ },
+ {
+ "name": "appTeam1AppEvents",
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "scheduledTransferPeriod": "PT5M",
+ "xPathQueries": [
+ "System!*[System[(Level = 1 or Level = 2 or Level = 3)]]",
+ "Application!*[System[(Level = 1 or Level = 2 or Level = 3)]]"
+ ]
+ }
+ ],
+ "syslog": [
+ {
+ "name": "cronSyslog",
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "cron"
+ ],
+ "logLevels": [
+ "Debug",
+ "Critical",
+ "Emergency"
+ ]
+ },
+ {
+ "name": "syslogBase",
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "syslog"
+ ],
+ "logLevels": [
+ "Alert",
+ "Critical",
+ "Emergency"
+ ]
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
+ "name": "centralWorkspace"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf",
+ "Microsoft-Syslog",
+ "Microsoft-Event"
+ ],
+ "destinations": [
+ "centralWorkspace"
+ ]
+ }
+ ]
+ }
+ }
+```
+
+## Azure Monitor agent - text logs
+The sample data collection rule below is used to collect [text logs using Azure Monitor agent](../agents/data-collection-text-log.md).
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint",
+ "streamDeclarations": {
+ "Custom-MyLogFileFormat": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Windows"
+ },
+ {
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "//var//*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Linux"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
+ "name": "MyDestination"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "destinations": [
+ "MyDestination"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+}
+```
+
+## Event Hubs
+The sample data collection rule below is used to collect [data from an event hub](../logs/ingest-logs-event-hub.md).
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint",
+ "streamDeclarations": {
+ "Custom-MyEventHubStream": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ },
+ {
+ "name": "Properties",
+ "type": "dynamic"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "dataImports": {
+ "eventHub": {
+ "consumerGroup": "<consumer-group>",
+ "stream": "Custom-MyEventHubStream",
+ "name": "myEventHubDataSource1"
+ }
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
+ "name": "MyDestination"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyEventHubStream"
+ ],
+ "destinations": [
+ "MyDestination"
+ ],
+ "transformKql": "source",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+}
+```
+
+## Logs ingestion API
+The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is used with the [Logs ingestion API](../logs/logs-ingestion-api-overview.md). It has the following details:
+
+- Sends data to a table called MyTable_CL in a workspace called my-workspace.
+- Applies a [transformation](../essentials//data-collection-transformations.md) to the incoming data.
++
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "dataCollectionEndpointId": "/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/my-resource-groups/providers/Microsoft.Insights/dataCollectionEndpoints/my-data-collection-endpoint",
+ "streamDeclarations": {
+ "Custom-MyTable": {
+ "columns": [
+ {
+ "name": "Time",
+ "type": "datetime"
+ },
+ {
+ "name": "Computer",
+ "type": "string"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/cefingestion/providers/microsoft.operationalinsights/workspaces/my-workspace",
+ "name": "LogAnalyticsDest"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyTable"
+ ],
+ "destinations": [
+ "LogAnalyticsDest"
+ ],
+ "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, ExtendedColumn=tostring(jsonContext.CounterName)",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+}
+```
+
+## Workspace transformation DCR
+The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is used as a
+[workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr) to transform all data sent to a table called *LAQueryLogs*.
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
+ "name": "clv2ws1"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Table-LAQueryLogs"
+ ],
+ "destinations": [
+ "clv2ws1"
+ ],
+ "transformKql": "source |where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) |extend RequestContext = ''"
+ }
+ ]
+ }
+}
+```
++
+## Next steps
+
+- [Get details for the different properties in a DCR](../essentials/data-collection-rule-structure.md)
+- [See different methods for creating a DCR](../essentials/data-collection-rule-create-edit.md)
+
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
description: Details on the structure of different kinds of data collection rule
Previously updated : 08/08/2023 Last updated : 11/15/2023 ms.reviwer: nikeist # Structure of a data collection rule in Azure Monitor
+[Data collection rules (DCRs)](data-collection-rule-overview.md) are sets of instructions that determine how to collect and process telemetry sent to Azure Monitor. Some DCRs are created and managed by Azure Monitor, while you create or customize others yourself. This article describes the JSON properties of DCRs for the cases where you need to create or edit them directly.
-[Data collection rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some DCRs will be created and managed by Azure Monitor. You might create other DCRs to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing DCRs in those cases where you need to work with them directly.
+- See [Create and edit data collection rules (DCRs) in Azure Monitor](data-collection-rule-create-edit.md) for details on working with the JSON described here.
+- See [Sample data collection rules (DCRs) in Azure Monitor](../essentials/data-collection-rule-samples.md) for sample DCRs for different scenarios.
-## Custom logs
-A DCR for [API based custom logs](../logs/logs-ingestion-api-overview.md) contains the following sections. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md).
-### streamDeclarations
-This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose:
+## `dataCollectionEndpointId`
+Specifies the [data collection endpoint (DCE)](data-collection-endpoint-overview.md) used by the DCR.
-- Key represents the stream name, which must begin with *Custom-*.-- Value is the full list of top-level properties that are contained in the JSON data that will be sent.
+**Scenarios**
+- Azure Monitor agent
+- Logs ingestion API
+- Event Hubs
+
-The shape of the data you send to the endpoint doesn't need to match that of the destination table. Instead, the output of the transform that's applied on top of the input data needs to match the destination shape. The possible data types that can be assigned to the properties are `string`, `int`, `long`, `real`, `boolean`, `dynamic`, and `datetime`.
+## `streamDeclarations`
+Declaration of the different types of data sent into the Log Analytics workspace. Each stream is an object whose key represents the stream name, which must begin with *Custom-*. The stream contains a full list of top-level properties that are contained in the JSON data that will be sent. The shape of the data you send to the endpoint doesn't need to match that of the destination table. Instead, the output of the transform that's applied on top of the input data needs to match the destination shape.
-### destinations
-This section contains a declaration of all the destinations where the data will be sent. Only Log Analytics is currently supported as a destination. Each Log Analytics destination requires the full workspace resource ID and a friendly name that will be used elsewhere in the DCR to refer to this workspace.
+This section isn't used for data sources sending known data types such as events and performance data sent from Azure Monitor agent.
-### dataFlows
-This section ties the other sections together. It defines the following properties for each stream declared in the `streamDeclarations` section:
+The possible data types that can be assigned to the properties are:
-- `destination` from the `destinations` section where the data will be sent.-- `transformKql` section, which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.-- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of `outputStream` has the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.
+- `string`
+- `int`
+- `long`
+- `real`
+- `boolean`
+- `dynamic`
+- `datetime`.
-> [!Note]
->
-> You can only send logs from one specific data source to one workspace. To send data from a single data source to multiple workspaces, please create one DCR per workspace.
+**Scenarios**
+- Azure Monitor agent (text logs only)
+- Logs ingestion API
+- Event Hubs
-## Azure Monitor Agent
- A DCR for [Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). For agent based custom logs, see [Sample Custom Log Rules - Agent](../agents/data-collection-text-log.md)
+## `destinations`
+Declaration of all the destinations where the data will be sent. Only `logAnalytics` is currently supported as a destination, except for Azure Monitor agent, which can also use `azureMonitorMetrics`. Each Log Analytics destination requires the full workspace resource ID and a friendly name that will be used elsewhere in the DCR to refer to this workspace.
-### dataSources
-This unique source of monitoring data has its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and Syslog. Each data source matches a particular data source type as described in the following table.
+**Scenarios**
+- Azure Monitor agent (text logs only)
+- Logs ingestion API
+- Event Hubs
+- Workspace transformation DCR
-Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available appear in the following table.
+## `dataSources`
+Unique source of monitoring data that has its own format and method of exposing its data. Each data source has a data source type, and each type defines a unique set of properties that must be specified for each data source. The data source types currently available are listed in the following table.
| Data source type | Description | |:|:|
+| eventHub | Data from Azure Event Hubs |
| extension | VM extension-based data source, used exclusively by Log Analytics solutions and Azure services ([View agent supported services and solutions](../agents/azure-monitor-agent-overview.md#supported-services-and-features)) |
-| performanceCounters | Performance counters for both Windows and Linux |
-| syslog | Syslog events on Linux |
-| windowsEventLogs | Windows event log |
+| logFiles | Text log on a virtual machine |
+| performanceCounters | Performance counters for both Windows and Linux virtual machines |
+| syslog | Syslog events on Linux virtual machines |
+| windowsEventLogs | Windows event log on virtual machines |
-### Streams
- This unique handle describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream can be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams, for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace.
+**Scenarios**
+- Azure Monitor agent
+- Event Hubs
++
+## `dataFlows`
+Matches streams with destinations and optionally specifies a transformation.
+
+### `dataFlows/Streams`
+One or more streams defined in the previous section. You can include multiple streams in a single data flow if you want to send multiple data sources to the same destination. If the data flow includes a transformation, though, use only a single stream. One stream can also be used by multiple data flows when you want to send a particular data source to multiple tables in the same Log Analytics workspace.
+
+### `dataFlows/destinations`
+One or more destinations from the `destinations` section above. Multiple destinations are allowed for multi-homing scenarios.
+
+### `dataFlows/transformKql`
+Optional [transformation](data-collection-transformations.md) applied to the incoming stream. The transformation must understand the schema of the incoming data and output data in the schema of the target table. If you use a transformation, the data flow should only use a single stream.
+
+### `dataFlows/outputStream`
+Describes which table in the workspace specified under the `destination` property the data will be sent to. The value of `outputStream` has the format `Microsoft-[tableName]` when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom table. Only one destination is allowed per stream. This property isn't used for known data sources from Azure Monitor such as events and performance data, since these are sent to predefined tables.
+
+**Scenarios**
+
+- Azure Monitor agent
+- Logs ingestion API
+- Event Hubs
+- Workspace transformation DCR
-### destinations
-This set of destinations indicates where the data should be sent. Examples include Log Analytics workspace and Azure Monitor Metrics. Multiple destinations are allowed for multi-homing scenarios.
-### dataFlows
-The definition indicates which streams should be sent to which destinations.
## Next steps
azure-monitor Aiops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md
Last updated 02/28/2023
# Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor Artificial Intelligence for IT Operations (AIOps) offers powerful ways to improve service quality and reliability by using machine learning to process and automatically act on data you collect from applications, services, and IT resources into Azure Monitor.
-Azure Monitor's built-in AIOps capabilities provide insights and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources. These features boost your IT monitoring and operations, without requiring machine learning knowledge and further investment.
+Azure Monitor's built-in AIOps capabilities provide insights and help you troubleshoot issues and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources. These features boost your IT monitoring and operations, without requiring machine learning knowledge and further investment.
Azure Monitor also provides tools that let you create your own machine learning pipeline to introduce new analysis and response capabilities and act on data in Azure Monitor Logs.
This article describes Azure Monitor's built-in AIOps capabilities and explains
|Monitoring scenario|Capability|Description| |-|-|-|
-|Log monitoring|[Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md) | A curated monitoring experience that provides a unified view of your Log Analytics workspaces and uses machine learning to detect ingestion anomalies. |
-||[Kusto Query Language (KQL) time series analysis and machine learning functions](../logs/kql-machine-learning-azure-monitor.md)| Easy-to-use tools for generating time series data, detecting anomalies, forecasting, and performing root cause analysis directly in Azure Monitor Logs without requiring in-depth knowledge of data science and programming languages.
+|Log monitoring|[Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md) | Provides a unified view of your Log Analytics workspaces and uses machine learning to detect ingestion anomalies. |
+||[Kusto Query Language (KQL) time series analysis and machine learning functions](../logs/kql-machine-learning-azure-monitor.md)| Easy-to-use tools for generating time series data, detecting anomalies, forecasting, and performing root cause analysis directly in Azure Monitor Logs without requiring in-depth knowledge of data science and programming languages. |
+||[Microsoft Copilot for Azure](/azure/copilot/get-monitoring-information)| Helps you use Log Analytics to analyze data and troubleshoot issues. Generates example KQL queries based on prompts, such as "Are there any errors in container logs?". |
|Application performance monitoring|[Application Map Intelligent view](../app/app-map.md)| Maps dependencies between services and helps you spot performance bottlenecks or failure hotspots across all components of your distributed application.| ||[Smart detection](../alerts/proactive-diagnostics.md)|Analyzes the telemetry your application sends to Application Insights, alerts on performance problems and failure anomalies, and identifies potential root causes of application performance issues.| |Metric alerts|[Dynamic thresholds for metric alerting](../alerts/alerts-dynamic-thresholds.md)| Learns metrics patterns, automatically sets alert thresholds based on historical data, and identifies anomalies that might indicate service issues.|
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
To create a custom table, you need:
- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). - A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md).-- A JSON file with the schema of your custom table in the following format:
+- A JSON file with at least one sample record for your custom table. The file will look similar to the following:
+ ```json [
+ {
+ "TimeGenerated": "supported_datetime_format",
+ "<column_name_1>": "<column_name_1_value>",
+ "<column_name_2>": "<column_name_2_value>"
+ },
+ {
+ "TimeGenerated": "supported_datetime_format",
+ "<column_name_1>": "<column_name_1_value>",
+ "<column_name_2>": "<column_name_2_value>"
+ },
{ "TimeGenerated": "supported_datetime_format", "<column_name_1>": "<column_name_1_value>",
To create a custom table, you need:
] ```
- For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats).
+ All tables in a Log Analytics workspace must have a column named `TimeGenerated`. If your sample data has a column named `TimeGenerated`, then this value will be used to identify the ingestion time of the record. If not, a `TimeGenerated` column will be added to the transformation in your DCR for the table. For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats).
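+
+If you'd rather define the table programmatically than upload sample data in the portal, you can create it with the Tables API by supplying an explicit schema. The following is a minimal sketch; the subscription, resource group, workspace, and column names are placeholders.
+
+```powershell
+# Create (or update) a custom table with an explicit schema by using the Tables API.
+$tableBody = @'
+{
+  "properties": {
+    "schema": {
+      "name": "MyTable_CL",
+      "columns": [
+        { "name": "TimeGenerated", "type": "datetime" },
+        { "name": "Computer", "type": "string" },
+        { "name": "AdditionalContext", "type": "string" }
+      ]
+    }
+  }
+}
+'@
+$tablePath = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace/tables/MyTable_CL?api-version=2022-10-01"
+Invoke-AzRestMethod -Path $tablePath -Method PUT -Payload $tableBody
+```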
## Create a custom table
To create a custom table in the Azure portal:
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot showing custom log table name.":::
-1. Select **Browse for files** and locate the JSON file in which you defined the schema of your new table.
+1. Select **Browse for files** and locate the JSON file with the sample data for your new table.
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-browse-files.png" alt-text="Screenshot showing custom log browse for files.":::
- All log tables in Azure Monitor Logs must have a `TimeGenerated` column populated with the timestamp of the logged event.
+ If your sample data doesn't include a `TimeGenerated` column, then you will receive a message that a transformation is being created with this column.
1. If you want to [transform log data before ingestion](../essentials//data-collection-transformations.md) into your table:
You can delete any table in your Log Analytics workspace that's not an [Azure ta
> [!NOTE] > - Deleting a restored table doesn't delete the data in the source table.
-> - Azure tables that are part of a solution can be removed from workspace when [deleting the solution](https://learn.microsoft.com/cli/azure/monitor/log-analytics/solution?view=azure-cli-latest#az-monitor-log-analytics-solution-delete). The data remains in workspace for the duration of the retention policy defined for the tables. If the [solution is re-created](https://learn.microsoft.com/cli/azure/monitor/log-analytics/solution?view=azure-cli-latest#az-monitor-log-analytics-solution-create) in the workspace, these tables become visible again.
+> - Azure tables that are part of a solution can be removed from workspace when [deleting the solution](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete). The data remains in workspace for the duration of the retention policy defined for the tables. If the [solution is re-created](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-create) in the workspace, these tables become visible again.
# [Portal](#tab/azure-portal-2)
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
The Log Ingestion API provides the following advantages over the Data Collector
The migration procedure described in this article assumes you have: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create data collection rules](../essentials/data-collection-rule-overview.md#permissions) in the Log Analytics workspace.
+- [Permissions to create data collection rules](../essentials/data-collection-rule-create-edit.md#permissions) in the Log Analytics workspace.
- [A Microsoft Entra application to authenticate API calls](../logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application) or any other Resource Manager authentication scheme. ## Create new resources required for the Log ingestion API
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Title: Logs Ingestion API in Azure Monitor description: Send data to a Log Analytics workspace using REST API or client libraries. Previously updated : 09/14/2023 Last updated : 11/15/2023 # Logs Ingestion API in Azure Monitor
-The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). By using this API, you can send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data.
-
+The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). The API allows you to send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can also [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data.
## Basic operation
+Data can be sent to the Logs Ingestion API from any application that can make a REST API call. This may be a custom application that you create, or it may be an application or agent that understands how to send data to the API.
+The application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your Azure subscription. The API call specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that defines the target table and workspace, and includes the credentials of an app registration with access to that DCR.
-Your application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call:
--- Specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that understands the format of the source data.-- Potentially filters and transforms the data for the target table.-- Directs the data to a specific table in a specific workspace.
+The data sent by your application to the API must be formatted in JSON and match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table, because the DCR can include a [transformation](../essentials/data-collection-transformations.md) to convert the data to match the table's structure. You can modify the target table and workspace by modifying the DCR without any change to the API call or source data.
-You can modify the target table and workspace by modifying the DCR without any change to the API call or source data.
-
-> [!NOTE]
-> To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md).
-
-## Components
-
-The Log ingestion API requires the following components to be created before you can send data. Each of these components must all be located in the same region.
-
-| Component | Description |
-|:|:|
-| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs. |
-| Data collection rule (DCR) | [Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The API call must specify a DCR to use. The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions.
-| Log Analytics workspace | The Log Analytics workspace contains the tables that will receive the data. The target tables are specific in the DCR. See [Support tables](#supported-tables) for the tables that the ingestion API can send to. |
## Supported tables
-The following tables can receive data from the ingestion API.
+
+Data sent to the ingestion API can be sent to the following tables:
| Tables | Description | |:|:|
-| Custom tables | The Logs Ingestion API can send data to any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. |
-| Azure tables | The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
+| Custom tables | Any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. |
+| Azure tables | The following Azure tables are currently supported. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
> [!NOTE] > Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names. Custom columns you add to an Azure table must have the suffix `_CF`.
-## Authentication
+## Configuration
+The following table describes each component in Azure that you must configure before you can use the Logs Ingestion API.
-Authentication for the Logs Ingestion API is performed at the DCE, which uses standard Azure Resource Manager authentication. A common strategy is to use an application ID and application key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs](tutorial-logs-ingestion-portal.md).
+> [!NOTE]
+> For a PowerShell script that automates the configuration of these components, see [Sample code to send data to Azure Monitor using Logs ingestion API](../logs/tutorial-logs-ingestion-code.md).
-### Token audience
+| Component | Function |
+|:|:|
+| App registration and secret | The application registration is used to authenticate the API call. It must be granted permission to the DCR described below. The API call includes the **Application (client) ID** and **Directory (tenant) ID** of the application and the **Value** of an application secret.<br><br>See [Create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and [Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret). |
+| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs, so you can use an existing DCE if you already have one in the same region as your Log Analytics workspace.<br><br>See [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint). |
+| Table in Log Analytics workspace | The table in the Log Analytics workspace must exist before you can send data to it. You can use one of the [supported Azure tables](#supported-tables) or create a custom table using any of the available methods. If you use the Azure portal to create the table, then the DCR is created for you, including a transformation if it's required. With any other method, you need to create the DCR manually as described in the next section.<br><br>See [Create a custom table](create-custom-table.md#create-a-custom-table). |
+| Data collection rule (DCR) | Azure Monitor uses the [Data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to understand the structure of the incoming data and what to do with it. If the structure of the table and the incoming data don't match, the DCR can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions.<br><br>If you create a custom table using the Azure portal, the DCR and the transformation are created for you based on sample data that you provide. If you use an existing table or create a custom table using another method, then you must manually create the DCR using details in the following section.<br><br>Once your DCR is created, you must grant access to it for the application that you created in the first step. From the **Monitor** menu in the Azure portal, select **Data Collection rules** and then the DCR that you created. Select **Access Control (IAM)** for the DCR and then select **Add role assignment** to add the **Monitoring Metrics Publisher** role. |
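+
+If you prefer the command line for the last step in the table (granting the app registration access to the DCR), a minimal Azure CLI sketch might look like the following. The GUID and resource ID are placeholders, not real values.
+
+```bash
+# Assign the Monitoring Metrics Publisher role to the app registration on the DCR.
+# Replace the assignee object ID and the DCR resource ID with your own values.
+az role assignment create \
+  --assignee "00000000-0000-0000-0000-000000000000" \
+  --role "Monitoring Metrics Publisher" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
+```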
-When developing a custom client to obtain an access token from Microsoft Entra ID for the purpose of submitting telemetry to Log Ingestion API in Azure Monitor, refer to the table provided below to determine the appropriate audience string for your particular host environment.
-| Azure cloud version | Token audience value |
-| | |
-| Azure public cloud | `https://monitor.azure.com` |
-| Microsoft Azure operated by 21Vianet cloud | `https://monitor.azure.cn` |
-| Azure US Government cloud | `https://monitor.azure.us` |
+## Manually create a DCR
+If you're sending data to a table that already exists, then you must create the DCR manually. Start with the [Sample DCR for Logs Ingestion API](../essentials/data-collection-rule-samples.md#logs-ingestion-api) and modify the following parameters in the template. Then use any of the methods described in [Create and edit data collection rules (DCRs) in Azure Monitor](../essentials/data-collection-rule-create-edit.md) to create the DCR. A minimal end-to-end sketch follows the parameter table.
-## Source data
+| Parameter | Description |
+|:|:|
+| `region` | Region to create your DCR. This must match the region of the DCE and the Log Analytics workspace. |
+| `dataCollectionEndpointId` | Resource ID of your DCE. |
+| `streamDeclarations` | Change the column list to the columns in your incoming data. You don't need to change the name of the stream since this just needs to match the `streams` name in `dataFlows`. |
+| `workspaceResourceId` | Resource ID of your Log Analytics workspace. You don't need to change the name since this just needs to match the `destinations` name in `dataFlows`. |
+| `transformKql` | KQL query to be applied to the incoming data. If the schema of the incoming data matches the schema of the table, then you can use `source` for the transformation, which passes the incoming data through unchanged. Otherwise, use a query that transforms the data to match the table schema. |
+| `outputStream` | Name of the table to send the data to. For a custom table, add the prefix *Custom-\<table-name\>*. For a built-in table, add the prefix *Microsoft-\<table-name\>*. |
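+
+As an illustration of how these parameters fit together, here's a minimal sketch that writes a DCR definition to a file and submits it with `az rest`. The subscription, resource group, workspace, DCE, table, and column names are placeholders, and the `api-version` value is an assumption; check the current supported version before using it.
+
+```bash
+# Write a minimal DCR definition. The stream, destination, and table names are hypothetical.
+cat > dcr.json <<'EOF'
+{
+  "location": "eastus",
+  "properties": {
+    "dataCollectionEndpointId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce-name>",
+    "streamDeclarations": {
+      "Custom-MyTableRawData": {
+        "columns": [
+          { "name": "TimeGenerated", "type": "datetime" },
+          { "name": "Column01", "type": "string" }
+        ]
+      }
+    },
+    "destinations": {
+      "logAnalytics": [
+        {
+          "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>",
+          "name": "myWorkspaceDestination"
+        }
+      ]
+    },
+    "dataFlows": [
+      {
+        "streams": [ "Custom-MyTableRawData" ],
+        "destinations": [ "myWorkspaceDestination" ],
+        "transformKql": "source",
+        "outputStream": "Custom-MyTable_CL"
+      }
+    ]
+  }
+}
+EOF
+
+# Submit the DCR with a PUT against Azure Resource Manager.
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>?api-version=2022-06-01" \
+  --body @dcr.json
+```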
-The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure.
-## Client libraries
-You can use the following client libraries to send data to the Logs ingestion API:
++
+## Client libraries
+In addition to making a REST API call, you can use the following client libraries to send data to the Logs ingestion API. The libraries require the same components described in [Configuration](#configuration). For examples using each of these libraries, see [Sample code to send data to Azure Monitor using Logs ingestion API](../logs/tutorial-logs-ingestion-code.md).
- [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme) - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azingest)
You can use the following client libraries to send data to the Logs ingestion AP
- [Python](/python/api/overview/azure/monitor-ingestion-readme) ## REST API call
-To send data to Azure Monitor with a REST API call, make a POST call to the DCE over HTTP. Details of the call are described in the following sections.
+To send data to Azure Monitor with a REST API call, make a POST call over HTTP. Details required for this call are described in this section.
### Endpoint URI
-The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data.
+
+The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. The immutable ID is generated for the DCR when it's created. You can retrieve it from the [JSON view of the DCR in the Azure portal](../essentials/data-collection-rule-overview.md?tabs=portal#view-data-collection-rules). `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#streamdeclarations) in the DCR that should handle the custom data.
``` {Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/{Stream Name}?api-version=2021-11-01-preview ```
-> [!NOTE]
-> You can retrieve the immutable ID from the JSON view of the DCR. For more information, see [Collect information from the DCR](tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr).
+For example:
+
+```
+https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com/dataCollectionRules/dcr-000a00a000a00000a000000aa000a0aa/streams/Custom-MyTable?api-version=2021-11-01-preview
+```
### Headers
-| Header | Required? | Value | Description |
-|:|:|:|:|
-| Authorization | Yes | Bearer (bearer token obtained through the client credentials flow) | |
-| Content-Type | Yes | `application/json` | |
-| Content-Encoding | No | `gzip` | Use the gzip compression scheme for performance optimization. |
-| x-ms-client-request-id | No | String-formatted GUID | Request ID that can be used by Microsoft for any troubleshooting purposes. |
+The following table describes the headers for your API call. A sketch of obtaining the bearer token follows the table.
++
+| Header | Required? |Description |
+|:|:|:|
+| Authorization | Yes | Bearer token obtained through the client credentials flow. Use the token audience value for your cloud:<br><br>Azure public cloud - `https://monitor.azure.com`<br>Microsoft Azure operated by 21Vianet cloud - `https://monitor.azure.cn`<br>Azure US Government cloud - `https://monitor.azure.us` |
+| Content-Type | Yes | `application/json` |
+| Content-Encoding | No | `gzip` |
+| x-ms-client-request-id | No | String-formatted GUID. This is a request ID that can be used by Microsoft for any troubleshooting purposes. |
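+
+As referenced in the table, the bearer token comes from the client credentials flow against Microsoft Entra ID. Here's a minimal sketch for the Azure public cloud; the tenant ID, client ID, and secret are placeholders from your app registration, and the snippet assumes `curl` and `jq` are available.
+
+```bash
+TENANT_ID="<tenant-id>"
+CLIENT_ID="<client-id>"
+CLIENT_SECRET="<client-secret>"
+
+# Request a token scoped to the Azure public cloud token audience.
+TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
+  --data-urlencode "grant_type=client_credentials" \
+  --data-urlencode "client_id=${CLIENT_ID}" \
+  --data-urlencode "client_secret=${CLIENT_SECRET}" \
+  --data-urlencode "scope=https://monitor.azure.com/.default" \
+  | jq -r '.access_token')
+```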
### Body
-The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Additionally, it is important to ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission.
+The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission.
+
+For example:
+```json
+{
+ "TimeGenerated": "2023-11-14 15:10:02",
+ "Column01": "Value01",
+ "Column02": "Value02"
+}
+```
+### Example
+See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md?tabs=powershell#sample-code) for an example of the API call using PowerShell.
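+
+For illustration, here's a minimal `curl` sketch that combines the endpoint, headers, and body from this section. The DCE, DCR immutable ID, and stream name are the example values shown earlier, and `TOKEN` is a bearer token obtained as sketched under **Headers**.
+
+```bash
+curl -X POST \
+  "https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com/dataCollectionRules/dcr-000a00a000a00000a000000aa000a0aa/streams/Custom-MyTable?api-version=2021-11-01-preview" \
+  -H "Authorization: Bearer ${TOKEN}" \
+  -H "Content-Type: application/json" \
+  -d '[
+    {
+      "TimeGenerated": "2023-11-14 15:10:02",
+      "Column01": "Value01",
+      "Column02": "Value02"
+    }
+  ]'
+```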
## Limits and restrictions
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
The steps required to configure the Logs ingestion API are as follows:
To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
## Collect workspace details
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
The steps required to configure the Logs ingestion API are as follows:
To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- PowerShell 7.2 or later. ## Overview of the tutorial
azure-monitor Tutorial Workspace Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md
In this tutorial, you learn to:
To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- The table must already have some data. - The table can't already be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
azure-monitor Tutorial Workspace Transformations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md
In this tutorial, you learn how to:
To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create DCR objects](../essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- A table that already has some data. - The table can't be linked to the [workspace transformation DCR](../essentials/data-collection-transformations.md#workspace-transformation-dcr).
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables
Use a rule with the following query: + ```kusto Heartbeat | summarize TimeGenerated=max(TimeGenerated) by Computer, _ResourceId | extend Duration = datetime_diff('minute',now(),TimeGenerated)
-| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId
+| summarize MinutesSinceLastHeartbeat = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId
``` ### CPU alerts
This section describes CPU alerts.
**CPU utilization** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+| summarize CPUPercentageAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` ### Memory alerts
This section describes memory alerts.
**Available memory in MB** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+| summarize AvailableMemoryInMBAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` **Available memory in percentage** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB" | extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"]) | extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0
-| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId
-```
+| summarize AvailableMemoryInPercentageAverage = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId
+```
### Disk alerts
This section describes disk alerts.
**Logical disk used - all disks on each computer** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+| summarize LogicalDiskSpacePercentageFreeAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` **Logical disk used - individual disks** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+| summarize LogicalDiskSpacePercentageFreeAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
``` **Logical disk IOPS** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "TransfersPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+| summarize DiskIOPSAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
``` **Logical disk data rate** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "BytesPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
+| summarize DiskBytesPerSecondAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
``` ### Network alerts
InsightsMetrics
**Network interfaces bytes received - all interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "ReadBytesPerSecond"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+| summarize BytesReceivedAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` **Network interfaces bytes received - individual interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "ReadBytesPerSecond" | extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
+| summarize BytesReceivedAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
``` **Network interfaces bytes sent - all interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "WriteBytesPerSecond"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+| summarize BytesSentAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` **Network interfaces bytes sent - individual interfaces** + ```kusto InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Network" and Name == "WriteBytesPerSecond" | extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
+| summarize BytesSentAverage = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
``` ### Windows and Linux events The following sample creates an alert when a specific Windows event is created. It uses a metric measurement alert rule to create a separate alert for each computer. - **Create an alert rule on a specific Windows event.**- This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0.
+
```kusto Event | where EventLog == "Application" | where EventID == 123
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ | summarize NumberOfEvents = count() by Computer, bin(TimeGenerated, 15m)
``` - **Create an alert rule on Syslog events with a particular severity.**- The following example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0.
+
```kusto Syslog | where Facility == "auth" | where SeverityLevel == "err"
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ | summarize NumberOfEvents = count() by Computer, bin(TimeGenerated, 15m)
``` ### Custom performance counters
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
General|[Azure Monitor cost and usage](cost-usage.md)|Added section detailing bi
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|A caution has been added about using community libraries with additional information on how to request we include them in our distro.| Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Support and feedback options are now available across all of our OpenTelemetry pages.| Application-Insights|[How many Application Insights resources should I deploy?](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy)|We added an important warning about additional network costs when monitoring across regions.|
-Application-Insights|[Use Search in Application Insights](app/search-and-transaction-diagnostics.md?tabs=transaction-search)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.|
+Application-Insights|[Use Search in Application Insights](app/transaction-search-and-diagnostics.md?tabs=transaction-search)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.|
Application-Insights|[Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro](app/opentelemetry-python-opencensus-migrate.md)|Migrate from OpenCensus to OpenTelemetry with this step-by-step guidance.| Application-Insights|[Application Insights overview](app/app-insights-overview.md)|We've added an illustration to convey how Azure Monitor Application Insights works at a high level.| Containers|[Troubleshoot collection of Prometheus metrics in Azure Monitor](containers/prometheus-metrics-troubleshoot.md)|Added the *Troubleshoot using PowerShell script* section.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
|[Java Profiler for Azure Monitor Application Insights](./app/java-standalone-profiler.md)|Announced the new Java Profiler at Ignite. Read all about it.| |[Release notes for Azure Web App extension for Application Insights](./app/web-app-extension-release-notes.md)|Added release notes for 2.8.44 and 2.8.43.| |[Resource Manager template samples for creating Application Insights resources](./app/resource-manager-app-resource.md)|Fixed inaccurate tagging of workspace-based resources as still in preview.|
-|[Unified cross-component transaction diagnostics](./app/search-and-transaction-diagnostics.md?tabs=transaction-diagnostics)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."|
+|[Unified cross-component transaction diagnostics](./app/transaction-search-and-diagnostics.md?tabs=transaction-diagnostics)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."|
|[Upgrading from Application Insights Java 2.x SDK](./app/java-standalone-upgrade-from-2x.md)|Added more upgrade guidance. Java 2.x is deprecated.| |[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Updated configuration options.|
azure-netapp-files Auxiliary Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/auxiliary-groups.md
+
+ Title: Understand auxiliary/supplemental groups with NFS in Azure NetApp Files
+description: Learn about auxiliary/supplemental groups with NFS in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand auxiliary/supplemental groups with NFS in Azure NetApp Files
+
+NFS has a specific limitation for the maximum number of auxiliary GIDs (secondary groups) that can be honored in a single NFS request. The maximum for [AUTH_SYS/AUTH_UNIX](http://tools.ietf.org/html/rfc5531) is 16. For AUTH_GSS (Kerberos), the maximum is 32. This is a known protocol limitation of NFS.
+
+Azure NetApp Files provides the ability to increase the maximum number of auxiliary groups to 1,024. This is done by avoiding truncation of the group list in the NFS packet and instead prefetching the requesting user's group memberships from a name service, such as LDAP.
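+
+For example, on an NFS client that resolves users through the same name service, you can check how many groups a user resolves to with the `id` command (the user name here is hypothetical):
+
+```bash
+# Lists the uid, primary gid, and all group memberships resolved through the name service.
+id user1
+```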
+
+## How it works
+
+The option to extend the group limitation works the same way as the `-manage-gids` option on other NFS servers. Rather than dumping the entire list of auxiliary GIDs a user belongs to, the option looks up the GID on the file or folder and returns that value instead.
+
+The [command reference for `mountd`](http://man.he.net/man8/mountd) notes:
+
+```bash
+-g or --manage-gids
+
+Accept requests from the kernel to map user id numbers into lists of group id numbers for use in access control. An NFS request will normally (except when using Kerberos or other cryptographic authentication) contain a user-id and a list of group-ids. Due to a limitation in the NFS protocol, at most 16 group ids can be listed. If you use the -g flag, then the list of group ids received from the client will be replaced by a list of group ids determined by an appropriate lookup on the server.
+```
+
+When an access request is made, only 16 GIDs are passed in the RPC portion of the packet.
++
+Any GID beyond the limit of 16 is dropped by the protocol. Extended GIDs in Azure NetApp Files can only be used with external name services such as LDAP.
+
+## Potential performance impacts
+
+Extended groups have a minimal performance penalty, generally in the low single-digit percentages. NFS workloads with heavier metadata operations are likely to see more of an effect, particularly on the system's caches. Performance can also be affected by the speed and workload of the name service servers. Overloaded name service servers are slower to respond, causing delays in prefetching the GID. For best results, use multiple name service servers to handle large numbers of requests.
+
+## "Allow local users with LDAP" option
+
+When a user attempts to access an Azure NetApp Files volume via NFS, the request arrives with a numeric ID. By default, Azure NetApp Files supports extended group memberships for NFS users (to go beyond the standard 16-group limit to 1,024). As a result, Azure NetApp Files attempts to look up the numeric ID in LDAP to resolve the group memberships for the user, rather than relying on the group memberships passed in the RPC packet.
+
+Due to that behavior, if that numeric ID can't be resolved to a user in LDAP, the lookup fails and access is denied, even if the requesting user has permission to access the volume or data structure.
+
+The [Allow local NFS users with LDAP option](configure-ldap-extended-groups.md) in Active Directory connections is intended to disable those LDAP lookups for NFS requests by disabling the extended group functionality. It doesn't provide "local user creation/management" within Azure NetApp Files.
+
+For more information about the option, including how it behaves with different volume security styles in Azure NetApp Files, see [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md).
+
+## Next steps
+
+* [Understand the use of LDAP with Azure NetApp Files](lightweight-directory-access-protocol.md)
+* [Allow local NFS users with LDAP option](configure-ldap-extended-groups.md)
azure-netapp-files Azure Netapp Files Configure Export Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-configure-export-policy.md
Last updated 07/28/2021
# Configure export policy for NFS or dual-protocol volumes
-You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB).
+You can configure export policy to control access to an Azure NetApp Files volume that uses the NFS protocol (NFSv3 and NFSv4.1) or the dual protocol (NFSv3 and SMB, or NFSv4.1 and SMB).
You can create up to five export policy rules.
+Once created, you can modify details of the export policy rule. The modifiable fields are:
+
+- IP address (For example, x.x.x.x)
+- CIDR range (A subnet range; for example, 0.0.0.0/0)
+- IP address comma separated list (For example, x.x.x.x, y.y.y.y)
+- Access level
+- [Export policy rule order](network-attached-storage-permissions.md#export-policy-rule-ordering)
+
+Before modifying policy rules with NFS Kerberos enabled, see [Export policy rules with NFS Kerberos enabled](network-attached-storage-permissions.md#export-policy-rule-ordering).
+ ## Configure the policy 1. On the **Volumes** page, select the volume for which you want to configure the export policy, and then select **Export policy**. You can also configure the export policy during the creation of the volume.
You can create up to five export policy rules.
![Screenshot that shows the change ownership mode option.](../media/azure-netapp-files/chown-mode-export-policy.png) ## Next steps
+* [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md)
* [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md) * [Manage snapshots](azure-netapp-files-manage-snapshots.md)
azure-netapp-files Azure Netapp Files Manage Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-manage-snapshots.md
Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio
## Steps
-1. Go to the volume that you want to create a snapshot for. Click **Snapshots**.
+1. Go to the volume that you want to create a snapshot for. Select **Snapshots**.
![Screenshot that shows how to navigate to the snapshots blade.](../media/azure-netapp-files/azure-netapp-files-navigate-to-snapshots.png)
-2. Click **+ Add snapshot** to create an on-demand snapshot for a volume.
+2. Select **+ Add snapshot** to create an on-demand snapshot for a volume.
![Screenshot that shows how to add a snapshot.](../media/azure-netapp-files/azure-netapp-files-add-snapshot.png)
Azure NetApp Files supports creating on-demand [snapshots](snapshots-introductio
![Screenshot that shows the New Snapshot window.](../media/azure-netapp-files/azure-netapp-files-new-snapshot.png)
-4. Click **OK**.
+4. Select **OK**.
## Next steps
azure-netapp-files Configure Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-access-control-lists.md
Azure NetApp Files supports access control lists (ACLs) on NFSv4.1 volumes. ACLs
ACLs contain access control entities (ACEs), which specify the permissions (read, write, etc.) of individual users or groups. When assigning user roles, provide the user email address if you're using a Linux VM joined to an Active Directory Domain. Otherwise, provide user IDs to set permissions.
+To learn more about ACLs in Azure NetApp Files, see [Understand NFSv4.x ACLs](nfs-access-control-lists.md).
+ ## Requirements - ACLs can only be configured on NFS4.1 volumes. You can [convert a volume from NFSv3 to NFSv4.1](convert-nfsv3-nfsv41.md).
ACLs contain access control entities (ACEs), which specify the permissions (read
## Next steps * [Configure NFS clients](configure-nfs-clients.md)
+* [Understand NFSv4.x ACLs](nfs-access-control-lists.md).
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
This section shows you how to set the network features option when you create a
## Edit network features option for existing volumes
-You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same NIC for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
+You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same network interface card (NIC) for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
+
+>[!IMPORTANT]
+>It's not recommended that you use the edit network features option with Terraform-managed volumes due to the risks involved. If you use Terraform-managed volumes, follow the separate instructions in [Update Terraform-managed Azure NetApp Files volume from Basic to Standard](#update-terraform-managed-azure-netapp-files-volume-from-basic-to-standard).
You can also revert the option from *Standard* back to *Basic* network features, but considerations apply and require careful planning. For example, you might need to change configurations for Network Security Groups (NSGs), user-defined routes (UDRs), and IP limits if you revert. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#constraints) for constraints and supported network topologies about Standard and Basic network features.
This feature currently doesn't support SDK.
> [!IMPORTANT] > Updating the network features option might cause a network disruption on the volumes for up to 5 minutes.
-1. Navigate to the volume that you want to change the network features option.
+1. Navigate to the volume for which you want to change the network features option.
1. Select **Change network features**. 1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option. :::image type="content" source="../media/azure-netapp-files/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="../media/azure-netapp-files/edit-network-features.png":::
+### Update Terraform-managed Azure NetApp Files volume from Basic to Standard
+
+If your Azure NetApp Files volume is managed using Terraform, editing the network features requires additional steps. Terraform-managed Azure resources store their state in a state file, which lives in your Terraform module directory or in Terraform Cloud.
+
+Updating the network features of your volume alters the underlying network sibling set of the NIC utilized by that volume. This NIC can be utilized by other volumes you own, and other NICs can share the same network sibling set. **If not performed correctly, updating the network features of one Terraform-managed volume can inadvertently update the network features of several other volumes.**
+
+>[!IMPORTANT]
+>A discontinuity between state data and remote Azure resource configurations (notably in the `network_features` argument) can result in the destruction of one or more volumes and possible data loss upon running `terraform apply`. Carefully follow the workaround outlined here to safely update the network features from Basic to Standard on Terraform-managed volumes.
+
+>[!NOTE]
+>A Terraform module usually consists solely of all top-level `*.tf` and/or `*.tf.json` configuration files in a directory, but a Terraform module can make use of module calls to explicitly include other modules into the configuration. You can [learn more about possible module structures](https://developer.hashicorp.com/terraform/language/files). To update all configuration files in your module that reference Azure NetApp Files volumes, be sure to look at all possible sources where your module can reference configuration files.
+
+The name of the state file in your Terraform module is `terraform.tfstate`. It contains the arguments and their values of all deployed resources in the module. The following example `terraform.tfstate` file highlights the `network_features` argument with the value "Basic" for an Azure NetApp Files volume:
++
+Do _not_ manually update the `terraform.tfstate` file. Likewise, don't update the `network_features` argument in the `*.tf` and `*.tf.json` configuration files until you follow the steps outlined here, because doing so causes a mismatch between the arguments of the remote volume and the local configuration file that represents it. When Terraform detects a mismatch between the arguments of remote resources and the local configuration files representing those remote resources, Terraform can destroy the remote resources and reprovision them with the arguments in the local configuration files. This can cause data loss in a volume.
+
+By following the steps outlined here, the `network_features` argument in the `terraform.tfstate` file is automatically updated by Terraform to the value "Standard" without destroying the remote volume, indicating that the network features have been successfully updated to Standard.
+
+>[!NOTE]
+> It's recommended to always use the latest Terraform version and the latest version of the `azurerm` Terraform module.
+
+#### Determine affected volumes
+
+Changing the network features for an Azure NetApp Files Volume can impact the network features of other Azure NetApp Files Volumes. Volumes in the same network sibling set must have the same network features setting. Therefore, before you change the network features of one volume, you must determine all volumes affected by the change using the Azure portal.
+
+1. Log in to the Azure portal.
+1. Navigate to the volume for which you want to change the network features option.
+1. Select **Change network features**. Do *not* select **Save**.
+1. Record the paths of the affected volumes, then select **Cancel**.
++
+You need to find and update all Terraform configuration files that define these volumes. The configuration files representing the affected volumes might not be in the same Terraform module.
+
+>[!IMPORTANT]
+>Aside from the single volume you know is managed by Terraform, the other affected volumes might not be managed by Terraform. A volume listed as being in the same network sibling set isn't necessarily managed by Terraform.
+
+#### Modify the affected volumes' configuration files
+
+You must modify the configuration files for each affected volume managed by Terraform that you discovered. Failing to update the configuration file can destroy the volume or result in data loss.
+
+>[!IMPORTANT]
+>Depending on your volume's `lifecycle` configuration block settings in your Terraform configuration file, your volume can be destroyed, with possible data loss, upon running `terraform apply`. Ensure you know which affected volumes are managed by Terraform and which are not.
+
+1. Locate the affected Terraform-managed volumes configuration files.
+1. Add `ignore_changes = [network_features]` to the volume's `lifecycle` configuration block. If the `lifecycle` block doesn't exist in that volume's configuration, add it.
+
+ :::image type="content" source="../media/azure-netapp-files/terraform-lifecycle.png" alt-text="Screenshot of the lifecycle configuration." lightbox="../media/azure-netapp-files/terraform-lifecycle.png":::
+
+1. Repeat for each affected Terraform-managed volume.
+
+The `ignore_changes` feature is intended to be used when a resource's reference to data might change after the resource is created. Adding `ignore_changes` to the `lifecycle` block allows the network features of the volumes to be changed in the Azure portal without Terraform trying to revert this argument on the next run of `terraform apply`. You can [learn more about the `ignore_changes` feature](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle).
+
+#### Update the volumes' network features
+
+1. In the Azure portal, navigate to the Azure NetApp Files volume for which you want to change network features.
+1. Select **Change network features**.
+1. In the **Action** field, confirm that it reads **Change to Standard**.
+
+ :::image type="content" source="../media/azure-netapp-files/change-network-features-standard.png" alt-text="Screenshot of confirm change of network features." lightbox="../media/azure-netapp-files/change-network-features-standard.png":::
+
+1. Select **Save**.
+1. Wait until you receive a notification that the network features update has completed. In your **Notifications**, the message reads "Successfully updated network features. Network features for network sibling set have successfully updated to 'Standard'."
+1. In the terminal, run `terraform plan` to view any potential changes. The output should indicate that the infrastructure matches the configuration with a message reading "No changes. Your infrastructure matches the configuration."
+
+ :::image type="content" source="../media/azure-netapp-files/terraform-plan-output.png" alt-text="Screenshot of terraform plan command output." lightbox="../media/azure-netapp-files/terraform-plan-output.png":::
+
+ >[!IMPORTANT]
+ > As a safety precaution, execute `terraform plan` before executing `terraform apply`. The command `terraform plan` creates a "plan" file, which contains the changes to your remote resources. This plan lets you see whether any of your affected volumes will be destroyed by running `terraform apply`.
+
+1. Run `terraform apply` to update the `terraform.tfstate` file.
+
+ Repeat for all modules containing affected volumes.
+
+ Observe the change in the value of the `network_features` argument in the `terraform.tfstate` files, which changed from "Basic" to "Standard":
+
+ :::image type="content" source="../media/azure-netapp-files/updated-terraform-module.png" alt-text="Screenshot of updated Terraform module." lightbox="../media/azure-netapp-files/updated-terraform-module.png":::
+
+#### Update Terraform-managed Azure NetApp Files volumes' configuration files for configuration parity
+
+Once you've updated the volumes' network features, you must also modify the `network_features` arguments and `lifecycle` blocks in all configuration files of affected Terraform-managed volumes. This update ensures that if you have to re-create or update the volume, it maintains its Standard network features setting.
+
+1. In the configuration file, set `network_features` to "Standard" and remove the `ignore_changes = [network_features]` line from the `lifecycle` block.
+
+ :::image type="content" source="../media/azure-netapp-files/terraform-network-features-standard.png" alt-text="Screenshot of Terraform module with Standard network features." lightbox="../media/azure-netapp-files/terraform-network-features-standard.png":::
+
+1. Repeat for each affected Terraform-managed volume.
+1. Verify that the updated configuration files accurately represent the configuration of the remote resources by running `terraform plan`. Confirm the output reads "No changes."
+1. Run `terraform apply` to complete the update.
+ ## Next steps * [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md)
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
Previously updated : 03/28/2023 Last updated : 06/14/2023 # Manage default and individual user and group quotas for a volume
Quota rules only come into effect on the CRR/CZR destination volume after the re
* To provide optimal performance, the space consumption may exceed the configured hard limit before the quota is enforced. The additional space consumption won't exceed the lower of 1 GB or five percent of the configured hard limit.     * After reaching the quota limit, if a user or administrator deletes files or directories to reduce quota usage under the limit, subsequent quota-consuming file operations may resume with a delay of up to five seconds.
-## Register the feature
-
-The feature to manage user and group quotas is currently in preview. Before using this feature for the first time, you need to register it.
-
-1. Register the feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota
- ```
-
-2. Check the status of the feature registration:
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota
- ```
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
-
-You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- ## Create new quota rules 1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume.
azure-netapp-files Manage Smb Share Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-smb-share-access-control-lists.md
+
+ Title: Manage SMB share ACLs in Azure NetApp Files
+description: Learn how to manage SMB share access control lists in Azure NetApp Files.
++++++ Last updated : 11/03/2023+
+# Manage SMB share ACLs in Azure NetApp Files
+
+SMB shares can control who can mount and access a share, as well as the access levels of users and groups in an Active Directory domain. The first level of permissions to be evaluated is share access control lists (ACLs).
+
+There are two ways to view share settings:
+
+* In the **Advanced permissions** settings
+
+* With the **Microsoft Management Console (MMC)**
+
+## Prerequisites
+
+You must have the mount path. You can retrieve this in the Azure portal by navigating to the **Overview** menu of the volume for which you want to configure share ACLs. Identify the **Mount path**.
+++
+## View SMB share ACLs with advanced permissions
+
+Advanced permissions for files, folders, and shares on an Azure NetApp Files volume can be accessed by right-clicking the Azure NetApp Files share at the top level of the UNC path (for example, `\\Azure.NetApp.Files\`) or in the Windows Explorer view when navigating to the share itself (for instance, `\\Azure.NetApp.Files\sharename`).
+
+>[!NOTE]
+>You can only view SMB share ACLs in the **Advanced permissions** settings.
+
+1. In Windows Explorer, use the mount path to open the volume. Right-click the volume and select **Properties**. Switch to the **Security** tab, then select **Advanced**.
+
+ :::image type="content" source="../media/azure-netapp-files/security-advanced-tab.png" alt-text="Screenshot of security tab." lightbox="../media/azure-netapp-files/security-advanced-tab.png":::
+
+1. In the window that opens, switch to the **Share** tab to view the share-level ACLs. You can't modify share-level ACLs from this view.
+
+ >[!NOTE]
+ >Azure NetApp Files doesn't support windows audit ACLs. Azure NetApp Files ignores any audit ACL applied to files or directories hosted on Azure NetApp Files volumes.
+
+ :::image type="content" source="../media/azure-netapp-files/view-permissions.png" alt-text="Screenshot of the permissions tab." lightbox="../media/azure-netapp-files/view-permissions.png":::
+
+ :::image type="content" source="../media/azure-netapp-files/view-shares.png" alt-text="Screenshot of the share tab." lightbox="../media/azure-netapp-files/view-shares.png":::
++
+## Modify share-level ACLs with the Microsoft Management Console
+
+You can only modify the share ACLs in Azure NetApp Files with the Microsoft Management Console (MMC).
+
+1. To modify share-level ACLs in Azure NetApp Files, open the Computer Management MMC from the Server Manager in Windows. From there, select the **Tools** menu then **Computer Management**.
+
+1. In the Computer Management window, right-click **Computer management (local)** then select **Connect to another computer**.
+
+ :::image type="content" source="../media/azure-netapp-files/computer-management-local.png" alt-text="Screenshot of the computer management window." lightbox="../media/azure-netapp-files/computer-management-local.png":::
+
+1. In the **Another computer** field, enter the fully qualified domain name (FQDN).
+
+ The FQDN comes from the mount path you retrieved in the prerequisites. For example, if the mount path is `\\ANF-West-f899.contoso.com\SMBVolume`, enter `ANF-West-f899.contoso.com` as the FQDN.
+
+1. Once connected, expand **System Tools** then select **Shared Folders > Shares**.
+1. To manage share permissions, right-click the name of the share you want to modify from the list and select **Properties**.
+
+ :::image type="content" source="../media/azure-netapp-files/share-folder.png" alt-text="Screenshot of the share folder." lightbox="../media/azure-netapp-files/share-folder.png":::
+
+1. Add, remove, or modify the share ACLs as appropriate.
+
+ :::image type="content" source="../media/azure-netapp-files/add-share.png" alt-text="Screenshot showing how to add a share." lightbox="../media/azure-netapp-files/add-share.png":::
+
+## Next step
+
+* [Understand NAS permissions in Azure NetApp Files](network-attached-storage-permissions.md)
azure-netapp-files Network Attached File Permissions Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-nfs.md
+
+ Title: Understand NFS file permissions in Azure NetApp Files
+description: Learn about mode bits in NFS workloads on Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand mode bits in Azure NetApp Files
+
+File access permissions in NFS limit what users and groups can do once a NAS volume is mounted. Mode bits are a key feature of NFS file permissions in Azure NetApp Files.
+
+## NFS mode bits
+
+Mode bit permissions in NFS provide basic permissions for files and folders, using a standard numeric representation of access controls. Mode bits can be used with either NFSv3 or NFSv4.1, but mode bits are the standard option for securing NFSv3 as defined in [RFC-1813](https://tools.ietf.org/html/rfc1813#page-22). The following table shows how those numeric values correspond to access controls.
+
+| Mode bit numeric |
+| |
+| 1 - execute (x) |
+| 2 - write (w) |
+| 3 - write/execute (wx) |
+| 4 - read (r) |
+| 5 - read/execute (rx) |
+| 6 - read/write (rw) |
+| 7 - read/write/execute (rwx) |
+
+Numeric values are applied to different segments of an access control: owner, group and everyone else, meaning that there are no granular user access controls in place for basic NFSv3. The following image shows an example of how a mode bit access control might be constructed for use with an NFSv3 object.
++
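+
+For example, here's how the numeric notation maps to permissions on a hypothetical file:
+
+```bash
+# 754 grants rwx to the owner (7), rx to the owning group (5), and r to everyone else (4).
+chmod 754 file1
+ls -l file1   # the permission string displays as -rwxr-xr--
+```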
+Azure NetApp Files doesn't support POSIX ACLs. Thus, granular ACLs are only possible with NFSv3 when using an NTFS security style volume with valid UNIX-to-Windows name mappings via a name service such as Active Directory LDAP. Alternatively, you can use NFSv4.1 with Azure NetApp Files and NFSv4.1 ACLs.
+
+The following table compares the permission granularity between NFSv3 mode bits and NFSv4.x ACLs.
+
+| NFSv3 mode bits | NFSv4.x ACLs |
+| - | - |
+| <ul><li>Set user ID on execution (setuid)</li><li>Set group ID on execution (setgid)</li><li>Save swapped text (sticky bit)</li><li>Read permission for owner</li><li>Write permission for owner</li><li>Execute permission for owner on a file; or look up (search) permission for owner in directory</li><li>Read permission for group</li><li>Write permission for group</li><li>Execute permission for group on a file; or look up (search) permission for group in directory</li><li>Read permission for others</li><li>Write permission for others</li><li>Execute permission for others on a file; or look up (search) permission for others in directory</li></ul> | <ul><li>ACE types (Allow/Deny/Audit)</li><li>Inheritance flags:</li><li>directory-inherit</li><li>file-inherit</li><li>no-propagate-inherit</li><li>inherit-only</li><li>Permissions:</li><li>read-data (files) / list-directory (directories)</li><li>write-data (files) / create-file (directories)</li><li>append-data (files) / create-subdirectory (directories)</li><li>execute (files) / change-directory (directories)</li><li>delete </li><li>delete-child</li><li>read-attributes</li><li>write-attributes</li><li>read-named-attributes</li><li>write-named-attributes</li><li>read-ACL</li><li>write-ACL</li><li>write-owner</li><li>Synchronize</li></ul> |
+
+For more information, see [Understand NFSv4.x access control lists ACLs](nfs-access-control-lists.md).
+
+### Sticky bits, setuid, and setgid
+
+When using mode bits with NFS mounts, the ownership of files and folders is based on the `uid` and `gid` of the user that created the files and folders. Additionally, when a process runs, it runs as the user that started it and thus has that user's corresponding permissions. This behavior can be controlled with special permissions (such as `setuid`, `setgid`, and the sticky bit).
+
+#### Setuid
+
+The `setuid` bit is designated by an "s" in the execute portion of the owner bit of a permission. The `setuid` bit allows an executable file to be run as the owner of the file rather than as the user attempting to execute the file. For instance, the `/bin/passwd` application has the `setuid` bit enabled by default, therefore the application runs as root when a user tries to change their password.
+
+```bash
+# ls -la /bin/passwd
+-rwsr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd
+```
+If the `setuid` bit is removed, the password change functionality won't work properly.
+
+```bash
+# ls -la /bin/passwd
+-rwxr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd
+user2@parisi-ubuntu:/mnt$ passwd
+Changing password for user2.
+Current password:
+New password:
+Retype new password:
+passwd: Authentication token manipulation error
+passwd: password unchanged
+```
+
+When the `setuid` bit is restored, the passwd application runs as the owner (root) and works properly, but only for the user running the passwd command.
+
+```bash
+# chmod u+s /bin/passwd
+# ls -la /bin/passwd
+-rwsr-xr-x 1 root root 68208 Nov 29 2022 /bin/passwd
+# su user2
+user2@parisi-ubuntu:/mnt$ passwd user1
+passwd: You may not view or modify password information for user1.
+user2@parisi-ubuntu:/mnt$ passwd
+Changing password for user2.
+Current password:
+New password:
+Retype new password:
+passwd: password updated successfully
+```
+
+Setuid has no effect on directories.
+
+#### Setgid
+
+The `setgid` bit can be used on both files and directories.
+
+With directories, setgid can be used as a way to inherit the owner group for files and folders created below the parent directory with the bit set. Like `setuid`, the executable bit is changed to an "s" or an "S."
+
+>[!NOTE]
+>Capital "S" means that the executable bit hasn't been set, such as if the permissions on the directory are "6" or "rw."
+
+For example:
+
+```bash
+# chmod g+s testdir
+# ls -la | grep testdir
+drwxrwSrw- 2 user1 group1 4096 Oct 11 16:34 testdir
+# who
+root ttyS0 2023-10-11 16:28
+# touch testdir/file
+# ls -la testdir
+total 8
+drwxrwSrw- 2 user1 group1 4096 Oct 11 17:09 .
+drwxrwxrwx 5 root root 4096 Oct 11 16:37 ..
+-rw-r--r-- 1 root group1 0 Oct 11 17:09 file
+```
+
+For files, setgid behaves similarly to `setuid`: executables run using the group permissions of the group owner. If a user is in the owner group, said user has access to run the executable when setgid is set. If they aren't in the group, they don't get access. For instance, if an administrator wants to limit which users could run the `mkdir` command on a client, they can use setgid.
+
+Normally, `/bin/mkdir` has 755 permissions with root ownership. This means anyone can run `mkdir` on a client.
+
+```bash
+# ls -la /bin/mkdir
+-rwxr-xr-x 1 root root 88408 Sep 5 2019 /bin/mkdir
+```
+
+To modify the behavior to limit which users can run the `mkdir` command, change the group that owns the `mkdir` application, change the permissions for `/bin/mkdir` to 750, and then add the setgid bit to `mkdir`.
+
+```bash
+# chgrp group1 /bin/mkdir
+# chmod g+s /bin/mkdir
+# chmod 750 /bin/mkdir
+# ls -la /bin/mkdir
+-rwxr-s 1 root group1 88408 Sep 5 2019 /bin/mkdir
+```
+As a result, the application runs with permissions for `group1`. If the user isn't a member of `group1`, the user doesn't get access to run `mkdir`.
+
+`User1` is a member of `group1`, but `user2` isn't:
+
+```bash
+# id user1
+uid=1001(user1) gid=1001(group1) groups=1001(group1)
+# id user2
+uid=1002(user2) gid=2002(group2) groups=2002(group2)
+```
+After this change, `user1` can run `mkdir`, but `user2` can't since `user2` isn't in `group1`.
+
+```bash
+# su user1
+$ mkdir test
+$ ls -la | grep test
+drwxr-xr-x 2 user1 group1 4096 Oct 11 18:48 test
+
+# su user2
+$ mkdir user2-test
+bash: /usr/bin/mkdir: Permission denied
+```
+#### Sticky bit
+
+The sticky bit is used for directories only and, when used, controls which files can be modified in that directory regardless of their mode bit permissions. When a sticky bit is set, only file owners (and root) can modify files, even if file permissions are shown as "777."
+
+In the following example, the directory "sticky" lives in an Azure NetApp Files volume and has wide open permissions, but the sticky bit is set.
+
+```bash
+# mkdir sticky
+# chmod 777 sticky
+# chmod o+t sticky
+# ls -la | grep sticky
+drwxrwxrwt 2 root root 4096 Oct 11 19:24 sticky
+```
+
+Inside the folder are files owned by different users. All have 777 permissions.
+
+```bash
+# ls -la
+total 8
+drwxrwxrwt 2 root root 4096 Oct 11 19:29 .
+drwxrwxrwx 8 root root 4096 Oct 11 19:24 ..
+-rwxr-xr-x 1 user2 group1 0 Oct 11 19:29 4913
+-rwxrwxrwx 1 UNIXuser group1 40 Oct 11 19:28 UNIX-file
+-rwxrwxrwx 1 user1 group1 33 Oct 11 19:27 user1-file
+-rwxrwxrwx 1 user2 group1 34 Oct 11 19:27 user2-file
+```
+
+Normally, anyone would be able to modify or delete these files. But because the parent folder has a sticky bit set, only the file owners can make changes to the files.
+
+For instance, user1 can't modify or delete `user2-file`:
+
+```bash
+# su user1
+$ vi user2-file
+Only user2 can modify this file.
+Hi
+~
+"user2-file"
+"user2-file" E212: Can't open file for writing
+$ rm user2-file
+rm: can't remove 'user2-file': Operation not permitted
+```
+
+Conversely, `user2` can't modify or delete `user1-file` since they don't own the file and the sticky bit is set on the parent directory.
+
+```bash
+# su user2
+$ vi user1-file
+Only user1 can modify this file.
+Hi
+~
+"user1-file"
+"user1-file" E212: Can't open file for writing
+$ rm user1-file
+rm: can't remove 'user1-file': Operation not permitted
+```
+
+Root, however, can still remove the files.
+
+```bash
+# rm UNIX-file
+```
+
+To change the ability of root to modify files, you must squash root to a different user by way of an Azure NetApp Files export policy rule. For more information, see [root squashing](network-attached-storage-permissions.md#root-squashing).
+
+### Umask
+
+In NFS operations, permissions can be controlled through mode bits, which leverage numerical attributes to determine file and folder access. These mode bits determine read, write, execute, and special attributes. Numerically, permissions are represented as:
+
+* Execute = 1
+* Write = 2
+* Read = 4
+
+Total permissions are determined by adding or subtracting a combination of the preceding. For example:
+
+* 4 + 2 + 1 = 7 (can do everything)
+* 4 + 2 = 6 (read/write)
+
+For more information, see [UNIX Permissions Help](http://www.zzee.com/solutions/unix-permissions.shtml).
+
+Umask is a functionality that allows an administrator to restrict the level of permissions allowed to a client. By default, the umask for most clients is set to 0022, which means files and folders created from that client have that umask applied. The umask is subtracted from the base permissions of the object. If a volume has 0777 permissions and is mounted using NFS to a client with a umask of 0022, objects written from the client to that volume have 0755 access (0777 - 0022).
+
+```bash
+# umask
+0022
+# umask -S
+u=rwx,g=rx,o=rx
+```
+However, many operating systems don't allow files to be created with execute permissions, but they do allow folders to have the correct permissions. Thus, files created with a umask of 0022 might end up with permissions of 0644. The following example uses RHEL 6.5:
+
+```bash
+# umask
+0022
+# cd /cdot
+# mkdir umask_dir
+# ls -la | grep umask_dir
+drwxr-xr-x. 2 root root 4096 Apr 23 14:39 umask_dir
+
+# touch umask_file
+# ls -la | grep umask_file
+-rw-r--r--. 1 root root 0 Apr 23 14:39 umask_file
+```
+
+## Next steps
+
+* [Understand auxiliary/supplemental groups with NFS](auxiliary-groups.md)
+* [Understand NFSv4.x access control lists](nfs-access-control-lists.md)
azure-netapp-files Network Attached File Permissions Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions-smb.md
+
+ Title: Understand SMB file permissions in Azure NetApp Files
+description: Learn about SMB file permissions options in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand SMB file permissions in Azure NetApp Files
+
+SMB volumes in Azure NetApp Files can leverage NTFS security styles to make use of NTFS access control lists (ACLs) for access controls.
+
+NTFS ACLs provide granular permissions and ownership for files and folders by way of access control entries (ACEs). Directory permissions can also be set to enable or disable inheritance of permissions.
++
+For a complete overview of NTFS-style ACLs, see [Microsoft Access Control overview](/windows/security/identity-protection/access-control/access-control).
+
+## Next steps
+
+* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
azure-netapp-files Network Attached File Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-file-permissions.md
+
+ Title: Understand NAS file permissions in Azure NetApp Files
+description: Learn about NAS file permissions options in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand NAS file permissions in Azure NetApp Files
+
+To control access to specific files and folders in a file system, permissions can be applied. File and folder permissions are more granular than share permissions. The following table shows the differences in permission attributes that file and share permissions can apply.
+
+| SMB share permission | NFS export policy rule permissions | SMB file permission attributes | NFS file permission attributes |
+| | | | |
+| <ul><li>Read</li><li>Change</li><li>Full control</li></ul> | <ul><li>Read</li><li>Write</li><li>Root</li></ul> | <ul><li>Full control</li><li>Traverse folder/execute</li><li>Read data/list folders</li><li>Read attributes</li><li>Read extended attributes</li><li>Write data/create files</li><li>Append data/create folders</li><li>Write attributes</li><li>Write extended attributes</li><li>Delete subfolders/files</li><li>Delete</li><li>Read permissions</li><li>Change permissions</li><li>Take ownership</li></ul> | **NFSv3** <br /> <ul><li>Read</li><li>Write</li><li>Execute</li></ul> <br /> **NFSv4.1** <br /> <ul><li>Read data/list files and folders</li><li>Write data/create files and folders</li><li>Append data/create subdirectories</li><li>Execute files/traverse directories</li><li>Delete files/directories</li><li>Delete subdirectories (directories only)</li><li>Read attributes (GETATTR)</li><li>Write attributes (SETATTR/chmod)</li><li>Read named attributes</li><li>Write named attributes</li><li>Read ACLs</li><li>Write ACLs</li><li>Write owner (chown)</li><li>Synchronize I/O</li></ul> |
+
+File and folder permissions can overrule share permissions because the most restrictive permissions take precedence over less restrictive ones.
+
+## Permission inheritance
+
+Folders can be assigned inheritance flags, which means that parent folder permissions propagate to child objects. This can help simplify permission management in high-file-count environments. Inheritance can be disabled on specific files or folders as needed.
+
+* In Windows SMB shares, inheritance is controlled in the advanced permission view.
++
+* For NFSv3, permission inheritance doesn't work via ACL, but instead can be mimicked using umask and setgid flags.
+* With NFSv4.1, permission inheritance can be handled using inheritance flags on ACLs, as shown in the sketch after this list.
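+
+The following is a brief sketch of an inheritable NFSv4.1 ACE (the mount path, principal, and permission string are illustrative assumptions), using the `nfs4_setfacl` and `nfs4_getfacl` utilities:
+
+```bash
+# Grant group1 read/execute access and flag the ACE to inherit to files (f) and directories (d).
+nfs4_setfacl -a A:fdg:group1@contoso.com:rxtncy /mnt/volume/projects
+nfs4_getfacl /mnt/volume/projects   # verify the inherited ACE appears
+```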
+
+## Next steps
+
+* [Understand NFS file permissions](network-attached-file-permissions-nfs.md)
+* [Understand SMB file permissions](network-attached-file-permissions-smb.md)
+* [Understand NAS share permissions in Azure NetApp Files](network-attached-storage-permissions.md)
azure-netapp-files Network Attached Storage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-permissions.md
+
+ Title: Understand NAS share permissions in Azure NetApp Files
+description: Learn about NAS share permissions options in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand NAS share permissions in Azure NetApp Files
+
+Azure NetApp Files provides several ways to secure your NAS data. One aspect of that security is permissions. In NAS, permissions can be broken down into two categories:
+
+* **Share access permissions** limit who can mount a NAS volume. NFS controls share access permissions via IP address or hostname. SMB controls this via user and group access control lists (ACLs).
+* **[File access permissions](network-attached-file-permissions.md)** limit what users and groups can do once a NAS volume is mounted. File access permissions are applied to individual files and folders.
+
+Azure NetApp Files permissions rely on NAS standards, simplifying the process of securing NAS volumes for administrators and end users with familiar methods.
+
+>[!NOTE]
+>If conflicting permissions are set on shares and files, the most restrictive permission is applied. For instance, if a user has read-only access at the *share* level and full control at the *file* level, the user effectively receives read-only access.
+
+## Share access permissions
+
+The initial entry point to be secured in a NAS environment is access to the share itself. In most cases, access should be restricted to only the users and groups that need access to the share. With share access permissions, you can lock down who can even mount the share in the first place.
+
+Since the most restrictive permissions override other permissions, and a share is the main entry point to the volume (with the fewest access controls), share permissions should abide by a funnel logic, where the share allows more access than the underlying files and folders. The funnel logic enacts more granular, restrictive controls.
++
+## NFS export policies
+
+Volumes in Azure NetApp Files are shared out to NFS clients by exporting a path that is accessible to a client or set of clients. Both NFSv3 and NFSv4.x use the same method to limit access to an NFS share in Azure NetApp Files: export policies.
+
+An export policy is a container for a set of access rules that are listed in order of desired access. These rules control access to NFS shares by using client IP addresses or subnets. If a client isn't listed in an export policy rule (either allowing or explicitly denying access), then that client is unable to mount the NFS export. Since the rules are read in sequential order, if a more restrictive policy rule is applied to a client (for example, by way of a subnet), then it's read and applied first. Subsequent policy rules that allow more access are ignored. This diagram shows a client that has an IP of 10.10.10.10 getting read-only access to a volume because the subnet 0.0.0.0/0 (every client in every subnet) is set to read-only and is listed first in the policy.
++
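+
+As an illustrative sketch (the volume path, IP address, and mount options are assumptions), a client that doesn't match any export policy rule fails to mount, while a matching client succeeds:
+
+```bash
+# From a client not covered by any export policy rule:
+sudo mount -t nfs -o vers=3 10.10.10.4:/myvolume /mnt/myvolume
+# mount.nfs: access denied by server while mounting 10.10.10.4:/myvolume
+
+# From a client that matches an allowing rule, the same command succeeds:
+sudo mount -t nfs -o vers=3 10.10.10.4:/myvolume /mnt/myvolume
+```
+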
+### Export policy rule options available in Azure NetApp Files
+
+When creating an Azure NetApp Files volume, there are several options configurable for control of access to NFS volumes.
+
+* **Index**: specifies the order in which an export policy rule is evaluated. If a client falls under multiple rules in the policy, then the first applicable rule applies to the client and subsequent rules are ignored.
+* **Allowed clients**: specifies which clients a rule applies to. This value can be a client IP address, a comma-separated list of IP addresses, or a subnet including multiple clients. The hostname and netgroup values aren't supported in Azure NetApp Files.
+* **Access**: specifies the level of access allowed to non-root users. For NFS volumes without Kerberos enabled, the options are: Read only, Read & write, or No access. For volumes with Kerberos enabled, the options are: Kerberos 5, Kerberos 5i, or Kerberos 5p.
+* **Root access**: specifies how the root user is treated in NFS exports for a given client. If set to "On," the root is root. If set to "Off," the [root is squashed](#root-squashing) to the anonymous user ID 65534.
+* **chown mode**: controls what users can run change ownership commands on the export (chown). If set to "Restricted," only the root user can run chown. If set to "Unrestricted," any user with the proper file/folder permissions can run chown commands.
+
+### Default policy rule in Azure NetApp Files
+
+When creating a new volume, a default policy rule is created. The default policy prevents a scenario where a volume is created without policy rules, which would restrict access for any client attempting access to the export. If there are no rules, there is no access.
+
+The default rule has the following values:
+
+* Index = 1
+* Allowed clients = 0.0.0.0/0 (all clients allowed access)
+* Access = Read & write
+* Root access = On
+* Chown mode = Restricted
+
+These values can be changed at volume creation or after the volume has been created.
+
+### Export policy rules with NFS Kerberos enabled in Azure NetApp Files
+
+[NFS Kerberos](configure-kerberos-encryption.md) can be enabled only on volumes using NFSv4.1 in Azure NetApp Files. Kerberos provides added security by offering different modes of encryption for NFS mounts, depending on the Kerberos type in use.
+
+When Kerberos is enabled, the values for the export policy rules change to allow specification of which Kerberos mode should be allowed. Multiple Kerberos security modes can be enabled in the same rule if you need access to more than one.
+
+Those security modes include:
+
+* **Kerberos 5**: Only initial authentication is encrypted.
+* **Kerberos 5i**: User authentication plus integrity checking.
+* **Kerberos 5p**: User authentication, integrity checking and privacy. All packets are encrypted.
+
+Only Kerberos-enabled clients are able to access volumes with export rules specifying Kerberos; no `AUTH_SYS` access is allowed when Kerberos is enabled.
+
+### Root squashing
+
+There are some scenarios where you want to restrict root access to an Azure NetApp Files volume. Since root has unfettered access to anything in an NFS volume (even when explicitly denying access to root using mode bits or ACLs), the only way to limit root access is to tell the NFS server that root from a specific client is no longer root.
+
+In export policy rules, select "Root access: off" to squash root to a non-root, anonymous user ID of 65534. This means that root on the specified clients is now user ID 65534 (typically `nfsnobody` on NFS clients) and has access to files and folders based on the ACLs/mode bits specified for that user. For mode bits, the access permissions generally fall under the "Everyone" access rights. Additionally, files and folders written as "root" from clients impacted by root squash rules are created as the `nfsnobody:65534` user. If you require root to be root, set "Root access" to "On."
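+
+As a minimal sketch (the mount path and output are illustrative assumptions), a root-squashed client writing a file as root ends up owning it as the anonymous user:
+
+```bash
+# On a client covered by a rule with "Root access: off":
+sudo touch /mnt/myvolume/rootfile
+ls -ln /mnt/myvolume/rootfile
+# -rw-r--r-- 1 65534 65534 0 Nov 13 10:05 /mnt/myvolume/rootfile   # owned by the anonymous UID
+```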
+
+To learn more about managing export policies, see [Configure export policies for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
+
+#### Export policy rule ordering
+
+The order of export policy rules determines how they are applied. The first rule in the list that applies to an NFS client is the rule used for that client. When using CIDR ranges/subnets for export policy rules, an NFS client in that range may receive unwanted access due to the range in which it's included.
+
+Consider the following example:
++
+- The first rule in the index includes *all clients* in *all subnets* by way of the default policy rule using 0.0.0.0/0 as the **Allowed clients** entry. That rule allows "Read & write" access to all clients for that Azure NetApp Files NFSv3 volume.
+- The second rule in the index explicitly lists NFS client 10.10.10.10 and is configured to limit access to "Read only," with no root access (root is squashed).
+
+As it stands, the client 10.10.10.10 receives access due to the first rule in the list. The next rule is never evaluated for access restrictions, so 10.10.10.10 gets Read & write access even though "Read only" is desired. Root is also root, rather than [being squashed](#root-squashing).
+
+To fix this and set access to the desired level, the rules can be re-ordered to place the desired client access rule above any subnet/CIDR rules. You can reorder export policy rules in the Azure portal by dragging the rules or using the **Move** commands in the `...` menu in the row for each export policy rule.
+
+>[!NOTE]
+>You can use the [Azure NetApp Files CLI or REST API](azure-netapp-files-sdk-cli.md) only to add or remove export policy rules.
+
+## SMB shares
+
+SMB shares enable end users to access SMB or dual-protocol volumes in Azure NetApp Files. Access controls for SMB shares are limited in the Azure NetApp Files control plane to only SMB security options such as access-based enumeration and non-browsable share functionality. These security options are configured during volume creation with the **Edit volume** functionality.
++
+Share-level permission ACLs are managed through a Windows MMC console rather than through Azure NetApp Files.
+
+### Security-related share properties
+
+Azure NetApp Files offers multiple share properties to enhance security for administrators.
+
+#### Access-based enumeration
+
+[Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is an Azure NetApp Files SMB volume feature that limits enumeration of files and folders (that is, listing the contents) in SMB only to users with allowed access on the share. For instance, if a user doesn't have access to read a file or folder in a share with access-based enumeration enabled, then the file or folder doesn't show up in directory listings. In the following example, a user (`smbuser`) doesn't have access to read a folder named "ABE" in an Azure NetApp Files SMB volume. Only `contosoadmin` has access.
++
+In the below example, access-based enumeration is disabled, so the user has access to the `ABE` directory of `SMBVolume`.
++
+In the next example, access-based enumeration is enabled, so the `ABE` directory of `SMBVolume` doesn't display for the user.
++
+The permissions also extend to individual files. In the below example, access-based enumeration is disabled and `ABE-file` displays to the user.
++
+With access-based enumeration enabled, `ABE-file` doesn't display to the user.
++
+#### Non-browsable shares
+
+The non-browsable shares feature in Azure NetApp Files limits clients from browsing for an SMB share by hiding the share from view in Windows Explorer or when listing shares in "net view." Only end users that know the absolute paths to the share are able to find the share.
+
+In the following image, the non-browsable share property isn't enabled for `SMBVolume`, so the volume displays in the listing of the file server (using `\\servername`).
++
+With non-browsable shares enabled on `SMBVolume` in Azure NetApp Files, the same view of the file server excludes `SMBVolume`.
+
+In the next image, the share `SMBVolume` has non-browsable shares enabled in Azure NetApp Files. When that is enabled, this is the view of the top level of the file server.
++
+Even though the volume in the listing cannot be seen, it remains accessible if the user knows the file path.
++
+#### SMB3 encryption
+
+SMB3 encryption is an Azure NetApp Files SMB volume feature that enforces encryption over the wire for SMB clients for greater security in NAS environments. The following image shows a screen capture of network traffic when SMB encryption is disabled. Sensitive information, such as file names and file handles, is visible.
++
+When SMB encryption is enabled, the packets are marked as encrypted, and no sensitive information can be seen. Instead, it's shown as "Encrypted SMB3 data."
++
+#### SMB share ACLs
+
+SMB shares can control access to who can mount and access a share, as well as control access levels to users and groups in an Active Directory domain. The first level of permissions that get evaluated are share access control lists (ACLs).
+
+SMB share permissions are more basic than file permissions: they only apply read, change, or full control. Share permissions can be overridden by file permissions and file permissions can be overridden by share permissions; the most restrictive permission is the one abided by. For instance, if the group "Everyone" is given full control on the share (the default behavior), and specific users have read-only access to a folder via a file-level ACL, then read access is applied to those users. Any other users not listed explicitly in the ACL have full control.
+
+Conversely, if the share permission is set to "Read" for a specific user, but the file-level permission is set to full control for that user, "Read" access is enforced.
+
+In dual-protocol NAS environments, SMB share ACLs only apply to SMB users. NFS clients leverage export policies and rules for share access rules. As such, controlling permissions at the file and folder level is preferred over share-level ACLs, especially for dual-protocol NAS volumes.
+
+To learn how to configure ACLs, see [Manage SMB share ACLs in Azure NetApp Files](manage-smb-share-access-control-lists.md).
+
+## Next steps
+
+* [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md)
+* [Understand NAS](network-attached-storage-concept.md)
+* [Understand NAS permissions](network-attached-storage-permissions.md)
+* [Manage SMB share ACLs in Azure NetApp Files](manage-smb-share-access-control-lists.md)
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
+
+ Title: Understand NFSv4.x access control lists in Azure NetApp Files
+description: Learn about using NFSv4.x access control lists in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 11/13/2023+++
+# Understand NFSv4.x access control lists in Azure NetApp Files
+
+The NFSv4.x protocol can provide access control in the form of [access control lists (ACLs)](/windows/win32/secauthz/access-control-lists), which are conceptually similar to the ACLs used in [SMB via Windows NTFS permissions](network-attached-file-permissions-smb.md). An NFSv4.x ACL consists of individual [Access Control Entries (ACEs)](/windows/win32/secauthz/access-control-entries), each of which provides an access control directive to the server.
++
+Each NFSv4.x ACL is created with the format of `type:flags:principal:permissions`.
+
+* **Type**: the type of ACL being defined. Valid choices include Access (A), Deny (D), Audit (U), and Alarm (L). Azure NetApp Files supports the Access, Deny, and Audit ACL types, but Audit ACLs, while they can be set, don't currently produce audit logs.
+* **Flags**: adds extra context for an ACL. There are three kinds of ACE flags: group, inheritance, and administrative. For more information on flags, see [NFSv4.x ACE flags](#nfsv4x-ace-flags).
+* **Principal**: defines the user or group that is being assigned the ACL. A principal on an NFSv4.x ACL uses the format of name@ID-DOMAIN-STRING.COM. For more detailed information on principals, see [NFSv4.x user and group principals](#nfsv4x-user-and-group-principals).
+* **Permissions**: where the access level for the principal is defined. Each permission is designated by a single letter (for instance, read gets "r", write gets "w", and so on). Full access incorporates each available permission letter. For more information, see [NFSv4.x permissions](#nfsv4x-permissions).
+
+`A:g:group1@contoso.com:rwatTnNcCy` is an example of a valid ACL, following the `type:flags:principal:permissions` format. The example ACL grants full access to the group `group1` in the contoso.com ID domain.
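+
+As a brief sketch (assuming an NFSv4.1 mount with the `nfs4_setfacl`/`nfs4_getfacl` utilities installed; the path is illustrative), that ACE could be applied and verified as follows:
+
+```bash
+nfs4_setfacl -a A:g:group1@contoso.com:rwatTnNcCy /mnt/volume/file
+nfs4_getfacl /mnt/volume/file   # the new group ACE should appear in the listing
+```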
+
+## NFSv4.x ACE flags
+
+An ACE flag helps provide more information about an ACE in an ACL. For instance, if a group ACE is added to an ACL, a group flag needs to be used to designate that the principal is a group and not a user. It's possible in Linux environments to have a user and a group with identical names, so for an ACE to be honored, the NFS server needs to know what type of principal is being defined.
+
+Other flags can be used to control ACEs, such as inheritance and administrative flags.
+
+### Access and deny flags
+
+Access (A) and deny (D) flags are used to control security ACE types. An access ACE controls the level of access permissions on a file or folder for a principal. A deny ACE explicitly prohibits a principal from accessing a file or folder, even if an access ACE is set that would allow that principal to access the object. Deny ACEs always overrule access ACEs. In general, avoid using deny ACEs, as NFSv4.x ACLs follow a "default deny" model, meaning if an ACL isn't added, then deny is implicit. Deny ACEs can create unnecessary complications in ACL management.
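+
+For illustration only (the path and principal are assumptions), a deny ACE that blocks write-related access for a single user might look like the following; in most cases, simply omitting the access ACE is the simpler approach:
+
+```bash
+# Explicitly deny write (w), append (a), and write-attributes (T) for user1.
+nfs4_setfacl -a D::user1@contoso.com:waT /mnt/volume/protected-file
+nfs4_getfacl /mnt/volume/protected-file
+```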
+
+### Inheritance flags
+
+Inheritance flags control how ACLs behave on files created below a parent directory with the inheritance flag set. When an inheritance flag is set, files and/or directories inherit the ACLs from the parent folder. Inheritance flags can only be applied to directories, so when a subdirectory is created, it inherits the flag. Files created below a parent directory with an inheritance flag inherit ACLs, but not the inheritance flags.
+
+The following table describes available inheritance flags and their behaviors.
+
+| Inheritance flag | Behavior |
+| - | |
+| d | - Directories below the parent directory inherit the ACL <br> - Inheritance flag is also inherited |
+| f | - Files below the parent directory inherit the ACL <br> - Files don't set inheritance flag |
+| i | Inherit-only; ACL doesn't apply to the current directory but must apply inheritance to objects below the directory |
+| n | - No propagation of inheritance <br> - After the ACL is inherited, the inherit flags are cleared on the objects below the parent |
+
+### NFSv4.x ACL examples
+
+In the following example, there are three different ACEs with distinct inheritance flags:
+* directory inherit only (di)
+* file inherit only (fi)
+* both file and directory inherit (fdi)
+
+```bash
+# nfs4_getfacl acl-dir
+
+# file: acl-dir/
+A:di:user1@CONTOSO.COM:rwaDxtTnNcCy
+A:fdi:user2@CONTOSO.COM:rwaDxtTnNcCy
+A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+```
+
+`User1` has a directory inherit ACL only. On a subdirectory created below the parent, the ACL is inherited, but on a file below the parent, it isn't.
+
+```bash
+# nfs4_getfacl acl-dir/inherit-dir
+
+# file: acl-dir/inherit-dir
+A:d:user1@CONTOSO.COM:rwaDxtTnNcCy
+A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy
+A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+
+# nfs4_getfacl acl-dir/inherit-file
+
+# file: acl-dir/inherit-file
+ << ACL missing
+A::user2@CONTOSO.COM:rwaxtTnNcCy
+A::user3@CONTOSO.COM:rwaxtTnNcCy
+A::OWNER@:rwatTnNcCy
+A:g:GROUP@:rtncy
+A::EVERYONE@:rtncy
+```
+
+`User2` has a file and directory inherit flag. As a result, both files and directories below a directory with that ACE entry inherit the ACL, but files won't inherit the flag.
+
+```bash
+# nfs4_getfacl acl-dir/inherit-dir
+
+# file: acl-dir/inherit-dir
+A:d:user1@CONTOSO.COM:rwaDxtTnNcCy
+A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy
+A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+
+# nfs4_getfacl acl-dir/inherit-file
+
+# file: acl-dir/inherit-file
+A::user2@CONTOSO.COM:rwaxtTnNcCy << no flag
+A::user3@CONTOSO.COM:rwaxtTnNcCy
+A::OWNER@:rwatTnNcCy
+A:g:GROUP@:rtncy
+A::EVERYONE@:rtncy
+```
+
+`User3` only has a file inherit flag. As a result, only files below the directory with that ACE entry inherit the ACL, but they don't inherit the flag since it can only be applied to directory ACEs.
+
+```bash
+# nfs4_getfacl acl-dir/inherit-dir
+
+# file: acl-dir/inherit-dir
+A:d:user1@CONTOSO.COM:rwaDxtTnNcCy
+A:fd:user2@CONTOSO.COM:rwaDxtTnNcCy
+A:fi:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+
+# nfs4_getfacl acl-dir/inherit-file
+
+# file: acl-dir/inherit-file
+A::user2@CONTOSO.COM:rwaxtTnNcCy
+A::user3@CONTOSO.COM:rwaxtTnNcCy << no flag
+A::OWNER@:rwatTnNcCy
+A:g:GROUP@:rtncy
+A::EVERYONE@:rtncy
+```
+
+When a "no-propagate" (n) flag is set on an ACL, the flags clear on subsequent directory creations below the parent. In the following example, `user2` has the `n` flag set. As a result, the subdirectory clears the inherit flags for that principal and objects created below that subdirectory don't inherit the ACE from `user2`.
+
+```bash
+# nfs4_getfacl /mnt/acl-dir
+
+# file: /mnt/acl-dir
+A:di:user1@CONTOSO.COM:rwaDxtTnNcCy
+A:fdn:user2@CONTOSO.COM:rwaDxtTnNcCy
+A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+
+# nfs4_getfacl inherit-dir/
+
+# file: inherit-dir/
+A:d:user1@CONTOSO.COM:rwaDxtTnNcCy
+A::user2@CONTOSO.COM:rwaDxtTnNcCy << flag cleared
+A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+
+# mkdir subdir
+# nfs4_getfacl subdir
+
+# file: subdir
+A:d:user1@CONTOSO.COM:rwaDxtTnNcCy
+<< ACL not inherited
+A:fd:user3@CONTOSO.COM:rwaDxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rxtncy
+A::EVERYONE@:rxtncy
+```
+
+Inherit flags are a way to more easily manage your NFSv4.x ACLs, sparing you from explicitly setting an ACL each time you need one.
+
+### Administrative flags
+
+Administrative flags in NFSv4.x ACLs are special flags that are used only with Audit and Alarm ACL types. These flags define either success or failure access attempts for actions to be performed. For instance, if it's desired to audit failed access attempts to a specific file, then an administrative flag of "F" can be used to control that behavior.
+
+This Audit ACL is an example of that, where `user1` is audited for failed access attempts for any permission level: `U:F:user1@contoso.com:rwatTnNcCy`.
+
+Azure NetApp Files only supports setting administrative flags for Audit ACEs. File access logging isn't currently supported. Alarm ACEs aren't supported in Azure NetApp Files.
+
+## NFSv4.x user and group principals
+
+With NFSv4.x ACLs, user and group principals define the specific objects that an ACE should apply to. Principals generally follow a format of name@ID-DOMAIN-STRING.COM. The "name" portion of a principal can be a user or group, but that user or group must be resolvable in Azure NetApp Files via the LDAP server connection when specifying the NFSv4.x ID domain. If the name@domain isn't resolvable by Azure NetApp Files, then the ACL operation fails with an "invalid argument" error.
+
+```bash
+# nfs4_setfacl -a A::noexist@CONTOSO.COM:rwaxtTnNcCy inherit-file
+Failed setxattr operation: Invalid argument
+```
+
+You can check within Azure NetApp Files if a name can be resolved using the LDAP group ID list. Navigate to **Support + Troubleshooting** then **LDAP Group ID list**.
+
+### Local user and group access via NFSv4.x ACLs
+
+Local users and groups can also be used on an NFSv4.x ACL if only the numeric ID is specified in the ACL. User names or numeric IDs with a domain ID specified fail.
+
+For instance:
+
+```bash
+# nfs4_setfacl -a A:fdg:3003:rwaxtTnNcCy NFSACL
+# nfs4_getfacl NFSACL/
+A:fdg:3003:rwaxtTnNcCy
+A::OWNER@:rwaDxtTnNcCy
+A:g:GROUP@:rwaDxtTnNcy
+A::EVERYONE@:rwaDxtTnNcy
+
+# nfs4_setfacl -a A:fdg:3003@CONTOSO.COM:rwaxtTnNcCy NFSACL
+Failed setxattr operation: Invalid argument
+
+# nfs4_setfacl -a A:fdg:users:rwaxtTnNcCy NFSACL
+Failed setxattr operation: Invalid argument
+```
+
+When a local user or group ACL is set, any user or group that corresponds to the numeric ID on the ACL receives access to the object. For local group ACLs, a user passes its group memberships to Azure NetApp Files. If the numeric ID of the group with access to the file via the user's request is shown to the Azure NetApp Files NFS server, then access is allowed as per the ACL.
+
+The credentials passed from client to server can be seen via a packet capture as seen below.
++
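+
+A capture like the one referenced above can be taken with standard tooling; for example (the interface name is an assumption):
+
+```bash
+# Capture NFS traffic (TCP port 2049) to inspect the AUTH_SYS credentials sent by the client.
+sudo tcpdump -i eth0 port 2049 -w /tmp/nfs-creds.pcap
+```
+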
+**Caveats:**
+
+* Using local users and groups for ACLs means that every client accessing the files/folders needs to have matching user and group IDs.
+* When using a numeric ID for an ACL, Azure NetApp Files implicitly trusts that the incoming request is valid and that the user requesting access is who they say they are and is a member of the groups they claim to be a member of. A user or group numeric ID can be spoofed if a bad actor knows the numeric ID and can access the network using a client with the ability to create users and groups locally.
+* If a user is a member of more than 16 groups, then any group after the sixteenth group (in alphanumeric order) is denied access to the file or folder, unless LDAP and extended group support is used.
+* LDAP and full name@domain name strings are highly recommended when using NFSv4.x ACLs for better manageability and security. A centrally managed user and group repository is easier to maintain and harder to spoof, thus making unwanted user access less likely.
+
+### NFSv4.x ID domain
+
+The ID domain is an important component of the principal, where an ID domain must match on both client and within Azure NetApp Files for user and group names (specifically, root) to show up properly on file/folder ownerships.
+
+Azure NetApp Files defaults the NFSv4.x ID domain to the DNS domain settings for the volume. NFS clients also default to the DNS domain for the NFSv4.x ID domain. If the client's DNS domain is different from the Azure NetApp Files DNS domain, then a mismatch occurs. When listing file permissions with commands such as `ls`, users/groups show up as "nobody."
+
+When a domain mismatch occurs between the NFS client and Azure NetApp Files, check the client logs for errors similar to:
+
+```bash
+August 19 13:14:29 centos7 nfsidmap[17481]: nss_getpwnam: name 'root@microsoft.com' does not map into domain 'CONTOSO.COM'
+```
+
+The NFS client's ID domain can be overridden using the /etc/idmapd.conf file's "Domain" setting. For example: `Domain = CONTOSO.COM`.
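+
+A minimal sketch of making that change on a client (the file location and exact steps can vary by distribution):
+
+```bash
+# In /etc/idmapd.conf, under [General], set the ID domain to match Azure NetApp Files:
+#   Domain = CONTOSO.COM
+# Then clear the client's cached name/ID mappings so the new domain takes effect.
+sudo nfsidmap -c
+```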
+
+Azure NetApp Files also allows you to [change the NFSv4.1 ID domain](azure-netapp-files-configure-nfsv41-domain.md). For additional details, see [How-to: NFSv4.1 ID Domain Configuration for Azure NetApp Files](https://www.youtube.com/watch?v=UfaJTYWSVAY).
+
+## NFSv4.x permissions
+
+NFSv4.x permissions are the way to control what level of access a specific user or group principal has on a file or folder. Permissions in NFSv3 only allow read/write/execute (rwx) levels of access definition, but NFSv4.x provides a much broader set of granular access controls as an improvement over NFSv3 mode bits.
+
+There are 13 permissions that can be set for users, and 14 permissions that can be set for groups.
+
+| Permission letter | Permission granted |
+| - | - |
+|r | Read data/list files and folders |
+|w | Write data/create files and folders |
+|a | Append data/create subdirectories |
+|x | Execute files/traverse directories |
+|d | Delete files/directories |
+|D | Delete subdirectories (directories only) |
+|t | Read attributes (GETATTR) |
+|T | Write attributes (SETATTR/chmod) |
+|n | Read named attributes |
+|N | Write named attributes |
+|c | Read ACLs |
+|C | Write ACLs |
+|o | Write owner (chown) |
+|y | Synchronous I/O |
+
+When access permissions are set, a user or group principal adheres to those assigned rights.
+
+### NFSv4.x permission examples
+
+The following examples show how different permissions work with different configuration scenarios.
+
+**User with read access (r only)**
+
+With read-only access, a user can read attributes and data, but any write access (data, attributes, owner) is denied.
+
+```bash
+A::user1@CONTOSO.COM:r
+
+sh-4.2$ ls -la
+total 12
+drwxr-xr-x 3 root root 4096 Jul 12 12:41 .
+drwxr-xr-x 3 root root 4096 Jul 12 12:09 ..
+-rw-r--r-- 1 root root 0 Jul 12 12:41 file
+drwxr-xr-x 2 root root 4096 Jul 12 12:31 subdir
+sh-4.2$ touch user1-file
+touch: can't touch 'user1-file': Permission denied
+sh-4.2$ chown user1 file
+chown: changing ownership of 'file': Operation not permitted
+sh-4.2$ nfs4_setfacl -e /mnt/acl-dir/inherit-dir
+Failed setxattr operation: Permission denied
+sh-4.2$ rm file
+rm: remove write-protected regular empty file 'file'? y
+rm: can't remove 'file': Permission denied
+sh-4.2$ cat file
+Test text
+```
+
+**User with read access (r) and write attributes (T)**
+
+In this example, permissions on the file can be changed due to the write attributes (T) permission, but no files can be created since only read access is allowed. This configuration illustrates the kind of granular controls NFSv4.x ACLs can provide.
+
+```bash
+A::user1@CONTOSO.COM:rT
+
+sh-4.2$ touch user1-file
+touch: can't touch 'user1-file': Permission denied
+sh-4.2$ ls -la
+total 60
+drwxr-xr-x 3 root root 4096 Jul 12 16:23 .
+drwxr-xr-x 19 root root 49152 Jul 11 09:56 ..
+-rw-r--r-- 1 root root 10 Jul 12 16:22 file
+drwxr-xr-x 3 root root 4096 Jul 12 12:41 inherit-dir
+-rw-r--r-- 1 user1 group1 0 Jul 12 16:23 user1-file
+sh-4.2$ chmod 777 user1-file
+sh-4.2$ ls -la
+total 60
+drwxr-xr-x 3 root root 4096 Jul 12 16:41 .
+drwxr-xr-x 19 root root 49152 Jul 11 09:56 ..
+drwxr-xr-x 3 root root 4096 Jul 12 12:41 inherit-dir
+-rwxrwxrwx 1 user1 group1 0 Jul 12 16:23 user1-file
+sh-4.2$ rm user1-file
+rm: can't remove 'user1-file': Permission denied
+```
+
+### Translating mode bits into NFSv4.x ACL permissions
+
+When a chmod is run on an object with NFSv4.x ACLs assigned, a series of system ACEs are updated with new permissions. For instance, if the permissions are set to 755, then the system ACEs are updated accordingly. The following table shows what each numeric value in a mode bit translates to in NFSv4 ACL permissions.
+
+See [NFSv4.x permissions](#nfsv4x-permissions) for a table outlining all the permissions.
+
+| Mode bit numeric | Corresponding NFSv4.x permissions |
+| -- | -- |
+| 1 - execute (x) | Execute, read attributes, read ACLs, sync I/O (xtcy) |
+| 2 - write (w) | Write, append data, read attributes, write attributes, write named attributes, read ACLs, sync I/O (watTNcy) |
+| 3 - write/execute (wx) | Write, append data, execute, read attributes, write attributes, write named attributes, read ACLs, sync I/O (waxtTNcy) |
+| 4 - read (r) | Read, read attributes, read named attributes, read ACLs, sync I/O (rtncy) |
+| 5 - read/execute (rx) | Read, execute, read attributes, read named attributes, read ACLs, sync I/O (rxtncy) |
+| 6 - read/write (rw) | Read, write, append data, read attributes, write attributes, read named attributes, write named attributes, read ACLs, sync I/O (rwatTnNcy) |
+| 7 - read/write/execute (rwx) | Full control/all permissions |
+
+## How NFSv4.x ACLs work with Azure NetApp Files
+
+Azure NetApp Files supports NFSv4.x ACLs natively when a volume has NFSv4.1 enabled for access. There isn't anything to enable on the volume for ACL support, but for NFSv4.1 ACLs to work best, an LDAP server with UNIX users and groups is needed to ensure that Azure NetApp Files is able to resolve the principals set on the ACLs securely. Local users can be used with NFSv4.x ACLs, but they don't provide the same level of security as ACLs used with an LDAP server.
+
+There are considerations to keep in mind with ACL functionality in Azure NetApp Files.
+
+### ACL inheritance
+
+In Azure NetApp Files, ACL inheritance flags can be used to simplify ACL management with NFSv4.x ACLs. When an inheritance flag is set, ACLs on a parent directory can propagate down to subdirectories and files without further interaction. Azure NetApp Files implements standard ACL inherit behaviors as per [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530).
+
+### Deny ACEs
+
+Deny ACEs in Azure NetApp Files are used to explicitly restrict a user or group from accessing a file or folder. A subset of permissions can be defined to provide granular controls over the deny ACE. These operate in the standard methods mentioned in [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530).
+
+### ACL preservation
+
+When a chmod is performed on a file or folder in Azure NetApp Files, all existing ACEs are preserved on the ACL other than the system ACEs (OWNER@, GROUP@, EVERYONE@). Those ACE permissions are modified as defined by the numeric mode bits defined by the chmod command. Only ACEs that are manually modified or removed via the `nfs4_setfacl` command can be changed.
+
+### NFSv4.x ACL behaviors in dual-protocol environments
+
+Dual protocol refers to the use of both SMB and NFS on the same Azure NetApp Files volume. Dual-protocol access controls are determined by which security style the volume is using, but username mapping ensures that Windows users and UNIX users that successfully map to one another have the same access permissions to data.
+
+When NFSv4.x ACLs are in use on UNIX security style volumes, the following behaviors can be observed when using dual-protocol volumes and accessing data from SMB clients.
+
+* Windows usernames need to map properly to UNIX usernames for proper access control resolution.
+* In UNIX security style volumes (where NFSv4.x ACLs would be applied), if no valid UNIX user exists in the LDAP server for a Windows user to map to, then a default UNIX user called `pcuser` (with uid numeric 65534) is used for mapping.
+* Files written with Windows users with no valid UNIX user mapping display as owned by numeric ID 65534, which corresponds to the "nfsnobody" or "nobody" usernames in Linux clients from NFS mounts. This is different from the numeric ID 99 which is typically seen with NFSv4.x ID domain issues. To verify the numeric ID in use, use the `ls -lan` command.
+* Files with incorrect owners don't provide expected results from UNIX mode bits or from NFSv4.x ACLs.
+* NFSv4.x ACLs are managed from NFS clients. SMB clients can neither view nor manage NFSv4.x ACLs.
+
+### Umask impact with NFSv4.x ACLs
+
+[NFSv4 ACLs provide the ability](http://linux.die.net/man/5/nfs4_acl) to offer ACL inheritance. ACL inheritance means that files or folders created beneath objects with NFSv4 ACLs set can inherit the ACLs based on the configuration of the [ACL inheritance flag](http://linux.die.net/man/5/nfs4_acl).
+
+Umask is used to control the permission level at which files and folders are created in a directory. By default, Azure NetApp Files allows umask to override inherited ACLs, which is expected behavior as per [RFC-7530](https://datatracker.ietf.org/doc/html/rfc7530).
+
+For more information, see [umask](network-attached-file-permissions-nfs.md#umask).
+
+### Chmod/chown behavior with NFSv4.x ACLs
+
+In Azure NetApp Files, you can use change ownership (chown) and change mode bit (chmod) commands to manage file and directory permissions on NFSv3 and NFSv4.x.
+
+When using NFSv4.x ACLs, the more granular controls applied to files and folders lessen the need for chmod commands. Chown still has a place, as NFSv4.x ACLs don't assign ownership.
+
+When chmod is run in Azure NetApp Files on files and folders with NFSv4.x ACLs applied, mode bits are changed on the object. In addition, a set of system ACEs are modified to reflect those mode bits. If the system ACEs are removed, then mode bits are cleared. Examples and a more complete description can be found in the section on system ACEs below.
+
+When chown is run in Azure NetApp Files, the assigned owner can be modified. File ownership isn't as critical when using NFSv4.x ACLs as when using mode bits, as ACLs can be used to control permissions in ways that basic owner/group/everyone concepts couldn't. Chown in Azure NetApp Files can only be run as root (either as root or by using sudo), since export controls are configured to only allow root to make ownership changes. Since this is controlled by a default export policy rule in Azure NetApp Files, NFSv4.x ACL entries that allow ownership modifications don't apply.
+
+```bash
+# su user1
+# chown user1 testdir
+chown: changing ownership of 'testdir': Operation not permitted
+# sudo chown user1 testdir
+# ls -la | grep testdir
+-rw-r--r-- 1 user1 root 0 Jul 12 16:23 testdir
+```
+
+The export policy rule on the volume can be modified to change this behavior. In the **Export policy** menu for the volume, modify **Chown mode** to "unrestricted."
++
+Once modified, ownership can be changed by users other than root if they have appropriate access rights. This requires the "Take Ownership" NFSv4.x ACL permission (designated by the letter "o"). Ownership can also be changed if the user changing ownership currently owns the file or folder.
+
+```bash
+A::user1@contoso.com:rwatTnNcCy << no ownership flag (o)
+
+user1@ubuntu:/mnt/testdir$ chown user1 newfile3
+chown: changing ownership of 'newfile3': Permission denied
+
+A::user1@contoso.com:rwatTnNcCoy << with ownership flag (o)
+
+user1@ubuntu:/mnt/testdir$ chown user1 newfile3
+user1@ubuntu:/mnt/testdir$ ls -la
+total 8
+drwxrwxrwx 2 user2 root 4096 Jul 14 16:31 .
+drwxrwxrwx 5 root root 4096 Jul 13 13:46 ..
+-rw-r--r-- 1 user1 root 0 Jul 14 15:45 newfile
+-rw-r--r-- 1 root root 0 Jul 14 15:52 newfile2
+-rw-r--r-- 1 user1 4294967294 0 Jul 14 16:31 newfile3
+```
+
+### System ACEs
+
+On every ACL, there are a series of system ACEs: OWNER@, GROUP@, EVERYONE@. For example:
+
+```bash
+A::OWNER@:rwaxtTnNcCy
+A:g:GROUP@:rwaxtTnNcy
+A::EVERYONE@:rwaxtTnNcy
+```
+
+These ACEs correspond with the classic mode bits permissions you would see in NFSv3 and are directly associated with those permissions. When a chmod is run on an object, these system ACLs change to reflect those permissions.
+
+```bash
+# nfs4_getfacl user1-file
+
+# file: user1-file
+A::user1@CONTOSO.COM:rT
+A::OWNER@:rwaxtTnNcCy
+A:g:GROUP@:rwaxtTnNcy
+A::EVERYONE@:rwaxtTnNcy
+
+# chmod 755 user1-file
+
+# nfs4_getfacl user1-file
+
+# file: user1-file
+A::OWNER@:rwaxtTnNcCy
+A:g:GROUP@:rxtncy
+```
+
+If those system ACEs are removed, then the permission view changes such that the normal mode bits (rwx) show up as dashes.
+
+```bash
+# nfs4_setfacl -x A::OWNER@:rwaxtTnNcCy user1-file
+# nfs4_setfacl -x A:g:GROUP@:rxtncy user1-file
+# nfs4_setfacl -x A::EVERYONE@:rxtncy user1-file
+# ls -la | grep user1-file
+- 1 user1 group1 0 Jul 12 16:23 user1-file
+```
+
+Removing system ACEs is a way to further secure files and folders, as only the user and group principals on the ACL (and root) are able to access the object. Removing system ACEs can break applications that rely on mode bit views for functionality.
+
+### Root user behavior with NFSv4.x ACLs
+
+Root access with NFSv4.x ACLs can't be limited unless [root is squashed](network-attached-storage-permissions.md#root-squashing). Root squashing is where an export policy rule is configured where root is mapped to an anonymous user to limit access. Root access can be configured from a volume's **Export policy** menu by changing the policy rule of **Root access** to off.
+
+To configure root squashing, navigate to the **Export policy** menu on the volume, then change "Root access" to "off" for the policy rule.
++
+Disabling root access squashes root to the anonymous user `nfsnobody:65534`. Root is then unable to change ownership.
+
+```bash
+root@ubuntu:/mnt/testdir# touch newfile3
+root@ubuntu:/mnt/testdir# ls -la
+total 8
+drwxrwxrwx 2 user2 root 4096 Jul 14 16:31 .
+drwxrwxrwx 5 root root 4096 Jul 13 13:46 ..
+-rw-r--r-- 1 user1 root 0 Jul 14 15:45 newfile
+-rw-r--r-- 1 root root 0 Jul 14 15:52 newfile2
+-rw-r--r-- 1 nobody 4294967294 0 Jul 14 16:31 newfile3
+root@ubuntu:/mnt/testdir# ls -lan
+total 8
+drwxrwxrwx 2 1002 0 4096 Jul 14 16:31 .
+drwxrwxrwx 5 0 0 4096 Jul 13 13:46 ..
+-rw-r--r-- 1 1001 0 0 Jul 14 15:45 newfile
+-rw-r--r-- 1 0 0 0 Jul 14 15:52 newfile2
+-rw-r--r-- 1 65534 4294967294 0 Jul 14 16:31 newfile3
+root@ubuntu:/mnt/testdir# chown root newfile3
+chown: changing ownership of 'newfile3': Operation not permitted
+```
+
+Alternatively, in dual-protocol environments, NTFS ACLs can be used to granularly limit root access.
++
+## Next steps
+
+* [Configure NFS clients](configure-nfs-clients.md)
+* [Configure access control lists on NFSv4.1 volumes](configure-access-control-lists.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
-
## November 2023
+* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) are now generally available (GA).
+
+ User and group quotas enable you to stay in control and define how much storage capacity individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas.
+
+ This feature is Generally Available in Azure commercial regions and US Gov regions where Azure NetApp Files is available.
+ * [SMB Continuous Availability (CA)](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) shares now support MSIX app attach for Azure Virtual Desktop. In addition to Citrix App Layering, FSLogix user profiles including FSLogix ODFC containers, and Microsoft SQL Server, Azure NetApp Files now supports [MSIX app attach](../virtual-desktop/create-netapp-files.md) with SMB Continuous Availability shares to enhance resiliency during storage service maintenance operations. Continuous Availability enables SMB transparent failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience.
azure-resource-manager Async Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/async-operations.md
If `Azure-AsyncOperation` isn't one of the header values, then look for:
> [!NOTE] > Your REST client must accept a minimum URL size of 4 KB for `Azure-AsyncOperation` and `Location`.
+> [!NOTE]
+> When the `Retry-After` header isn't returned, implement your own retry logic by following the Azure guidelines in [General REST and retry guidelines](https://learn.microsoft.com/azure/architecture/best-practices/retry-service-specific#general-rest-and-retry-guidelines).
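+
+A minimal polling sketch in bash (the environment variables, fallback interval, and use of `jq` are assumptions, not values returned by the service):
+
+```bash
+# ASYNC_OPERATION_URL is the value of the Azure-AsyncOperation header; TOKEN is a valid bearer token.
+while true; do
+  BODY=$(curl -s -D /tmp/headers.txt -H "Authorization: Bearer $TOKEN" "$ASYNC_OPERATION_URL")
+  STATUS=$(echo "$BODY" | jq -r '.status')
+  [ "$STATUS" != "InProgress" ] && break
+  RETRY=$(grep -i '^Retry-After:' /tmp/headers.txt | awk '{print $2}' | tr -d '\r')
+  sleep "${RETRY:-30}"   # fall back to a fixed interval when Retry-After isn't returned
+done
+echo "Operation status: $STATUS"
+```
+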
+ ## Azure-AsyncOperation request and response If you have a URL from the `Azure-AsyncOperation` header value, send a GET request to that URL. Use the value from `Retry-After` to schedule how often to check the status. You'll get a response object that indicates the status of the operation. A different response is returned when checking the status of the operation with the `Location` URL. For more information about the response from a location URL, see [Create storage account (202 with Location and Retry-After)](#create-storage-account-202-with-location-and-retry-after).
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
Title: Configure Azure Elastic SAN (Preview)
-description: Learn how to use Elastic SAN with Azure VMware Solution
+ Title: Use Azure VMware Solution with Azure Elastic SAN Preview
+description: Learn how to use Elastic SAN Preview with Azure VMware Solution
Previously updated : 11/07/2023 Last updated : 11/16/2023
-# Configure Azure Elastic SAN (Preview)
+# Use Azure VMware Solution with Azure Elastic SAN Preview
-In this article, learn how to configure Azure Elastic SAN or delete an Elastic SAN-based datastore.
+This article explains how to use Azure Elastic SAN Preview as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
-## What is Azure Elastic SAN
-
-[Azure Elastic storage area network](https://review.learn.microsoft.com/azure/storage/elastic-san/elastic-san-introduction?branch=main) (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN. Azure Elastic SAN also offers built-in cloud capabilities, like high availability.
-
-[Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/introduction) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
+Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN? Preview](../storage/elastic-san/elastic-san-introduction.md).
## Prerequisites
The following prerequisites are required to continue.
In this section, you create a virtual network for your Elastic SAN. Then you create the Elastic SAN that includes creating at least one volume group and one volume that becomes your VMFS datastore. Next, you set up a Private Endpoint for your Elastic SAN that allows your SDDC to connect to the Elastic SAN volume. Then you're ready to add an Elastic SAN volume as a datastore in your SDDC. 1. Use one of the following instruction options to set up a dedicated virtual network for your Elastic SAN:
- - [Azure portal](https://learn.microsoft.com/azure/virtual-network/quick-create-portal)
- - [PowerShell](https://learn.microsoft.com/azure/virtual-network/quick-create-powershell)
- - [Azure CLI](https://learn.microsoft.com/azure/virtual-network/quick-create-cli)
-2. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group:
+ - [Azure portal](../virtual-network/quick-create-portal.md)
+ - [Azure PowerShell module](../virtual-network/quick-create-powershell.md)
+ - [Azure CLI](../virtual-network/quick-create-cli.md)
+1. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group:
> [!IMPORTANT]
- > Make sure to create this Elastic SAN in the same region and availability zone as your SDDC for best performance.
+ > Create your Elastic SAN in the same region and availability zone as your SDDC for best performance.
- [Azure portal](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli)
-3. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN:
+1. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN:
- [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli)
After you provide an External storage address block, you can connect to an Elast
## Connect Elastic SAN 1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **+ Connect Elastic SAN**.
-2. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**.
-3. From section, "Rename datastore as per VMware requirements", under **Volume name** > **Data store name**, give names to the Elastic SAN volumes.
+1. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**.
+1. From section, "Rename datastore as per VMware requirements", under **Volume name** > **Data store name**, give names to the Elastic SAN volumes.
> [!NOTE] > For best performance, verify that your Elastic SAN volume and SDDC are in the same Region and Availability Zone.
After you provide an External storage address block, you can connect to an Elast
To delete the Elastic SAN-based datastore, use the following steps from the Azure portal. 1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **Storage list**.
-2. Under **Virtual network**, select **Disconnect** to disconnect the datastore from the Cluster(s).
-3. Optionally you can delete the volume you previously created in your Elastic SAN.
+1. Under **Virtual network**, select **Disconnect** to disconnect the datastore from the Cluster(s).
+1. Optionally you can delete the volume you previously created in your Elastic SAN.
azure-web-pubsub Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/whats-new.md
# What's new with Azure Web PubSub
-On this page, you can read about recent updates about Azure Web PubSub. As we make continuous improvements to the capabilities and developer experience of the service, we welcome any feedback and suggestions. Reach out to the service team at **awps@micrsoft.com**
+On this page, you can read about recent updates to Azure Web PubSub. As we make continuous improvements to the capabilities and developer experience of the service, we welcome any feedback and suggestions. Reach out to the service team at **awps@microsoft.com**.
## Q4 2023
backup Backup Azure Database Postgresql Flex Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex-overview.md
To perform the backup operation:
Once the configuration is complete:
-1. The Backup recovery point invokes the backup based on the policy schedules on the ARM API of PostgresFlex server, writing data to a secure blob-container with a SAS for enhanced security.
+1. The Backup service invokes the backup based on the policy schedules on the ARM API of PostgresFlex server, writing data to a secure blob-container with a SAS for enhanced security.
1. Backup runs independently, preventing disruptions during long-running tasks. 1. The retention and recovery point lifecycles align with the backup policies for effective management.
-1. During the restore, the Backup recovery point invokes restore on the ARM API of PostgresFlex server using the SAS for asynchronous, nondisruptive recovery.
+1. During the restore, the Backup service invokes restore on the ARM API of PostgresFlex server using the SAS for asynchronous, nondisruptive recovery.
:::image type="content" source="./media/backup-azure-database-postgresql-flex-overview/backup-process.png" alt-text="Diagram showing the backup process.":::
backup Backup Azure Database Postgresql Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex.md
To configure backup on the Azure PostgreSQL-flex databases using Azure Backup, f
1. Choose one of the Azure PostgreSQL-Flex servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server. :::image type="content" source="./media/backup-azure-database-postgresql-flex/select-resources.png" alt-text="Screenshot showing the select resources option.":::
-1. After the selection, the validation starts. The backup readiness check ensures the vault has sufficient permissions for backup operations. Resolve any access issues by selecting **Assign missing roles** action button in the top action menu to grant permissions.
- :::image type="content" source="./media/backup-azure-database-postgresql-flex/assign-missing-roles.png" alt-text="Screenshot showing the **Assign missing roles** option.":::
-
+1. After the selection, the validation starts. The backup readiness check ensures the vault has sufficient permissions for backup operations. Resolve any access issues by granting appropriate [permissions](/azure/backup/backup-azure-database-postgresql-flex-overview) to the vault MSI and re-triggering the validation.
1. Submit the configure backup operation and track the progress under **Backup instances**.
backup Backup Azure Sql Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-manage-cli.md
F7c68818-039f-4a0f-8d73-e0747e68a813 Restore (Log) Completed master
To change the policy underlying the SQL backup configuration, use the [az backup item set-policy](/cli/azure/backup/item#az-backup-item-set-policy) command. The name parameter in this command refers to the backup item whose policy you want to change. Here, replace the policy of the SQL database *sqldatabase;mssqlserver;master* with a new policy *newSQLPolicy*. You can create new policies using the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) command. ```azurecli-interactive
-az backup item set policy --resource-group SQLResourceGroup \
+az backup item set-policy --resource-group SQLResourceGroup \
--vault-name SQLVault \ --container-name VMAppContainer;Compute;SQLResourceGroup;testSQLVM \ --policy-name newSQLPolicy \
confidential-computing Confidential Containers On Aks Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-on-aks-preview.md
Title: Confidential containers on Azure Kubernetes Service
-description: Learn about pod level isolation via confidential containers on Azure Kubernetes Service
+ Title: Confidential Containers (preview) on Azure Kubernetes Service
+description: Learn about pod level isolation using Confidential Containers (preview) on Azure Kubernetes Service
- ignite-2023
-# Confidential containers on Azure Kubernetes Service
-With the growth in cloud-native application development, there's an increased need to protect the workloads running in cloud environments as well. Containerizing the workload forms a key component for this programming model, and then, protecting the container is paramount to running confidentially in the cloud.
+# Confidential Containers (preview) on Azure Kubernetes Service
+
+With the growth in cloud-native application development, there's an increased need to protect the workloads running in cloud environments as well. Containerizing the workload is a key component of this programming model, and protecting the container is paramount to running confidentially in the cloud.
:::image type="content" source="media/confidential-containers/attack-vectors-conf-containers.png" alt-text="Diagram of various attack vectors that make your cKubernetes container vulnerable.":::
+Confidential Containers on Azure Kubernetes Service (AKS) enable container-level isolation in your Kubernetes workloads. They're an addition to the Azure suite of confidential computing products, and use AMD SEV-SNP memory encryption to protect your containers at runtime.
-Confidential containers on Azure Kubernetes Service (AKS) enable container level isolation in your Kubernetes workloads. It's an addition to Azure suite of confidential computing products, and uses the AMD SEV-SNP memory encryption to protect your containers at runtime.
-Confidential containers are attractive for deployment scenarios that involve sensitive data (for instance, personal data or any data with strong security needed for regulatory compliance).
+Confidential Containers are attractive for deployment scenarios that involve sensitive data (for instance, personal data or any data with strong security needed for regulatory compliance).
## What makes a container confidential?
-In alignment with the guidelines set by the [Confidential Computing Consortium](https://confidentialcomputing.io/), that Microsoft is a founding member of, confidential containers need to fulfill the following ΓÇô
-* Transparency: The confidential container environment where your sensitive application is shared, you can see and verify if it's safe. All components of the Trusted Computing Base (TCB) are to be open sourced.
-* Auditability: Customers shall have the ability to verify and see what version of the CoCo environment package including Linux Guest OS and all the components are current. Microsoft signs to the guest OS and container runtime environment for verifications through attestation. It also releases a secure hash algorithm (SHA) of guest OS builds to build a string audibility and control story.
-* Full attestation: Anything that is part of the TEE shall be fully measured by the CPU with ability to verify remotely. The hardware report from AMD SEV-SNP processor shall reflect container layers and container runtime configuration hash through the attestation claims. Application can fetch the hardware report locally including the report that reflects Guest OS image and container runtime.
+
+In alignment with the guidelines set by the [Confidential Computing Consortium](https://confidentialcomputing.io/), of which Microsoft is a founding member, Confidential Containers need to fulfill the following:
+
+* Transparency: You can see and verify whether the confidential container environment where your sensitive application runs is safe. All components of the Trusted Computing Base (TCB) must be open sourced.
+* Auditability: You have the ability to verify and see which version of the CoCo environment package, including the Linux guest OS and all components, is current. Microsoft signs the guest OS and container runtime environment for verification through attestation. It also releases a secure hash algorithm (SHA) of guest OS builds to build a strong auditability and control story.
+* Full attestation: Anything that is part of the TEE shall be fully measured by the CPU with the ability to verify it remotely. The hardware report from the AMD SEV-SNP processor shall reflect container layers and the container runtime configuration hash through the attestation claims. The application can fetch the hardware report locally, including the report that reflects the guest OS image and container runtime.
* Code integrity: Runtime enforcement is always available through customer defined policies for containers and container configuration, such as immutable policies and container signing.
-* Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods.
+* Isolation from operator: Security designs assume least privilege and the highest isolation, shielding from all untrusted parties including customer/tenant admins. This includes hardening existing Kubernetes control plane access (kubelet) to confidential pods.
But with these features of confidentiality, the product maintains its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogeneous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost.
-## What forms confidential containers on AKS?
-Aligning with MicrosoftΓÇÖs commitment to the open-source community, the underlying stack for confidential containers uses the [Kata CoCo](https://github.com/confidential-containers/confidential-containers) agent as the agent running in the node that hosts the pod running the confidential workload. With many TEE technologies requiring a boundary between the host and guest, [Kata Containers](https://katacontainers.io/) are the basis for the Kata CoCo initial work. Microsoft also contributed back to the Kata Coco community to power containers running inside a confidential utility VM.
+## What forms Confidential Containers on AKS?
-The Kata confidential container resides within the Azure Linux AKS Container Host. [Azure Linux](https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/announcing-preview-availability-of-the-mariner-aks-container/ba-p/3649154) and the Cloud Hypervisor VMM (Virtual Machine Monitor) is the end-user facing/user space software that is used for creating and managing the lifetime of virtual machines.
+Aligning with Microsoft's commitment to the open-source community, the underlying stack for Confidential Containers uses the [Kata CoCo](https://github.com/confidential-containers/confidential-containers) agent as the agent running in the node that hosts the pod running the confidential workload. With many TEE technologies requiring a boundary between the host and guest, [Kata Containers](https://katacontainers.io/) are the basis for the initial Kata CoCo work. Microsoft also contributed back to the Kata CoCo community to power containers running inside a confidential utility VM.
+
+The Kata confidential container resides within the Azure Linux AKS Container Host. [Azure Linux](../aks/use-azure-linux.md) and the Cloud Hypervisor VMM (Virtual Machine Monitor) are the user-facing, user-space software used to create and manage the lifetime of virtual machines.
## Container level isolation in AKS
-In default, AKS all workloads share the same kernel and the same cluster admin. With the preview of Pod Sandboxing on AKS, the isolation grew a notch higher with the ability to provide kernel isolation for workloads on the same AKS node. You can read more about the product [here](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/preview-support-for-kata-vm-isolated-containers-on-aks-for-pod/ba-p/3751557). Confidential containers are the next step of this isolation and it uses the memory encryption capabilities of the underlying AMD SEV-SNP virtual machine sizes. These virtual machines are the [DCa_cc](../../articles/virtual-machines/dcasccv5-dcadsccv5-series.md) and [ECa_cc](../../articles/virtual-machines/ecasccv5-ecadsccv5-series.md) sizes with the capability of surfacing the hardwareΓÇÖs root of trust to the pods deployed on it.
+By default, all AKS workloads share the same kernel and the same cluster admin. With the preview of Pod Sandboxing on AKS, the isolation grew a notch higher with the ability to provide kernel isolation for workloads on the same AKS node. You can read more about the feature [here](../aks/use-pod-sandboxing.md). Confidential Containers are the next step of this isolation, and they use the memory encryption capabilities of the underlying AMD SEV-SNP virtual machine sizes. These virtual machines are the [DCa_cc](../virtual-machines/dcasccv5-dcadsccv5-series.md) and [ECa_cc](../virtual-machines/ecasccv5-ecadsccv5-series.md) sizes with the capability of surfacing the hardware's root of trust to the pods deployed on them.
## Get started
-To get started and learn more about supported scenarios, please refer to our AKS documentation [here](https://aka.ms/conf-containers-aks-documentation).
-
+To get started and learn more about supported scenarios, refer to our AKS documentation [here](../aks/confidential-containers-overview.md).
## Next step
-> To learn more about this announcement, checkout our blog [here](https://aka.ms/coco-aks-preview).
-> We also have a demo of a confidential container running an end-to-end encrypted messaging system on Kafka [here](https://aka.ms/Ignite2023-ConfContainers-AKS-Preview).
+[Deploy a Confidential Container on AKS](../aks/deploy-confidential-containers-default-policy.md).
confidential-computing Skr Flow Confidential Vm Sev Snp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-vm-sev-snp.md
# Secure Key Release with Confidential VMs How To Guide
-The below article describes how to perform a Secure Key Release from Azure Key Value when your applications are running with an AMD SEV-SNP confidential. To learn more about Secure Key Release and Azure Confidential Computing, [go here.](./concept-skr-attestation.md).
+This article describes how to perform a Secure Key Release from Azure Key Vault when your applications are running on an AMD SEV-SNP based confidential virtual machine. To learn more about Secure Key Release and Azure Confidential Computing, see [this concept article](./concept-skr-attestation.md).
SKR requires that an application performing SKR go through a remote guest attestation flow using Microsoft Azure Attestation (MAA), as described [here](guest-attestation-confidential-vms.md).
SKR requires that an application performing SKR shall go through a remote guest
To allow Azure Key Vault to release a key to an attested confidential virtual machine, there are certain steps that need to be followed:
-1. Assign a managed identity to the confidential virtual machine. System-assigned managed identity or a user-assigned managed identity are allowed.
-1. Set a Key Vault access policy to grant the managed identity the "release" key permission. A policy allows the confidential virtual machine to access the Key Vault and perform the release operation. If using Key Vault Managed HSM, assign "Managed HSM Crypto Service Release User" role membership.
-1. Create a Key Vault key that is marked as exportable and has an associated release policy. Key release policy associates the key to an attested confidential virtual machine and that the key can only be used for the desired purpose.
-1. To perform the release, send an HTTP request to the Key Vault from the confidential virtual machine. HTTP request must include the Confidential VMs attested platform report in the request body. The attested platform report is used to verify the trustworthiness of the state of the Trusted Execution Environment-enabled platform, such as the Confidential VM. The Microsoft Azure Attestation service can be used to create the attested platform report and include it in the request.
+1. Assign a managed identity to the confidential virtual machine. Either a system-assigned or a user-assigned managed identity is supported.
+1. Set a Key Vault access policy to grant the managed identity the "release" key permission. The policy allows the confidential virtual machine to access the Key Vault and perform the release operation. If you're using Key Vault Managed HSM, assign the "Managed HSM Crypto Service Release User" role membership.
+1. Create a Key Vault key that is marked as exportable and has an associated release policy. The key release policy associates the key with an attested confidential virtual machine and ensures the key can only be used for the desired purpose. A PowerShell sketch of this step follows the diagram below.
+1. To perform the release, send an HTTP request to the Key Vault from the confidential virtual machine. The HTTP request must include the Confidential VM's attested platform report in the request body. The attested platform report is used to verify the trustworthiness of the state of the Trusted Execution Environment-enabled platform, such as the Confidential VM. The Microsoft Azure Attestation service can be used to create the attested platform report and include it in the request.
![Diagram of the aforementioned operations, which we'll be performing.](media/skr-flow-confidential-vm-sev-snp-attestation/overview.png)
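As a sketch of step 3 only, the following Az PowerShell call creates an exportable key with an attached release policy. The `-Exportable` and `-ReleasePolicyPath` parameters, the HSM-backed destination, and the `release-policy.json` file are assumptions to validate against the Az.KeyVault reference and your own release policy; the vault and key names are placeholders.

```powershell
# Assumption: Add-AzKeyVaultKey supports -Exportable and -ReleasePolicyPath in current
# Az.KeyVault versions, and the vault is a Premium-tier vault (required for HSM-backed keys).
# release-policy.json holds the MAA claims an attested Confidential VM must present.
Add-AzKeyVaultKey `
    -VaultName "my-skr-vault" `
    -Name "my-exportable-key" `
    -KeyType "RSA" `
    -Size 3072 `
    -Destination "HSM" `
    -Exportable `
    -ReleasePolicyPath ".\release-policy.json"
```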
To enable system-assigned managed identity on a CVM, your account needs the [Vir
## Add the access policy to Azure Key Vault
-Once you turn on a system-assigned managed identity for your CVM, you have to provide it with access to the Azure Key Vault data plane where key objects are stored. To ensure that only our confidential virtual machine can execute the release operation, we'll only grant specific permission required for that.
+Once you enable a system-assigned managed identity for your CVM, you have to provide it with access to the Azure Key Vault data plane where key objects are stored. To ensure that only our confidential virtual machine can execute the release operation, we'll only grant the specific permission required.
> [!NOTE] > You can find the managed identity object ID in the virtual machine identity options, in the Azure portal. Alternatively you can retrieve it with [PowerShell](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md), [Azure CLI](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md), Bicep or ARM templates.
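A minimal Az PowerShell sketch of that grant might look like the following. The resource names are placeholders, and treating `release` as a valid value for `-PermissionsToKeys` is an assumption to confirm against the Set-AzKeyVaultAccessPolicy reference.

```powershell
# Placeholder names; $objectId is the object ID of the CVM's system-assigned managed identity.
$objectId = (Get-AzVM -ResourceGroupName "my-rg" -Name "my-cvm").Identity.PrincipalId

# Assumption: 'release' is an accepted key permission in current Az.KeyVault versions.
Set-AzKeyVaultAccessPolicy `
    -VaultName "my-skr-vault" `
    -ObjectId $objectId `
    -PermissionsToKeys release
```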
A [open sourced](https://github.com/Azure/confidential-computing-cvm-guest-attes
### Guest Attestation result
-The result from the Guest Attestation client simply is a base64 encoded string! This encoded string value is a signed JSON Web Token (__JWT__), with a header, body and signature. You can split the string by the `.` (dot) value and base64 decode the results.
+The result from the Guest Attestation client is simply a base64-encoded string. This encoded string value is a signed JSON Web Token (__JWT__), with a header, body, and signature. You can split the string by the `.` (dot) value and base64 decode the results.
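Here's a small PowerShell sketch of that split-and-decode step. The helper function is a hypothetical, illustrative addition (JWT segments use base64url encoding, so padding and the `-`/`_` characters need adjusting), and the truncated token shown below stands in for the full client output.

```powershell
# Hypothetical helper: decode a base64url-encoded JWT segment to a UTF-8 string.
function ConvertFrom-Base64Url([string]$Segment) {
    $s = $Segment.Replace('-', '+').Replace('_', '/')
    switch ($s.Length % 4) { 2 { $s += '==' } 3 { $s += '=' } }
    [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($s))
}

$jwt = "<full token returned by the Guest Attestation client>"
$header, $body, $signature = $jwt.Split('.')

# The header and body are JSON; the signature is binary and is left encoded here.
ConvertFrom-Base64Url $header | ConvertFrom-Json
ConvertFrom-Base64Url $body   | ConvertFrom-Json
```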
```text eyJhbGciO...
Here we have another header, though this one has a [X.509 certificate chain](htt
} ```
-You can read from the "`x5c`" array in PowerShell if you wanted to, this can help you verify that this is a valid certificate. Below is an example:
+You can read the "`x5c`" array in PowerShell to help verify that this is a valid certificate. Below is an example:
```powershell $certBase64 = "MIIIfDCCBmSgA..XQ=="
container-apps Start Serverless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/start-serverless-containers.md
# Introduction to serverless containers on Azure
-Serverless computing offers services that manage and maintain servers, which relive you of the burden of physically operating servers yourself. Azure Container Apps is a serverless platform that handles scaling, security, and infrastructure management for you - all while reducing costs. Once freed from server-related concerns, you're able to spend your time focusing on your application code.
+Serverless computing offers services that manage and maintain servers, which relieve you of the burden of physically operating servers yourself. Azure Container Apps is a serverless platform that handles scaling, security, and infrastructure management for you - all while reducing costs. Once freed from server-related concerns, you're able to spend your time focusing on your application code.
Container Apps make it easy to manage:
cosmos-db Troubleshoot Dotnet Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-request-timeout.md
description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
Previously updated : 02/15/2023 Last updated : 11/16/2023
If you use an HTTP proxy, make sure it can support the number of connections con
### Create multiple client instances
-Creating multiple client instances might lead to connection contention and timeout issues.
+Creating multiple client instances might lead to connection contention and timeout issues. The [Diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) contain two relevant properties:
+
+```json
+{
+ "NumberOfClientsCreated":X,
+ "NumberOfActiveClients":Y,
+}
+```
+
+`NumberOfClientsCreated` tracks the number of times a `CosmosClient` was created within the same AppDomain, and `NumberOfActiveClients` tracks the active clients (not disposed). The expectation is that if the singleton pattern is followed, `X` would match the number of accounts the application works with and that `X` is equal to `Y`.
+
+If `X` is greater than `Y`, it means the application is creating and disposing client instances. This can lead to [connection contention](#socket-or-port-availability-might-be-low) and/or [CPU contention](#high-cpu-utilization).
#### Solution
-Follow the [performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage), and use a single CosmosClient instance across an entire process.
+Follow the [performance tips](performance-tips-dotnet-sdk-v3.md#sdk-usage), and use a single CosmosClient instance per account across an entire process. Avoid creating and disposing clients.
### Hot partition key
cosmos-db Tutorial Log Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-log-transformation.md
In this tutorial, you learn how to:
To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](../azure-monitor/logs/manage-access.md#azure-rbac).-- [Permissions to create DCR objects](../azure-monitor/essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create DCR objects](../azure-monitor/essentials/data-collection-rule-create-edit.md#permissions) in the workspace.
- A table that already has some data. - The table can't be linked to the [workspace transformation DCR](../azure-monitor/essentials/data-collection-transformations.md#workspace-transformation-dcr).
cost-management-billing Reservation Exchange Policy Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md
Azure savings plan for compute was launched in October 2022 to provide you with
You can continue to use instance size flexibility for VM sizes, but Microsoft is ending exchanges for regions and instance series for these Azure compute reservations.
-The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment.
+The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed USD 50,000 in a 12-month rolling window for a billing profile or single enrollment.
Exchanging one compute reservation for another compute reservation is similar to, but not the same as, a reservation [trade-in](../savings-plan/reservation-trade-in.md) for a savings plan. The difference is that you can always trade in your Azure reserved instances for compute for a savings plan. There's no time limit for trade-ins.
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
This Amazon Marketplace Web Service connector is supported for the following cap
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Amazon Rds For Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-oracle.md
This Amazon RDS for Oracle connector is supported for the following capabilities
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Amazon Rds For Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-sql-server.md
This Amazon RDS for SQL Server connector is supported for the following capabili
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md
This Amazon Redshift connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
This Amazon S3 Compatible Storage connector is supported for the following capab
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this Amazon S3 Compatible Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3. You can use this Amazon S3 Compatible Storage connector to copy data from any S3-compatible storage provider. Specify the corresponding service URL in the linked service configuration.
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md
This Amazon S3 connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this Amazon S3 connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). You can also choose to [preserve file metadata during copy](#preserve-metadata-during-copy). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3.
data-factory Connector Appfigures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-appfigures.md
This AppFigures connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Asana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md
This Asana connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
This Azure Blob Storage connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1| |[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For the Copy activity, this Blob storage connector supports:
data-factory Connector Azure Cosmos Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-analytical-store.md
This Azure Cosmos DB for NoSQL connector is supported for the following capabili
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
## Mapping data flow properties
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
This Azure Cosmos DB for MongoDB connector is supported for the following capabi
|| --| --| |[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from Azure Cosmos DB for MongoDB to any supported sink data store, or copy data from any supported source data store to Azure Cosmos DB for MongoDB. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
This Azure Cosmos DB for NoSQL connector is supported for the following capabili
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For Copy activity, this Azure Cosmos DB for NoSQL connector supports:
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md
This Azure Data Explorer connector is supported for the following capabilities:
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from any supported source data store to Azure Data Explorer. You can also copy data from Azure Data Explorer to any supported sink data store. For a list of data stores that the copy activity supports as sources or sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
This Azure Data Lake Storage Gen2 connector is supported for the following capab
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ | |[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For Copy activity, with this connector you can:
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
This Azure Data Lake Storage Gen1 connector is supported for the following capab
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, with this connector you can:
data-factory Connector Azure Database For Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mariadb.md
This Azure Database for MariaDB connector is supported for the following capabil
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|✓ | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from Azure Database for MariaDB to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
This Azure Database for MySQL connector is supported for the following capabilit
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
## Getting started
The below table lists the properties supported by Azure Database for MySQL sourc
| Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName | | Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs | | Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |
-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel |
#### Azure Database for MySQL source script example
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md
This Azure Database for PostgreSQL connector is supported for the following capa
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
The three activities work on all Azure Database for PostgreSQL deployment options:
The below table lists the properties supported by Azure Database for PostgreSQL
| Stored procedure | If you select Stored procedure as input, specify a name of the stored procedure to read data from the source table, or select Refresh to ask the service to discover the procedure names.| Yes (if you select Stored procedure as input) | String | procedureName | | Procedure parameters | If you select Stored procedure as input, specify any input parameters for the stored procedure in the order set in the procedure, or select Import to import all procedure parameters using the form `@paraName`. | No | Array | inputs | | Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |
-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel |
#### Azure Database for PostgreSQL source script example
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-databricks-delta-lake.md
This Azure Databricks Delta Lake connector is supported for the following capabi
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
In general, the service supports Delta Lake with the following capabilities to meet your various needs.
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md
This Azure Files connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1| |[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from Azure Files to any supported sink data store, or copy data from any supported source data store to Azure Files. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-search.md
This Azure Cognitive Search connector is supported for the following capabilitie
|| --| --| |[Copy activity](copy-activity-overview.md) (-/sink)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from any supported source data store into search index. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
This Azure Synapse Analytics connector is supported for the following capabiliti
|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ | |[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For Copy activity, this Azure Synapse Analytics connector supports these functions:
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
This Azure SQL Database connector is supported for the following capabilities:
|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ | |[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For Copy activity, this Azure SQL Database connector supports these functions:
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
This Azure SQL Managed Instance connector is supported for the following capabil
|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ <small> Public preview | |[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ <small> Public preview |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For Copy activity, this Azure SQL Database connector supports these functions:
The below table lists the properties supported by Azure SQL Managed Instance sou
| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |- | | Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize |
-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel |
| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- | | Incremental column | When using the incremental extract feature, you must choose the date/time or numeric column that you wish to use as the watermark in your source table. | No | - |- | | Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL MI before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md
This Azure Table storage connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ <small> Exclude storage account V1| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
You can copy data from any supported source data store to Table storage. You also can copy data from Table storage to any supported sink data store. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md
This Cassandra connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
This Concur connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
This Couchbase connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Dataworld https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md
This data.world connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md
This DB2 connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md
This Drill connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md
This Dynamics AX connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources and sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
This connector is supported for the following activities:
|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
This file system connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this file system connector supports:
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
This FTP connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this FTP connector supports:
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
This Google AdWords connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
This Google BigQuery connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
This Google Cloud Storage connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this Google Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of GCS's S3-compatible interoperability.
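Because the connector relies on that S3-compatible layer, access works the same way it would for Amazon S3, just pointed at the Google endpoint. The following Python sketch is purely illustrative (it shows the access pattern the interoperability enables, not the connector itself); the bucket name, prefix, and HMAC credentials are placeholders.

```python
# Illustrative only: the S3-compatible access pattern that GCS interoperability enables.
# Bucket, prefix, and HMAC credentials below are placeholders.
import boto3

s3_compatible = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",  # GCS S3-compatible (XML API) endpoint
    aws_access_key_id="<GCS_HMAC_ACCESS_KEY>",      # HMAC interoperability key
    aws_secret_access_key="<GCS_HMAC_SECRET>",
)

# Enumerate objects under a prefix, much like a copy source lists input files.
response = s3_compatible.list_objects_v2(Bucket="my-gcs-bucket", Prefix="input/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```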
data-factory Connector Google Sheets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md
This Google Sheets connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md
This Greenplum connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md
This HBase connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
This HDFS connector is supported for the following capabilities:
|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, the HDFS connector supports:
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
This Hive connector is supported for the following capabilities:
|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; | |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md
This HTTP connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md
This HubSpot connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md
This Impala connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md
This Informix connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-jira.md
This Jira connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md
This Magento connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
This MariaDB connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md
This Marketo connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md
This Microsoft Access connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
This Microsoft Fabric Lakehouse connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ | |[Mapping data flow](concepts-data-flow-overview.md) (-/sink)|&#9312; |- |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
## Get started
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
This MongoDB Atlas connector is supported for the following capabilities:
|| --| |[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
This MongoDB connector is supported for the following capabilities:
|| --| |[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
This MySQL connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md
This Netezza connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md
This OData connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
This ODBC connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
This Microsoft 365 (Office 365) connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312;| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
The ADF Microsoft 365 (Office 365) connector and Microsoft Graph Data Connect enable at-scale ingestion of different types of datasets from Exchange Email-enabled mailboxes, including address book contacts, calendar events, email messages, user information, mailbox settings, and so on. See [here](/graph/data-connect-datasets) for the complete list of available datasets.
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md
This Oracle Cloud Storage connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, this Oracle Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of Oracle Cloud Storage's S3-compatible interoperability.
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-eloqua.md
This Oracle Eloqua connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md
This Oracle Responsys connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-service-cloud.md
This Oracle Service Cloud connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md
This Oracle connector is supported for the following capabilities:
|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;| |[Script activity](transform-data-using-script.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md
This PayPal connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md
This Phoenix connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
This PostgreSQL connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-presto.md
This Presto connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Quickbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md
This Quickbase connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
This QuickBooks connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
This REST connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;| |[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md
This Salesforce Marketing Cloud connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md
This Salesforce Service Cloud connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
This Salesforce connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
This SAP Business Warehouse Open Hub connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
This SAP Business Warehouse connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
This SAP CDC connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;, &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
This SAP CDC connector uses the SAP ODP framework to extract data from SAP source systems. For an introduction to the architecture of the solution, read [Introduction and architecture to SAP change data capture (CDC)](sap-change-data-capture-introduction-architecture.md) in our [SAP knowledge center](industry-sap-overview.md).
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-cloud-for-customer.md
This SAP Cloud for Customer connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
This SAP ECC connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
This SAP HANA connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
This SAP table connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of the data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
This ServiceNow connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md
This SFTP connector is supported for the following capabilities:
|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
Specifically, the SFTP connector supports:
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
This SharePoint Online List connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
This Shopify connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Smartsheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md
This Smartsheet connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
This Snowflake connector is supported for the following capabilities:
|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;| |[Script activity](transform-data-using-script.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For the Copy activity, this Snowflake connector supports the following functions:
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md
This Spark connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
This SQL Server connector is supported for the following capabilities:
|[Script activity](transform-data-using-script.md)|&#9312; &#9313;| |[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The following table lists the properties supported by the SQL Server source. You can edit these properties in the source options tab.
| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |- | | Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize |
-| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE |isolationLevel |
| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- | | Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- | | Enable native change data capture(Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be loaded automatically without any incremental date column required. You need to [enable change data capture](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
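To make the incremental extract option concrete: conceptually it is the classic watermark pattern, where each run remembers the highest value seen in the chosen date/time column and the next run reads only rows newer than that value. The Python sketch below is a conceptual illustration of that pattern under assumed table and column names (dbo.Customers, LastModifiedDate); it is not ADF's internal implementation.

```python
# Conceptual watermark-based incremental extraction; table and column names are hypothetical.
import pyodbc

def read_new_rows(conn_str: str, last_watermark):
    """Return rows modified since last_watermark, plus the new watermark value."""
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT Id, CustomerName, LastModifiedDate "
        "FROM dbo.Customers "
        "WHERE LastModifiedDate > ? "
        "ORDER BY LastModifiedDate",
        last_watermark,
    )
    rows = cursor.fetchall()
    # The highest value read becomes the watermark for the next run.
    new_watermark = rows[-1].LastModifiedDate if rows else last_watermark
    conn.close()
    return rows, new_watermark

# Each run persists the returned watermark and passes it into the next run:
# rows, watermark = read_new_rows(conn_str, watermark_from_previous_run)
```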
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md
This Square connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md
This Sybase connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
This TeamDesk connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md
This Teradata connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md
This Twilio connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
This Vertica connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-web-table.md
This Web table connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
This Xero connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Zendesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md
This Zendesk connector is supported for the following capabilities:
|| --| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zoho.md
This Zoho connector is supported for the following capabilities:
|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Copy activity currently supports the following interim data types: Boolean, Byte
The following data type conversions are supported between the interim types from source to sink.
-| Source\Sink | Boolean | Byte array | Decimal | Date/Time <small>(1)</small> | Float-point <small>(2)</small> | GUID | Integer <small>(3)</small> | String | TimeSpan |
+| Source\Sink | Boolean | Byte array | Decimal | Date/Time (1) | Float-point (2) | GUID | Integer (3) | String | TimeSpan |
| -- | - | - | - | - | | - | -- | | -- | | Boolean | ✓ | | ✓ | | ✓ | | ✓ | ✓ | | | Byte array | | ✓ | | | | | | ✓ | |
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Previously updated : 11/1/2023 Last updated : 11/16/2023
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
+## November 2023
+
+### LLM capability
+Our LLM capability enables seamless selection of APIs mapped to farm operations today, supporting use cases based on tillage, planting, application, and harvesting farm operations. In time, we'll add the capability to select APIs mapped to soil sensor, weather, and imagery data. The skills in our LLM capability allow for combining results, calculating area, ranking, and summarizing to help serve customer prompts. These capabilities enable others to build their own agriculture copilots that deliver insights to farmers. Learn more about this [here](concepts-llm-apis.md).
+ ## October 2023
-### Azure portal experience enhancement:
+### Azure portal experience enhancement
We released a new user-friendly experience to install ISV solutions that are available for Azure Data Manager for Agriculture users. You can now go to your Azure Data Manager for Agriculture instance in the Azure portal to view and install available solutions in a seamless experience. Today, the available ISV solutions are from Bayer AgPowered services; you can see the marketplace listing [here](https://azuremarketplace.microsoft.com/marketplace/apps?search=bayer&page=1). You can learn more about installing ISV solutions [here](how-to-set-up-isv-solution.md).
## July 2023
-### Weather API update:
+### Weather API update
We deprecated the old weather APIs starting with API version 2023-07-01. They're replaced with new, simple yet powerful, provider-agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather).
-### New farm operations connector:
+### New farm operations connector
We added support for Climate FieldView as a built-in data source. You can now automatically sync planting, application, and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
-### Common Data Model now with geo-spatial support:
-We updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
+### Common Data Model now with geo-spatial support
+We updated our data model to improve flexibility. The boundary object is deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
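To picture what the geometry property and a spatial overlap check look like, here's a small illustrative Python sketch using GeoJSON-style polygons and Shapely. The object shapes are simplified placeholders rather than the exact Azure Data Manager for Agriculture schema; the point is only that resolving a conflict comes down to an intersection test between geometries.

```python
# Simplified illustration: a GeoJSON geometry property plus a spatial overlap check.
# Object shapes and coordinates are placeholders, not the exact service schema.
from shapely.geometry import shape

incoming_field = {
    "id": "field-from-provider",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-97.0, 35.0], [-97.0, 35.1], [-96.9, 35.1], [-96.9, 35.0], [-97.0, 35.0]]],
    },
}

existing_field = {
    "id": "field-in-hierarchy",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-96.95, 35.05], [-96.95, 35.2], [-96.8, 35.2], [-96.8, 35.05], [-96.95, 35.05]]],
    },
}

# A spatial overlap query reduces to an intersection test between the two geometries.
overlaps = shape(incoming_field["geometry"]).intersects(shape(existing_field["geometry"]))
print(f"Overlap detected: {overlaps}")
```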
## June 2023
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 | | [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 | -- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md).
- **Query scan results via REST API** - Learn how to query scan results via [REST API](subassessment-rest-api.md). - **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md). - **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md).
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
- Title: Reference list of attack paths and cloud security graph components
-description: This article lists Microsoft Defender for Cloud's attack paths, organized by resource type.
-- Previously updated : 09/05/2023--
-# Reference list of attack paths and cloud security graph components
-
-This article lists the attack paths, connections, and insights used in Defender Cloud Security Posture Management (CSPM).
--- You need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths.-- What you see in your environment depends on the resources you're protecting, and your customized configuration.-
-Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
-
-## Attack paths
-
-### Azure VMs
-
-Prerequisite: For a list of prerequisites, see the [Availability table](how-to-manage-attack-path.md#availability) for attack paths.
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. |
-| Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. |
-| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-| Internet exposed VM has high severity vulnerabilities and read permission to a data store | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a data store. |
-| Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a key vault. |
-| VM has high severity vulnerabilities and high permission to a subscription | A virtual machine has high severity vulnerabilities and has high permission to a subscription. |
-| VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. |
-| VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
-| Internet exposed VM has high severity vulnerability and insecure SSH private key that can authenticate to another VM | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities, and has a plaintext SSH private key that can authenticate to another Azure virtual machine |
-| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities, and has an insecure plaintext secret that can authenticate to a SQL server |
-| VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has an insecure plaintext secret that can authenticate to a SQL server |
-| VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has an insecure plaintext secret that can authenticate to an Azure storage account |
-| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities, and has an insecure secret that can authenticate to an Azure storage account |
-
-### AWS EC2 instances
-
-Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md).
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. |
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. |
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. |
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an AWS Key Management Service (KMS) via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM policy and an AWS KMS policy.|
-| Internet exposed EC2 instance has high severity vulnerabilities | An AWS EC2 instance is reachable from the internet and has high severity vulnerabilities. |
-| EC2 instance with high severity vulnerabilities has high privileged permissions to an account | An AWS EC2 instance has high severity vulnerabilities and has permissions to an account. |
-| EC2 instance with high severity vulnerabilities has read permissions to a data store |An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket via an IAM policy or via a bucket policy, or via both an IAM policy and a bucket policy. |
-| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-| EC2 instance with high severity vulnerabilities has read permissions to a KMS key | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an AWS Key Management Service (KMS) key via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM and AWS KMS policy. |
-| Internet exposed EC2 instance has high severity vulnerability and insecure SSH private key that can authenticate to another AWS EC2 instance | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to another AWS EC2 instance |
-| Internet exposed EC2 instance has high severity vulnerabilities and has insecure secret that is used to authenticate to a RDS resource | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities, and has an insecure plaintext secret that can authenticate to an AWS RDS resource |
-| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to a RDS resource | An AWS EC2 instance has high severity vulnerabilities and has an insecure plaintext secret that can authenticate to an AWS RDS resource |
-| Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both |
-
-### GCP VM Instances
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. |
-| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
-| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine and assigned with Service Account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. |
-| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
-| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
-| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. |
-| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
-| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
-| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. |
-| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
-| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
-| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
-
-### Azure data
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
-| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
-| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
-| Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
-| Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
-| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container | An internal Azure storage container replicates its data to another Azure storage container that is reachable from the internet and allows public access, putting this data at risk. |
-| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).|
-
-### AWS data
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
-|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
-|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities |SQL on EC2 instance [EC2Name] has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
-| Managed database with excessive internet exposure allows basic (local user/password) authentication | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
-| Managed database with excessive internet exposure and sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks.|
-|Internet exposed managed database with sensitive data allows basic (local user/password) authentication (Preview) | The database can be accessed through the internet from specific IPs or IP ranges and allows authentication using username and password (basic authentication mechanism) which exposes a DB with sensitive data to brute force attacks. |
-|Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
-| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket | An internal AWS S3 bucket replicates its data to another S3 bucket that is reachable from the internet and allows public access, putting this data at risk. |
-| RDS snapshot is publicly available to all AWS accounts (Preview) | A snapshot of an RDS instance or cluster is publicly accessible by all AWS accounts. |
-| Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute |
-| Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) |
-| SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute |
-| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs) |
-| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket|
-
-### GCP data
-
-| Attack path display name | Attack path description |
-|--|--|
-| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. |
-
-### Azure containers
-
-Prerequisite: [Enable agentless container posture](concept-agentless-containers.md). This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) container data plane workloads in the security explorer.
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | An internet exposed Kubernetes pod in a namespace is running a container using an image that has vulnerabilities allowing remote code execution. |
-| Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | A Kubernetes pod in a namespace with host network access enabled is exposed to the internet via the host network. The pod is running a container using an image that has vulnerabilities allowing remote code execution. |
-
-### Azure DevOps repositories
-
-Prerequisite: [Enable DevOps Security in Defender for Cloud](defender-for-devops-introduction.md).
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed Azure DevOps repository with plaintext secret is publicly accessible | An Azure DevOps repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. |
-
-### GitHub repositories
-
-Prerequisite: [Enable DevOps Security in Defender for Cloud](defender-for-devops-introduction.md).
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed GitHub repository with plaintext secret is publicly accessible | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. |
-
-### APIs
-
-Prerequisite: [Enable Defender for APIs](defender-for-apis-deploy.md).
-
-| Attack path display name | Attack path description |
-|--|--|
-| Internet exposed APIs that are unauthenticated carry sensitive data | Azure API Management API is reachable from the internet, contains sensitive data and has no authentication enabled resulting in attackers exploiting APIs for data exfiltration. |
-
-## Cloud security graph components list
-
-This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
-
-### Insights
-
-| Insight | Description | Supported entities |
-|--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance |
-| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance, Azure MariaDB Single Server, Azure MySQL Single Server, Azure MySQL Flexible Server, Synapse Workspace, Azure PostgreSQL Single Server, Azure SQL Managed Instance |
-| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | MDC Sensitive data discovery:<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server (preview), Azure SQL Database (preview), RDS Instance (preview), RDS Instance Database (preview), RDS Cluster (preview)<br /><br />Purview Sensitive data discovery (preview):<br /><br />Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket |
-| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
-| Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
-| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources |
-| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
-| Allows public access | Indicates that public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, Azure DevOps repository, GitHub repository, GCP cloud storage bucket |
-| Doesn't have MFA enabled | Indicates that the user account does not have a multifactor authentication solution enabled | Microsoft Entra user account, IAM user |
-| Is external user | Indicates that the user account is outside the organization's domain | Microsoft Entra user account |
-| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
-| Contains common usernames | Indicates that a SQL server has user accounts with common usernames which are prone to brute force attacks. | SQL VM, Arc-Enabled SQL VM |
-| Can execute code on the host | Indicates that a SQL server allows executing code on the underlying VM using a built-in mechanism such as xp_cmdshell. | SQL VM, Arc-Enabled SQL VM |
-| Has vulnerabilities | Indicates that the resource SQL server has vulnerabilities detected | SQL VM, Arc-Enabled SQL VM |
-| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP |
-| Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container |
-| Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance |
-| Public IP metadata | Lists the metadata of a public IP | Public IP |
-| Identity metadata | Lists the metadata of an identity | Microsoft Entra identity |
-
-### Connections
-
-| Connection | Description | Source entity types | Destination entity types |
-|--|--|--|--|
-| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Microsoft Entra managed identity |
-| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Microsoft Entra user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, RDS Cluster, RDS Instance, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database, RDS Instance, RDS Instance Database |
-| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group |
-| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod |
-| Member of | Indicates that the source identity is a member of the target identities group | Microsoft Entra group, Microsoft Entra user | Microsoft Entra group |
-| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
-
-## Next steps
--- [Identify and analyze risks across your environment](concept-attack-path.md)-- [Identify and remediate attack paths](how-to-manage-attack-path.md)-- [Cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The following table summarizes each plan and their cloud availability.
| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-DevOps security features under the Defender CSPM plan will remain free until March 1, 2024. Defender CSPM DevOps security features include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings.
Starting March 1, 2024, Defender CSPM must be enabled to get premium DevOps security capabilities, which include code-to-cloud contextualization (powering security explorer and attack paths) and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Previously updated : 09/05/2023 Last updated : 10/26/2023 # About data-aware security posture
Defender CSPM provides visibility and contextual insights into your organization
Attack path analysis helps you to address security issues that pose immediate threats, and have the greatest potential for exploit in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the risks.
-You can discover risk of data breaches by attack paths of internet-exposed VMs that have access to sensitive data stores. Hackers can exploit exposed VMs to move laterally across the enterprise to access these stores. Review [attack paths](attack-path-reference.md#attack-paths).
+You can discover risk of data breaches by attack paths of internet-exposed VMs that have access to sensitive data stores. Hackers can exploit exposed VMs to move laterally across the enterprise to access these stores.
### Cloud Security Explorer
defender-for-cloud Concept Integration 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md
+
+ Title: Alerts and incidents in Microsoft 365 Defender
+description: Learn about the benefits of receiving Microsoft Defender for Cloud's alerts in Microsoft 365 Defender
+ Last updated : 11/16/2023++
+# Alerts and incidents in Microsoft 365 Defender
+
+Microsoft Defender for Cloud's integration with Microsoft 365 Defender allows security teams to access Defender for Cloud alerts and incidents within the Microsoft 365 Defender portal. This integration provides richer context to investigations that span cloud resources, devices, and identities.
+
+The partnership with Microsoft 365 Defender allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. This is achieved through immediate correlations of alerts and incidents.
+
+Microsoft 365 Defender offers a comprehensive solution that combines protection, detection, investigation, and response capabilities to protect against attacks on device, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency.
+
+Incidents and alerts are now part of [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide). This integration lets you export security alert data to any system through a single API. Microsoft Defender for Cloud is committed to providing users with the best possible security solutions, and this integration is a significant step toward that goal.
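As a minimal sketch (not an official sample), the snippet below acquires a client-credentials token for the Microsoft 365 Defender API and lists incidents, which now include those raised by Defender for Cloud. The tenant, app ID, and secret are placeholders, and the response field names should be verified against the incidents API reference.

```python
import msal
import requests

TENANT_ID = "<tenant-id>"        # placeholders for a Microsoft Entra app that has been
CLIENT_ID = "<app-client-id>"    # granted Microsoft 365 Defender API permissions
CLIENT_SECRET = "<app-secret>"

# Client credentials flow against the Microsoft 365 Defender API resource.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://api.security.microsoft.com/.default"])

# List incidents; Defender for Cloud incidents appear in the same queue as
# incidents from the other Defender workloads.
response = requests.get(
    "https://api.security.microsoft.com/api/incidents",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
response.raise_for_status()
for incident in response.json().get("value", []):
    print(incident["incidentId"], incident["incidentName"], incident["severity"])
```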
+
+## Investigation experience in Microsoft 365 Defender
+
+The following table describes the detection and investigation experience in Microsoft 365 Defender with Defender for Cloud alerts.
+
+| Area | Description |
+|--|--|
+| Incidents | All Defender for Cloud incidents are integrated into Microsoft 365 Defender. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue?view=o365-worldwide) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#attack-story) graph shows cloud resources. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#assets) in an incident page shows cloud resources. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. |
+| Alerts | All Defender for Cloud alerts, including multicloud, internal, and external providers' alerts, are integrated into Microsoft 365 Defender. Defender for Cloud alerts appear in the Microsoft 365 Defender [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br> <br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or Google Cloud resource. <br> <br> Defender for Cloud alerts are automatically associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.|
+| Alert and incident correlation | Alerts and incidents are automatically correlated, providing robust context to security operations teams to understand the complete attack story in their cloud environment. |
+| Threat detection | Accurate matching of virtual entities to device entities to ensure precision and effective threat detection. |
+| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft 365 Defender's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide), allowing customers to export their security alerts data into other systems using one API. |
+
+Learn more about [handling alerts in Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide).
+
+## Next steps
+
+[Security alerts - a reference guide](alerts-reference.md)
defender-for-cloud Connect Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md
Title: Connect your Azure subscriptions description: Learn how to connect your Azure subscriptions to Microsoft Defender for Cloud Previously updated : 07/10/2023 Last updated : 11/02/2023
Microsoft Defender for Cloud is a cloud-native application protection platform (
- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads
-Defender for Cloud includes Foundational CSPM capabilities for free, complemented by additional paid plans required to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) for free. You can add paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack.
If you want to disable any of the plans, toggle the individual plan to **off**.
> [!TIP] > To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md).
+## Integrate with Microsoft 365 Defender
+
+When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
+
+The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface.
+
+Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md).
+ ## Next steps In this guide, you enabled Defender for Cloud on your Azure subscription. The next step is to set up your hybrid and multicloud environments.
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
description: Learn how data is managed and safeguarded in Microsoft Defender for
Previously updated : 07/18/2023 Last updated : 11/02/2023 # Microsoft Defender for Cloud data security
Customers can access Defender for Cloud related data from the following data str
| [Azure Monitor logs](../azure-monitor/data-platform.md) | All security alerts. | | [Azure Resource Graph](../governance/resource-graph/overview.md) | Security alerts, security recommendations, vulnerability assessment results, secure score information, status of compliance checks, and more. | | [Microsoft Defender for Cloud REST API](/rest/api/defenderforcloud/) | Security alerts, security recommendations, and more. |- > [!NOTE] > If there are no Defender plans enabled on the subscription, data will be removed from Azure Resource Graph after 30 days of inactivity in the Microsoft Defender for Cloud portal. After interaction with artifacts in the portal related to the subscription, the data should be visible again within 24 hours.
+## Defender for Cloud and Microsoft 365 Defender integration
+
+When you enable any of Defender for Cloud's paid plans, you automatically gain all of the benefits of Microsoft 365 Defender. Information from Defender for Cloud is shared with Microsoft 365 Defender. This data might contain customer data and is stored according to [Microsoft 365 data handling guidelines](/microsoft-365/security/defender/data-privacy?view=o365-worldwide).
+ ## Next steps In this document, you learned how data is managed and safeguarded in Microsoft Defender for Cloud.
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
Title: Defender for Cloud glossary description: The glossary provides a brief description of important Defender for Cloud platform terms and concepts. Previously updated : 07/18/2023 Last updated : 11/08/2023
Azure Security Benchmark provides recommendations on how you can secure your clo
### **Attack Path Analysis**
-A graph-based algorithm that scans the cloud security graph, exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. See [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis).
+A graph-based algorithm that scans the cloud security graph, exposes attack paths, and suggests recommendations for how best to remediate issues that break the attack path and prevent a successful breach. See [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis).
### **Auto-provisioning**
Data-aware security posture automatically discovers datastores containing sensit
### Defender agent
-The DaemonSet that is deployed on each node, collects signals from hosts using eBPF technology, and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. For more information, see [Architecture for each Kubernetes environment](defender-for-containers-architecture.md#architecture-for-each-kubernetes-environment).
+The DaemonSet that is deployed on each node collects signals from hosts using eBPF technology and provides runtime protection. The agent is registered with a Log Analytics workspace and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It's deployed under the AKS Security profile in AKS clusters and as an Arc extension in Arc-enabled Kubernetes clusters. For more information, see [Architecture for each Kubernetes environment](defender-for-containers-architecture.md#architecture-for-each-kubernetes-environment).
### **DDOS Attack**
Distributed denial-of-service, a type of attack where an attacker sends more req
### **EASM**
-External Attack Surface Management. See [EASM Overview](how-to-manage-attack-path.md#external-attack-surface-management-easm).
+External Attack Surface Management. See [EASM Overview](concept-easm.md).
### **EDR**
Microsoft Defender Vulnerability Management. Learn how to [enable vulnerability
### **MFA**
-Multi-factor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan.[How it works: Azure Multi Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md).
+Multifactor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan. See [How it works: Azure multifactor authentication](../active-directory/authentication/concept-mfa-howitworks.md).
### **MITRE ATT&CK**
Security alerts are the notifications generated by Defender for Cloud and Defend
### **Security Initiative**
-A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
+A collection of Azure Policy Definitions, or rules that are grouped together towards a specific goal or purpose. [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
### **Security Policy**
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: What is Microsoft Defender for Cloud?- description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multicloud resources and workloads. Previously updated : 07/24/2023 Last updated : 11/02/2023 # What is Microsoft Defender for Cloud?
Microsoft Defender for Cloud is a cloud-native application protection platform (
- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads
-![Diagram that shows the core functionality of Microsoft Defender for Cloud.](media/defender-for-cloud-introduction/defender-for-cloud-pillars.png)
> [!NOTE] > For Defender for Cloud pricing information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+When you [enable Defender for Cloud on your Azure subscription](connect-azure-subscription.md), you automatically gain access to Microsoft 365 Defender.
+
+The Microsoft 365 Defender portal provides richer context to investigations that span cloud resources, devices, and identities. In addition, through the immediate correlation of alerts and incidents, including those for cloud resources, security teams get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment.
+
+You can learn more about the [integration between Microsoft Defender for Cloud and Microsoft 365 Defender](concept-integration-365.md).
++ ## Secure cloud applications Defender for Cloud helps you to incorporate good security practices early during the software development process, or DevSecOps. You can protect your code management environments and your code pipelines, and get insights into your development environment security posture from a single location. Defender for Cloud empowers security teams to manage DevOps security across multi-pipeline environments.
Today's applications require security awareness at the code, infrastructure, a
## Improve your security posture
-The security of your cloud and on-premises resources depends on proper configuration and deployment. Defender for Cloud recommendations identify the steps that you can take to secure your environment.
+The security of your cloud and on-premises resources depends on proper configuration and deployment. Defender for Cloud recommendations identify the steps that you can take to secure your environment.
Defender for Cloud includes Foundational CSPM capabilities for free. You can also enable advanced CSPM capabilities by enabling the Defender CSPM plan.
Defender for Cloud includes Foundational CSPM capabilities for free. You can als
| [Data-aware Security Posture](concept-data-security-posture.md) | Data-aware security posture automatically discovers datastores containing sensitive data, and helps reduce risk of data breaches. | [Enable data-aware security posture](data-security-posture-enable.md) | Defender CSPM or Defender for Storage | | [Attack path analysis](concept-attack-path.md#what-is-attack-path-analysis) | Model traffic on your network to identify potential risks before you implement changes to your environment. | [Build queries to analyze paths](how-to-manage-attack-path.md) | Defender CSPM | | [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) | A map of your cloud environment that lets you build queries to find security risks. | [Build queries to find security risks](how-to-manage-cloud-security-explorer.md) | Defender CSPM |
-| [Security governance](governance-rules.md#building-an-automated-process-for-improving-security-with-governance-rules) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md#defining-governance-rules-to-automatically-set-the-owner-and-due-date-of-recommendations) | Defender CSPM |
+| [Security governance](governance-rules.md) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md) | Defender CSPM |
| [Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) | Provide comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP. | [Review your Permission Creep Index (CPI)](other-threat-protections.md#entra-permission-management-formerly-cloudknox) | Defender CSPM | ## Protect cloud workloads
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Container vulnerability assessment powered by Qualys has the following capabilit
| [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainerRegistryRecommendationDetailsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)| Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 | | [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c | -- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md).
- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API (see the request sketch after this list). See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get). - **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).
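A request sketch for the sub-assessment API bullet above: it lists sub-assessments at subscription scope over plain REST. The api-version shown is the commonly documented preview version; treat it, and the property paths, as assumptions to confirm against the REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

# Token for Azure Resource Manager.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# List vulnerability sub-assessments across the subscription scope.
response = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Security/subAssessments",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2019-01-01-preview"},
)
response.raise_for_status()
for finding in response.json().get("value", []):
    props = finding.get("properties", {})
    print(props.get("displayName"), props.get("status", {}).get("severity"))
```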
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
Last updated 10/29/2023
-# Exempt resources from recommendations in Defender for Cloud
+# Exempt resources from recommendations
When you investigate security recommendations in Microsoft Defender for Cloud, you usually review the list of affected resources. Occasionally, a resource will be listed that you feel shouldn't be included. Or a recommendation will show in a scope where you feel it doesn't belong. For example, a resource might have been remediated by a process not tracked by Defender for Cloud, or a recommendation might be inappropriate for a specific subscription. Or perhaps your organization has decided to accept the risks related to the specific resource or recommendation.
In such cases, you can create an exemption to:
For the scope you need, you can create an exemption rule to: -- Mark a specific **recommendation** or as "mitigated" or "risk accepted" for one or more subscriptions, or for an entire management group.
+- Mark a specific **recommendation** as "mitigated" or "risk accepted" for one or more subscriptions, or for an entire management group.
- Mark **one or more resources** as "mitigated" or "risk accepted" for a specific recommendation. ## Before you start
-This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]. This is a premium Azure Policy capability that's offered at no more cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. [Review Azure cloud support](support-matrix-cloud-environment.md).
+This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] This is a premium Azure Policy capability that's offered at no additional cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future.
- You need the following permissions to make exemptions: - **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption
To create an exemption rule:
1. In the Defender for Cloud portal, open the **Recommendations** page, and select the recommendation you want to exempt.
-1. From the toolbar at the top of the page, select **Exempt**.
+1. In **Take action**, select **Exempt**.
:::image type="content" source="media/exempt-resource/exempting-recommendation.png" alt-text="Create an exemption rule for a recommendation to be exempted from a subscription or management group.":::
After creating the exemption it can take up to 30 minutes to take effect. After
- If you've exempted specific resources, they'll be listed in the **Not applicable** tab of the recommendation details page.
- If you've exempted a recommendation, it will be hidden by default on Defender for Cloud's recommendations page. This is because the default options of the **Recommendation status** filter on that page are to exclude **Not applicable** recommendations. The same is true if you exempt all recommendations in a security control.
- :::image type="content" source="media/exempt-resource/recommendations-filters-hiding-not-applicable.png" alt-text="Screenshot showing default filters on Microsoft Defender for Cloud's recommendations page hide the not applicable recommendations and security controls." lightbox="media/exempt-resource/recommendations-filters-hiding-not-applicable.png":::
## Next steps
-[Review recommendations](review-security-recommendations.md) in Defender for Cloud.
+[Review exempted resources](review-exemptions.md) in Defender for Cloud.
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
Title: Driving your organization to remediate security issues with recommendation governance
-description: Learn how to assign owners and due dates to security recommendations and create rules to automatically assign owners and due dates
+ Title: Drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud
+description: Learn how to drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud
Previously updated : 01/23/2023 Last updated : 10/29/2023
-# Drive remediation with security governance
+# Drive remediation with governance rules
-Security teams are responsible for improving the security posture of their organizations but they might not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
+While the security team is responsible for improving the security posture, its members might not be the people who actually implement security recommendations.
-Stay on top of the progress on the recommendations in the security posture. Weekly email notifications to the owners and managers make sure that they take timely action on the recommendations that can improve your security posture and recommendations.
+Governance rules defined by the security team help you establish accountability and an SLA around the remediation process.
-You can learn more by watching this video from the Defender for Cloud in the Field video series:
+To learn more, watch [this episode](episode-fifteen.md) of the Defender for Cloud in the Field video series.
-- [Remediate Security Recommendations with Governance](episode-fifteen.md)
+## Governance rules
-## Building an automated process for improving security with governance rules
+You can define rules that assign an owner and a due date for addressing recommendations for specific resources. This provides resource owners with a clear set of tasks and deadlines for remediating recommendations.
-To make sure your organization is systematically improving its security posture, you can define rules that assign an owner and set the due date for resources in the specified recommendations. That way resource owners have a clear set of tasks and deadlines for remediating recommendations.
+For tracking, you can review the progress of the remediation tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention.
-You can then review the progress of the tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention.
+- Governance rules can identify resources that require remediation according to specific recommendations or severities.
+- The rule assigns an owner and due date to ensure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with the lowest priority value is the one that assigns the owner and due date.
+- The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule.
+- For example, if the rule identifies the resource on March 1 and the remediation timeframe is 14 days, March 15 is the due date.
+- You can apply a grace period so that the resources given a due date don't affect your secure score.
+- You can also set the owner of the resources that are affected by the specified recommendations.
+- In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag.
+- The owner is shown as unspecified when the owner wasn't found on the resource, the associated resource group, or the associated subscription based on the specified tag.
+- By default, email notifications are sent to the resource owners weekly to provide a list of the on-time and overdue tasks.
+- If an email for the owner's manager is found in the organizational Microsoft Entra ID, the owner's manager receives a weekly email showing any overdue recommendations by default.
+- Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS master accounts, and GCP organizations) take effect before rules on a single scope (for example, Azure subscriptions, AWS accounts, or GCP projects).
-### Availability
+## Before you begin
-|Aspect|Details|
-|-|:-|
-|Release state:|General availability (GA)|
-|Prerequisite: | Requires the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) to be enabled.|
-|Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP - **Contributor**, **Security Admin**, or **Owner** on the connector|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
+- To use governance rules, the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) must be enabled.
+- You need **Contributor**, **Security Admin**, or **Owner** permissions on Azure subscriptions.
+- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS/GCP connectors.
-> [!NOTE]
-> Starting January 1, 2023, governance capabilities will require Defender Cloud Security Posture Management (CSPM) plan enablement.
-> Customers deciding to keep Defender CSPM plan off on scopes with governance content:
->
-> - Existing assignments remain as is and continue to work with no customization option or ability to create new ones.
-> - Existing rules will remain as is but won't trigger new assignments creation.
-### Defining governance rules to automatically set the owner and due date of recommendations
+## Define a governance rule
-Governance rules can identify resources that require remediation according to specific recommendations or severities. The rule assigns an owner and due date to ensure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date.
-
-The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1 and the remediation timeframe is 14 days, March 15 is the due date. You can apply a grace period so that the resources that 's given a due date don't affect your secure score until they're overdue.
-
-You can also set the owner of the resources that are affected by the specified recommendations. In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag.
-
-The owner is shown as unspecified when the owner wasn't found on the resource, the associated resource group, or the associated subscription based on the specified tag.
--
-By default, email notifications are sent to the resource owners weekly to provide a list of the on time and overdue tasks. If an email for the owner's manager is found in the organizational Microsoft Entra ID, the owner's manager receives a weekly email showing any overdue recommendations by default.
--
-To define a governance rule that assigns an owner and due date:
-
-1. Navigate to **Environment settings** > **Governance rules**.
+Define a governance rule as follows.
+1. In Defender for Cloud, open the **Environment settings** page, and select **Governance rules**.
1. Select **Create governance rule**.
+1. In **Create governance rule** > **General details**, specify a rule name and the scope to which the rule applies.
+
+ - Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope.
+ - You can define exclusions within the scope as needed.
-1. Enter a name for the rule.
-1. Select a scope to apply the rule to and use exclusions if needed. Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope.
+1. Priority is assigned automatically. Rules are run in priority order from the highest (1) to the lowest (1000).
+1. Specify a description to help you identify the rule. Then select **Next**.
-1. Priority is assigned automatically after scope selection. You can override this field if needed.
+ :::image type="content" source="./media/governance-rules/add-rule.png" alt-text="Screenshot of page for adding a governance rule." lightbox="media/governance-rules/add-rule.png":::
-1. Select the recommendations that the rule applies to, either:
+1. In the **Conditions** tab, specify how recommendations are impacted by the rule.
- **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.
- - **By specific recommendations** - Select the specific recommendations that the rule applies to.
-1. Set the owner to assign to the recommendations either:
+ - **By specific recommendations** - Select the specific built-in or custom recommendations that the rule applies to.
+1. In **Set owner**, specify who's responsible for fixing recommendations covered by the rule.
    - **By resource tag** - Enter the resource tag on your resources that defines the resource owner.
    - **By email address** - Enter the email address of the owner to assign to the recommendations.
-1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due.
-1. If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
-1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options.
-1. Select **Create**.
-
-If there are existing recommendations that match the definition of the governance rule, you can either:
--- Assign an owner and due date to recommendations that don't already have an owner or due date.-- Overwrite the owner and due date of existing recommendations.-
-> [!NOTE]
-> When you delete or disable a rule, all existing assignments and notifications will remain.
-> [!TIP]
-> Here are some sample use-cases for the at-scale experience:
->
-> - View and manage all governance rules effective in the organization using a single page.
-> - Create and apply rules on multiple scopes at once using management scopes cross cloud.
-> - Check effective rules on selected scope using the scope filter.
-
-To view the effect of rules on a specific scope, use the Scope filter to select a specific scope.
-
-Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS accounts and GCP organizations), take effect before rules on scopes (for example, Azure subscriptions, AWS accounts, or GCP projects).
-
-## Manually assigning owners and due dates for recommendation remediation
-
-For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that 's given a due date don't affect your secure score unless they become overdue.
-
-To manually assign owners and due dates to recommendations:
+1. In **Set remediation timeframe**, specify the time allowed between when resources are identified as requiring remediation and when the remediation is due.
+1. For recommendations issued by MCSB, if you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
+1. By default, owners and their managers are notified weekly about open and overdue tasks. If you don't want them to receive these weekly emails, clear the notification options.
+1. Select **Create**.
-1. Go to the list of recommendations:
- - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
- - Go to **Recommendations** in the Defender for Cloud menu.
-1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score.
+ :::image type="content" source="./media/governance-rules/create-rule-conditions.png" alt-text="Screenshot of page for adding conditions for a governance rule." lightbox="media/governance-rules/create-rule-conditions.png":::
- > [!TIP]
- > You can also use the search box and filters above the list of recommendations to find specific recommendations.
-1. Select a recommendation to see the affected resources.
-1. For any resource that doesn't have an owner or due date, select the resources and select **Assign owner**.
-1. Enter the email address of the owner that needs to make the changes that remediate the recommendation for those resources.
-1. Select the date by which to remediate the recommendation for the resources.
-1. You can select **Apply grace period** to keep the resource from affecting the secure score until it's overdue.
-1. Select **Save**.
+- If there are existing recommendations that match the definition of the governance rule, you can either:
-The recommendation is now shown as assigned and on time.
+ - Assign an owner and due date to recommendations that don't already have an owner or due date.
+ - Overwrite the owner and due date of existing recommendations.
+- When you delete or disable a rule, all existing assignments and notifications remain.
-## Tracking the status of recommendations for further action
-After you define governance rules, you'll want to review the progress that the owners are making in remediating the recommendations.
+## View effective rules
-You can track the assigned and overdue recommendations in:
+You can view the effect of governance rules in your environment.
-- The security posture shows the number of unassigned and overdue recommendations.
+1. In the Defender for Cloud portal, open the **Governance rules** page.
+1. Review governance rules. The default list shows all the governance rules applicable in your environment.
+1. You can search for rules, or filter rules.
+ - Filter on **Environment** to identify rules for Azure, AWS, and GCP.
    - Filter on rule name, owner, or the time between when the recommendation was issued and its due date.
+ - Filter on **Grace period** to find MCSB recommendations that won't affect your secure score.
+ - Identify by status.
- :::image type="content" source="./media/governance-rules/governance-in-security-posture.png" alt-text="Screenshot of governance status in the security posture.":::
+ :::image type="content" source="./media/governance-rules/view-filter-rules.png" alt-text="Screenshot of page for viewing and filtering rules." lightbox="media/governance-rules/view-filter-rules.png":::
-- The list of recommendations shows the governance status of each recommendation.
- :::image type="content" source="./media/governance-rules/governance-in-recommendations.png" alt-text="Screenshot of recommendations with their governance status." lightbox="media/governance-rules/governance-in-recommendations.png":::
-- The governance report in the governance rules settings lets you drill down into recommendations by rule and owner.
- :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
-### Tracking progress by rule with the governance report
+## Review the governance report
The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
-> [!NOTE]
-> Manual assignments will not appear on this report. To see all assignments by owner, use the Owner tab on the Security Posture page.
+1. In Defender for Cloud > **Environment settings** > **Governance rules**, select **Governance report**.
+1. In **Governance**, select a subscription.
-**To review the status of the recommendations in a rule**:
+ :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
-1. In **Recommendations**, select **Governance report**.
-1. Select the subscriptions that you want to review.
-1. Select the rules that you want to see details about.
+1. From the governance report, you can drill down into recommendations by rule and owner.
-You can see the list of owners and recommendations for the selected rules, and their status.
-
-**To see the list of recommendations for each owner**:
-
-1. Select **Security posture**.
-1. Select the **Owner** tab to see the list of owners and the number of overdue recommendations for each owner.
-
- - Hover over the (i) in the overdue recommendations to see the breakdown of overdue recommendations by severity.
-
- - If the owner email address is found in the organizational Microsoft Entra ID, you'll see the full name and picture of the owner.
-
-1. Select **View recommendations** to go to the list of recommendations associated with the owner.
## Next steps
-In this article, you learned how to set up a process for assigning owners and due dates to tasks so that owners are accountable for taking steps to improve your security posture.
-
-Check out how owners can [set ETAs for tasks](review-security-recommendations.md#manage-the-owner-and-eta-of-recommendations-that-are-assigned-to-you) so that they can manage their progress.
-Learn how to [Implement security recommendations in Microsoft Defender for Cloud](implement-security-recommendations.md).
+Learn how to [Implement security recommendations](implement-security-recommendations.md).
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
Title: Identify and remediate attack paths-
-description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment.
+ Title: Identify and remediate attack paths in Microsoft Defender for Cloud
+description: Learn how to identify and remediate attack paths in Microsoft Defender for Cloud
-+ Last updated 11/01/2023
The attack path page shows you an overview of all of your attack paths. You can
:::image type="content" source="media/concept-cloud-map/attack-path-homepage.png" alt-text="Screenshot of a sample attack path homepage." lightbox="media/concept-cloud-map/attack-path-homepage.png":::
-On this page you can organize your attack paths based on name, environment, paths count, risk categories.
+On this page you can organize your attack paths based on risk level, name, environment, paths count, risk factors, entry point, target, the number of affected resources, or the number of active recommendations.
-For each attack path, you can see all of risk categories and any affected resources.
+For each attack path, you can see all of the risk factors and any affected resources.
-The potential risk categories include credentials exposure, compute abuse, data exposure, subscription and account takeover.
+The potential risk factors include credentials exposure, compute abuse, data exposure, subscription and account takeover.
Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
You can use Attack path analysis to locate the biggest risks to your environmen
1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
- :::image type="content" source="media/how-to-manage-attack-path/attack-path-blade.png" alt-text="Screenshot that shows the attack path analysis blade on the main screen." lightbox="media/how-to-manage-attack-path/attack-path-blade.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/attack-path-blade.png" alt-text="Screenshot that shows the attack path analysis page on the main screen." lightbox="media/how-to-manage-attack-path/attack-path-blade.png":::
1. Select an attack path.
- :::image type="content" source="media/how-to-manage-cloud-map/attack-path.png" alt-text="Screenshot that shows a sample of attack paths." lightbox="media/how-to-manage-cloud-map/attack-path.png" :::
-
- > [!NOTE]
- > An attack path might have more than one path that is at risk. The path count will tell you how many paths need to be remediated. If the attack path has more than one path, you will need to select each path within that attack path to remediate all risks.
- 1. Select a node. :::image type="content" source="media/how-to-manage-cloud-map/node-select.png" alt-text="Screenshot of the attack path screen that shows you where the nodes are located for selection." lightbox="media/how-to-manage-cloud-map/node-select.png":::
Once an attack path is resolved, it can take up to 24 hours for an attack path t
Attack path analysis also gives you the ability to see all recommendations by attack path without having to check each node individually. You can resolve all recommendations without having to view each node individually.
+The remediation path contains two types of recommendation:
+
+- **Recommendations** - Recommendations that mitigate the attack path.
+- **Additional recommendations** - Recommendations that reduce the exploitation risks, but don't mitigate the attack path.
+ **To resolve all recommendations**: 1. Sign in to the [Azure portal](https://portal.azure.com).
Attack path analysis also gives you the ability to see all recommendations by at
1. Select an attack path.
-1. Select **Recommendations**.
+1. Select **Remediation**.
:::image type="content" source="media/how-to-manage-cloud-map/bulk-recommendations.png" alt-text="Screenshot that shows where to select on the screen to see the attack paths full list of recommendations." lightbox="media/how-to-manage-cloud-map/bulk-recommendations.png":::
securityresources
``` **Get all instances for a specific attack path**:
-For example, 'Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault'.
+For example, `Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault`.
```kusto securityresources
The following table lists the data fields returned from the API response:
|--|--|
| ID | The Azure resource ID of the attack path instance|
| Name | The Unique identifier of the attack path instance|
-| Type | The Azure resource type, always equals "microsoft.security/attackpaths"|
+| Type | The Azure resource type, always equals `microsoft.security/attackpaths`|
| Tenant ID | The tenant ID of the attack path instance |
| Location | The location of the attack path |
| Subscription ID | The subscription of the attack path |
The following table lists the data fields returned from the API response:
| Properties.graphComponent.connections | List of connections graph components related to the attack path |
| Properties.AttackPathID | The unique identifier of the attack path instance |
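As a sketch of how these fields can be consumed, the following Azure Resource Graph query returns the instances of a single attack path by display name, using the `microsoft.security/attackpaths` type shown above. The display name is only an example, and exact property casing should be confirmed against a real response.

```kusto
// Sketch: get all instances of a specific attack path by display name.
securityresources
| where type == "microsoft.security/attackpaths"
| where properties.displayName == "Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault"
| project id, name, subscriptionId, properties
```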
-## External attack surface management (EASM)
-
-An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it's to protect.
-
-While you're [investigating and remediating an attack path](#investigate-and-remediate-attack-paths), you can also view your EASM if it's available, and if you've enabled Defender EASM to your subscription.
-
-> [!NOTE]
-> To manage your EASM, you must [deploy the Defender EASM Azure resource](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md) to your subscription. Defender EASM has its own cost and is separate from Defender for Cloud. To learn more about Defender for EASM pricing options, you can check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/).
-
-**To manage your EASM**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
-
-1. Select an attack path.
-
-1. Select a resource.
-
-1. Select **Insights**.
-
-1. Select **Open EASM**.
-
- :::image type="content" source="media/how-to-manage-attack-path/open-easm.png" alt-text="Screenshot that shows you where on the screen you need to select open Defender EASM from." lightbox="media/how-to-manage-attack-path/easm-zoom.png":::
-
-1. Follow the [Using and managing discovery](../external-attack-surface-management/using-and-managing-discovery.md) instructions.
- ## Next Steps Learn how to [build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md).
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer-
-description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment.
+ Title: Build queries with cloud security explorer in Microsoft Defender for Cloud
+description: Learn how to build queries with cloud security explorer in Microsoft Defender for Cloud
Last updated 11/01/2023
defender-for-cloud How To Test Attack Path And Security Explorer With Vulnerable Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md
Title: How-to test the attack path and cloud security explorer using a vulnerable container image in Microsoft Defender for Cloud
-description: Learn how to test the attack path and security explorer using a vulnerable container image
+ Title: Test attack paths and cloud security explorer in Microsoft Defender for Cloud
+description: Learn how to test attack paths and cloud security explorer in Microsoft Defender for Cloud
Previously updated : 07/17/2023 Last updated : 11/08/2023
-# Testing the Attack Path and Security Explorer using a vulnerable container image
+# Test attack paths and cloud security explorer
-## Observing potential threats in the attack path experience
-Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
+Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best to remediate issues that break the attack path and prevent a successful breach.
-Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on name, environment, path count, and risk categories. Explore cloud security graph Insights on the resource. Examples of Insight types are:
+Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on risk level, name, environment, risk factors, entry point, target, affected resources, and active recommendations. Explore cloud security graph Insights on the resource. Examples of Insight types are:
- Pod exposed to the internet - Privileged container
You can build queries in one of the following ways:
### Find the security issue under attack paths
-1.Go to **Recommendations** in the Defender for Cloud menu.
-1. Select the **Attack Path** link to open the attack paths view.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="media/how-to-test-attack-path/attack-path.png" alt-text="Screenshot of showing where to select Attack Path." lightbox="media/how-to-test-attack-path/attack-path.png":::
+1. Navigate to **Attack path analysis**.
-1. Locate the entry that details this security issue under "Internet exposed Kubernetes pod is running a container with high severity vulnerabilities."
+1. Select an attack path.
- :::image type="content" source="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png" alt-text="Screenshot showing the security issue details." lightbox="media/how-to-test-attack-path/attack-path-kubernetes-pods-vulnerabilities.png":::
+1. Locate the entry that details this security issue under `Internet exposed Kubernetes pod is running a container with high severity vulnerabilities`.
### Explore risks with cloud security explorer templates
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Implement security recommendations
-description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies.
+ Title: Remediate security recommendations in Microsoft Defender for Cloud
+description: Learn how to remediate security recommendations in Microsoft Defender for Cloud
Previously updated : 10/20/2022 Last updated : 11/08/2023
-# Implement security recommendations in Microsoft Defender for Cloud
+# Remediate security recommendations
-Recommendations give you suggestions on how to better secure your resources. You implement a recommendation by following the remediation steps provided in the recommendation.
+Resources and workloads protected by Microsoft Defender for Cloud are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues and improve security posture.
-<a name="remediation-steps"></a>
+This article describes how to remediate security recommendations in your Defender for Cloud deployment using the latest version of the portal experience.
-## Remediation steps
+## Before you start
-After reviewing all the recommendations, decide which one to remediate first. We recommend that you prioritize the security controls with the highest potential to increase your secure score.
+Before you attempt to remediate a recommendation, you should review it in detail. Learn how to [review security recommendations](review-security-recommendations.md).
-1. From the list, select a recommendation.
+## Group recommendations by risk level
-1. Follow the instructions in the **Remediation steps** section. Each recommendation has its own set of instructions. The following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS.
+Before you start remediating, we recommend grouping your recommendations by risk level in order to remediate the most critical recommendations first.
- :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="Manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Once completed, a notification appears informing you whether the issue is resolved.
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
+
+1. Select **Group by** > **Primary grouping** > **Risk level** > **Apply**.
-## Fix button
+ :::image type="content" source="media/implement-security-recommendations/group-by-risk-level.png" alt-text="Screenshot of the recommendations page that shows how to group your recommendations." lightbox="media/implement-security-recommendations/group-by-risk-level.png":::
-To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option.
+ Recommendations are displayed, grouped by risk level.
-**Fix** helps you quickly remediate a recommendation on multiple resources.
+1. Review critical and other recommendations to understand the recommendation and its remediation steps. Use the graph to understand the risk to your business, including which resources are exploitable, and the effect that the recommendation has on your business. If you prefer to list these recommendations outside the portal, see the sketch after these steps.
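The sketch below is one way to list the same unhealthy recommendations outside the portal with Azure Resource Graph. It assumes the `microsoft.security/assessments` type and property paths commonly used in published sample queries, not anything specific to this article, so treat it as a starting point.

```kusto
// Sketch: list unhealthy (failed) recommendations and the affected resource.
securityresources
| where type == "microsoft.security/assessments"
| extend statusCode       = tostring(properties.status.code),
         recommendation   = tostring(properties.displayName),
         affectedResource = tostring(properties.resourceDetails.Id)
| where statusCode == "Unhealthy"
| project subscriptionId, recommendation, affectedResource
| order by recommendation asc
```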
-To implement a **Fix**:
-1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
+## Remediate recommendations
- :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
+After reviewing recommendations by risk, decide which one to remediate first.
-1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Fix**.
+In addition to risk level, we recommend that you prioritize the security controls in the default [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) standard in Defender for Cloud, since these controls affect your [secure score](secure-score-security-controls.md).
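To watch how remediation affects your secure score over time, you can also read the score outside the portal. This Azure Resource Graph sketch assumes the `microsoft.security/securescores` type and the `properties.score` shape used in published sample queries; verify the fields against your own results.

```kusto
// Sketch: current secure score per subscription.
securityresources
| where type == "microsoft.security/securescores"
| extend currentScore = todouble(properties.score.current),
         maxScore     = todouble(properties.score.max),
         percentage   = todouble(properties.score.percentage)
| project subscriptionId, currentScore, maxScore, percentage
```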
- > [!NOTE]
- > Some of the listed resources might be disabled, because you don't have the appropriate permissions to modify them.
-1. In the confirmation box, read the remediation details and implications.
+1. In the **Recommendations** page, select the recommendation you want to remediate.
- ![Quick fix.](./media/implement-security-recommendations/microsoft-defender-for-cloud-quick-fix-view.png)
+1. In the recommendation details page, select **Take action** > **Remediate**.
+1. Follow the remediation instructions.
- > [!NOTE]
- > The implications are listed in the grey box in the **Fixing resources** window that opens after clicking **Fix**. They list what changes happen when proceeding with the **Fix**.
+ As an example, the following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS.
-1. Insert the relevant parameters if necessary, and approve the remediation.
+ :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="This screenshot shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
+
+1. Once completed, a notification appears informing you whether the issue is resolved.
- > [!NOTE]
- > It can take several minutes after remediation completes to see the resources in the **Healthy resources** tab. To view the remediation actions, check the [activity log](#activity-log).
+## Use the Fix option
-1. Once completed, a notification appears informing you if the remediation succeeded.
+To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option to help you quickly remediate a recommendation on multiple resources.
-<a name="activity-log"></a>
+1. In the **Recommendations** page, select a recommendation that shows the **Fix** action icon: :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::.
-## Fix actions logged to the activity log
+ :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="This screenshot shows recommendations with the Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
-The remediation operation uses a template deployment or REST API `PATCH` request to apply the configuration on the resource. These operations are logged in [Azure activity log](../azure-monitor/essentials/activity-log.md).
+1. In **Take action**, select **Fix**.
+1. Follow the rest of the remediation steps.
++
+After remediation completes, it can take several minutes to see the resources appear in the **Findings** tab when the status is filtered to view **Healthy** resources.
## Next steps
-In this document, you were shown how to remediate recommendations in Defender for Cloud. To learn how recommendations are defined and selected for your environment, see the following page:
+Learn about [using governance rules](governance-rules.md) in your remediation processes.
+ -- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The following table displays roles and allowed actions in Defender for Cloud.
| Edit security policy | - | ✔ | - | - | ✔ |
| Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ |
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
-| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md)) | - | - | ✔ | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
| Exempt security recommendations | - | - | ✔ | ✔ | ✔ |
defender-for-cloud Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prevent-misconfigurations.md
- Title: How to prevent misconfigurations
-description: Learn how to use Defender for Cloud's 'Enforce' and 'Deny' options on the recommendations details pages
- Previously updated : 07/24/2023--
-# Prevent misconfigurations with Enforce/Deny recommendations
-
-Security misconfigurations are a major cause of security incidents. Defender for Cloud can help *prevent* misconfigurations of new resources regarding specific recommendations.
-
-This feature can help keep your workloads secure and stabilize your secure score.
-
-Enforcing a secure configuration, based on a specific recommendation, is offered in two modes:
--- Using the **Deny** effect of Azure Policy, you can stop unhealthy resources from being created.--- Using the **Enforce** option, you can take advantage of Azure Policy's **DeployIfNotExist** effect and automatically remediate non-compliant resources upon creation.-
-The ability to secure configurations can be found at the top of the resource details page for selected security recommendations (see [Recommendations with deny/enforce options](#recommendations-with-denyenforce-options)).
-
-## Prevent resource creation
-
-1. Open the recommendation that your new resources must satisfy, and select the **Deny** button at the top of the page.
-
- :::image type="content" source="./media/implement-security-recommendations/recommendation-deny-button.png" alt-text="Recommendation page with Deny button highlighted.":::
-
- The configuration pane opens listing the scope options.
-
-1. Set the scope by selecting the relevant subscription or management group.
-
- > [!TIP]
- > You can use the three dots at the end of the row to change a single subscription, or use the checkboxes to select multiple subscriptions or groups then select **Change to Deny**.
-
- :::image type="content" source="./media/implement-security-recommendations/recommendation-prevent-resource-creation.png" alt-text="Setting the scope for Azure Policy deny.":::
-
-## Enforce a secure configuration
-
-1. Open the recommendation that you'll deploy a template deployment for if new resources don't satisfy it, and select the **Enforce** button at the top of the page.
-
- :::image type="content" source="./media/implement-security-recommendations/recommendation-enforce-button.png" alt-text="Recommendation page with Enforce button highlighted.":::
-
- The configuration pane opens with all of the policy configuration options.
-
- :::image type="content" source="./media/implement-security-recommendations/recommendation-enforce-config.png" alt-text="Enforce configuration options.":::
-
-1. Set the scope, assignment name, and other relevant options.
-
-1. Select **Review + create**.
-
-## Recommendations with deny/enforce options
-
-These recommendations can be used with the **deny** option:
--
-These recommendations can be used with the **enforce** option:
--- Auditing on SQL server should be enabled-- Azure Arc-enabled Kubernetes clusters should have Microsoft Defender for Cloud's extension installed-- Azure Backup should be enabled for virtual machines-- Microsoft Defender for App Service should be enabled-- Microsoft Defender for container registries should be enabled-- Microsoft Defender for Key Vault should be enabled-- Microsoft Defender for Kubernetes should be enabled-- Microsoft Defender for Resource Manager should be enabled-- Microsoft Defender for Servers should be enabled-- Microsoft Defender for Azure SQL Database servers should be enabled-- Microsoft Defender for SQL servers on machines should be enabled-- Microsoft Defender for SQL should be enabled for unprotected Azure SQL servers-- Microsoft Defender for Storage should be enabled-- Azure Policy Add-on for Kubernetes should be installed and enabled on your clusters-- Diagnostic logs in Azure Stream Analytics should be enabled-- Diagnostic logs in Batch accounts should be enabled-- Diagnostic logs in Data Lake Analytics should be enabled-- Diagnostic logs in Event Hub should be enabled-- Diagnostic logs in Key Vault should be enabled-- Diagnostic logs in Logic Apps should be enabled-- Diagnostic logs in Search services should be enabled-- Diagnostic logs in Service Bus should be enabled-
-## Next steps
-
-[Automate responses to Microsoft Defender for Cloud triggers](workflow-automation.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account
description: Defend your AWS resources by using Microsoft Defender for Cloud. Previously updated : 10/22/2023 Last updated : 11/02/2023 # Connect your AWS account to Microsoft Defender for Cloud
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-aws/aws-resource-types-in-inventory.png" alt-text="Screenshot of AWS options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-aws/aws-resource-types-in-inventory.png":::
+## Integrate with Microsoft 365 Defender
+
+When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
+
+The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface.
+
+Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md).
+ ## Learn more Check out the following blogs:
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Screenshot of GCP options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png":::
+## Integrate with Microsoft 365 Defender
+
+When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
+
+The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface.
+
+Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md).
+ ## Next steps Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud:
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
Title: Connect on-premises machines description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud. Previously updated : 06/29/2023 Last updated : 11/02/2023
To verify that your machines are connected:
![Defender for Cloud icon for an Azure Arc-enabled server.](./media/quickstart-onboard-machines/arc-enabled-machine-icon.png) Azure Arc-enabled server
+## Integrate with Microsoft 365 Defender
+
+When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
+
+The integration between Microsoft Defender for Cloud and Microsoft 365 Defender brings your cloud environments into Microsoft 365 Defender. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft 365 Defender, SOC teams can now access all security information from a single interface.
+
+Learn more about Defender for Cloud's [alerts in Microsoft 365 Defender](concept-integration-365.md).
+ ## Clean up resources There's no need to clean up any resources for this article.
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Title: Regulatory compliance checks
-description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.'
+ Title: Improve regulatory compliance in Microsoft Defender for Cloud
+description: Learn how to improve regulatory compliance in Microsoft Defender for Cloud.
Last updated 06/18/2023
-# Tutorial: Improve your regulatory compliance
+# Improve regulatory compliance
-Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
+Microsoft Defender for Cloud helps you to meet regulatory compliance requirements by continuously assessing resources against compliance controls, and identifying issues that are blocking you from achieving a particular compliance certification.
-When you enable Defender for Cloud on an Azure subscription, the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/), [PCI-DSS](https://www.pcisecuritystandards.org/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
+In the **Regulatory compliance** dashboard, you manage and interact with compliance standards. You can see which compliance standards are assigned, turn standards on and off for Azure, AWS, and GCP, review the status of assessments against standards, and more.
-The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves.
+## Integration with Purview
-> [!TIP]
-> Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [Multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud).
-
-In this tutorial you'll learn how to:
-
-> [!div class="checklist"]
->
-> - Evaluate your regulatory compliance using the regulatory compliance dashboard
-> - Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products
-> - Improve your compliance posture by taking action on recommendations
-> - Download PDF/CSV reports as well as certification reports of your compliance status
-> - Setup alerts on changes to your compliance status
-> - Export your compliance data as a continuous stream and as weekly snapshots
+Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard.
-## Prerequisites
+Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud).
-To step through the features covered in this tutorial:
-- [Enable enhanced security features](enable-enhanced-security.md). You can enable these for free for 30 days.-- You must be signed in with an account that has reader access to the policy compliance data. The **Reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you'll need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
-## Assess your regulatory compliance
-The regulatory compliance dashboard shows your selected compliance standards with all their requirements, where supported requirements are mapped to applicable security assessments. The status of these assessments reflects your compliance with the standard.
+## Before you start
-Use the regulatory compliance dashboard to help focus your attention on the gaps in compliance with your chosen standards and regulations. This focused view also enables you to continuously monitor your compliance over time within dynamic cloud and hybrid environments.
+- By default, when you enable Defender for Cloud on an Azure subscription, AWS account, or GCP project, the MCSB standard is enabled.
+- You can add additional non-default compliance standards when at least one paid plan is enabled in Defender for Cloud.
+- You must be signed in with an account that has reader access to the policy compliance data. The **Reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+## Assess regulatory compliance
- The dashboard provides you with an overview of your compliance status and the set of supported compliance regulations. You'll see your overall compliance score, and the number of passing vs. failing assessments associated with each standard.
+The **Regulatory compliance** dashboard shows which compliance standards are enabled. It shows the controls within each standard, and security assessments for those controls. The status of these assessments reflects your compliance with the standard.
+The dashboard helps you to focus on gaps in standards, and to monitor compliance over time.
- The following list has a numbered item that matches each location in the image above, and describes what is in the image:
-- Select a compliance standard to see a list of all controls for that standard. (1)-- View the subscription(s) that the compliance standard is applied on. (2)-- Select a Control to see more details. Expand the control to view the assessments associated with the selected control. Select an assessment to view the list of resources associated and the actions to remediate compliance concerns. (3)-- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)-- In the Your Actions tab, you can see the automated and manual assessments associated to the control. (5)-- Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6)-- The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7)
+1. In the Defender for Cloud portal open the **Regulatory compliance** page.
-## Investigate regulatory compliance issues
+ :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Screenshot that shows the exploration of the details of compliance with a specific standard." lightbox="media/regulatory-compliance-dashboard/compliance-drilldown.png":::
-You can use the information in the regulatory compliance dashboard to investigate any issues that might be affecting your compliance posture.
+1. Use the dashboard as described by the numbered items in the image. A sample Azure Resource Graph query that returns similar compliance data follows this list.
-**To investigate your compliance issues**:
+ - (1). Select a compliance standard to see a list of all controls for that standard.
+ - (2). View the subscriptions on which the compliance standard is applied.
+ - (3). Select and expand a control to view the assessments associated with it. Select an assessment to view the associated resources, and possible remediation actions.
+ - (4). Select **Control details** to view the **Overview**, **Your Actions**, and **Microsoft Actions** tabs.
+ - (5). In **Your Actions**, you can see the automated and manual assessments associated with the control.
+ - (6). Automated assessments show the number of failed resources and resource types, and link you directly to the remediation information.
+ - (7). Manual assessments can be manually attested, and evidence can be linked to demonstrate compliance.
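The same compliance summary that the dashboard shows can also be pulled programmatically through Azure Resource Graph. The following query is a rough sketch: the `securityresources` type for compliance standards is real, but the property names used here (`state`, `passedControls`, `failedControls`, `skippedControls`) are assumptions drawn from the regulatory compliance API, so verify them in Azure Resource Graph Explorer before relying on the results.

```kusto
// Sketch: list enabled compliance standards per subscription with control counts.
// Property names (state, passedControls, failedControls, skippedControls) are
// assumptions - confirm them in Azure Resource Graph Explorer before using the output.
securityresources
| where type == "microsoft.security/regulatorycompliancestandards"
| project
    ['Standard'] = name,
    ['State'] = properties.state,
    ['PassedControls'] = properties.passedControls,
    ['FailedControls'] = properties.failedControls,
    ['SkippedControls'] = properties.skippedControls,
    ['SubscriptionID'] = subscriptionId
| order by ['Standard'] asc
```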
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Investigate issues
-1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+You can use information in the dashboard to investigate issues that might affect compliance with the standard.
-1. Select a regulatory compliance standard.
+1. In the Defender for Cloud portal, open **Regulatory compliance**.
-1. Select a compliance control to expand it.
+1. Select a regulatory compliance standard, and select a compliance control to expand it.
1. Select **Control details**.
You can use the information in the regulatory compliance dashboard to investigat
The regulatory compliance dashboard has both automated and manual assessments that might need to be remediated. Using the information in the regulatory compliance dashboard, you can improve your compliance posture by resolving recommendations directly within the dashboard.
-**To remediate an automated assessment**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
-1. Select a regulatory compliance standard.
+1. In the Defender for Cloud portal, open **Regulatory compliance**.
-1. Select a compliance control to expand it.
+1. Select a regulatory compliance standard, and select a compliance control to expand it.
1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
The regulatory compliance has both automated and manual assessments that might n
1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
- > [!NOTE]
- > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
+
+Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
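If you prefer to build a remediation backlog outside the portal, failing automated assessments can be listed with an Azure Resource Graph query along these lines. Treat it as a sketch: the resource type is the documented compliance assessments type, but the `state`, `description`, `failedResources`, and `passedResources` fields are assumptions and should be checked against your own results.

```kusto
// Sketch: list failed compliance assessments across subscriptions.
// Field names are assumptions based on the regulatory compliance assessments API.
securityresources
| where type == "microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols/regulatorycomplianceassessments"
| where tostring(properties.state) == "Failed"
| project
    ['SubscriptionID'] = subscriptionId,
    ['AssessmentID'] = id,
    ['Description'] = properties.description,
    ['FailedResources'] = properties.failedResources,
    ['PassedResources'] = properties.passedResources
| order by ['SubscriptionID'] asc
```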
## Remediate a manual assessment

The regulatory compliance dashboard has automated and manual assessments that might need to be remediated. Manual assessments are assessments that require input from the customer to remediate them.
-**To remediate a manual assessment**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+1. In the Defender for Cloud portal, open **Regulatory compliance**.
-1. Select a regulatory compliance standard.
+1. Select a regulatory compliance standard, and select a compliance control to expand it.
-1. Select a compliance control to expand it.
-
-1. Under the Manual attestation and evidence section, select an assessment.
+1. Under the **Manual attestation and evidence** section, select an assessment.
1. Select the relevant subscriptions.
The regulatory compliance has automated and manual assessments that might need t
## Generate compliance status reports and certificates

-- To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**.
+1. To generate a PDF report with a summary of your current compliance status for a particular standard, select **Download report**.
The report provides a high-level summary of your compliance status for the selected standard based on Defender for Cloud assessments data. The report is organized according to the controls of that particular standard. The report can be shared with relevant stakeholders, and might provide evidence to internal and external auditors.

:::image type="content" source="./media/regulatory-compliance-dashboard/download-report.png" alt-text="Screenshot that shows using the toolbar in Defender for Cloud's regulatory compliance dashboard to download compliance reports.":::

-- To download Azure and Dynamics **certification reports** for the standards applied to your subscriptions, use the **Audit reports** option.
+1. To download Azure and Dynamics **certification reports** for the standards applied to your subscriptions, use the **Audit reports** option.
:::image type="content" source="media/release-notes/audit-reports-regulatory-compliance-dashboard.png" alt-text="Screenshot that shows using the toolbar in Defender for Cloud's regulatory compliance dashboard to download Azure and Dynamics certification reports.":::
- Select the tab for the relevant reports types (PCI, SOC, ISO, and others) and use filters to find the specific reports you need:
+1. Select the tab for the relevant report types (PCI, SOC, ISO, and others) and use filters to find the specific reports you need:
:::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Screenshot that shows filtering the list of available Azure Audit reports using tabs and filters.":::

For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
- > [!NOTE]
- > When you download one of these certification reports, you'll be shown the following privacy notice:
- >
- > _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._
+
+ When you download one of these certification reports, you'll be shown the following privacy notice:
+
+ _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._
### Check compliance offerings status

Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform.
-**To check the compliance offerings status**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for Cloud** > **Regulatory compliance**.
+1. In the Defender for Cloud portal, open **Regulatory compliance**.
1. Select **Compliance offerings**.
Transparency provided by the compliance offerings (currently in preview), allows
:::image type="content" source="media/regulatory-compliance-dashboard/search-service.png" alt-text="Screenshot of the compliance offering screen with the search bar highlighted." lightbox="media/regulatory-compliance-dashboard/search-service.png":::
-## Configure frequent exports of your compliance status data
+## Continuously export compliance status
If you want to track your compliance status with other monitoring tools in your environment, Defender for Cloud includes an export mechanism to make this straightforward. Configure **continuous export** to send select data to an Azure Event Hubs or a Log Analytics workspace. Learn more in [continuously export Defender for Cloud data](continuous-export.md).

Use continuous export data to an Azure Event Hubs or a Log Analytics workspace:

-- Export all regulatory compliance data in a **continuous stream**:
+1. Export all regulatory compliance data in a **continuous stream**:
:::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-stream.png" alt-text="Screenshot that shows how to continuously export a stream of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-stream.png":::

-- Export **weekly snapshots** of your regulatory compliance data:
+1. Export **weekly snapshots** of your regulatory compliance data:
:::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png" alt-text="Screenshot that shows how to continuously export a weekly snapshot of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png":::

> [!TIP]
-> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance)
+> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options.
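After exported data lands in a Log Analytics workspace, you can query it there, for example to chart failing assessments per standard over time. The sketch below assumes the export writes to a `SecurityRegulatoryCompliance` table with `ComplianceStandard`, `ComplianceControl`, and `ComplianceState` columns; confirm the table and column names in your workspace schema, because they depend on the export configuration.

```kusto
// Sketch: summarize failed compliance assessments exported to Log Analytics.
// Table and column names (SecurityRegulatoryCompliance, ComplianceStandard,
// ComplianceControl, ComplianceState) are assumptions - check your workspace schema.
SecurityRegulatoryCompliance
| where TimeGenerated > ago(7d)
| where ComplianceState == "Failed"
| summarize FailedAssessments = count() by ComplianceStandard, ComplianceControl
| order by FailedAssessments desc
```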
-## Run workflow automations when there are changes to your compliance
+## Trigger a workflow when assessments change
Defender for Cloud's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments changes state.
For example, you might want Defender for Cloud to email a specific user when a c
## Next steps
-In this tutorial, you learned about using Defender for Cloud's regulatory compliance dashboard to:
-
-> [!div class="checklist"]
->
-> - View and monitor your compliance posture regarding the standards and regulations that are important to you.
-> - Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
-
-The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment.
To learn more, see these related pages:

- [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - Learn how to select which standards appear in your regulatory compliance dashboard.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 09/06/2023
Last updated : 11/08/2023

# Archive for what's new in Defender for Cloud?
Microsoft Defender for Cloud helps security teams to be more productive at reduc
- Automatically discover data resources across cloud estate and evaluate their accessibility, data sensitivity and configured data flows.
- Continuously uncover risks to data breaches of sensitive data resources, exposure or attack paths that could lead to a data resource using a lateral movement technique.
-- Detect suspicious activities that may indicate an ongoing threat to sensitive data resources.
+- Detect suspicious activities that might indicate an ongoing threat to sensitive data resources.
[Learn more](concept-data-security-posture.md) about data-aware security posture.
According to the [2021 State of the Cloud report](https://info.flexera.com/CM-RE
**Microsoft Defender for Cloud** is a Cloud Security Posture Management (CSPM) and cloud workload protection (CWP) solution that discovers weaknesses across your cloud configuration, helps strengthen the overall security posture of your environment, and protects workloads across multicloud and hybrid environments.
+At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud** reflects the integrated capabilities of our security offering and our ability to support any cloud platform.
### Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2
You'll find these tactics wherever you access recommendation information:
- **Recommendation details pages** show the mapping for all relevant recommendations:
- :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
- - **The recommendations page in Defender for Cloud** has a new :::image type="icon" source="media/review-security-recommendations/tactics-filter-recommendations-page.png" border="false"::: filter to select recommendations according to their associated tactic:

Learn more in [Review your security recommendations](review-security-recommendations.md).
Learn more in [Identify vulnerable container images in your CI/CD workflows](def
### More Resource Graph queries available for some recommendations
-All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
+All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md).
Security Center includes built-in vulnerability scanners to scan your VMs, SQL servers and their hosts, and container registries for security vulnerabilities. The findings are returned as recommendations with all the individual findings for each resource type gathered into a single view. The recommendations are:
The filters added this month provide options to refine the recommendations list
>
> Learn more about each of these response options:
>
- > - [Fix button](implement-security-recommendations.md#fix-button)
+ > - [Fix button](implement-security-recommendations.md)
> - [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)

:::image type="content" source="./media/release-notes/added-recommendations-filters.png" alt-text="Recommendations grouped by security control." lightbox="./media/release-notes/added-recommendations-filters.png":::
The policy definitions can be found in Azure Policy:
Get started with [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation).
-Learn more about using the two export policies in [Configure workflow automation at scale using the supplied policies](workflow-automation.md#configure-workflow-automation-at-scale-using-the-supplied-policies) and [Set up a continuous export](continuous-export.md#set-up-a-continuous-export).
+Learn more about using the two export policies in [Configure workflow automation at scale using the supplied policies](workflow-automation.md) and [Set up a continuous export](continuous-export.md#set-up-a-continuous-export).
### New recommendation for using NSGs to protect non-internet-facing virtual machines
In order to enable enterprise level scenarios on top of Security Center, it's no
Windows Admin Center is a management portal for Windows Servers that aren't deployed in Azure, offering them several Azure management capabilities such as backup and system updates. We have recently added an ability to onboard these non-Azure servers to be protected by ASC directly from the Windows Admin Center experience.
-Users can onboard a WAC server to Azure Security Center and enable viewing its security alerts and recommendations directly in the Windows Admin Center experience.
+With this new experience, users can onboard a WAC server to Azure Security Center and enable viewing its security alerts and recommendations directly in the Windows Admin Center experience.
## September 2019
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
You can now prioritize your security recommendations according to the risk level
By organizing your recommendations based on their risk level (Critical, high, medium, low), you're able to address the most critical risks within your environment and efficiently prioritize the remediation of security issues based on the actual risk such as internet exposure, data sensitivity, lateral movement possibilities, and potential attack paths that could be mitigated by resolving the recommendations.
-Learn more about [risk prioritization](security-policy-concept.md).
+Learn more about [risk prioritization](implement-security-recommendations.md#group-recommendations-by-risk-level).
### Attack path analysis new engine and extensive enhancements
As part of security alert quality improvement process of Defender for Servers, a
|--|--|
| Adaptive application control policy violation was audited.[VM_AdaptiveApplicationControlWindowsViolationAudited, VM_AdaptiveApplicationControlWindowsViolationAudited] | The below users ran applications that are violating the application control policy of your organization on this machine. It can possibly expose the machine to malware or application vulnerabilities.|
-To keep viewing this alert in the "Security alerts" blade in the Microsoft Defender for Cloud portal, change the default view filter **Severity** to include **informational** alerts in the grid.
+To keep viewing this alert in the "Security alerts" page in the Microsoft Defender for Cloud portal, change the default view filter **Severity** to include **informational** alerts in the grid.
:::image type="content" source="media/release-notes/add-informational-severity.png" alt-text="Screenshot that shows you where to add the informational severity for alerts." lightbox="media/release-notes/add-informational-severity.png":::
For more information, see [Migrate to SQL server-targeted Azure Monitoring Agent
September 20, 2023
-You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
+You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps page and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
Learn more about [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security).
You can learn more about data aware security posture in the following articles:
- [Support and prerequisites for data-aware security posture](concept-data-security-posture-prepare.md)
- [Enable data-aware security posture](data-security-posture-enable.md)
- [Explore risks to sensitive data](data-security-review-risks.md)
-- [Azure data attack paths](attack-path-reference.md#azure-data)
-- [AWS data attack paths](attack-path-reference.md#aws-data)

### General Availability (GA): malware scanning in Defender for Storage
Here's a table of the new alerts.
|Alert (alert type)|Description|MITRE tactics|Severity|
|-|-|-|-|
| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
-| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
-| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
-| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low |
-| **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium |
-| **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium |
-| **Suspicious usage of VM Access extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VM Access extension was detected on your virtual machines. Attackers may abuse the VM Access extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |
-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
-| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low |
-| **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
+| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
+| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low |
+| **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium |
+| **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium |
+| **Suspicious usage of VM Access extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VM Access extension was detected on your virtual machines. Attackers might abuse the VM Access extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |
+| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low |
+| **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions).
This alert focuses on identifying suspicious activities leveraging Azure virtual
| Alert Display Name <br> (Alert Type) | Description | Severity | MITRE Tactic |
|||||
-| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact |
+| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact |
For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
The NIST 800-53 standards (both R4 and R5) have recently been updated with contr
These controls were previously calculated as passed controls, so you might see a significant dip in your compliance score for NIST standards between April 2023 and May 2023.
-For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md#investigate-regulatory-compliance-issues).
+For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md).
### Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud
Learn more about [agentless container posture](concept-agentless-containers.md).
## May 2023
-Updates in May include:
+Updates in May include:
- [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault) - [Agentless scanning now supports encrypted disks in AWS](#agentless-scanning-now-supports-encrypted-disks-in-aws)
Updates in May include:
| Alert (alert type) | Description | MITRE tactics | Severity |
|||:-:||
-| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern may be legitimate activity. It could be an indication of a possible attempt to gain access of the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
+| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern might be legitimate activity. It could be an indication of a possible attempt to gain access of the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
For all of the available alerts, see [Alerts for Azure Key Vault](alerts-reference.md#alerts-azurekv).
Defender for Resource Manager has the following new alert:
| Alert (alert type) | Description | MITRE tactics | Severity |
|||:-:||
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity might be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
defender-for-cloud Review Exemptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-exemptions.md
+
+ Title: Exempt a recommendation in Microsoft Defender for Cloud.
+description: Learn how to exempt recommendations so they're not taken into account in Microsoft Defender for Cloud.
+++ Last updated : 01/02/2022++
+# Review resources exempted from recommendations
+
+In Microsoft Defender for Cloud, you can exempt protected resources from Defender for Cloud security recommendations. [Learn more](exempt-resource.md). This article describes how to review and work with exempted resources.
++
+## Review exempted resources in the portal
+
+1. In Defender for Cloud, open the **Recommendations** page.
+1. Select **Add filter** > **Is exempt**.
+1. Select whether you want to see recommendations that have exempted resources, or those without exemptions.
+
+ :::image type="content" source="media/review-exemptions/filter-exemptions.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group." lightbox="media/review-exemptions/filter-exemptions.png":::
+
+1. In the details page for the relevant recommendation, review the exemption rules.
+
+1. For each resource, the **Reason** column shows why the resource is exempted. To modify the exemption settings for a resource, select the ellipsis in the resource > **Manage exemption**.
+
+You can also review exempted resources on the Defender for Cloud > **Inventory** page. In the page, select **Add filter**. In the **Filter** dropdown list, select **Contains Exemptions** to find all resources that have been exempted from one or more recommendations.
+++
+## Review exempted resources with Azure Resource Graph
+
+[Azure Resource Graph (ARG)](../governance/resource-graph/index.yml) provides instant access to resource information across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to [query information](../governance/resource-graph/first-query-portal.md) using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
+
+To view all recommendations that have exemption rules:
+
+1. In the **Recommendations** page, select **Open query**.
+1. Enter the following query and select **Run query**.
+
+ ```kusto
+ securityresources
+ | where type == "microsoft.security/assessments"
+ // Get recommendations in useful format
+ | project
+ ['TenantID'] = tenantId,
+ ['SubscriptionID'] = subscriptionId,
+ ['AssessmentID'] = name,
+ ['DisplayName'] = properties.displayName,
+ ['ResourceType'] = tolower(split(properties.resourceDetails.Id,"/").[7]),
+ ['ResourceName'] = tolower(split(properties.resourceDetails.Id,"/").[8]),
+ ['ResourceGroup'] = resourceGroup,
+ ['ContainsNestedRecom'] = tostring(properties.additionalData.subAssessmentsLink),
+ ['StatusCode'] = properties.status.code,
+ ['StatusDescription'] = properties.status.description,
+ ['PolicyDefID'] = properties.metadata.policyDefinitionId,
+ ['Description'] = properties.metadata.description,
+ ['RecomType'] = properties.metadata.assessmentType,
+ ['Remediation'] = properties.metadata.remediationDescription,
+ ['Severity'] = properties.metadata.severity,
+ ['Link'] = properties.links.azurePortal
+ | where StatusDescription contains "Exempt"
+ ```
++
+## Get notified when exemptions are created
+
+To keep track of how users are exempting resources from recommendations, we've created an Azure Resource Manager (ARM) template that deploys a Logic App Playbook, and all necessary API connections to notify you when an exemption has been created.
+
+- Learn more about the playbook in TechCommunity blog [How to keep track of Resource Exemptions in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-keep-track-of-resource-exemptions-in-azure-security/ba-p/1770580).
+- Locate the ARM template in [Microsoft Defender for Cloud GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption)
+- [Use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json) to deploy all components.
++
+## Next steps
+
+[Review security recommendations](review-security-recommendations.md)
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Improving your security posture with recommendations
-description: This document walks you through how to identify security recommendations that will help you improve your security posture.
+ Title: Review security recommendations in Microsoft Defender for Cloud
+description: Learn how to review security recommendations in Microsoft Defender for Cloud
Previously updated : 01/10/2023 Last updated : 11/08/2023
-# Find recommendations that can improve your security posture
+# Review security recommendations
-To improve your [secure score](secure-score-security-controls.md), you have to implement the security recommendations for your environment. From the list of recommendations, you can use filters to find the recommendations that have the most impact on your score, or the ones that you were assigned to implement.
+In Microsoft Defender for Cloud, resources and workloads are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture.
-To get to the list of recommendations:
+This article describes how to review security recommendations in your Defender for Cloud deployment using the latest version of the portal experience.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Get an overview
-1. Either:
- - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment you want to improve.
- - Go to **Recommendations** in the Defender for Cloud menu.
+In the Defender for Cloud portal > **Overview** dashboard, get a holistic look at your environment, including security recommendations.
-You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations. Look at the [details of the recommendation](security-policy-concept.md) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-a-security-recommendation).
+- **Active recommendations**: Recommendations that are active in your environment.
+- **Unassigned recommendations**: See which recommendations don't have owners assigned to them.
+- **Overdue recommendations**: Recommendations that have an expired due date.
+- **Attack paths**: See the number of attack paths.
-You can learn more by watching this video from the Defender for Cloud in the Field video series:
-- [Security posture management improvements](episode-four.md)
-## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a>
+## Review recommendations
-Your [secure score is calculated](secure-score-security-controls.md) based on the security recommendations that you've implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md).
+1. In Defender for Cloud, open the **Recommendations** page.
+1. For each recommendation, review:
-The list of recommendations shows the **Potential score increase** that you can achieve when you remediate all of the recommendations in the security control.
+ - **Risk level** - Specifies whether the recommendation risk is Critical, High, Medium or Low.
+    - **Affected resource** - Indicates the affected resources.
+    - **Risk factors** - Environmental factors of the resource affected by the recommendation, which influence the exploitability and the business effect of the underlying security issue. For example, internet exposure, sensitive data, lateral movement potential, and more.
+ - **Attack Paths** - The number of attack paths.
+ - **Owner** - The person assigned to this recommendation.
+ - **Due date** - Indicates the due date for fixing the recommendation.
+ - **Recommendation status** indicates whether the recommendation has been assigned, and whether the due date for fixing the recommendation has passed.
+
-To find recommendations that can improve your secure score:
+## Review recommendation details
-1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score.
- - You can also use the search box and filters above the list of recommendations to find specific recommendations.
-1. Open a security control to see the recommendations that have unhealthy resources.
+1. In the **Recommendations** page, select the recommendation.
+1. In the recommendation page, review the details:
+ - **Description** - A short description of the security issue.
+ - **Attack Paths** - The number of attack paths.
+ - **Scope** - The affected subscription or resource.
+ - **Freshness** - The freshness interval for the recommendation.
+    - **Last change date** - The date this recommendation was last changed.
+ - **Owner** - The person assigned to this recommendation.
+ - **Due date** - The assigned date the recommendation must be resolved by.
+ - **Findings by severity** - The total findings by severity.
+ - **Tactics & techniques** - The tactics and techniques mapped to MITRE ATT&CK.
-When you [remediate](implement-security-recommendations.md) all of the recommendations in the security control, your secure score increases by the percentage point listed for the control.
+ :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Screenshot of the recommendation details page with labels for each element." lightbox="./media/security-policy-concept/recommendation-details-page.png":::
-## Manage the owner and ETA of recommendations that are assigned to you
+## Explore a recommendation
-[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy.
+You can perform a number of actions to interact with recommendations. If an option isn't available, it isn't relevant for the recommendation.
-Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't affect the secure score. The security team can also apply a grace period during which overdue recommendations continue to not affect the secure score.
+1. In the **Recommendations** page, select a recommendation.
+1. Select **Open query** to view detailed information about the affected resources using an Azure Resource Graph Explorer query.
+1. Select **View policy definition** to view the Azure Policy entry for the underlying recommendation (if relevant).
+1. In **Review findings**, you can review affiliated findings by severity.
+
+ :::image type="content" source="media/review-security-recommendations/recommendation-findings.png" alt-text="Screenshot of the findings tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-findings.png":::
-To help you plan your work and report on progress, you can set an ETA for the specific resources to show when you plan to have the recommendation resolved by for those resources. You can also change the owner of the recommendation for specific resources so that the person responsible for remediation is assigned to the resource.
+1. In **Take action**:
+ - **Remediate**: A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources.
+ - **Assign owner and due date**: If you have a [governance rule](governance-rules.md) turned on for the recommendation, you can assign an owner and due date.
+ - **Exempt**: You can exempt resources from the recommendation, or disable specific findings using disable rules.
+ - **Workflow automation**: Set a logic app to trigger with this recommendation.
+1. In **Graph**, you can view and investigate all context that is used for risk prioritization, including [attack paths](how-to-manage-attack-path.md).
+ :::image type="content" source="media/review-security-recommendations/recommendation-graph.png" alt-text="Screenshot of the graph tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-graph.png":::
-To change the owner of resources and set the ETA for remediation of recommendations that are assigned to you:
-1. In the filters for list of recommendations, select **Show my items only**.
- - The status column indicates the recommendations that are on time, overdue, or completed.
- - The insights column indicates the recommendations that are in a grace period, so they currently don't affect your secure score until they become overdue.
+## Manage recommendations assigned to you
-1. Select an on time or overdue recommendation.
-1. For the resources that are assigned to you, set the owner of the resource:
- 1. Select the resources that are owned by another person, and select **Change owner and set ETA**.
- 1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**.
+Defender for Cloud supports governance rules for recommendations, to specify a recommendation owner or due date for action. Governance rules help ensure accountability and an SLA for recommendations.
- The owner of the resource gets a weekly email listing the recommendations that they're assigned.
+- Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**.
+- Before the recommendation is overdue, the recommendation doesn't affect the secure score.
+- You can also apply a grace period during which overdue recommendations continue to not affect the secure score.
-1. For resources that you own, set an ETA for remediation:
- 1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**.
- 1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources.
- 1. Enter a justification for the remediation by that date, and select **Save**.
+[Learn more](governance-rules.md) about configuring governance rules.
-The due date for the recommendation doesn't change, but the security team can see that you plan to update the resources by the specified ETA date.
+Manage recommendations assigned to you as follows:
-## Review recommendation data in Azure Resource Graph (ARG)
+1. In the Defender for Cloud portal > **Recommendations** page, select **Add filter** > **Owner**.
-You can review recommendations in ARG both on the Recommendations page or on an individual recommendation.
+1. Select your user entry.
+1. In the recommendation results, review the recommendations, including affected resources, risk factors, attack paths, due dates, and status.
+1. Select a recommendation to review it further.
+1. In **Take action** > **Change owner & due date**, you can change the recommendation owner and due date if needed.
+ - By default the owner of the resource gets a weekly email listing the recommendations assigned to them.
+ - If you select a new remediation date, in **Justification** specify reasons for remediation by that date.
+ - In **Set email notifications** you can:
+ - Override the default weekly email to the owner.
+ - Notify owners weekly with a list of open/overdue tasks.
+ - Notify the owner's direct manager with an open task list.
+1. Select **Save**.
-The toolbar on the Recommendations page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
+> [!NOTE]
+> Changing the expected completion date doesn't change the due date for the recommendation, but security partners can see that you plan to update the resources by the specified date.
-ARG is designed to provide efficient resource exploration with the ability to query at scale across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal.
+## Review recommendations in Azure Resource Graph
-Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), you can cross-reference Defender for Cloud data with other resource properties.
+You can use [Azure Resource Graph](../governance/resource-graph/index.yml) to query Defender for Cloud security posture data across multiple subscriptions. Azure Resource Graph provides an efficient way to query at scale across cloud environments by viewing, filtering, grouping, and sorting data.
-For example, this recommendation details page shows 15 affected resources:
+1. In the Defender for Cloud portal > **Recommendations** page > select **Open query**.
+1. In [Azure Resource Graph](../governance/resource-graph/index.yml), write a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
+1. You can open the query in one of two ways:
-When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same 15 resources and their health status for this recommendation:
+ - **Query returning affected resources** - Returns a list of all resources affected by this recommendation.
+ - **Query returning security findings** - Returns a list of all security issues found by the recommendation.
-## Recommendation insights
+### Example
-The Insights column of the page gives you more details for each recommendation. The options available in this section include:
+In this example, the recommendation details page shows 15 affected resources:
-| Icon | Name | Description |
-|--|--|--|
-| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | **Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
-| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. |
-| :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. |
-| :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. |
-Recommendations that aren't included in the calculations of your secure score, should still be remediated wherever possible, so that when the period ends they'll contribute towards your score instead of against it.
+When you open and run the underlying query, Azure Resource Graph Explorer returns the same affected resources for this recommendation:
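
If you prefer to run such a query programmatically, the following is a minimal sketch (not from the article) using the Azure Resource Graph Python SDK. The subscription ID is a placeholder, and the KQL is an illustrative security-assessments query, not the exact query the portal generates.

```python
# A minimal sketch, assuming the azure-identity and azure-mgmt-resourcegraph packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query="""
securityresources
| where type == 'microsoft.security/assessments'
| where properties.status.code == 'Unhealthy'
| project assessmentKey = name,
          resourceId = tostring(properties.resourceDetails.Id),
          recommendation = tostring(properties.displayName)
""",
)

result = client.resources(query)
for row in result.data:  # list of dicts in the default objectArray result format
    print(row["recommendation"], "-", row["resourceId"])
```

The same KQL runs unchanged in Azure Resource Graph Explorer in the portal, so you can prototype the query there before scripting it.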
-## Download recommendations to a CSV report
-Recommendations can be downloaded to a CSV report from the Recommendations page.
-To download a CSV report of your recommendations:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
-1. Select **Download CSV report**.
-
- :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from.":::
-
-You'll know the report is being prepared when the pop-up appears.
--
-When the report is ready, you'll be notified by a second pop-up.
--
-## Learn more
-
-You can check out the following blogs:
--- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)-- [New enhancements added to network security dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-enhancements-added-to-network-security-dashboard/ba-p/2896021) ## Next steps
-In this document, you were introduced to security recommendations in Defender for Cloud. For related information:
+[Remediate security recommendations](implement-security-recommendations.md)
-- [Remediate recommendations](implement-security-recommendations.md)-Learn how to configure security policies for your Azure subscriptions and resource groups.-- [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).-- [Automate responses to Defender for Cloud triggers](workflow-automation.md)-Automate responses to recommendations-- [Exempt a resource from a recommendation](exempt-resource.md)-- [Security recommendations - a reference guide](recommendations-reference.md)
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score in Microsoft Defender for Cloud description: Learn about the Microsoft Cloud Security Benchmark secure score in Microsoft Defender for Cloud Previously updated : 06/19/2023 Last updated : 11/16/2023 # Secure score
When you turn on Defender for Cloud in a subscription, the [Microsoft cloud secu
Recommendations are issued based on assessment findings. Only built-in recommendations from the MSCB impact the secure score.
-> [!Note]
+> [!NOTE]
> Recommendations flagged as **Preview** aren't included in secure score calculations. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. Preview recommendations are marked with: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: > [!NOTE]
On the **Recommendations** page > **Secure score recommendations** tab in Defend
Each control is calculated every eight hours for each Azure subscription, or AWS/GCP cloud connector.
-> [!Important]
+> [!IMPORTANT]
> Recommendations within a control are updated more frequently than the control, and so there might be discrepancies between the resources count on the recommendations versus the one found on the control. ### Example scores for a control
defender-for-cloud Tutorial Enable Container Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md
You can learn more about Defender for Container's pricing on the [pricing page](
- [Connect your AWS account to Microsoft Defender for Cloud](quickstart-onboard-aws.md#connect-your-aws-account) -- Validate the following domains only if you're using a relevant OS.-
- | Domain | Port | Host operating systems |
- | -- | - |--|
- | amazonlinux.*.amazonaws.com/2/extras/\* | 443 | Amazon Linux 2 |
- | yum default repositories | - | RHEL / Centos |
- | apt default repositories | - | Debian |
+- Verify that your Kubernetes nodes can access the source repositories of your package manager.
- Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md) are validated.
defender-for-cloud Tutorial Enable Container Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md
You can learn more about Defender for Container's pricing on the [pricing page](
- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md#connect-your-gcp-project). -- Validate the following domains only if you're using a relevant OS.-
- | Domain | Port | Host operating systems |
- | -- | - |--|
- | amazonlinux.*.amazonaws.com/2/extras/\* | 443 | Amazon Linux 2 |
- | yum default repositories | - | RHEL / Centos |
- | apt default repositories | - | Debian |
+- Verify that your Kubernetes nodes can access the source repositories of your package manager.
- Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md) are validated.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Last updated 06/18/2023
-# Automate responses to Microsoft Defender for Cloud triggers
+# Automate remediation responses
Every security program includes multiple workflows for incident response. These processes might include notifying relevant stakeholders, launching a change management process, and applying specific remediation steps. Security experts recommend that you automate as many steps of those procedures as you can. Automation reduces overhead. It can also improve your security by ensuring the process steps are done quickly, consistently, and according to your predefined requirements. This article describes the workflow automation feature of Microsoft Defender for Cloud. This feature can trigger consumption logic apps on security alerts, recommendations, and changes to regulatory compliance. For example, you might want Defender for Cloud to email a specific user when an alert occurs. You'll also learn how to create logic apps using [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
-## Availability
+## Before you start
+
+- You need the **Security admin** or **Owner** role on the resource group.
+- You must also have write permissions for the target resource.
+- To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:
+
+ - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions, or Logic App read/trigger access, are required. (This role can only *run* existing logic apps; it can't create or edit them.)
+ - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification.
+
+- If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances).
-|Aspect|Details|
-|-|:-|
-|Release state:|General availability (GA)|
-|Pricing:|Free|
-|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification<br>If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)|
## Create a logic app and define when it should automatically run
This article describes the workflow automation feature of Microsoft Defender for
:::image type="content" source="./media/workflow-automation/list-of-workflow-automations.png" alt-text="Screenshot of workflow automation page showing the list of defined automations." lightbox="./media/workflow-automation/list-of-workflow-automations.png":::
- From this page you can create new automation rules, enable, disable, or delete existing ones.
-
- > [!NOTE]
- > A scope refers to the subscription where the workflow automation is deployed.
+1. From this page, you can create new automation rules, and enable, disable, or delete existing ones. A scope refers to the subscription where the workflow automation is deployed.
1. To define a new workflow, select **Add workflow automation**. The options pane for your new automation opens. :::image type="content" source="./media/workflow-automation/add-workflow.png" alt-text="Add workflow automations pane." lightbox="media/workflow-automation/add-workflow.png":::
- Here you can enter:
- 1. A name and description for the automation.
- 1. The triggers that will initiate this automatic workflow. For example, you might want your logic app to run when a security alert that contains "SQL" is generated.
+1. Enter the following:
- > [!NOTE]
- > If your trigger is a recommendation that has "sub-recommendations", for example **Vulnerability assessment findings on your SQL databases should be remediated**, the logic app will not trigger for every new security finding; only when the status of the parent recommendation changes.
+ - A name and description for the automation.
+ - The triggers that will initiate this automatic workflow. For example, you might want your logic app to run when a security alert that contains "SQL" is generated.
- 1. The consumption logic app that will run when your trigger conditions are met.
+ If your trigger is a recommendation that has "sub-recommendations", for example **Vulnerability assessment findings on your SQL databases should be remediated**, the logic app will not trigger for every new security finding; only when the status of the parent recommendation changes.
+
+1. Specify the consumption logic app that will run when your trigger conditions are met.
1. From the Actions section, select **visit the Logic Apps page** to begin the logic app creation process.
This article describes the workflow automation feature of Microsoft Defender for
> [!TIP] > Sometimes in a logic app, parameters are included in the connector as part of a string and not in their own field. For an example of how to extract parameters, see step #14 of [Working with logic app parameters while building Microsoft Defender for Cloud workflow automations](https://techcommunity.microsoft.com/t5/azure-security-center/working-with-logic-app-parameters-while-building-azure-security/ba-p/1342121).
- The logic app designer supports the following Defender for Cloud triggers:
-
- - **When a Microsoft Defender for Cloud Recommendation is created or triggered** - If your logic app relies on a recommendation that gets deprecated or replaced, your automation will stop working and you'll need to update the trigger. To track changes to recommendations, use the [release notes](release-notes.md).
+## Supported triggers
- - **When a Defender for Cloud Alert is created or triggered** - You can customize the trigger so that it relates only to alerts with the severity levels that interest you.
+The logic app designer supports the following Defender for Cloud triggers:
- - **When a Defender for Cloud regulatory compliance assessment is created or triggered** - Trigger automations based on updates to regulatory compliance assessments.
+- **When a Microsoft Defender for Cloud Recommendation is created or triggered** - If your logic app relies on a recommendation that gets deprecated or replaced, your automation will stop working and you'll need to update the trigger. To track changes to recommendations, use the [release notes](release-notes.md).
- > [!NOTE]
- > If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be launched by the Workflow Automation feature. Instead, use either of the triggers mentioned above.
+- **When a Defender for Cloud Alert is created or triggered** - You can customize the trigger so that it relates only to alerts with the severity levels that interest you.
- [![Sample logic app.](media/workflow-automation/sample-logic-app.png)](media/workflow-automation/sample-logic-app.png#lightbox)
+- **When a Defender for Cloud regulatory compliance assessment is created or triggered** - Trigger automations based on updates to regulatory compliance assessments.
-1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Select **Refresh** to ensure your new logic app is available for selection.
+> [!NOTE]
+> If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be launched by the Workflow Automation feature. Instead, use either of the triggers mentioned above.
- ![Refresh.](media/workflow-automation/refresh-the-list-of-logic-apps.png)
+1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation").
+1. Select **Refresh** to ensure your new logic app is available for selection.
1. Select your logic app and save the automation. The logic app dropdown only shows those with supporting Defender for Cloud connectors mentioned above. ## Manually trigger a logic app You can also run logic apps manually when viewing any security alert or recommendation.
-To manually run a logic app, open an alert, or a recommendation and select **Trigger logic app**:
+To manually run a logic app, open an alert or a recommendation, and select **Trigger logic app**.
[![Manually trigger a logic app.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
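
Workflow automations can also be managed as Azure resources. The following is a rough, unofficial sketch that assumes the `Microsoft.Security/automations` ARM resource type and the `2019-01-01-preview` API version; every ID and value below is a placeholder, and the payload shape should be checked against the current REST reference before use.

```python
# A rough sketch, not the portal's own method: create a workflow automation via ARM REST.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

sub = "<subscription-id>"
rg = "<resource-group>"
name = "<automation-name>"
url = (
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.Security/automations/{name}?api-version=2019-01-01-preview"
)

body = {
    "location": "<region>",
    "properties": {
        "isEnabled": True,
        # The subscription whose events should trigger the automation.
        "scopes": [{"scopePath": f"/subscriptions/{sub}"}],
        # Trigger on security alerts; recommendations and regulatory compliance
        # assessments are the other event sources described in this article.
        "sources": [{"eventSource": "Alerts"}],
        # The consumption logic app to run, identified by resource ID and trigger URL.
        "actions": [
            {
                "actionType": "LogicApp",
                "logicAppResourceId": "<logic-app-resource-id>",
                "uri": "<logic-app-trigger-callback-url>",
            }
        ],
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["id"])
```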
-## Configure workflow automation at scale using the supplied policies
+## Configure workflow automation at scale
Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
To implement these policies:
|Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef| |Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
- > [!TIP]
- > You can also find these by searching Azure Policy:
- >
- > 1. Open Azure Policy.
- > :::image type="content" source="./media/continuous-export/opening-azure-policy.png" alt-text="Accessing Azure Policy.":::
- > 2. From the Azure Policy menu, select **Definitions** and search for them by name.
+
+ You can also find these policies by searching Azure Policy: open Azure Policy, select **Definitions**, and search for them by name.
+
1. From the relevant Azure Policy page, select **Assign**. :::image type="content" source="./media/workflow-automation/export-policy-assign.png" alt-text="Assigning the Azure Policy.":::
-1. Open each tab and set the parameters as desired:
- 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use the workflow automation configuration.
- 1. In the Parameters tab, enter the required information.
+1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use the workflow automation configuration.
+1. In the **Parameters** tab, enter the required information.
:::image type="content" source="media/workflow-automation/parameters-tab.png" alt-text="Screenshot of the parameters tab.":::
- 1. (Optional), Apply this assignment to an existing subscription in the **Remediation** tab and select the option to create a remediation task.
+1. Optionally apply this assignment to an existing subscription in the **Remediation** tab and select the option to create a remediation task.
1. Review the summary page and select **Create**.
devtest-labs Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/lab-services-overview.md
- Title: Azure Lab Services vs. Azure DevTest Labs
-description: Compare features, scenarios, and use cases for Azure DevTest Labs and Azure Lab Services.
--- Previously updated : 11/15/2021---
-# Compare Azure DevTest Labs and Azure Lab Services
-
-You can use two different Azure services to set up lab environments in the cloud:
--- [Azure DevTest Labs](devtest-lab-overview.md) provides development or test cloud environments for your team.-
- In DevTest Labs, a lab owner [creates a lab](devtest-lab-create-lab.md) and makes it available to lab users. The owner provisions the lab with Windows or Linux virtual machines (VMs) that have all necessary software and tools. Lab users connect to lab VMs for daily work and short-term projects. Lab administrators can analyze resource usage and costs across multiple labs, and set overarching policies to optimize organization or team costs.
--- [Azure Lab Services](../lab-services/lab-services-overview.md) provides managed classroom labs.-
- Lab Services does all infrastructure management, from spinning up VMs and scaling infrastructure to handling errors. After an IT administrator creates a Lab Services lab account, instructors can [create labs](../lab-services/quick-create-lab-plan-portal.md) in the account. An instructor specifies the number and type of VMs they need for the class, and adds users to the class. Once users register in the class, they can access the VMs to do class exercises and homework.
-
-## Key capabilities
-
-DevTest Labs and Lab Services support the following key capabilities and features:
--- **Fast and flexible lab setup**. Lab owners and instructors can quickly set up labs for their needs. Lab Services takes care of all Azure infrastructure work, and provides built-in infrastructure scaling and resiliency for managed labs. In DevTest Labs, lab owners can self-manage and customize infrastructure.--- **Simplified lab user experience**. In a Lab Services classroom lab, users can register with a code and access the lab to use resources. A DevTest Labs lab owner can give permissions for lab users to create and access VMs, manage and reuse data disks, and set up reusable secrets.--- **Cost optimization and analysis**. In Lab Services, you can give each student a limited number of hours for using the VMs. A DevTest Labs lab owner can set a lab schedule to specify when lab VMs are accessible to users. The schedule can automatically shut down and start up VMs at specified times. The lab owner can set usage policies per user or per lab to optimize costs. Lab owners can analyze lab usage and activity trends. Classroom labs offer a smaller subset of cost optimization and analysis options.-
-DevTest Labs also supports the following features:
--- **Embedded security**. A lab owner can set up a private virtual network and subnets for a lab, and enable a shared public IP address. DevTest Labs lab users can securely access virtual network resources by using Azure ExpressRoute or a site-to-site virtual private network (VPN).--- **Workflow and tool integration**. In DevTest Labs, you can automatically provision environments from within your continuous integration/continuous deployment (CI/CD) tools. You can integrate DevTest Labs into your organization's website and management systems.-
-## Scenarios
-
-Here are typical scenarios for Lab Services and DevTest Labs:
-
-### Set up a resizable classroom computer lab in the cloud
--- To create a managed classroom lab, you just tell Lab Services what you need. The service creates and manages lab infrastructure so you can focus on teaching your class, not technical details.-- Lab Services provides students with a lab of VMs that are configured with exactly what's needed. You can give each student a limited number of hours for using the VMs.-- You can move your school's physical computer lab into the cloud. Lab Services automatically scales the number of VMs to only the maximum usage and cost threshold you set.-- You can delete labs with a single click when you're done with them.-
-### Use DevTest Labs for development and test environments
-
-You can use DevTest Labs for many key scenarios. One primary scenario is to host development and test machines. DevTest Labs provides these benefits for developers and testers:
--- Lab owners and users can provision Windows and Linux environments by using reusable templates and artifacts.-- Developers can quickly provision development machines on demand, and easily customize their machines when necessary.-- Testers can test the latest application version, and scale up load testing by provisioning multiple test agents.-- Administrators can control costs by ensuring that developers and testers can't get more VMs than they need.-- Administrators can ensure that VMs are shut down when not in use.-
-For more information, see [Use DevTest Labs for development](devtest-lab-developer-lab.md) and [Use DevTest Labs for testing](devtest-lab-test-env.md).
-
-## Types of labs
-
-You can create two types of labs: **managed labs** with Lab Services, or **labs** with DevTest Labs. If you just want to input your needs and let the service set up and manage required lab infrastructure, select **classroom lab** from the **managed lab types** in Lab Services. If you want to manage your own infrastructure, create labs by using DevTest Labs.
-
-The following sections provide more details about these lab types.
-
-### Managed labs
-
-Managed labs are Lab Services labs with infrastructure that Azure manages. Managed lab types can fit specific needs, like classroom labs.
-
-With managed labs, you can get started right away, with minimal setup. To create a classroom lab, first you create a lab account for your organization. The lab account serves as the central account for managing all the labs in the organization.
-
-For managed labs, Lab Services creates and manages Azure resources in internal Microsoft subscriptions, not in your own Azure subscription. The service keeps track of resource usage in the internal subscriptions, and bills usage back to the Azure subscription that contains the lab account.
-
-Here are some use cases for managed lab types:
--- Provide students with a lab of VMs that have exactly what's needed for a class.-- Limit the number of hours that students can use VMs.-- Set up a pool of high-performance VMs to do compute-intensive or graphics-intensive research.-- Move a school's physical computer lab into the cloud.-- Quickly provision a lab of VMs for hosting a hackathon.-
-### DevTest Labs
-
-You might want to manage all lab infrastructure and configuration yourself, within your own Azure subscription. For this scenario, create a DevTest Labs lab in the Azure portal. You don't create or use a lab account for DevTest Labs.
-
-Here are some use cases for DevTest Labs:
--- Quickly provision a lab of VMs to host a hackathon or hands-on conference session.-- Create a pool of VMs configured with an application to use for bug bashes.-- Provide developers with VMs configured with all the tools they need.-- Repeatedly create labs of test machines to test the latest bits.-- Set up differently configured VMs and multiple test agents for scale and performance testing.-- Offer customer training sessions in a lab configured with a product's latest version.-
-## Lab Services vs. DevTest Labs
-
-The following table compares the two types of Azure lab environments:
-
-| Feature | Azure Lab Services | Azure DevTest Labs
-| -- | -- | -- |
-| Management of Azure infrastructure | Automatically infrastructure management | You manage the infrastructure manually |
-| Built-in resiliency | Automatic handling of resiliency | You handle resiliency manually |
-| Subscription management | The service handles allocation of resources within Microsoft subscriptions that back the service. | You manage the subscription within your own Azure subscription. |
-| Autoscaling. | Service automatically scales | No subscription autoscaling |
-| Azure Resource Manager deployment within the lab | Not available | Available |
-
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Title: Microsoft Azure Data Manager for Energy entitlement concepts
-description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy
--
+ Title: Entitlement concepts in Microsoft Azure Data Manager for Energy
+description: This article describes the various concepts regarding the entitlement service in Azure Data Manager for Energy.
++ Last updated 02/10/2023
# Entitlement service
-Access management is a critical function for any service or resource. Entitlement service helps you manage who has access to your Azure Data Manager for Energy instance, what they can do with it, and what services they have access to.
+Access management is a critical function for any service or resource. The entitlement service helps you manage who has access to your Azure Data Manager for Energy instance, what they can view or edit, and what services or data they have access to.
## Groups
-The entitlements service of Azure Data Manager for Energy allows you to create groups, and an entitlement group defines permissions on services/data sources for your Azure Data Manager for Energy instance. Users added by you to that group obtain the associated permissions.
+The entitlements service of Azure Data Manager for Energy allows you to create groups and manage group membership. An entitlement group defines permissions on services/data sources for your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions.
-The main motivation for entitlements service is data authorization, but the functionality enables three use cases:
+The entitlements service enables three use cases for authorization:
- **Data groups** used for data authorization (for example, data.welldb.viewers, data.welldb.owners) - **Service groups** used for service authorization (for example, service.storage.user, service.storage.admin) - **User groups** used for hierarchical grouping of user and service identities (for example, users.datalake.viewers, users.datalake.editors)
-## Users
-
-For each group, you can either add a user as an OWNER or a MEMBER. The only difference being if you're an OWNER of a group, then you can manage the members of that group.
-> [!NOTE]
-> Do not delete the OWNER of a group unless there is another OWNER to manage the users.
+Some user, data, and service groups are created by default when a data partition is provisioned. For details, see [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
## Group naming
-All group identifiers (emails) will be of form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that the group's name should start with the word "data." for data groups; "service." for service groups; and "users." for user groups. An exception is when a data partition is provisioned. When a data partition is created, so is a corresponding group-for example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created.
+All group identifiers (emails) are of the form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that a group's name starts with the word "data." for data groups, "service." for service groups, and "users." for user groups. There's one exception: the "users" group, which is created when a new data partition is provisioned. For example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created.
-## Permissions and roles
+## Users
-The OSDU&trade; Data Ecosystem user groups provide an abstraction from permission and user management and--without a user creating their own groups--the following user groups exist by default:
+For each OSDU group, you can add a user as either an OWNER or a MEMBER. If you're an OWNER of an OSDU group, you can add or remove members of that group or delete the group. If you're a MEMBER of an OSDU group, you can view, edit, or delete the service or data, depending on the scope of the OSDU group. For example, if you're a MEMBER of the service.legal.editor OSDU group, you can call the APIs to change the legal service.
+> [!NOTE]
+> Do not delete the OWNER of a group unless there is another OWNER to manage the users.
-- **users.datalake.viewers**: viewer level authorization for OSDU Data Ecosystem services.-- **users.datalake.editors**: editor level authorization for OSDU Data Ecosystem services and authorization to create the data using OSDU&trade; Data Ecosystem storage service.-- **users.datalake.admins**: admin level authorization for OSDU Data Ecosystem services.
+## Entitlement APIs
-A full list of all API endpoints for entitlements can be found in [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). We have provided few illustrations below. Depending on the resources you have, you need to use the entitlements service in different ways than what is shown below. [Entitlement permissions](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#permissions) on the endpoints and the corresponding minimum level of permissions required.
+A full list of entitlements API endpoints can be found in the [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api) documentation. A few illustrations of how to use the entitlement APIs are available in [How to manage users](how-to-manage-users.md). Depending on the resources you have, you might need to use the entitlements service in different ways than what is shown there.
> [!NOTE]
-> The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refers to V2 endpoints, which work and have been successfully validated
+> The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refer to V2 endpoints, which work and have been successfully validated.
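
For illustration only, a hedged sketch of one such call (adding a member to a group over the entitlements V2 REST API) is shown below. The instance URL, data partition, group, member email, and token are all placeholders, and [How to manage users](how-to-manage-users.md) remains the authoritative walkthrough.

```python
# A minimal sketch, assuming the OSDU entitlements V2 endpoint shape; all values are placeholders.
import requests

instance = "https://<instance-name>.energy.azure.com"
data_partition = "<data-partition-id>"
group = f"users.datalake.viewers@{data_partition}.dataservices.energy"
token = "<microsoft-entra-id-access-token>"

response = requests.post(
    f"{instance}/api/entitlements/v2/groups/{group}/members",
    headers={
        "Authorization": f"Bearer {token}",
        "data-partition-id": data_partition,
        "Content-Type": "application/json",
    },
    # role can be MEMBER or OWNER, as described in the Users section above.
    json={"email": "user@contoso.com", "role": "MEMBER"},
)
response.raise_for_status()
print(response.status_code)
```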
OSDU&trade; is a trademark of The Open Group.
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
Title: How to manage users in Microsoft Azure Data Manager for Energy description: This article describes how to manage users in Azure Data Manager for Energy--++ Last updated 08/19/2022
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
This page is updated with the details about the upcoming release approximately a
<hr width = 100%>
+## November 2023
+
+### Compliant with M18 OSDU&trade; release
+Azure Data Manager for Energy is now compliant with the M18 OSDU&trade; milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU&trade; M18](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M18-Release-Notes).
+ ## September 2023 ### Azure Data Manager for Energy in Brazil South Region
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md
- build-2023 - ignite-2023 Previously updated : 08/17/2023 Last updated : 11/15/2023 # Authentication and authorization with Microsoft Entra ID
event-grid Authenticate With Entra Id Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-entra-id-namespaces.md
- build-2023 - ignite-2023 Previously updated : 10/04/2023 Last updated : 11/15/2023 # Authentication and authorization with Microsoft Entra ID when using Event Grid namespaces
event-grid Choose Right Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/choose-right-tier.md
description: Describes how to choose the right tier based on resource features a
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Choose the right Event Grid tier for your solution
event-grid Concepts Event Grid Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts-event-grid-namespaces.md
Previously updated : 11/02/2023 Last updated : 11/15/2023 Title: Concepts for Event Grid namespace topics
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
description: Describes Azure Event Grid concepts that pertain to push delivery.
- ignite-2023 Previously updated : 05/24/2023 Last updated : 11/15/2023 # Azure Event Grid's push delivery - concepts
event-grid Configure Firewall Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-mqtt.md
description: This article describes how to configure firewall settings for Azure
- ignite-2023 Previously updated : 10/04/2023 Last updated : 11/15/2023
event-grid Configure Private Endpoints Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints-mqtt.md
description: This article describes how to configure private endpoints for Azure
- ignite-2023 Previously updated : 10/04/2023 Last updated : 11/15/2023
event-grid Configure Private Endpoints Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints-pull.md
description: This article describes how to configure private endpoints for Azure
- ignite-2023 Previously updated : 10/04/2023 Last updated : 11/15/2023 # Configure private endpoints for Azure Event Grid namespaces
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
description: This article describes how to work around push delivery's limitatio
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Deliver events using private link service
event-grid Create View Manage Event Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-event-subscriptions.md
- ignite-2023 Previously updated : 05/24/2023 Last updated : 11/15/2023 # Create, view, and manage event subscriptions in namespace topics
event-grid Create View Manage Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespace-topics.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023 # Create, view, and manage namespace topics
event-grid Create View Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-namespaces.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023 # Create, view, and manage namespaces
event-grid Custom Disaster Recovery Client Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-disaster-recovery-client-side.md
Title: Build your own client-side failover implementation in Azure Event Grid description: This article describes how to build your own client-side failover implementation in Azure Event Grid resources. Previously updated : 05/02/2023 Last updated : 11/15/2023 ms.devlang: csharp - devx-track-csharp
event-grid Custom Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-topics.md
- devx-track-azurepowershell - build-2023 - ignite-2023 Previously updated : 04/27/2023 Last updated : 11/15/2023 # Custom topics in Azure Event Grid
event-grid Dead Letter Event Subscriptions Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/dead-letter-event-subscriptions-namespace-topics.md
description: Describes the dead lettering feature for event subscriptions to nam
- ignite-2023 Previously updated : 09/29/2023 Last updated : 11/15/2023 # Dead lettering for event subscriptions to namespaces topics in Azure Event Grid
event-grid Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-domains.md
description: This article describes how to use event domains to manage the flow
- ignite-2023 Previously updated : 10/09/2023 Last updated : 11/15/2023 # Understand event domains for managing Event Grid topics
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
- devx-track-arm-template - ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Understand event filtering for Event Grid subscriptions
event-grid Event Grid Dotnet Get Started Pull Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-grid-dotnet-get-started-pull-delivery.md
- references_regions - devx-track-dotnet - ignite-2023 Previously updated : 07/26/2023 Last updated : 11/15/2023
event-grid Event Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-retention.md
description: Describes the retention of events in Azure Event Grid namespace top
- ignite-2023 Previously updated : 09/29/2023 Last updated : 11/15/2023 # Event retention for Azure Event Grid namespace topics and event subscriptions
event-grid Handler Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-azure-monitor-alerts.md
description: This article describes how Azure Event Grid delivers Azure Key Vaul
- ignite-2023 Previously updated : 10/16/2023 Last updated : 11/15/2023
event-grid Handler Event Grid Namespace Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-event-grid-namespace-topic.md
description: This article describes how to deliver events to Event Grid namespac
- ignite-2023 Previously updated : 10/16/2023 Last updated : 11/15/2023
event-grid High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/high-availability-disaster-recovery.md
description: Describes how Azure Event Grid's namespaces support building highly
- ignite-2023 Previously updated : 10/13/2023 Last updated : 11/15/2023
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023 # Monitor data reference for Azure Event GridΓÇÖs MQTT broker feature (Preview)
event-grid Monitor Namespace Push Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-namespace-push-reference.md
- build-2023 - ignite-2023 Previously updated : 10/11/2023 Last updated : 11/15/2023 # Monitor data reference for Azure Event Grid's push delivery using namespaces
event-grid Monitor Pull Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-pull-reference.md
- build-2023 - ignite-2023 Previously updated : 04/28/2023 Last updated : 11/15/2023 # Monitor data reference for Azure Event Grid's pull delivery
event-grid Monitor Push Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-push-reference.md
- build-2023 - ignite-2023 Previously updated : 04/28/2023 Last updated : 11/15/2023 # Monitor data reference for Azure Event Grid's push event delivery
event-grid Mqtt Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-access-control.md
description: 'Describes the main concepts for access control for MQTT clients in
- ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Automotive Connectivity And Data Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-automotive-connectivity-and-data-solution.md
description: 'Describes the use case of automotive messaging'
- ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Certificate Chain Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-certificate-chain-client-authentication.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authentication.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Client Azure Ad Token And Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-azure-ad-token-and-rbac.md
description: Describes JWT authentication and RBAC roles to authorize clients wi
- ignite-2023 Previously updated : 10/24/2023 Last updated : 11/15/2023
event-grid Mqtt Client Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-groups.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Client Life Cycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-life-cycle-events.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-clients.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Establishing Multiple Sessions Per Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-establishing-multiple-sessions-per-client.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Event Grid Namespace Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-event-grid-namespace-terminology.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
A Permission Binding grants access to a specific client group to either publish
## Next steps - Learn about [creating an Event Grid namespace](create-view-manage-namespaces.md)-- Learn about [MQTT support in Event Grid](mqtt-overview.md)
+- Learn about [MQTT broker feature in Azure Event Grid](mqtt-overview.md)
- Learn more about [MQTT clients](mqtt-clients.md) - Learn how to [Publish and subscribe MQTT messages using Event Grid namespace](mqtt-publish-and-subscribe-portal.md)
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
Title: 'Overview of MQTT Support in Azure Event Grid'
-description: 'Describes the main concepts for the MQTT Support in Azure Event Grid.'
+ Title: 'Overview of MQTT broker feature in Azure Event Grid'
+description: 'Describes the main concepts for the MQTT broker feature in Azure Event Grid.'
- ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
-# Overview of the MQTT Support in Azure Event Grid
+# Overview of the MQTT broker feature in Azure Event Grid
Azure Event Grid enables your MQTT clients to communicate with each other and with Azure services, to support your Internet of Things (IoT) solutions.
-Event GridΓÇÖs MQTT support enables you to accomplish the following scenarios:
+Azure Event Grid's MQTT broker feature enables you to accomplish the following scenarios:
- Ingest telemetry using a many-to-one messaging pattern. This pattern enables the application to offload the burden of managing the high number of connections with devices to Event Grid. - Control your MQTT clients using the request-response (one-to-one) messaging pattern. This pattern enables any client to communicate with any other client without restrictions, regardless of the clients' roles.
MQTT is a publish-subscribe messaging transport protocol that was designed for c
The publish-subscribe messaging model provides a scalable and asynchronous communication to clients. It enables clients to offload the burden of handling a high number of connections and messages to the service. Through the Publish-Subscribe messaging model, your clients can communicate efficiently using one-to-many, many-to-one, and one-to-one messaging patterns. - The one-to-many messaging pattern enables clients to publish only one message that the service replicates for every interested client. -- The many-to-one messaging pattern enables clients to offload the burden of managing the high number of connections to MQTT broker.
+- The many-to-one messaging pattern enables clients to offload the burden of managing the high number of connections to MQTT broker.
- The one-to-one messaging pattern enables any client to communicate with any other client without restrictions, regardless of the clients' roles. ### Namespace
IoT applications are software designed to interact with and process data from Io
### Client authentication
-Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication that is the industry authentication standard in IoT devices and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-azure-ad-token-and-rbac.md) that is Azure's authentication standard for applications.[Learn more about MQTT client authentication.](mqtt-client-authentication.md)
+Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to the MQTT broker, it needs to authenticate based on credentials stored in the identity registry. The MQTT broker supports X.509 certificate authentication, which is the industry authentication standard for IoT devices, and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-azure-ad-token-and-rbac.md), which is Azure's authentication standard for applications. [Learn more about MQTT client authentication](mqtt-client-authentication.md).
### Access control
Event Grid allows you to route your MQTT messages to Azure services or webhooks
:::image type="content" source="media/mqtt-overview/routing-high-res.png" alt-text="Diagram of the MQTT message routing." border="false"::: ### Edge MQTT broker integration
-Event Grid integrates with [Azure IoT MQ](https://aka.ms/iot-mq) to bridge its MQTT broker capability on the edge with Event GridΓÇÖs MQTT broker capability in the cloud. Azure IoT MQ is a new distributed MQTT broker for edge computing, running on Arc enabled Kubernetes clusters. It can connect to Event Grid MQTT broker with Microsoft Entra ID (formerly Azure Active Directory) authentication using system-assigned managed identity, which simplifies credential management. Azure IoT MQ provides high availability, scalability, and security for your IoT devices and applications. It's now available in [public preview](https://aka.ms/iot-mq-preview) as part of Azure IoT Operations. [Learn more about connecting Azure IoT MQ to Azure Event Grid's MQTT broker](https://aka.ms/iot-mq-eg-bridge)
+Event Grid integrates with [Azure IoT MQ](https://aka.ms/iot-mq) to bridge its MQTT broker capability on the edge with Azure Event Grid's MQTT broker feature in the cloud. Azure IoT MQ is a new distributed MQTT broker for edge computing, running on Arc-enabled Kubernetes clusters. It can connect to the Event Grid MQTT broker with Microsoft Entra ID (formerly Azure Active Directory) authentication using system-assigned managed identity, which simplifies credential management. Azure IoT MQ provides high availability, scalability, and security for your IoT devices and applications. It's now available in [public preview](https://aka.ms/iot-mq-preview) as part of Azure IoT Operations. [Learn more about connecting Azure IoT MQ to Azure Event Grid's MQTT broker](https://aka.ms/iot-mq-eg-bridge).
-### MQTT Clients Life Cycle Events
+### MQTT Clients Life Cycle Events
Client Life Cycle events allow applications to react to events about the client connection status or the client resource operations. It allows you to keep track of your client's connection status, react with a mitigation action for client disconnections, and track the namespace that your clients are attached to during automated failovers.Learn more about [MQTT Client Life Cycle Events](mqtt-client-life-cycle-events.md).
Use the following articles to learn more about the MQTT broker and its main conc
- [Terminology](mqtt-event-grid-namespace-terminology.md) - [Client authentication](mqtt-client-authentication.md) - [Access control](mqtt-access-control.md) -- [MQTT support](mqtt-support.md)
+- [MQTT protocol support](mqtt-support.md)
- [Routing MQTT messages](mqtt-routing.md) - [MQTT Client Life Cycle Events](mqtt-client-life-cycle-events.md).
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
- build-2023 - devx-track-azurecli - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
Azure Event Grid's MQTT broker feature supports messaging using the MQTT proto
In this article, you use the Azure CLI to do the following tasks:
-1. Create an Event Grid namespace and enable MQTT
+1. Create an Event Grid namespace and enable MQTT broker
2. Create subresources such as clients, client groups, and topic spaces 3. Grant clients access to publish and subscribe to topic spaces 4. Publish and receive MQTT messages
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
In this article, you use the Azure portal to do the following tasks:
-1. Create an Event Grid namespace and enable MQTT
+1. Create an Event Grid namespace and enable MQTT broker
2. Create sub resources such as clients, client groups, and topic spaces 3. Grant clients access to publish and subscribe to topic spaces 4. Publish and receive messages between clients
After a successful installation of Step, you should open a command prompt in you
> [!NOTE] > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see [Create a Namespace](create-view-manage-namespaces.md). 1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace.
-1. In the Overview page, you see that the **MQTT** is in **Disabled** state. To enable MQTT, select the **Disabled** link, it will redirect you to Configuration page.
-1. On **Configuration** page, select the **Enable MQTT** option, and then select **Apply** to apply the settings.
+1. On the **Overview** page, you see that **MQTT broker** is in the **Disabled** state. To enable MQTT broker, select the **Disabled** link; it redirects you to the **Configuration** page.
+1. On the **Configuration** page, select the **Enable MQTT broker** option, and then select **Apply** to apply the settings.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-enable-mqtt-on-configuration.png" alt-text="Screenshot showing Event Grid namespace configuration page to enable MQTT." lightbox="./media/mqtt-publish-and-subscribe-portal/mqtt-enable-mqtt-on-configuration.png"::: ## Create clients
-1. On the left menu, select **Clients** in the **MQTT** section.
+1. On the left menu, select **Clients** in the **MQTT broker** section.
2. On the **Clients** page, select **+ Client** on the toolbar. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/add-client-menu.png" alt-text="Screenshot of the Clients page with Add button selected." lightbox="./media/mqtt-publish-and-subscribe-portal/add-client-menu.png":::
After a successful installation of Step, you should open a command prompt in you
## Create topic spaces
-1. On the left menu, select **Topic spaces** in the **MQTT** section.
+1. On the left menu, select **Topic spaces** in the **MQTT broker** section.
2. On the **Topic spaces** page, select **+ Topic space** on the toolbar. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-topic-space-menu.png" alt-text="Screenshot of Topic spaces page with create button selected." lightbox="./media/mqtt-publish-and-subscribe-portal/create-topic-space-menu.png":::
After a successful installation of Step, you should open a command prompt in you
## Configuring access control using permission bindings
-1. On the left menu, select **Permission bindings** in the **MQTT** section.
+1. On the left menu, select **Permission bindings** in the **MQTT broker** section.
2. On the Permission bindings page, select **+ Permission binding** on the toolbar. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-menu.png" alt-text="Screenshot that shows the Permission bindings page with the Create button selected." lightbox="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-menu.png":::
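
Once the client, topic space, and permission bindings are in place, the clients can connect and exchange messages. As a rough illustration only (not the article's own sample), the following Python sketch uses paho-mqtt; it assumes paho-mqtt 1.x, MQTT v3.1.1, and the client, certificate files, topic space, and permission bindings created above, and the hostname, authentication name, file names, and topic are placeholders.

```python
# A rough sketch: publish and subscribe on an MQTT topic with an Event Grid namespace.
import ssl
import paho.mqtt.client as mqtt

MQTT_HOSTNAME = "<namespace-mqtt-hostname>"   # copy from the namespace Overview page
CLIENT_AUTH_NAME = "<client-authentication-name>"
TOPIC = "contosotopics/topic1"                # must match your topic space's topic template

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe(TOPIC)
    client.publish(TOPIC, "hello from the sample client")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client(client_id="session1", protocol=mqtt.MQTTv311)
client.username_pw_set(username=CLIENT_AUTH_NAME)   # the client's authentication name, not a password
client.tls_set(certfile="client1-authn-ID.pem", keyfile="client1-authn-ID.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(MQTT_HOSTNAME, port=8883)
client.loop_forever()
```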
event-grid Mqtt Request Response Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-request-response-messages.md
description: 'Implementing Request-Response messaging pattern using MQTT broker,
- ignite-2023 Previously updated : 10/29/2023 Last updated : 11/15/2023
In this guide, you learn how to use MQTT v5 Request-Response messaging pattern to implement command-response flow with MQTT broker. Consider a sample scenario, in which a cloud application sends commands to devices and receives responses from the devices. ## Prerequisites-- You have an Event Grid namespace created with MQTT enabled. Refer to this [Quickstart - Publish and subscribe on an MQTT topic](mqtt-publish-and-subscribe-portal.md) to create the namespace, subresources, and to publish/subscribe on an MQTT topic.
+- You have an Event Grid namespace created with MQTT broker enabled. Refer to this [Quickstart - Publish and subscribe on an MQTT topic](mqtt-publish-and-subscribe-portal.md) to create the namespace, subresources, and to publish/subscribe on an MQTT topic.
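To make the pattern concrete, here's a minimal, hedged sketch of the requester side using MQTT v5 response-topic and correlation-data properties with the paho-mqtt Python client. The topic names, client ID, and connection details are placeholders, not values from this guide, and the TLS and client authentication setup from the publish/subscribe quickstart is assumed.

```python
import uuid

import paho.mqtt.client as mqtt
from paho.mqtt.packettypes import PacketTypes
from paho.mqtt.properties import Properties

# Placeholder topics: align these with the topic spaces and permission bindings
# you configured for the command-response flow.
REQUEST_TOPIC = "vehicles/vehicle1/command/unlock/request"
RESPONSE_TOPIC = "vehicles/vehicle1/command/unlock/response"

def on_connect(client, userdata, flags, reason_code, properties=None):
    # The requester listens on its response topic before sending the command.
    client.subscribe(RESPONSE_TOPIC, qos=1)

    props = Properties(PacketTypes.PUBLISH)
    props.ResponseTopic = RESPONSE_TOPIC
    props.CorrelationData = uuid.uuid4().bytes  # used to match a response to this request
    client.publish(REQUEST_TOPIC, payload="unlock", qos=1, properties=props)

def on_message(client, userdata, msg):
    # A responder is expected to echo CorrelationData back on the response topic.
    print("response:", msg.payload, getattr(msg.properties, "CorrelationData", None))

client = mqtt.Client(client_id="cloud-app", protocol=mqtt.MQTTv5)
# Configure TLS and client authentication here as in the publish/subscribe quickstart.
client.on_connect = on_connect
client.on_message = on_message
client.connect("<namespace-mqtt-hostname>", port=8883)
client.loop_forever()
```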
## Configuration needed in Event Grid namespace to implement Request-Response messaging pattern
event-grid Mqtt Routing Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-enrichment.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Routing Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-event-schema.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Routing Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-filtering.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Routing To Event Hubs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-cli.md
- build-2023 - devx-track-azurecli - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Routing To Event Hubs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-portal.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing.md
description: 'An overview of Routing MQTT Messages and how to configure it.'
- ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
description: 'Describes the MQTT features supported by Azure Event Grid's MQTT
- ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Topic Spaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-topic-spaces.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Mqtt Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-troubleshoot-errors.md
- build-2023 - ignite-2023 Previously updated : 05/23/2023 Last updated : 11/15/2023
event-grid Namespace Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-delivery-properties.md
description: Describes how you can set custom headers (or delivery properties) f
- ignite-2023 Previously updated : 10/10/2023 Last updated : 11/15/2023 # Delivery properties for namespace topics' subscriptions
event-grid Namespace Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-delivery-retry.md
description: This article describes how delivery and retry works with Azure Even
- ignite-2023 Previously updated : 10/20/2023 Last updated : 11/15/2023
event-grid Namespace Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-event-filtering.md
description: Describes how to filter events when creating subscriptions to Azure
- ignite-2023 Previously updated : 10/19/2023 Last updated : 11/15/2023 # Event filters for subscriptions to Azure Event Grid namespace topics
event-grid Namespace Handler Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-handler-event-hubs.md
description: Describes how you can use an Azure event hub as an event handler fo
- ignite-2023 Previously updated : 10/10/2023 Last updated : 11/15/2023 # Azure Event hubs as a handler destination in subscriptions to Azure Event Grid namespace topics
event-grid Namespace Push Delivery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-push-delivery-overview.md
description: Learn about push delivery supported by Azure Event Grid namespaces.
- ignite-2023 Previously updated : 10/16/2023 Last updated : 11/15/2023
event-grid Network Security Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security-mqtt.md
description: This article describes how to use service tags for egress, IP firew
- ignite-2023 Previously updated : 10/06/2023 Last updated : 11/15/2023
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Previously updated : 05/24/2023 Last updated : 11/15/2023 Title: Overview
Event Grid enables your clients to communicate on [custom MQTT topic names](http
Event Grid integrates with [Azure IoT MQ](https://aka.ms/iot-mq) to bridge its MQTT broker capability on the edge with Event Grid's MQTT broker capability in the cloud. Azure IoT MQ is a new distributed MQTT broker for edge computing, running on Arc-enabled Kubernetes clusters. It's now available in [public preview](https://aka.ms/iot-mq-preview) as part of Azure IoT Operations.
-The MQTT support in Event Grid is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
+The MQTT broker feature in Azure Event Grid is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
:::image type="content" source="media/overview/mqtt-messaging.png" alt-text="High-level diagram of Event Grid that shows bidirectional MQTT communication with publisher and subscriber clients." lightbox="media/overview/mqtt-messaging-high-res.png" border="false":::
-Event Grid's MQTT support enables you to accomplish the following scenarios.
+Azure Event Grid's MQTT broker feature enables you to accomplish the following scenarios.
#### Ingest IoT telemetry :::image type="content" source="media/overview/ingest-telemetry.png" alt-text="High-level diagram of Event Grid that shows IoT clients using MQTT protocol to send messages to a cloud app." lightbox="media/overview/ingest-telemetry-high-res.png" border="false":::
event-grid Publish Deliver Events With Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics.md
- ignite-2023 Previously updated : 10/24/2023 Last updated : 11/15/2023 # Publish and deliver events using namespace topics (preview)
event-grid Publish Events To Namespace Topics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-to-namespace-topics-java.md
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Publish events to namespace topics using Java
event-grid Publish Events Using Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md
- ignite-2023 Previously updated : 05/24/2023 Last updated : 11/15/2023 # Publish to namespace topics and consume events in Azure Event Grid
event-grid Publisher Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publisher-operations.md
description: Describes publisher operations supported by Azure Event Grid when u
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Azure Event Grid - publisher operations
event-grid Pull Delivery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/pull-delivery-overview.md
Previously updated : 11/02/2023 Last updated : 11/15/2023 Title: Introduction to pull delivery
event-grid Push Delivery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/push-delivery-overview.md
Previously updated : 04/21/2023 Last updated : 11/15/2023 Title: Introduction to push delivery
event-grid Receive Events From Namespace Topics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events-from-namespace-topics-java.md
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Receive events using pull delivery with Java
event-grid Subscribe To Microsoft Entra Id Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-microsoft-entra-id-events.md
description: This article explains how to subscribe to events published by Micro
- ignite-2023 Previously updated : 10/09/2023 Last updated : 11/15/2023 # Subscribe to events published by Microsoft Entra ID
event-grid Subscriber Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscriber-operations.md
description: Describes subscriber operations supported by Azure Event Grid when
- ignite-2023 Previously updated : 11/02/2023 Last updated : 11/15/2023 # Azure Event Grid - subscriber operations
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
- build-2023 - ignite-2023 Previously updated : 10/30/2023 Last updated : 11/15/2023 # What's new in Azure Event Grid?
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
Title: Introduction to Apache Kafka in Event Hubs on Azure Cloud description: Learn what Apache Kafka in the Event Hubs service on Azure Cloud is and how to use it to stream data from Apache Kafka applications without setting up a Kafka cluster on your own. Previously updated : 02/03/2023 Last updated : 11/16/2023 # What is Azure Event Hubs for Apache Kafka
As explained [above](#is-apache-kafka-the-right-solution-for-your-workload), the
The client-side [compression](https://cwiki.apache.org/confluence/display/KAFKA/Compression) feature of Apache Kafka compresses a batch of multiple messages into a single message on the producer side and decompresses the batch on the consumer side. The Apache Kafka broker treats the batch as a special message.
-This feature is fundamentally at odds with Azure Event Hubs' multi-protocol model, which allows for messages, even those sent in batches, to be individually retrievable from the broker and through any protocol.
+Kafka producer application developers can enable message compression by setting the `compression.type` property. In the public preview, the only compression algorithm supported is gzip.
+ compression.type = none | gzip
+These settings are exposed in the message header, which allows the consumer to properly decompress the data. The feature is currently supported only for Apache Kafka producer and consumer traffic, not for AMQP or web service traffic.
-The payload of any Event Hubs event is a byte stream and the content can be compressed with an algorithm of your choosing. The Apache Avro encoding format supports compression natively.
+The payload of any Event Hubs event is a byte stream, and the content can be compressed with an algorithm of your choosing, though in public preview the only option is gzip. The benefits of using Kafka compression are smaller message sizes, a larger payload that you can transmit per message, and lower message broker resource consumption.
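For example, a producer that targets the Event Hubs Kafka endpoint can opt in to gzip compression through its client configuration. The following is an illustrative sketch using the kafka-python client; the namespace, event hub name, and connection string are placeholders, and only gzip is expected to work while the feature is in public preview.

```python
from kafka import KafkaProducer

# Placeholder values: the Event Hubs namespace Kafka endpoint and a connection
# string with send permissions.
producer = KafkaProducer(
    bootstrap_servers="<namespace-name>.servicebus.windows.net:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",
    sasl_plain_password="<event-hubs-connection-string>",
    compression_type="gzip",  # equivalent to compression.type=gzip in Java clients
)

# Batches are compressed on the producer and transparently decompressed by consumers.
producer.send("<event-hub-name>", b"compressed payload")
producer.flush()
```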
### Kafka Streams
event-hubs Monitor Event Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md
Azure Event Hubs supports the following dimensions for metrics in Azure Monitor.
| - | -- | |Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization.|
+## Resource logs
+Azure Event Hubs can now dispatch logs to either of two destination tables: Azure Diagnostics or [resource-specific tables](~/articles/azure-monitor/essentials/resource-logs.md) in Log Analytics. You can use the toggle available in the Azure portal to choose the destination tables.
+
[!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)]
Runtime audit logs capture aggregated diagnostic information for all data plane
Runtime audit logs include the elements listed in the following table:
-Name | Description
-- | -
-`ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity.
-`ActivityName` | Runtime operation name.
-`ResourceId` | Resource associated with the activity.
-`Timestamp` | Aggregation time.
-`Status` | Status of the activity (success or failure).
-`Protocol` | Type of the protocol associated with the operation.
-`AuthType` | Type of authentication (Microsoft Entra ID or SAS Policy).
-`AuthKey` | Microsoft Entra application ID or SAS policy name that's used to authenticate to a resource.
-`NetworkType` | Type of the network access: `Public` or `Private`.
-`ClientIP` | IP address of the client application.
-`Count` | Total number of operations performed during the aggregated period of 1 minute.
-`Properties` | Metadata that are specific to the data plane operation.
-`Category` | Log category
+
+Name | Description | Supported in Azure Diagnostics | Supported in Resource Specific table
+- | -| --| --|
+`ActivityId` | A randomly generated UUID that ensures uniqueness for the audit activity. | Yes | Yes
+`ActivityName` | Runtime operation name.| Yes | Yes
+`ResourceId` | Resource associated with the activity. | Yes | Yes
+`Timestamp` | Aggregation time. | Yes | No
+ `TimeGenerated [UTC]`|Time of executed operation (in UTC)| No | Yes
+`Status` | Status of the activity (success or failure). | Yes | Yes
+`Protocol` | Type of the protocol associated with the operation. | Yes | Yes
+`AuthType` | Type of authentication (Azure Active Directory or SAS Policy). | Yes | Yes
+`AuthKey` | Azure Active Directory application ID or SAS policy name that's used to authenticate to a resource. | Yes | Yes
+`NetworkType` | Type of the network access: `Public` or `Private`. | Yes | Yes
+`ClientIP` | IP address of the client application. | Yes | Yes
+`Count` | Total number of operations performed during the aggregated period of 1 minute. | Yes | Yes
+`Properties` | Metadata that are specific to the data plane operation. | Yes | Yes
+`Category` | Log category | Yes | No
+`Provider` | Name of the service emitting the logs, for example, `EVENTHUB` | No | Yes
+`Type` | Type of logs emitted | No | Yes
++ Here's an example of a runtime audit log entry:
+AzureDiagnostics:
```json { "ActivityId": "<activity id>",
Here's an example of a runtime audit log entry:
"Category": "RuntimeAuditLogs" }
+```
+Resource specific table entry:
+```json
+{
+ "ActivityId": "<activity id>",
+ "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage",
+ "ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.EVENTHUB/NAMESPACES/<Event Hubs namespace>/eventhubs/<event hub name>",
+ "TimeGenerated (UTC)": "1/1/2021 8:40:06 PM +00:00",
+ "Status": "Success | Failure",
+ "Protocol": "AMQP | KAFKA | HTTP | Web Sockets",
+ "AuthType": "SAS | Azure Active Directory",
+ "AuthId": "<AAD application name | SAS policy name>",
+ "NetworkType": "Public | Private",
+ "ClientIp": "x.x.x.x",
+ "Count": 1,
+ "Type": "AZMSRuntimeAUditLogs",
+ "Provider":"EVENTHUB"
+ }
+ ``` ## Application metrics logs
Name | Description
`OffsetFetch` | Number of offset fetch calls made to the event hub. - ## Azure Monitor Logs tables Azure Event Hubs uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#event-hubs).
+You can use our sample queries to get started with the different log categories.
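As an illustration, the following sketch runs a Kusto query against the resource-specific runtime audit logs table using the azure-monitor-query Python library; the workspace ID is a placeholder, and if you route logs to Azure Diagnostics instead, query the `AzureDiagnostics` table with a `Category` filter.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Summarize runtime audit operations by name and status over the last day.
query = """
AZMSRuntimeAuditLogs
| where Provider == "EVENTHUB"
| summarize Operations = sum(Count) by ActivityName, Status
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```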
+ > [!IMPORTANT] > Dimensions aren't exported to a Log Analytics workspace.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
description: Learn about virtual network gateways for ExpressRoute, their SKUs,
-
- - ignite-2023
Previously updated : 11/15/2023 Last updated : 11/16/2023 + + # About ExpressRoute virtual network gateways To connect your Azure virtual network and your on-premises network using ExpressRoute, you must first create a virtual network gateway. A virtual network gateway serves two purposes: exchange IP routes between the networks and route network traffic. This article explains different gateway types, gateway SKUs, and estimated performance by SKU. This article also explains ExpressRoute [FastPath](#fastpath), a feature that enables the network traffic from your on-premises network to bypass the virtual network gateway to improve performance.
Before you create an ExpressRoute gateway, you must create a gateway subnet. The
>[!NOTE] >[!INCLUDE [vpn-gateway-gwudr-warning.md](../../includes/vpn-gateway-gwudr-warning.md)] >
+>- Linking a private DNS resolver to the virtual network where the ExpressRoute virtual network gateway is deployed may cause management connectivity issues and is not recommended.
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
A virtual network with an ExpressRoute gateway can have virtual network peering
## ExpressRoute scalable gateway (Preview)
-The ErGwScale virtual network gateway SKU enables you to achieve 40-Gbps connectivity to VMs and Private Endpoints in the virtual network. This SKU allows you to set a minimum and maximum scale unit for the virtual network gateway infrastructure, which auto scales based on the active bandwidth. You can also set a fixed scale unit to maintain a constant connectivity at a desired bandwidth value.
+The ErGwScale virtual network gateway SKU enables you to achieve 40-Gbps connectivity to VMs and Private Endpoints in the virtual network. This SKU allows you to set a minimum and maximum scale unit for the virtual network gateway infrastructure, which auto scales based on the active bandwidth or flow count. You can also set a fixed scale unit to maintain a constant connectivity at a desired bandwidth value.
### Availability zone deployment & regional availability
ErGwScale is available in preview in the following regions:
### Autoscaling vs. fixed scale unit
-The virtual network gateway infrastructure auto scales between the minimum and maximum scale unit that you configure, based on the bandwidth utilization. The scale operations might take up to 30 minutes to complete. If you want to achieve a fixed connectivity at a specific bandwidth value, you can configure a fixed scale unit by setting the minimum scale unit and the maximum scale unit to the same value.
+The virtual network gateway infrastructure auto scales between the minimum and maximum scale unit that you configure, based on the bandwidth or flow count utilization. Scale operations might take up to 30 minutes to complete. If you want to achieve a fixed connectivity at a specific bandwidth value, you can configure a fixed scale unit by setting the minimum scale unit and the maximum scale unit to the same value.
### Limitations
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 01/11/2023 Last updated : 11/14/2023 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
# Deploy and configure Azure Firewall using the Azure portal
-Controlling outbound network access is an important part of an overall network security plan. For example, you may want to limit access to web sites. Or, you may want to limit the outbound IP addresses and ports that can be accessed.
+Controlling outbound network access is an important part of an overall network security plan. For example, you might want to limit access to web sites. Or, you might want to limit the outbound IP addresses and ports that can be accessed.
One way you can control outbound network access from an Azure subnet is with Azure Firewall. With Azure Firewall, you can configure:
One way you can control outbound network access from an Azure subnet is with Azu
Network traffic is subjected to the configured firewall rules when you route your network traffic to the firewall as the subnet default gateway.
-For this article, you create a simplified single VNet with two subnets for easy deployment.
+For this article, you create a simplified single virtual network with two subnets for easy deployment.
-For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
+For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own virtual network. The workload servers are in peered virtual networks in the same region with one or more subnets.
* **AzureFirewallSubnet** - the firewall is in this subnet. * **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Set up the network
-First, create a resource group to contain the resources needed to deploy the firewall. Then create a VNet, subnets, and a test server.
+First, create a resource group to contain the resources needed to deploy the firewall. Then create a virtual network, subnets, and a test server.
### Create a resource group
The resource group contains all the resources used in this procedure.
1. Select **Review + create**. 1. Select **Create**.
-### Create a VNet
+### Create a virtual network
-This VNet will have two subnets.
+This virtual network has two subnets.
> [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
This VNet will have two subnets.
1. Select **Create**. 1. For **Subscription**, select your subscription. 1. For **Resource group**, select **Test-FW-RG**.
-1. For **Name**, type **Test-FW-VN**.
-1. Select **Next: IP addresses**.
+1. For **Virtual network name**, type **Test-FW-VN**.
+1. For **Region**, select the same region that you used previously.
+1. Select **Next**.
+1. On the **Security** tab, select **Enable Azure Firewall**.
+1. For **Azure Firewall name**, type **Test-FW01**.
+1. For **Azure Firewall public IP address**, select **Create a public IP address**.
+1. For **Name**, type **fw-pip** and select **OK**.
+1. Select **Next**.
1. For **Address space**, accept the default **10.0.0.0/16**.
-1. Under **Subnet name**, select **default** and change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Subnet address range**, change it to **10.0.1.0/26**.
+1. Under **Subnet**, select **default** and change the **Name** to **Workload-SN**.
+1. For **Starting address**, change it to **10.0.2.0/24**.
1. Select **Save**.--
- Next, create a subnet for the workload server.
-
-1. Select **Add subnet**.
-1. For **Subnet name**, type **Workload-SN**.
-1. For **Subnet address range**, type **10.0.2.0/24**.
-1. Select **Add**.
1. Select **Review + create**. 1. Select **Create**.
Now create the workload virtual machine, and place it in the **Workload-SN** sub
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Deploy the firewall
-
-Deploy the firewall into the VNet.
+## Examine the firewall
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Type **firewall** in the search box and press **Enter**.
-3. Select **Firewall** and then select **Create**.
-4. On the **Create a Firewall** page, use the following table to configure the firewall:
-
- |Setting |Value |
- |||
- |Subscription |\<your subscription\>|
- |Resource group |**Test-FW-RG** |
- |Name |**Test-FW01**|
- |Region |Select the same location that you used previously|
- |Firewall SKU|**Standard**|
- |Firewall management|**Use Firewall rules (classic) to manage this firewall**|
- |Choose a virtual network |**Use existing**: **Test-FW-VN**|
- |Public IP address |**Add new**<br>**Name**: **fw-pip**|
-
-5. Accept the other default values, then select **Review + create**.
-6. Review the summary, and then select **Create** to create the firewall.
-
- This will take a few minutes to deploy.
-7. After deployment completes, select the **Go to resource**.
-8. Note the firewall private and public IP addresses. You'll use these addresses later.
+1. Go to the resource group and select the firewall.
+8. Note the firewall private and public IP addresses. You use these addresses later.
## Create a default route
-When creating a route for outbound and inbound connectivity through the firewall, a default route to 0.0.0.0/0 with the virtual appliance private IP as a next hop is sufficient. This will take care of any outgoing and incoming connections to go through the firewall. As an example, if the firewall is fulfilling a TCP-handshake and responding to an incoming request, then the response is directed to the IP address who sent the traffic. This is by design.
+When you create a route for outbound and inbound connectivity through the firewall, a default route to 0.0.0.0/0 with the virtual appliance private IP as a next hop is sufficient. This directs any outgoing and incoming connections through the firewall. As an example, if the firewall is fulfilling a TCP handshake and responding to an incoming request, the response is directed to the IP address that sent the traffic. This is by design.
-As a result, there is no need create an additional user defined route to include the AzureFirewallSubnet IP range. This may result in dropped connections. The original default route is sufficient.
+As a result, there's no need to create another user-defined route to include the AzureFirewallSubnet IP range. Doing so might result in dropped connections. The original default route is sufficient.
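The steps below configure this route in the portal. If you'd rather script it, the following is a rough sketch using the azure-mgmt-network Python SDK under the assumption that the route table already exists; the subscription ID, route table name, and firewall private IP are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Default route: send all traffic from the subnet to the firewall's private IP.
network_client.routes.begin_create_or_update(
    resource_group_name="Test-FW-RG",
    route_table_name="<route-table-name>",  # placeholder: the route table associated with Workload-SN
    route_name="fw-dg",
    route_parameters={
        "address_prefix": "0.0.0.0/0",
        "next_hop_type": "VirtualAppliance",
        "next_hop_ip_address": "<firewall-private-ip>",
    },
).result()
```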
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
After deployment completes, select **Go to resource**.
13. Select **OK**. 14. Select **Routes** and then select **Add**. 15. For **Route name**, type **fw-dg**.
-1. For **Address prefix destination**, select **IP Addresses**.
+1. For **Destination type**, select **IP Addresses**.
1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**. 1. For **Next hop type**, select **Virtual appliance**.
Now, test the firewall to confirm that it works as expected.
5. Browse to `https://www.microsoft.com`.
- You should be blocked by the firewall.
+ The firewall should block you.
-So now you've verified that the firewall rules are working:
+You've now verified that the firewall rules are working:
* You can connect to the virtual machine using RDP. * You can browse to the one allowed FQDN, but not to any others.
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
Create a subnet named **AzureFirewallSubnet** in the virtual network where your
### Create a new firewall for your cluster
-Create a firewall named **Test-FW01** using the steps in **Deploy the firewall** from [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md#deploy-the-firewall).
+Create a firewall named **Test-FW01** using the steps in **Create a virtual network** from [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md#create-a-virtual-network).
### Configure the firewall with application rules
integration-environments Create Application Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/create-application-group.md
Title: Create application groups to organize Azure resources description: Create an application group to logically organize and manage Azure resources related to your integration solutions.-+ Last updated 11/15/2023
integration-environments Create Business Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/create-business-process.md
Title: Create business processes to add business context description: Model a business process to add business context about transactions in Standard workflows created with Azure Logic Apps.-+ Last updated 11/15/2023
integration-environments Create Integration Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/create-integration-environment.md
Title: Create integration environments for Azure resources description: Create an integration environment to centrally organize and manage Azure resources related to your integration solutions.-+ Last updated 11/15/2023
integration-environments Deploy Business Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/deploy-business-process.md
Title: Deploy business process and tracking profile to Azure description: Deploy your business process and tracking profile for an application group in an integration environment to Standard logic apps in Azure.-+ Last updated 11/15/2023
integration-environments Manage Business Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/manage-business-process.md
Title: Manage business processes description: Learn how to edit the description, make a copy, discard pending changes, or delete the deployment for a business process in an application group.-+ Last updated 11/15/2023
integration-environments Map Business Process Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/map-business-process-workflow.md
Title: Map business processes to Standard workflows description: Map business process stages to operations in Standard workflows created with Azure Logic Apps.-+ Last updated 11/15/2023
integration-environments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/overview.md
Title: Overview description: Centrally organize Azure resources for integration solutions. Model and map business processes to Azure resources. Collect business data from deployed solutions.-+ Last updated 11/15/2023
iot-operations Concept Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-custom/concept-providers.md
The following solution snippet demonstrates installing a Helm chart using the He
}, "opcUaConnector": { "settings": {
- "discoveryUrl": "opc.tcp://opcplc-000000.alice-springs:50000",
+ "discoveryUrl": "opc.tcp://opcplc-000000:50000",
"authenticationMode": "Anonymous", "autoAcceptUnrustedCertificates": "true" }
iot-operations Howto Prepare Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md
An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure Io
To prepare your Azure Arc-enabled Kubernetes cluster, you need: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- At least **Contributor** role permissions in your subscription plus the **Microsoft.Authorization/roleAssignments/write** permission. - [Azure CLI version 2.42.0 or newer installed](/cli/azure/install-azure-cli) on your development machine. - Hardware that meets the [system requirements](/azure/azure-arc/kubernetes/system-requirements).
iot-operations Concept About State Store Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/concept-about-state-store-protocol.md
Title: About Azure IoT MQ state store protocol
description: Learn about the fundamentals of the Azure IoT MQ state store protocol
-#
+ - ignite-2023
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/howto-develop-dapr-apps.md
Title: Use Dapr to develop distributed application workloads
description: Develop distributed applications that talk with Azure IoT MQ using Dapr.
-#
+ - ignite-2023
To start, you create a yaml file that uses the following definitions:
> | Component | Description | > |-|-|
-> | `volumes.dapr-unit-domain-socket` | The socket file used to communicate with the Dapr sidecar |
-> | `volumes.mqtt-client-token` | The System Authentication Token used for authenticating the Dapr pluggable components with the MQ broker and State Store |
-> | `volumes.aio-mq-ca-cert-chain` | The chain of trust to validate the MQTT broker TLS cert |
-> | `containers.mq-event-driven` | The pre-built dapr application container. **Replace this with your own container if desired**. |
+> | `volumes.dapr-unix-domain-socket` | A shared directory to host unix domain sockets used to communicate between the Dapr sidecar and the pluggable components |
+> | `volumes.mqtt-client-token` | The System Authentication Token used for authenticating the Dapr pluggable components with the IoT MQ broker |
+> | `volumes.aio-ca-trust-bundle` | The chain of trust to validate the MQTT broker TLS cert. This defaults to the test certificate deployed with Azure IoT Operations |
+> | `containers.mq-dapr-app` | The Dapr application container you want to deploy |
1. Save the following yaml to a file named `dapr-app.yaml`:
To start, you create a yaml file that uses the following definitions:
apiVersion: apps/v1 kind: Deployment metadata:
- name: mq-event-driven-dapr
+ name: mq-dapr-app
namespace: azure-iot-operations spec: replicas: 1 selector: matchLabels:
- app: mq-event-driven-dapr
+ app: mq-dapr-app
template: metadata: labels:
- app: mq-event-driven-dapr
+ app: mq-dapr-app
annotations: dapr.io/enabled: "true" dapr.io/unix-domain-socket-path: "/tmp/dapr-components-sockets"
- dapr.io/app-id: "mq-event-driven-dapr"
+ dapr.io/app-id: "mq-dapr-app"
dapr.io/app-port: "6001" dapr.io/app-protocol: "grpc" spec:
To start, you create a yaml file that uses the following definitions:
containers: # Container for the dapr quickstart application
- - name: mq-event-driven-dapr
- image: ghcr.io/azure-samples/explore-iot-operations/mq-event-driven-dapr:latest
+ - name: mq-dapr-app
+ image: <YOUR DAPR APPLICATION>
# Container for the Pub/sub component - name: aio-mq-pubsub-pluggable
iot-operations Quickstart Add Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-add-assets.md
_OPC UA servers_ are software applications that communicate with assets. _OPC UA
Complete [Quickstart: Deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster](quickstart-deploy.md) before you begin this quickstart.
+To sign in to the Azure IoT Operations portal, you need a work or school account in the tenant where you deployed Azure IoT Operations. If you're currently using a Microsoft account (MSA), you need to create a Microsoft Entra ID account with at least Contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#azure-iot-operations-preview-portal).
+ Install the [mqttui](https://github.com/EdJoPaTo/mqttui) tool on the Ubuntu machine where you're running Kubernetes: ```bash
wget https://github.com/EdJoPaTo/mqttui/releases/download/v0.19.0/mqttui-v0.19.0
sudo dpkg -i mqttui-v0.19.0-x86_64-unknown-linux-gnu.deb ```
-Install the [k9s](https://k9scli.io/) tool on the Ubuntu machine where you're running Kubernetes:
-
-```bash
-sudo snap install k9s
-```
+> [!TIP]
+> If you're running the quickstart on another platform, you can use other MQTT tools such as [MQTT Explorer](https://apps.microsoft.com/detail/9PP8SFM082WD).
## What problem will we solve?
The data that OPC UA servers expose can have a complex structure and can be diff
## Sign into the Azure IoT Operations portal
-To create asset endpoints, assets and subscribe to OPC UA tags and events, use the Azure IoT Operations (preview) portal. Navigate to the [Azure IoT Operations](https://aka.ms/iot-operations-portal) portal in your browser and sign with your Microsoft Entra ID credentials.
+To create asset endpoints and assets, and to subscribe to OPC UA tags and events, use the Azure IoT Operations (preview) portal. Navigate to the [Azure IoT Operations](https://iotoperations.azure.com) portal in your browser and sign in with your Microsoft Entra ID credentials.
> [!IMPORTANT]
-> You must use an Microsoft Entra account, you can't use a Microsoft account (MSA) to sign in. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#azure-iot-operations-preview-portal).
+> You must use a work or school account to sign in to the Azure IoT Operations portal. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#azure-iot-operations-preview-portal).
## Select your cluster
This configuration deploys a new module called `opc-ua-connector-0` to the clust
When the OPC PLC simulator is running, data flows from the simulator, to the connector, to the OPC UA broker, and finally to the MQ broker.
-<!-- TODO: Verify if this is still required -->
- To enable the asset endpoint to use an untrusted certificate:
-> [!WARNING]
+> [!CAUTION]
> Don't use untrusted certificates in production environments.
-1. On the machine where your Kubernetes cluster is running, create a file called _doe.yaml_ with the following content:
-
- ```yaml
- apiVersion: deviceregistry.microsoft.com/v1beta1
- kind: AssetEndpointProfile
- metadata:
- name: opc-ua-connector-0
- namespace: azure-iot-operations
- spec:
- additionalConfiguration: |-
- {
- "applicationName": "opc-ua-connector-0",
- "defaults": {
- "publishingIntervalMilliseconds": 1000,
- "samplingIntervalMilliseconds": 500,
- "queueSize": 1,
- },
- "session": {
- "timeout": 60000
- },
- "subscription": {
- "maxItems": 1000,
- },
- "security": {
- "autoAcceptUntrustedServerCertificates": true
- }
- }
- targetAddress: opc.tcp://opcplc-000000.azure-iot-operations:50000
- transportAuthentication:
- ownCertificates: []
- userAuthentication:
- mode: Anonymous
- uuid: doe-opc-ua-connector-0
- ```
-
-1. Run the following command to apply the configuration:
+1. Run the following command to apply the configuration to use an untrusted certificate:
```bash
- kubectl apply -f doe.yaml
+ kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/opc-ua-connector-0.yaml
```
-1. Restart the `aio-opc-supervisor` pod:
+1. Restart the `aio-opc-supervisor` pod by using a command that looks like the following example:
```bash kubectl delete pod aio-opc-supervisor-956fbb649-k9ppr -n azure-iot-operations ```
- The name of your pod might be different. To find the name of your pod, run the following command:
+ The name of your `aio-opc-supervisor` pod will be different. To find the name of your pod, run the following command:
```bash kubectl get pods -n azure-iot-operations
To verify data is flowing from your assets by using the **mqttui** tool:
1. Run the following command to make the MQ broker accessible from your local machine: ```bash
- # Create Listener
- kubectl apply -f - <<EOF
- apiVersion: mq.iotoperations.azure.com/v1beta1
- kind: BrokerListener
- metadata:
- name: az-mqtt-non-tls-listener
- namespace: azure-iot-operations
- spec:
- brokerRef: broker
- authenticationEnabled: false
- authorizationEnabled: false
- port: 1883
- EOF
+ kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/az-mqtt-non-tls-listener.yaml
```
-1. Run the following command to set up port forwarding for the MQ broker. This command blocks the terminal, for subsequent commands you need a new terminal:
+ > [!CAUTION]
+ > This configuration exposes the MQ broker without TLS. Don't use this configuration in production environments.
+
+1. Run the following command to find the `EXTERNAL-IP` address that the non-TLS listener pod is using:
```bash
- kubectl port-forward svc/aio-mq-dmqtt-frontend 1883:mqtt-1883 -n azure-iot-operations
+ kubectl get svc aio-mq-dmqtt-frontend-nontls -n azure-iot-operations
```
-1. In a separate terminal window, run the following command to connect to the MQ broker using the **mqttui** tool:
+1. In a separate terminal window, run the following command to connect to the MQ broker using the **mqttui** tool. Replace the `<external-ip>` placeholder with the `EXTERNAL-IP` address that you found in the previous step:
```bash
- mqttui -b mqtt://127.0.0.1:1883
+ mqttui -b mqtt://<external-ip>:1883
``` 1. Verify that the thermostat asset you added is publishing data. You can find the telemetry in the `azure-iot-operations/data` topic. :::image type="content" source="media/quickstart-add-assets/mqttui-output.png" alt-text="Screenshot of the mqttui topic display showing the temperature telemetry.":::
- If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod. In the `k9s` tool, hover over the pod, and press _ctrl-k_ to kill a pod, the pod restarts automatically.
+ If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod by using a command that looks like the following example:
-The sample tags you added in the previous quickstart generate messages from your asset that look like the following samples:
+ ```bash
+ kubectl delete pod aio-opc-opc.tcp-1-849dd78866-vhmz6 -n azure-iot-operations
+ ```
+
+ The name of your `aio-opc-opc.tcp-1` pod will be different. To find the name of your pod, run the following command:
+
+ ```bash
+ kubectl get pods -n azure-iot-operations
+ ```
+
+The sample tags you added in the previous quickstart generate messages from your asset that look like the following examples:
```json {
aio-akri-otel-collector-5c775f745b-g97qv 1/1 Running 3 (4h15m ago)
aio-akri-agent-daemonset-mp6v7 1/1 Running 3 (4h15m ago) 2d23h ```
-On the machine where your Kubernetes cluster is running, create a file called _opcua-configuration.yaml_ with the following content:
-
-```yaml
-apiVersion: akri.sh/v0
-kind: Configuration
-metadata:
- name: akri-opcua-asset
-spec:
- discoveryHandler:
- name: opcua-asset
- discoveryDetails: "opcuaDiscoveryMethod:\n - asset:\n endpointUrl: \" opc.tcp://opcplc-000000:50000\"\n useSecurity: false\n autoAcceptUntrustedCertificates: true\n"
- brokerProperties: {}
- capacity: 1
-```
-
-Run the following command to apply the configuration:
+On the machine where your Kubernetes cluster is running, run the following command to apply a new configuration for the discovery handler:
```bash
-kubectl apply -f opcua-configuration.yaml -n azure-iot-operations
+kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/akri-opcua-asset.yaml
``` To verify the configuration, run the following command to view the Akri instances that represent the OPC UA data sources discovered by Akri:
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* At least **Contributor** role permissions in your subscription plus the **Microsoft.Authorization/roleAssignments/write** permission.
- * A [GitHub](https://github.com) account. # [Windows](#tab/windows) * An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* At least **Contributor** role permissions in your subscription plus the **Microsoft.Authorization/roleAssignments/write** permission.
- <!-- * Review the [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements) for other prerequisites, specifically the system and OS requirements. --> * Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* At least **Contributor** role permissions in your subscription plus the **Microsoft.Authorization/roleAssignments/write** permission.
- * Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). This quickstart requires Azure CLI version 2.42.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
iot-operations Quickstart Process Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-process-telemetry.md
Before you begin this quickstart, you must complete the following quickstarts:
- [Quickstart: Deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster](quickstart-deploy.md) - [Quickstart: Add OPC UA assets to your Azure IoT Operations cluster](quickstart-add-assets.md)
-You also need a Microsoft Fabric subscription. You can sign up for a free [Microsoft Fabric (Preview) Trial](/fabric/get-started/fabric-trial).
+You also need a Microsoft Fabric subscription. You can sign up for a free [Microsoft Fabric (Preview) Trial](/fabric/get-started/fabric-trial). In your Microsoft Fabric subscription, ensure that the following settings are enabled for your tenant:
+
+- [Allow service principals to use Power BI APIs](/fabric/admin/service-admin-portal-developer#allow-service-principals-to-use-power-bi-apis)
+- [Users can access data stored in OneLake with apps external to Fabric](/fabric/admin/service-admin-portal-onelake#users-can-access-data-stored-in-onelake-with-apps-external-to-fabric)
+
+To learn more, see [Microsoft Fabric > About tenant settings](/fabric/admin/tenant-settings-index).
## What problem will we solve?
az keyvault secret set --vault-name <your-key-vault-name> --name AIOFabricSecret
To add the secret reference to your Kubernetes cluster, edit the **aio-default-spc** `secretproviderclass` resource:
-1. Enter the following command on the machine where your cluster is running to launch the `k9s` utility:
+1. Enter the following command on the machine where your cluster is running to edit the **aio-default-spc** `secretproviderclass` resource. The YAML configuration for the resource opens in your default editor:
```bash
- k9s
+ kubectl edit secretproviderclass aio-default-spc -n azure-iot-operations
```
-1. In `k9s` type `:` to open the command bar.
-
-1. In the command bar, type `secretproviderclass` and then press _Enter_. Then select the `aio-default-spc` resource.
-
-1. Type `e` to edit the resource. The editor that opens is `vi`, use `i` to enter insert mode, _ESC_ to exit insert mode, and `:wq` to save and exit.
- 1. Add a new entry to the array of secrets for your new Azure Key Vault secret. The `spec` section looks like the following example: ```yaml
+ # Please edit the object below. Lines beginning with a '#' will be ignored,
+ # and an empty file will abort the edit. If an error occurs while saving this file will be
+ # reopened with the relevant failures.
+ #
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ creationTimestamp: "2023-11-16T11:43:31Z"
+ generation: 2
+ name: aio-default-spc
+ namespace: azure-iot-operations
+ resourceVersion: "89083"
+ uid: cda6add7-3931-47bd-b924-719cc862ca29
spec: parameters: keyvaultName: <this is the name of your key vault>
To add the secret reference to your Kubernetes cluster, edit the **aio-default-s
1. Save the changes and exit from the editor.
-The CSI driver updates secrets by using a polling interval, therefore the new secret isn't available to the pod until the polling interval is reached. To update the pod immediately, restart the pods for the component. For Data Processor, restart the `aio-dp-reader-worker-0` and `aio-dp-runner-worker-0` pods. In the `k9s` tool, hover over the pod, and press _ctrl-k_ to kill a pod, the pod restarts automatically
+The CSI driver updates secrets by using a polling interval, so the new secret isn't available to the pod until the polling interval is reached. To update the pod immediately, restart the pods for the component. For Data Processor, run the following commands:
+
+```bash
+kubectl delete pod aio-dp-reader-worker-0 -n azure-iot-operations
+kubectl delete pod aio-dp-runner-worker-0 -n azure-iot-operations
+```
## Create a basic pipeline
Create a basic pipeline to pass through the data to a separate MQTT topic.
In the following steps, leave all values at their default unless otherwise specified:
-1. In the [Azure IoT Operations portal](https://aka.ms/iot-operations-portal), navigate to **Data pipelines** in your cluster.
+1. In the [Azure IoT Operations portal](https://iotoperations.azure.com), navigate to **Data pipelines** in your cluster.
1. To create a new pipeline, select **+ Create pipeline**.
Create a reference data pipeline to temporarily store reference data in a refere
In the following steps, leave all values at their default unless otherwise specified:
-1. In the [Azure IoT Operations portal](https://aka.ms/iot-operations-portal), navigate to **Data pipelines** in your cluster.
+1. In the [Azure IoT Operations portal](https://iotoperations.azure.com), navigate to **Data pipelines** in your cluster.
1. Select **+ Create pipeline** to create a new pipeline.
After you publish the message, the pipeline receives the message and stores the
Create a Data Processor pipeline to process and enrich your data before it sends it to your Microsoft Fabric lakehouse. This pipeline uses the data stored in the equipment data reference data set to enrich messages.
-1. In the [Azure IoT Operations portal](https://aka.ms/iot-operations-portal), navigate to **Data pipelines** in your cluster.
+1. In the [Azure IoT Operations portal](https://iotoperations.azure.com), navigate to **Data pipelines** in your cluster.
1. Select **+ Create pipeline** to create a new pipeline.
iot-operations Concept Akri Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/concept-akri-architecture.md
Title: Azure IoT Akri architecture
description: Understand the key components in Azure IoT Akri Preview architecture.
-#
+ - ignite-2023
iot-operations Howto Autodetect Opcua Assets Using Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-autodetect-opcua-assets-using-akri.md
Title: Discover OPC UA data sources using Azure IoT Akri
description: How to discover OPC UA data sources by using Azure IoT Akri Preview
-#
+ Last updated 11/14/2023
iot-operations Howto Configure Opc Plc Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-configure-opc-plc-simulator.md
Title: Configure an OPC PLC simulator
description: How to configure an OPC PLC simulator
-#
+ - ignite-2023
iot-operations Howto Configure Opcua Authentication Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-configure-opcua-authentication-options.md
Title: Configure OPC UA authentication options
description: How to configure OPC UA authentication options to use with Azure IoT OPC UA Broker
-#
+ - ignite-2023
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-manage-assets-remotely.md
To configure an assets endpoint, you need a running instance of Azure IoT Operat
## Sign in to the Azure IoT Operations portal
-Navigate to the [Azure IoT Operations portal](https://aka.ms/iot-operations-portal) in your browser and sign in by using your Microsoft Entra ID credentials.
+Navigate to the [Azure IoT Operations portal](https://iotoperations.azure.com) in your browser and sign in by using your Microsoft Entra ID credentials.
## Select your cluster
The following script shows how to create a secret for the username and password
```sh # NAMESPACE is the namespace containing the MQ broker.
-export NAMESPACE="alice-springs-solution"
+export NAMESPACE="azure-iot-operations"
# Set the desired username and password here. export USERNAME="username"
iot-operations Overview Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/overview-akri.md
Title: Detect assets with Azure IoT Akri
description: Understand how Azure IoT Akri enables you to discover devices and assets at the edge, and expose them as resources on your cluster.
-#
+ - ignite-2023
iot-operations Overview Opcua Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/overview-opcua-broker.md
Title: Connect industrial assets using Azure IoT OPC UA Broker
description: Use the Azure IoT OPC UA Broker to connect to OPC UA servers and exchange telemetry with a Kubernetes cluster.
-#
+ - ignite-2023
iot-operations Howto Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/monitor/howto-configure-diagnostics.md
Title: Configure Azure IoT MQ diagnostics service
description: How to configure Azure IoT MQ diagnostics service. + - ignite-2023
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/send-view-analyze-data/tutorial-event-driven-with-dapr.md
Title: Build event-driven apps with Dapr # description: Learn how to create a Dapr application that aggregates data and publishing on another topic--+++ Last updated 11/13/2023 #CustomerIntent: As an operator, I want to configure IoT MQ to bridge to Azure Event Grid MQTT broker PaaS so that I can process my IoT data at the edge and in the cloud.
-# Tutorial: Build event-driven apps with Dapr
+# Build event-driven apps with Dapr
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-In this tutorial, you learn how to subscribe to sensor data on an MQTT topic, and aggregate the data in a sliding window to then publish to a new topic.
+In this walkthrough, you deploy a Dapr application to the cluster. The Dapr application consumes simulated MQTT data published to Azure IoT MQ, applies a windowing function, and then publishes the result back to IoT MQ. This represents how high-volume data can be aggregated on the edge to reduce message frequency and size. The Dapr application is stateless and uses the IoT MQ state store to cache past values needed for the window calculations.
-The Dapr application in this tutorial is stateless. It uses the Distributed State Store to cache historical data used for the sliding window calculations.
+The Dapr application performs the following steps:
-The application subscribes to the topic `sensor/data` for incoming sensor data, and then publishes to `sensor/window_data` every 60 seconds.
+1. Subscribes to the `sensor/data` topic for sensor data.
+1. When data is received on this topic, it's pushed to the Azure IoT MQ state store.
+1. Every **10 seconds**, it fetches the data from the state store and calculates the *min*, *max*, *mean*, *median*, and *75th percentile* values for any sensor data timestamped in the last **30 seconds**.
+1. Data older than **30 seconds** is expired from the state store.
+1. The result is published to the `sensor/window_data` topic in JSON format.
-> [!TIP]
-> This tutorial [disables Dapr CloudEvents](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-raw/) which enables it to publish and subscribe to raw MQTT events.
+> [!NOTE]
+> This tutorial [disables Dapr CloudEvents](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-raw/), which enables the application to publish and subscribe using raw MQTT messages.
## Prerequisites
-1. [Deploy Azure IoT Operations](../get-started/quickstart-deploy.md)
-1. [Setup Dapr and MQ Pluggable Components](../develop/howto-develop-dapr-apps.md)
-1. [Docker](https://docs.docker.com/engine/install/) - for building the application container
-1. A Container registry - for hosting the application container
-
-## Create the Dapr application
-
-> [!TIP]
-> For convenience, a pre-built application container is available in the container registry `ghcr.io/azure-samples/explore-iot-operations/mq-event-driven-dapr`. You can use this container to follow along if you haven't built your own.
-
-### Build the container
-
-The following steps clone the GitHub repository containing the sample and then use docker to build the container:
-
-1. Clone the [Explore IoT Operations GitHub](https://github.com/Azure-Samples/explore-iot-operations)
-
- ```bash
- git clone https://github.com/Azure-Samples/explore-iot-operations
- ```
-
-1. Change to the Dapr sample directory and build the image
-
- ```bash
- cd explore-iot-operations/tutorials/mq-event-driven-dapr
- docker build docker build . -t mq-event-driven-dapr
- ```
-
-### Push to container registry
-
-To consume the application in your Kubernetes cluster, you need to push the image to a container registry such as the [Azure Container Registry](/azure/container-registry/container-registry-get-started-docker-cli). You could also push to a local container registry such as [minikube](https://minikube.sigs.k8s.io/docs/handbook/registry/) or [Docker](https://hub.docker.com/_/registry).
-
-| Component | Description |
-|-|-|
-| `container-alias` | The image alias containing the fully qualified path to your registry |
-
-```bash
-docker tag mq-event-driven-dapr {container-alias}
-docker push {container-alias}
-```
+* Azure IoT Operations installed - [Deploy Azure IoT Operations](../get-started/quickstart-deploy.md)
+* Dapr runtime and MQ's pluggable components installed - [Use Dapr to develop distributed application workloads](../develop/howto-develop-dapr-apps.md)
## Deploy the Dapr application
To start, create a yaml file that uses the following definitions:
| `volumes.dapr-unix-domain-socket` | The socket file used to communicate with the Dapr sidecar |
| `volumes.mqtt-client-token` | The SAT used for authenticating the Dapr pluggable components with the MQ broker and State Store |
| `volumes.aio-mq-ca-cert-chain` | The chain of trust to validate the MQTT broker TLS cert |
-| `containers.mq-event-driven` | The prebuilt dapr application container. **Replace this with your own container if desired**. |
+| `containers.mq-event-driven` | The prebuilt dapr application container. |
1. Save the following deployment yaml to a file named `app.yaml`:
To start, create a yaml file that uses the following definitions:
dapr.io/app-port: "6001" dapr.io/app-protocol: "grpc" spec:
+ serviceAccountName: mqtt-client
+ volumes: - name: dapr-unix-domain-socket emptyDir: {}
To start, create a yaml file that uses the following definitions:
## Deploy the simulator
-The repository contains a deployment for a simulator that generates sensor data to the `sensor/data` topic.
+Simulate test data by deploying a Kubernetes workload. It simulates a sensor by sending sample temperature, vibration, and pressure readings periodically to the MQ broker using an MQTT client on the `sensor/data` topic.
+
+1. [Download the simulator yaml](https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/tutorials/mq-event-driven-dapr/simulate-data.yaml) from the Explore IoT Operations repository.
1. Deploy the simulator: ```bash
- kubectl apply -f ./yaml/simulate-data.yaml
+ kubectl apply -f simulate-data.yaml
``` 1. Confirm the simulator is running:
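One hedged way to confirm that the simulator pod reached the `Running` state, assuming it deploys into the `azure-iot-operations` namespace used elsewhere in this walkthrough:

```bash
# List pods in the namespace and look for the simulator pod in the Running state.
kubectl get pods -n azure-iot-operations
```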
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
{"timestamp": "2023-11-14T05:21:49.807684+00:00", "window_size": 30, "temperature": {"min": 551.805, "max": 599.746, "mean": 579.929, "median": 581.917, "75_per": 591.678, "count": 29}, "pressure": {"min": 290.361, "max": 299.949, "mean": 295.98575862068964, "median": 296.383, "75_per": 298.336, "count": 29}, "vibration": {"min": 0.00114438, "max": 0.00497965, "mean": 0.0033943155172413792, "median": 0.00355337, "75_per": 0.00433423, "count": 29}} ```
+## Optional - Create the Dapr application
+
+The preceding steps use a prebuilt container of the Dapr application. If you want to modify and build the code yourself, follow these steps:
+
+### Prerequisites
+
+1. [Docker](https://docs.docker.com/engine/install/) - for building the application container
+1. A Container registry - for hosting the application container
+
+### Build the application
+
+1. Check out the Explore IoT Operations repository:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/explore-iot-operations
+ ```
+
+1. Change to the Dapr tutorial directory in the [Explore IoT Operations](https://github.com/Azure-Samples/explore-iot-operations) repository:
+
+ ```bash
+ cd explore-iot-operations/tutorials/mq-event-driven-dapr/src
+ ```
+
+1. Build the docker image:
+
+ ```bash
+    docker build . -t mq-event-driven-dapr
+ ```
+
+1. To consume the application in your Kubernetes cluster, you need to push the image to a container registry such as the [Azure Container Registry](/azure/container-registry/container-registry-get-started-docker-cli). You could also push to a local container registry such as [minikube](https://minikube.sigs.k8s.io/docs/handbook/registry/) or [Docker](https://hub.docker.com/_/registry).
+
+ ```bash
+ docker tag mq-event-driven-dapr <container-alias>
+ docker push <container-alias>
+ ```
+
+1. Update your `app.yaml` to pull your newly created image.
## Troubleshooting

If the application doesn't start or you see the pods in `CrashLoopBackoff`, the logs for `daprd` are most helpful. `daprd` is a sidecar container that's automatically deployed with your Dapr application.
Run the following command to view the logs:
kubectl logs dapr-workload daprd ```
-## Related content
+## Next steps
-- [Use Dapr to develop distributed application workloads](../develop/howto-develop-dapr-apps.md)
+* [Bridge MQTT data between IoT MQ and Azure Event Grid](tutorial-connect-event-grid.md)
iot-operations Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/known-issues.md
kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations
## Azure IoT Operations (preview) portal -- To sign in to the Azure IoT Operations portal, you need a Microsoft Entra ID. You can't sign in with a Microsoft account (MSA). To create an Entra ID in your Azure tenant:
+To sign in to the Azure IoT Operations portal, you need a Microsoft Entra ID account with at least **Contributor** permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. You can't sign in with a Microsoft account (MSA). To create a Microsoft Entra user in your Azure tenant:
- 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with the same tenant and user name that you used to deploy Azure IoT Operations.
- 1. Create a new identity using Entra Identity and grant it at least **Contributor** permissions to the resource group that contains your cluster and Azure IoT Operations deployment.
- 1. Return to the [Azure IoT Operations portal](https://iotoperations.azure.com) and use the new account to sign in.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with the same tenant and user name that you used to deploy Azure IoT Operations.
+1. In the Azure portal, navigate to the **Microsoft Entra ID** section and select **Users > +New user > Create new user**. Create a new user and make a note of the password; you need it to sign in later.
+1. In the Azure portal, navigate to the resource group that contains your **Kubernetes - Azure Arc** instance. On the **Access control (IAM)** page, select **+Add > Add role assignment**.
+1. On the **Add role assignment** page, select **Privileged administrator roles**. Then select **Contributor**, and select **Next**.
+1. On the **Members** page, add your new user to the role.
+1. Select **Review and assign** to complete setting up the new user.
+
+You can now use the new user account to sign in to the [Azure IoT Operations portal](https://iotoperations.azure.com).
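If you prefer the Azure CLI over the portal steps above, a roughly equivalent sketch of the role assignment is shown below; the user principal name, subscription ID, and resource group name are placeholders:

```azurecli-interactive
# Grant the new user Contributor on the resource group that contains the Arc-enabled cluster.
az role assignment create \
  --assignee "new-user@contoso.com" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```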
iot-operations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md
The output from the previous command looks like the following example:
```text NAMESPACE NAME AGE
-alice-springs-solution passthrough-data-pipeline 2d20h
-alice-springs-solution reference-data-pipeline 2d20h
-alice-springs-solution contextualized-data-pipeline 2d20h
+azure-iot-operations passthrough-data-pipeline 2d20h
+azure-iot-operations reference-data-pipeline 2d20h
+azure-iot-operations contextualized-data-pipeline 2d20h
``` To view detailed information for a pipeline, run the following command: ```bash
-kubectl describe pipelines passthrough-data-pipeline -n alice-springs-solution
+kubectl describe pipelines passthrough-data-pipeline -n azure-iot-operations
```
-The output from the pervious command looks like the following example:
+The output from the previous command looks like the following example:
```text ...
kubernetes-fleet Architectural Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/architectural-overview.md
# Architectural overview of Azure Kubernetes Fleet Manager
-Azure Kubernetes Fleet Manager (Fleet) is meant to solve at-scale and multi-cluster problems of Azure Kubernetes Service (AKS) clusters. This document provides an architectural overview of topological relationship between a Fleet resource and AKS clusters. This document also provides a conceptual overview of scenarios available on top of Fleet resource like Kubernetes resource propagation and multi-cluster Layer-4 load balancing.
+Azure Kubernetes Fleet Manager (Fleet) solves at-scale and multi-cluster problems of Azure Kubernetes Service (AKS) clusters. This document provides an architectural overview of the relationship between a Fleet resource and AKS clusters. It also provides a conceptual overview of the scenarios available on top of the Fleet resource, including update orchestration, Kubernetes resource propagation (preview), and multi-cluster Layer-4 load balancing (preview).
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)]
Fleet supports joining the following types of existing AKS clusters as member cl
* AKS clusters across different subscriptions of the same Microsoft Entra tenant * AKS clusters from different regions but within the same tenant
-You can join up to 100 AKS clusters as member clusters to the same fleet resource.
+If you want to use fleet only for the update orchestration scenario, you can create a fleet resource without the hub cluster. The fleet resource is treated just as a grouping resource, and does not have its own data plane. This is the default behavior when creating a new fleet resource. In this case, you can join up to 100 AKS clusters as member clusters to the same fleet resource.
-If you want to use fleet only for the update orchestration scenario, you can create a fleet resource without the hub cluster. The fleet resource is treated just as a grouping resource, and does not have its own data plane. This is the default behavior when creating a new fleet resource.
+If you want to use fleet for Kubernetes object propagation (preview) and multi-cluster load balancing (preview) in addition to update orchestration, then you need to create the fleet resource with the hub cluster enabled. In this case, you can join up to 20 AKS clusters as member clusters to the same fleet resource.
-If you want to use fleet for Kubernetes object propagation and multi-cluster load balancing in addition to update orchestration, then you need to create the fleet resource with the hub cluster enabled. If you have a hub cluster data plane for the fleet, you can use it to check the member clusters joined.
+Note that once a fleet resource has been created, it is not possible to change the hub mode for the fleet resource.
-Once a cluster is joined to a fleet resource, a MemberCluster custom resource is created on the fleet. Note that once a fleet resource has been created, it is not possible to change the hub mode (with/without) for the fleet resource.
+## Update orchestration across multiple clusters
+
+Platform admins managing Kubernetes fleets with a large number of clusters often have problems with staging their updates safely and predictably across multiple clusters. To address this pain point, Kubernetes Fleet Manager (Fleet) allows you to orchestrate updates across multiple clusters using update runs, stages, groups, and strategies.
++
+* **Update run**: An update run represents an update being applied to a collection of AKS clusters. An update run updates clusters in a predictable fashion by defining update stages and update groups. An update run can be stopped and started.
+* **Update stage**: Update runs are divided into stages, which are applied sequentially. For example, a first update stage might update test environment member clusters, and a second update stage would then update production environment member clusters. A wait time can be specified to delay the application of subsequent update stages.
+* **Update group**: Each update stage contains one or more update groups, which are used to select the member clusters to be updated. Update groups are also used to order the application of updates to member clusters. Within an update stage, updates are applied to all the different update groups in parallel; within an update group, member clusters update sequentially. Each member cluster of the fleet can only be a part of one update group.
+* **Update strategy**: Update strategies allow you to store templates for your update runs instead of creating them individually for each update run.
+
+Currently, the supported update operations on the cluster are upgrades. Within upgrades, you can either upgrade both the Kubernetes control plane version and the node image, or you can choose to upgrade only the node image. The latest available node image for a given cluster may vary based on its region (check the [release tracker](../aks/release-tracker.md) for more information). Node image upgrades currently support upgrading each cluster to the latest node image available in its region, or applying a consistent node image across all clusters of the update run, regardless of the cluster regions. In the latter case, the update run picks the **latest common** image across all those regions to achieve consistency.
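For illustration, an update run for each of these upgrade types might be created with commands along these lines. The flag names (`--upgrade-type`, `--kubernetes-version`, `--node-image-selection`) are assumptions based on the fleet CLI extension and may differ from the exact syntax shown in the source article:

```azurecli-interactive
# Full upgrade: control plane to 1.26.0 plus the latest node image available per region.
az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-1 \
  --upgrade-type Full --kubernetes-version 1.26.0 --node-image-selection Latest

# Node image only, using the latest image common to all member cluster regions.
az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-2 \
  --upgrade-type NodeImageOnly --node-image-selection Consistent
```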
+
+## Member cluster representation on the hub
+
+ Once each cluster is joined to a fleet resource, a corresponding MemberCluster custom resource is created on the fleet hub.
The member clusters can be viewed by running the following command:
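The command itself is collapsed in this excerpt; a minimal sketch, assuming you have kubeconfig credentials for the fleet hub cluster, is:

```bash
# Run against the fleet hub cluster's Kubernetes API to list the MemberCluster resources.
kubectl get memberclusters
```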
The following labels are added automatically to all member clusters, which can t
* `fleet.azure.com/resource-group` * `fleet.azure.com/subscription-id`
-## Update orchestration across multiple clusters
-
-Platform admins managing Kubernetes fleets with large number of clusters often have problems with staging their updates in a safe and predictable way across multiple clusters. To address this pain point, Fleet allows orchestrating updates across multiple clusters using update runs, stages, and groups.
--
-* **Update group**: A group of AKS clusters for which updates are done sequentially one after the other. Each member cluster of the fleet can only be a part of one update group.
-* **Update stage**: Update stages allow pooling together update groups for which the updates need to be run in parallel. It can be used to define wait time between two different collections of update groups.
-* **Update run**: An update being applied to a collection of AKS clusters in a sequential or stage-by-stage manner. An update run can be stopped and started. An update run can either upgrade clusters one-by-one or in a stage-by-stage fashion using update stages and update groups.
-* **Update strategy**: Update strategy allows you to store templates for your update runs instead of creating them individually for each update run.
-
-Currently, the only supported update operations on the cluster are upgrades. Within upgrades, you can either upgrade both the Kubernetes control plane version and the node image or you can choose to upgrade only the node image. Node image upgrades currently only allow upgrading to either the latest available node image for each cluster, or applying the same consistent node image across all clusters of the update run. As it's possible for an update run to have AKS clusters across multiple regions where the latest available node images can be different (check [release tracker](../aks/release-tracker.md) for more information). The update run picks the **latest common** image across all these regions to achieve consistency.
- ## Kubernetes resource propagation Fleet provides `ClusterResourcePlacement` as a mechanism to control how cluster-scoped Kubernetes resources are propagated to member clusters. For more details, see the [resource propagation documentation](resource-propagation.md).
kubernetes-fleet Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/faq.md
This article covers the frequently asked questions for Azure Kubernetes Fleet Ma
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since the Kubernetes control plane is managed by Azure, you only manage and maintain the agent nodes. You run your actual workloads on the AKS clusters.
-Azure Kubernetes Fleet Manager (Fleet) will help you address at-scale and multi-cluster scenarios for Azure Kubernetes Service clusters. Azure Kubernetes Fleet Manager only provides a group representation for your AKS clusters and helps users with orchestrating Kubernetes resource propagation and multi-cluster load balancing. User workloads can't be run on the fleet cluster itself.
+Azure Kubernetes Fleet Manager (Fleet) helps you address at-scale and multi-cluster scenarios for Azure Kubernetes Service clusters. Azure Kubernetes Fleet Manager provides a group representation for your AKS clusters and helps users with orchestrating cluster updates, Kubernetes resource propagation, and multi-cluster load balancing. User workloads can't be run on the fleet cluster itself.
## Creation of AKS clusters from fleet resource
-The current preview of Azure Kubernetes Fleet Manager resource supports joining only existing AKS clusters as member. Creation and lifecycle management of new AKS clusters from fleet cluster is in the [roadmap](https://aka.ms/fleet/roadmap).
+Today, Azure Kubernetes Fleet Manager supports joining existing AKS clusters as fleet members. Creation and lifecycle management of new AKS clusters from the fleet resource is on the [roadmap](https://aka.ms/fleet/roadmap).
## Number of clusters
-During preview, you can join up to 20 AKS clusters as member clusters to the same fleet resource.
+The number of member clusters that can be joined to the same fleet resource depends on whether the fleet resource has a hub cluster or not. Fleets without a hub cluster support joining up to 100 AKS clusters. Fleet resources with a hub cluster support joining up to 20 AKS clusters.
## AKS clusters that can be joined as members
Fleet supports joining the following types of AKS clusters as member clusters:
## Relationship to Azure-Arc enabled Kubernetes
-The current preview of Azure Kubernetes Fleet Manager resource supports joining only AKS clusters as member clusters. Support for joining member clusters to the fleet resource is in the [roadmap](https://aka.ms/fleet/roadmap).
+Today, Azure Kubernetes Fleet Manager supports joining only AKS clusters as member clusters. Support for joining Azure Arc-enabled Kubernetes clusters as member clusters is on the [roadmap](https://aka.ms/fleet/roadmap).
## Regional or global
The roadmap for Azure Kubernetes Fleet Manager resource is available [here](http
## Next steps
-* Create an [Azure Kuberntes Fleet Manager resource and join member clusters](./quickstart-create-fleet-and-members.md)
+* Create an [Azure Kubernetes Fleet Manager resource and join member clusters](./quickstart-create-fleet-and-members.md)
kubernetes-fleet Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/overview.md
keywords: "Kubernetes, Azure, multi-cluster, multi, containers"
# What is Azure Kubernetes Fleet Manager?
-Azure Kubernetes Fleet Manager (Fleet) enables multi-cluster and at-scale scenarios for Azure Kubernetes Service (AKS) clusters. A Fleet resource creates a cluster that can be used to manage other member clusters.
-
-Fleet supports the following scenarios:
+Azure Kubernetes Fleet Manager (Fleet) enables at-scale management of multiple Azure Kubernetes Service (AKS) clusters. Fleet supports the following scenarios:
* Create a Fleet resource and group AKS clusters as member clusters.
-* Create Kubernetes resource objects on the Fleet resource's cluster and control their propagation to all or a subset of all member clusters.
-
-* Load balance incoming L4 traffic across service endpoints on multiple clusters
+* Orchestrate Kubernetes version and node image upgrades across multiple clusters by using update runs, stages, and groups.
-* Export a service from one member cluster to the Fleet resource. Once successfully exported, the service and its endpoints are synced to the hub, which other member clusters (or any Fleet resource-scoped load balancer) can consume.
+* Create Kubernetes resource objects on the Fleet resource's hub cluster and control their propagation to member clusters (preview).
-* Orchestrate Kubernetes version and node image upgrades across multiple clusters by using update runs, stages, and groups.
+* Export and import services between member clusters, and load balance incoming L4 traffic across service endpoints on multiple clusters (preview).
## Next steps
kubernetes-fleet Quickstart Create Fleet And Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md
The following output example resembles successful creation of the resource group
## Create a fleet resource
-You can create a fleet resource to later group your AKS clusters as member clusters. This resource enables multi-cluster scenarios such as update orchestration across clusters, Kubernetes object propagation to member clusters, and north-south load balancing across endpoints deployed on multiple member clusters.
+You can create a fleet resource to later group your AKS clusters as member clusters. By default, this resource enables member cluster grouping and update orchestration. If the fleet hub is enabled, additional preview features become available, such as Kubernetes object propagation to member clusters and L4 service load balancing across multiple member clusters.
> [!IMPORTANT]
-> As of now, once a fleet resource has been created, it is not possible to change the hub mode (with/without) for the fleet resource.
+> As of now, once a fleet resource has been created, it is not possible to change the hub mode for the fleet resource.
### Update orchestration only (default)
-If you want to use Fleet only for update orchestration scenario, you can create a fleet resource without the hub cluster using the [az fleet create](/cli/azure/fleet#az-fleet-create) command. This is the default experience when creating a new fleet resource.
+If you want to use Fleet only for update orchestration, you can create a fleet resource without the hub cluster using the [az fleet create](/cli/azure/fleet#az-fleet-create) command. This is the default experience when creating a new fleet resource.
```azurecli-interactive az fleet create --resource-group ${GROUP} --name ${FLEET} --location eastus
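For comparison, a hedged sketch of creating a fleet resource with the hub cluster enabled, assuming the `--enable-hub` flag from the fleet CLI extension:

```azurecli-interactive
# Creates a fleet with a hub cluster, enabling resource propagation and L4 load balancing (preview).
az fleet create --resource-group ${GROUP} --name ${FLEET} --location eastus --enable-hub
```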
Fleet currently supports joining existing AKS clusters as member clusters.
**Create AKS clusters** ```azurecli-interactive
- export MEMBER_CLUSTER_1=aks-member-1
+ export MEMBER_NAME_1=aks-member-1
az aks create \ --resource-group ${GROUP} \ --location eastus \
- --name ${MEMBER_CLUSTER_1} \
+ --name ${MEMBER_NAME_1} \
--node-count 1 \ --network-plugin azure \ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${FIRST_VNET}/subnets/${MEMBER_1_SUBNET}" ``` ```azurecli-interactive
- export MEMBER_CLUSTER_2=aks-member-2
+ export MEMBER_NAME_2=aks-member-2
az aks create \ --resource-group ${GROUP} \ --location eastus \
- --name ${MEMBER_CLUSTER_2} \
+ --name ${MEMBER_NAME_2} \
--node-count 1 \ --network-plugin azure \ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${FIRST_VNET}/subnets/${MEMBER_2_SUBNET}" ``` ```azurecli-interactive
- export MEMBER_CLUSTER_3=aks-member-3
+ export MEMBER_NAME_3=aks-member-3
az aks create \ --resource-group ${GROUP} \ --location westcentralus \
- --name ${MEMBER_CLUSTER_3} \
+ --name ${MEMBER_NAME_3} \
--node-count 1 \ --network-plugin azure \ --vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.Network/virtualNetworks/${SECOND_VNET}/subnets/${MEMBER_3_SUBNET}"
Fleet currently supports joining existing AKS clusters as member clusters.
1. Set the following environment variables for members: ```azurecli-interactive
- export MEMBER_CLUSTER_ID_1=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_1}
export MEMBER_NAME_1=aks-member-1
+ export MEMBER_CLUSTER_ID_1=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_NAME_1}
- export MEMBER_CLUSTER_ID_2=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_2}
export MEMBER_NAME_2=aks-member-2
+ export MEMBER_CLUSTER_ID_2=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_NAME_2}
- export MEMBER_CLUSTER_ID_3=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_CLUSTER_3}
export MEMBER_NAME_3=aks-member-3
+ export MEMBER_CLUSTER_ID_3=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_NAME_3}
``` 1. Join these clusters to the Fleet resource using the following commands:
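The join commands themselves are elided in this excerpt; a sketch for the first member, using the variables defined above, might look like the following (repeat with the corresponding variables for each remaining member):

```azurecli-interactive
# Join the first AKS cluster to the fleet as a member cluster.
az fleet member create \
  --resource-group ${GROUP} \
  --fleet-name ${FLEET} \
  --name ${MEMBER_NAME_1} \
  --member-cluster-id ${MEMBER_CLUSTER_ID_1}
```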
To access the Fleet cluster's Kubernetes API, run the following commands:
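The exact commands are elided here; a minimal sketch for a hub-enabled fleet is:

```azurecli-interactive
# Fetch kubeconfig credentials for the fleet hub, then list the joined member clusters.
az fleet get-credentials --resource-group ${GROUP} --name ${FLEET}
kubectl get memberclusters
```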
## Next steps
-* Learn how to use [Kubernetes resource objects propagation](./resource-propagation.md)
+* Learn how to use [Update orchestration](./update-orchestration.md)
+* Learn how to use [Kubernetes resource object propagation](./resource-propagation.md)
kubernetes-fleet Update Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/update-orchestration.md
# Orchestrate updates across multiple clusters by using Azure Kubernetes Fleet Manager
-Platform admins managing Kubernetes fleets with large number of clusters often have problems with staging their updates in a safe and predictable way across multiple clusters. To address this pain point, Kubernetes Fleet Manager (Fleet) allows you to orchestrate updates across multiple clusters using update runs, stages, and groups.
+Platform admins managing Kubernetes fleets with a large number of clusters often have problems with staging their updates safely and predictably across multiple clusters. To address this pain point, Kubernetes Fleet Manager (Fleet) allows you to orchestrate updates across multiple clusters using update runs, stages, groups, and strategies.
:::image type="content" source="./media/update-orchestration/fleet-overview-inline.png" alt-text="Screenshot of the Azure portal pane for a fleet resource, showing member cluster Kubernetes versions and node images in use across all node pools of member clusters." lightbox="./media/update-orchestration/fleet-overview-lightbox.png":::
az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run
## Update clusters in a specific order
-Update groups and stages provide more control over the sequence that update runs follow when you're updating the clusters.
+Update groups and stages provide more control over the sequence that update runs follow when you're updating the clusters. Within an update stage, updates are applied to all the different update groups in parallel; within an update group, member clusters update sequentially.
### Assign a cluster to an update group
az fleet member update --resource-group $GROUP --fleet-name $FLEET --name member
### Define an update run and stages
-You can define an update run by using update stages to pool together update groups for whom the updates need to be run in parallel. You can also specify a wait time between the update stages.
+You can define an update run using update stages to sequence the application of updates to different update groups. For example, a first update stage might update test environment member clusters, and a second update stage would then update production environment member clusters. You can also specify a wait time between the update stages.
#### [Azure portal](#tab/azure-portal)
logic-apps Connect Virtual Network Vnet Set Up Single Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-set-up-single-ip-address.md
This topic shows how to route outbound traffic through an Azure Firewall, but yo
## Prerequisites
-* An Azure firewall that runs in the same virtual network as your ISE. If you don't have a firewall, first [add a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) that's named `AzureFirewallSubnet` to your virtual network. You can then [create and deploy a firewall](../firewall/tutorial-firewall-deploy-portal.md#deploy-the-firewall) in your virtual network.
+* An Azure firewall that runs in the same virtual network as your ISE. If you don't have a firewall, first [add a subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) that's named `AzureFirewallSubnet` to your virtual network. You can then [create and deploy a firewall](../firewall/tutorial-firewall-deploy-portal.md#create-a-virtual-network) in your virtual network.
* An Azure [route table](../virtual-network/manage-route-table.md). If you don't have one, first [create a route table](../virtual-network/manage-route-table.md#create-a-route-table). For more information about routing, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md).
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 10/10/2023 Last updated : 11/15/2023 # Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
As you progress, you'll complete these high-level tasks:
### Access and connectivity
-* Access to the internet so that you can download the requirements, connect from Visual Studio Code to your Azure account, and publish from Visual Studio Code to Azure.
+* If you plan to locally build Standard logic app projects and run workflows using only the [built-in connectors](../connectors/built-in.md) that run natively on the Azure Logic Apps runtime, you don't need the following requirements. However, make sure that you have the following connectivity and Azure account credentials to publish or deploy your project from Visual Studio Code to Azure, use the [managed connectors](../connectors/managed.md) that run in global Azure, or access Standard logic app resources and workflows already deployed in Azure:
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ * Access to the internet so that you can download the requirements, connect from Visual Studio Code to your Azure account, and publish from Visual Studio Code to Azure.
+
+ * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in. If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
-<a name="storage-requirements"></a>
+### Tools
-### Storage requirements
+1. Download and install [Visual Studio Code](https://code.visualstudio.com/), which is free.
-For local development in Visual Studio Code, you need to set up a local data store for your logic app project and workflows to use for running in your local development environment. You can use and run the Azurite storage emulator as your local data store.
+1. Download and install the [Azure Account extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) so that you have a single common experience for Azure sign-in and subscription filtering across all Azure extensions in Visual Studio Code. This how-to guide includes steps that use this experience.
-1. Download and install [Azurite 3.12.0 or later](https://www.npmjs.com/package/azurite) for your Windows, macOS, or Linux operating system. You can install either [from inside Visual Studio Code](../storage/common/storage-use-azurite.md?tabs=visual-studio-code) or by [using npm](../storage/common/storage-use-azurite.md?tabs=npm).
+1. Download and install the following Visual Studio Code dependencies for your specific operating system using either method:
-1. Before you run your logic app workflow, make sure to start the emulator.
+ - [Install all dependencies automatically (preview)](#dependency-installer).
+ - [Download and install each dependency separately](#install-dependencies-individually).
- 1. In Visual Studio Code, from the **View** menu, select **Command Palette**.
+ <a name="dependency-installer"></a>
- 1. After the command palette appears, enter **Azurite: Start**.
+ **Install all dependencies automatically (preview)**
-For more information, review the [documentation for the Azurite extension in Visual Studio Code](https://github.com/Azure/Azurite#visual-studio-code-extension).
+ > [!IMPORTANT]
+ > This capability is in preview and is subject to the
+ > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-### Tools
+ Starting with version **2.81.5**, the Azure Logic Apps (Standard) extension for Visual Studio Code includes a dependency installer that automatically installs all the required dependencies in a new binary folder and leaves any existing dependencies unchanged. For more information, see [Get started more easily with the Azure Logic Apps (Standard) extension for Visual Studio Code](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/making-it-easy-to-get-started-with-the-azure-logic-apps-standard/ba-p/3979643).
-Install the following tools and versions for your specific operating system: Windows, macOS, or Linux.
+ This extension includes the following dependencies:
-* [Visual Studio Code](https://code.visualstudio.com/), which is free. Also, download and install these tools for Visual Studio Code, if you don't have them already:
+ | Dependency | Description |
+ ||-|
+ | [C# for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp) | Enables F5 functionality to run your workflow. |
+ | [Azurite for Visual Studio Code](https://github.com/Azure/Azurite#visual-studio-code-extension) | Provides a local data store and emulator to use with Visual Studio Code so that you can work on your logic app project and run your workflows in your local development environment. If you don't want Azurite to automatically start, you can disable this option: <br><br>1. On the **File** menu, select **Preferences** > **Settings**. <br><br>2. On the **User** tab, select **Extensions** > **Azure Logic Apps (Standard)**. <br><br>3. Find the setting named **Azure Logic Apps Standard: Auto Start Azurite**, and clear the selected checkbox. |
+ | [.NET SDK 6.x.x](https://dotnet.microsoft.com/download/dotnet/6.0) | Includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime. |
+ | Azure Functions Core Tools - 4.x version | Installs the version based on your operating system ([Windows](https://github.com/Azure/azure-functions-core-tools/releases), [macOS](../azure-functions/functions-run-local.md?tabs=macos#install-the-azure-functions-core-tools), or [Linux](../azure-functions/functions-run-local.md?tabs=linux#install-the-azure-functions-core-tools)). <br><br>These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code. |
+ | [Node.js version 16.x.x unless a newer version is already installed](https://nodejs.org/en/download/releases/) | Required to enable the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript. |
- * [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account), which provides a single common Azure sign-in and subscription filtering experience for all other Azure extensions in Visual Studio Code.
+ The installer doesn't perform the following tasks:
- * [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app workflow.
+ - Check whether the required dependencies already exist.
+ - Install only the missing dependencies.
+ - Update older versions of existing dependencies.
- * [.NET SDK 6.x.x](https://dotnet.microsoft.com/download/dotnet/6.0), which includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime.
+ 1. [Download and install the Azure Logic Apps (Standard) extension for Visual Studio Code, starting with version 2.81.5)](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurelogicapps).
- * Azure Functions Core Tools - 4.x version
+ 1. In Visual Studio Code, on the Activity bar, select **Extensions**. (Keyboard: Press Ctrl+Shift+X)
- * [Windows](https://github.com/Azure/azure-functions-core-tools/releases): Use the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`.
- * [macOS](../azure-functions/functions-run-local.md?tabs=macos#install-the-azure-functions-core-tools)
- * [Linux](../azure-functions/functions-run-local.md?tabs=linux#install-the-azure-functions-core-tools)
+ 1. On the **Extensions** pane, open the ellipses (**...**) menu, and select **Install from VSIX**.
- These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
+ 1. Find and select the downloaded VSIX file.
- * If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install.
+ After setup completes, the extension automatically activates and runs the **Validate and install dependency binaries** command. To view the process logs, open the **Output** window.
- * Azure Functions v3 support in Azure Logic Apps ends on March 31, 2023. Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Since January 31, 2023, existing Standard workflows in the Azure portal were automatically migrated to Azure Functions v4.
-
- Unless you deployed your Standard logic apps as NuGet-based projects, pinned your logic apps to a specific bundle version, or Microsoft determined that you had to take action before the automatic migration, this upgrade is designed to require no action from you nor have a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v3 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
+ 1. When the following prompt appears, select **Yes (Recommended)** to confirm that you want to automatically install the required dependencies:
- * [Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
+ :::image type="content" source="media/create-single-tenant-workflows-visual-studio-code/dependency-installer-prompt.png" alt-text="Screenshot shows prompt to automatically install dependencies." lightbox="media/create-single-tenant-workflows-visual-studio-code/dependency-installer-prompt.png":::
- > [!IMPORTANT]
- > Projects created with earlier preview extensions no longer work. To continue,
- > uninstall any earlier versions, and recreate your logic app projects.
+ 1. Reload Visual Studio Code, if necessary.
- To install the **Azure Logic Apps (Standard)** extension, follow these steps:
+ 1. Confirm that the dependencies correctly appear in the following folder:
- 1. In Visual Studio Code, on the left toolbar, select **Extensions**.
+ **C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies\\<dependency-name\>**
- 1. In the extensions search box, enter **azure logic apps standard**. From the results list, select **Azure Logic Apps (Standard)** **>** **Install**.
+ 1. Confirm the following extension settings in Visual Studio Code:
- After the installation completes, the extension appears in the **Extensions: Installed** list.
+ 1. On the **File** menu, select **Preferences** > **Settings**.
- ![Screenshot shows Visual Studio Code with Azure Logic Apps (Standard) extension installed.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
+ 1. On the **User** tab, select **Extensions** > **Azure Logic Apps (Standard)**.
- > [!TIP]
- > If the extension doesn't appear in the installed list, try restarting Visual Studio Code.
+ 1. Review the following settings:
- Currently, you can have both Consumption (multi-tenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. In Visual Studio Code, the Azure window shows all the Azure-deployed and hosted logic apps in your Azure subscription, but organizes your apps in the following ways:
+ | Extension setting | Value |
+ |-|-|
+ | **Dependencies Path** | C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies |
+ | **Dependency Timeout** | 60 seconds |
+ | **Dotnet Binary Path** | C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies\DotNetSDK\dotnet.exe |
+ | **Func Core Tools Binary Path** | C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies\FuncCoreTools\func |
+ | **Node JS Binary Path** | C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies\NodeJs\node |
+ | **Auto Start Azurite** | Enabled |
+ | **Auto Start Design Time** | Enabled |
- * **Logic Apps (Consumption)** section: All the Consumption logic apps in your subscription
- * **Resources** section: All the Standard logic apps in your subscription. Previously, these logic apps appeared in the **Logic Apps (Standard)** section, which has now moved into the **Resources** section.
+ 1. If you have an existing logic app project with custom-defined tasks stored in the **.vscode/tasks.json** file, make sure that you save the **tasks.json** file elsewhere before you open your project.
+
+      When you open your project, you're prompted to update the **tasks.json** file to use the required dependencies. If you choose to continue, the extension overwrites the **tasks.json** file.
-* To use the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript, install [Node.js version 16.x.x unless a newer version is already installed](https://nodejs.org/en/download/releases/).
+ 1. When you open your logic app project, the following notifications appear:
- > [!TIP]
- > For Windows, download the MSI version. If you use the ZIP version instead, you have to
- > manually make Node.js available by using a PATH environment variable for your operating system.
+ | Notification | Action |
+ |--|--|
+ | **Always start the background design-time process at launch?** | To open the workflow designer faster, select **Yes (Recommended)**. |
+      | **Configure Azurite to autostart on project launch?** | To have Azurite storage automatically start when the project opens, select **Enable AutoStart**. At the top of Visual Studio Code, in the command window that appears, press Enter to accept the default path: <br><br>**C:\Users\\<your-user-name\>\\.azurelogicapps\\.azurite** |
+
+ <a name="known-issues-preview"></a>
+
+ **Known issues with preview**
+
+ - If you opted in to automatically install all dependencies on a computer that doesn't have any version of the .NET Core SDK, the following message appears:
+
+ **"The .NET Core SDK cannot be located: Error running dotnet -- info: Error: Command failed: dotnet --info 'dotnet is not recognized as an internal or external command, operable program, or batch file. 'dotnet' is not recognized as an internal or external command, operable program, or batch file. . .NET Core debugging will not be enabled. Make sure the .NET Core SDK is installed and is on the path."**
+
+ You get this message because the .NET Core framework is still installing when the extension activates. You can safely choose to disable this message.
+
+ If you have trouble with opening an existing logic app project or starting the debugging task (tasks.json) for **func host start**, and this message appears, follow these steps to resolve the problem:
+
+ 1. Add the dotnet binary path to your environment PATH variable.
-* To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
+ 1. On the Windows taskbar, in the search box, enter **environment variables**, and select **Edit the system environment variables**.
-* To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use the [Postman](https://www.postman.com/downloads/) app.
+ 1. In the **System Properties** box, on the **Advanced** tab, select **Environment Variables**.
-* If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+ 1. In the **Environment Variables** box, from the **User variables for \<your-user-name\>** list, select **PATH**, and then select **Edit**.
+
+ 1. If the following value doesn't appear in the list, select **New** to add the following value:
+
+ **C:\Users\\<your-user-name\>\\.azurelogicapps\dependencies\DotNetSDK**
+
+ 1. When you're done, select **OK**.
+
+ 1. Close all Visual Studio Code windows, and reopen your project.
+
+ - If you have problems installing and validating binary dependencies, for example:
+
+ - Linux permissions issues
+ - You get the following error: **\<File or path> does not exist**
+ - Validation gets stuck on **\<dependency-name>**.
+
+ Follow these steps to run the **Validate and install binary dependencies** command again:
+
+ 1. From the **View** menu, select **Command Palette**.
+
+ 1. When the command window appears, enter and run the **Validate and install binary dependencies** command.
+
+ - If you don't have .NET Core 7 or a later version installed, and you open an Azure Logic Apps workspace that contains an Azure Functions project, you get the following message:
+
+ **There were problems loading project [function-name].csproj. See log for details.**
+
+ This missing component doesn't affect the Azure Functions project, so you can safely ignore this message.
+
+ <a name="install-dependencies-individually"></a>
+
+ **Install each dependency separately**
+
+ | Dependency | Description |
+ ||-|
+ | [.NET SDK 6.x.x](https://dotnet.microsoft.com/download/dotnet/6.0) | Includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime. |
+ | Azure Functions Core Tools - 4.x version | - [Windows](https://github.com/Azure/azure-functions-core-tools/releases): Use the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. <br>- [macOS](../azure-functions/functions-run-local.md?tabs=macos#install-the-azure-functions-core-tools) <br>- [Linux](../azure-functions/functions-run-local.md?tabs=linux#install-the-azure-functions-core-tools) <br><br>These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code. <br><br>If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install. |
+ | [Node.js version 16.x.x unless a newer version is already installed](https://nodejs.org/en/download/releases/) | Required to enable the [Inline Code Operations action](../logic-apps/logic-apps-add-run-inline-code.md) that runs JavaScript. <br><br>**Note**: For Windows, download the MSI version. If you use the ZIP version instead, you have to manually make Node.js available by using a PATH environment variable for your operating system. |
+
+1. If you already installed the version of the Azure Logic Apps (Standard) extension that automatically installs all the dependencies (preview), skip this step. Otherwise, [download and install the Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
+
+ 1. In Visual Studio Code, on the left toolbar, select **Extensions**.
+
+ 1. In the extensions search box, enter **azure logic apps standard**. From the results list, select **Azure Logic Apps (Standard)** **>** **Install**.
+
+ After the installation completes, the extension appears in the **Extensions: Installed** list.
+
+ ![Screenshot shows Visual Studio Code with Azure Logic Apps (Standard) extension installed.](./media/create-single-tenant-workflows-visual-studio-code/azure-logic-apps-extension-installed.png)
+
+ > [!TIP]
+ >
+ > If the extension doesn't appear in the installed list, try restarting Visual Studio Code.
+
+ Currently, you can have both Consumption (multitenant) and Standard (single-tenant) extensions installed at the same time. The development experiences differ from each other in some ways, but your Azure subscription can include both Standard and Consumption logic app types. In Visual Studio Code, the Azure window shows all the Azure-deployed and hosted logic apps in your Azure subscription, but organizes your apps in the following ways:
+
+ * **Logic Apps (Consumption)** section: All the Consumption logic apps in your subscription.
+
+ * **Resources** section: All the Standard logic apps in your subscription. Previously, these logic apps appeared in the **Logic Apps (Standard)** section, which has now moved into the **Resources** section.
+
+1. To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
+
+1. To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use the [Postman](https://www.postman.com/downloads/) app.
+
+1. If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app resource. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
<a name="set-up"></a>
To locally run webhook-based triggers and actions in Visual Studio Code, you nee
#### Set up call forwarding using **ngrok**
-1. [Sign up for an **ngrok** account](https://dashboard.ngrok.com/signup) if you don't have one. Otherwise, [sign in to your account](https://dashboard.ngrok.com/login).
+1. [Go to the **ngrok** website](https://dashboard.ngrok.com). Either sign up for a new account or sign in to your account, if you have one already.
1. Get your personal authentication token, which your **ngrok** client needs to connect and authenticate access to your account.
- 1. To find your [authentication token page](https://dashboard.ngrok.com/auth/your-authtoken), on your account dashboard menu, expand **Authentication**, and select **Your Authtoken**.
+ 1. To find your authentication token page, on your account dashboard menu, expand **Authentication**, and select **Your Authtoken**.
1. From the **Your Authtoken** box, copy the token to a safe location.
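   After you have the token, the typical next steps are to register it with your local **ngrok** client and start a tunnel. The commands below are a sketch only; the local port (7071) is an assumption based on the default Azure Functions runtime port and may differ in your setup:

   ```bash
   # Register the authentication token with the local ngrok client.
   ngrok config add-authtoken <your-authtoken>

   # Start an HTTP tunnel that forwards to the local Logic Apps runtime (assumed port 7071).
   ngrok http 7071
   ```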
For general information, see [Breakpoints - Visual Studio Code](https://code.vis
## Run, test, and debug locally
-To test your logic app, follow these steps to start a debugging session, and find the URL for the endpoint that's created by the Request trigger. You need this URL so that you can later send a request to that endpoint.
+To test your logic app workflow, follow these steps to start a debugging session, and find the URL for the endpoint that's created by the Request trigger. You need this URL so that you can later send a request to that endpoint.
1. To debug a stateless workflow more easily, you can [enable the run history for that workflow](#enable-run-history-stateless).
-1. Make sure that your Azurite emulator is running. For more information, review [Storage requirements](#storage-requirements).
+1. If your Azurite emulator is already running, continue to the next step. Otherwise, make sure to start the emulator before you run your workflow:
+
+ 1. In Visual Studio Code, from the **View** menu, select **Command Palette**.
+
+ 1. After the command palette appears, enter **Azurite: Start**.
+
+ For more information about Azurite commands, see the [documentation for the Azurite extension in Visual Studio Code](https://github.com/Azure/Azurite#visual-studio-code-extension).
1. On the Visual Studio Code Activity Bar, open the **Run** menu, and select **Start Debugging** (F5). The **Terminal** window opens so that you can review the debugging session. > [!NOTE]
+ >
> If you get the error, **"Error exists after running preLaunchTask 'generateDebugSymbols'"**, > see the troubleshooting section, [Debugging session fails to start](#debugging-fails-to-start).
Stopping a logic app affects workflow instances in the following ways:
To stop a trigger from firing on unprocessed items since the last run, clear the trigger state before you restart the logic app:

1. On the Visual Studio Code Activity Bar, select the Azure icon to open the Azure window.
1. In the **Resources** section, expand your subscription, which shows all the deployed logic apps for that subscription.
1. Expand your logic app, and then expand the node that's named **Workflows**.
1. Open a workflow, and edit any part of that workflow's trigger.
1. Save your changes. This step resets the trigger's current state.
1. Repeat for each workflow.
1. When you're done, restart your logic app.

<a name="considerations-delete-logic-apps"></a>
Through the Azure portal, you can add blank workflows to a Standard logic app re
To debug a stateless workflow more easily, you can enable the run history for that workflow, and then disable the run history when you're done. Follow these steps for Visual Studio Code, or if you're working in the Azure portal, see [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
-1. In your Visual Studio Code project, expand the folder that's named **workflow-designtime**, and open the **local.settings.json** file.
+1. In your Visual Studio Code project, expand the folder that's named **workflow-designtime**. Open the **local.settings.json** file.
1. Add the `Workflows.{yourWorkflowName}.operationOptions` property and set the value to `WithStatelessRunHistory`, for example:
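   A minimal sketch of the relevant entries follows; the `IsEncrypted` and `AzureWebJobsStorage` values shown here are typical local-development defaults, and `{yourWorkflowName}` is a placeholder for your workflow's name:

   ```json
   {
     "IsEncrypted": false,
     "Values": {
       "AzureWebJobsStorage": "UseDevelopmentStorage=true",
       "Workflows.{yourWorkflowName}.operationOptions": "WithStatelessRunHistory"
     }
   }
   ```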
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
The following table highlights key aspects about the online deployment options:
Azure Machine Learning provides various ways to debug online endpoints locally and by using container logs.
+#### Local debugging with the Azure Machine Learning inference HTTP server
+
+You can debug your scoring script locally by using the Azure Machine Learning inference HTTP server. The HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a single package. It's included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and you can also easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server returns an error and the location where the error occurred.
+You can also use Visual Studio Code to debug with the Azure Machine Learning inference HTTP server.
+
+To learn more about debugging with the HTTP server, see [Debugging scoring script with Azure Machine Learning inference HTTP server](how-to-inference-server-http.md).
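As a rough illustration (not the article's own sample), a scoring script that the inference HTTP server can host follows the usual `init()` and `run()` contract. The model-loading call and the `AZUREML_MODEL_DIR` fallback are placeholders:

```python
import json
import os

model = None

def init():
    # Called once when the server starts: load the model into a module-level variable.
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")  # placeholder location for local testing
    # model = joblib.load(os.path.join(model_dir, "model.pkl"))  # replace with your framework's load call

def run(raw_data):
    # Called for each scoring request; raw_data is the request body as a JSON string.
    data = json.loads(raw_data)["data"]
    # predictions = model.predict(data)
    return {"received_rows": len(data)}
```

With the `azureml-inference-server-http` package installed, you can point the server at a script like this one and send local test requests to it before creating any deployment.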
+#### Local debugging

For **local debugging**, you need a local deployment; that is, a model that is deployed to a local Docker environment. You can use this local deployment for testing and debugging before deployment to the cloud. To deploy locally, you'll need to have the [Docker Engine](https://docs.docker.com/engine/install/) installed and running. Azure Machine Learning then creates a local Docker image that mimics the Azure Machine Learning image. Azure Machine Learning will build and run deployments for you locally and cache the image for rapid iterations.
As with local debugging, you first need to have the [Docker Engine](https://docs
To learn more about interactively debugging online endpoints in VS Code, see [Debug online endpoints locally in Visual Studio Code](/azure/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code).
-#### Local debugging with the Azure Machine Learning inference HTTP server (preview)
--
-You can debug your scoring script locally by using the Azure Machine Learning inference HTTP server. The HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a singular package. It's included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and you can also easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server will return an error and the location where the error occurred.
-You can also use Visual Studio Code to debug with the Azure Machine Learning inference HTTP server.
-
-To learn more about debugging with the HTTP server, see [Debugging scoring script with Azure Machine Learning inference HTTP server (preview)](how-to-inference-server-http.md).
- #### Debugging with container logs For a deployment, you can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM.
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Previously updated : 10/18/2023 Last updated : 11/15/2023 reviewer: msakande
To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must
> [!TIP] > You can use [Azure Machine Learning inference HTTP server Python package](how-to-inference-server-http.md) to debug your scoring script locally **without Docker Engine**. Debugging with the inference server helps you to debug the scoring script before deploying to local endpoints so that you can debug without being affected by the deployment container configurations.
-Local endpoints have the following limitations:
-- They do *not* support traffic rules, authentication, or probe settings. -- They support only one deployment per endpoint.-- They support local model files only. If you want to test registered models, first download them using [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download), then use `path` in the deployment definition to refer to the parent folder.
+> [!NOTE]
+> Local endpoints have the following limitations:
+> - They do *not* support traffic rules, authentication, or probe settings.
+> - They support only one deployment per endpoint.
+> - They support local model files and environments defined by a local conda file only. If you want to test registered models, first download them using the [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download), then use `path` in the deployment definition to refer to the parent folder. If you want to test registered environments, check the context of the environment in Azure Machine Learning studio and prepare a local conda file to use. The example in this article demonstrates using a local model and an environment defined by a local conda file, which supports local deployment.
For more information on debugging online endpoints locally before deploying to Azure, see [Debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
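For orientation, the following is a minimal sketch (not the article's exact sample) of a local deployment that uses a local model folder and an environment defined by a local conda file, written with the `azure-ai-ml` v2 Python SDK. All names, paths, and the base image are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

endpoint = ManagedOnlineEndpoint(name="local-endpoint", auth_mode="key")

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="local-endpoint",
    model=Model(path="./model"),  # local model files, not a registered model
    environment=Environment(
        conda_file="./environment/conda.yaml",  # local conda file
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",  # placeholder base image
    ),
    code_configuration=CodeConfiguration(code="./onlinescoring", scoring_script="score.py"),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

# local=True targets your local Docker engine instead of Azure.
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
ml_client.online_deployments.begin_create_or_update(deployment, local=True)
```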
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
Previously updated : 04/21/2023 Last updated : 11/15/2023 reviewer: msakande
machine-learning How To Use Batch Pipeline From Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-from-job.md
reviewer: msakande Previously updated : 05/12/2023 - how-to - devplatv2 - ignite-2023 Last updated : 11/15/2023+ # Deploy existing pipeline jobs to batch endpoints
machine-learning How To Use Batch Scoring Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-scoring-pipeline.md
Previously updated : 04/21/2023 Last updated : 11/15/2023 reviewer: msakande
machine-learning How To Use Batch Training Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-training-pipeline.md
Previously updated : 04/21/2023 Last updated : 11/16/2023 reviewer: msakande
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
Serverless compute can be used to fine-tune models in the model catalog such as
## How to use serverless compute

* You can fine-tune foundation models such as LLAMA 2 using notebooks as shown below:
- * [Fine Tune LLAMA 2](https://github.com/Azure/azureml-examples/blob/bd799ecf31b60cec650e3b0ea2ea790fe0c99c4e/sdk/python/foundation-models/system/finetune/Llama-notebooks/text-classification/emotion-detection-llama-serverless-compute.ipynb)
- * [Fine Tune LLAMA 2 using multiple nodes](https://github.com/Azure/azureml-examples/blob/84ddcf23566038dfbb270da81c5b34b6e0fb3e5d/sdk/python/foundation-models/system/finetune/Llama-notebooks/multinode-text-classification/emotion-detection-llama-multinode-serverless.ipynb)
+ * [Fine Tune LLAMA 2](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/system/finetune/Llama-notebooks/text-classification/emotion-detection-llama-serverless-compute.ipynb)
+ * [Fine Tune LLAMA 2 using multiple nodes](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/system/finetune/Llama-notebooks/multinode-text-classification/emotion-detection-llama-multinode-serverless.ipynb)
* When you create your own compute cluster, you use its name in the command job, such as `compute="cpu-cluster"`. With serverless, you can skip creation of a compute cluster, and omit the `compute` parameter to instead use serverless compute. When `compute` isn't specified for a job, the job runs on serverless compute. Omit the compute name in your CLI or SDK jobs to use serverless compute in the following job types and optionally provide resources a job would need in terms of instance count and instance type: * Command jobs, including interactive jobs and distributed training
You can override these defaults. If you want to specify the VM type or number o
from azure.ai.ml import command from azure.ai.ml import MLClient # Handle to the workspace from azure.identity import DefaultAzureCredential # Authentication package
- from azure.ai.ml.entities import ResourceConfiguration
+ from azure.ai.ml.entities import JobResourceConfiguration
credential = DefaultAzureCredential() # Get a handle to the workspace. You can find the info on the workspace tab on ml.azure.com
You can override these defaults. If you want to specify the VM type or number o
job = command( command="echo 'hello world'", environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
- resources = ResourceConfiguration(instance_type="Standard_NC24", instance_count=4)
+ resources = JobResourceConfiguration(instance_type="Standard_NC24", instance_count=4)
) # submit the command job ml_client.create_or_update(job)
You can also set serverless compute as the default compute in Designer.
View more examples of training with serverless compute at:

* [Quick Start](https://github.com/Azure/azureml-examples/blob/main/tutorials/get-started-notebooks/quickstart.ipynb)
* [Train Model](https://github.com/Azure/azureml-examples/blob/main/tutorials/get-started-notebooks/train-model.ipynb)
-* [Fine Tune LLAMA 2](https://github.com/Azure/azureml-examples/blob/bd799ecf31b60cec650e3b0ea2ea790fe0c99c4e/sdk/python/foundation-models/system/finetune/Llama-notebooks/text-classification/emotion-detection-llama-serverless-compute.ipynb)
+* [Fine Tune LLAMA 2](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/system/finetune/Llama-notebooks/text-classification/emotion-detection-llama-serverless-compute.ipynb)
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
Previously updated : 03/31/2022 Last updated : 11/15/2023 # CLI (v2) batch deployment YAML schema
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
Previously updated : 04/26/2022- Last updated : 11/15/2023+
+reviewer: msakande
# CLI (v2) online endpoint YAML schema [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json.
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json for managed online endpoints, and at https://azuremlschemas.azureedge.net/latest/kubernetesOnlineEndpoint.schema.json for Kubernetes online endpoints. The differences between managed online endpoints and Kubernetes online endpoints are described in the table of properties in this article. The sample in this article focuses on a managed online endpoint.
[!INCLUDE [schema note](includes/machine-learning-preview-old-json-schema-note.md)] > [!NOTE]
-> A fully specified sample YAML for online endpoints is available for [reference](https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.template.yaml)
+> A fully specified sample YAML for managed online endpoints is available for [reference](https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.template.yaml)
## YAML syntax
managed-grafana How To Connect Azure Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-azure-monitor-workspace.md
+
+ Title: Add an Azure Monitor workspace to Azure Managed Grafana
+description: Learn how to add an Azure Monitor workspace to Azure Managed Grafana to collect Prometheus data.
++++ Last updated : 11/10/2023
+
+
+# Add an Azure Monitor workspace to Azure Managed Grafana to collect Prometheus data
+
+In this guide, learn how to connect an Azure Monitor workspace to Grafana directly from an Azure Managed Grafana workspace. This feature is designed to provide a quick way to collect Prometheus metrics stored in an Azure Monitor workspace and enables you to monitor your Azure Kubernetes Service (AKS) clusters in Grafana.
+
+> [!IMPORTANT]
+> The integration of Azure Monitor workspaces within Azure Managed Grafana workspaces is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Managed Grafana instance in the Standard tier. [Create a new instance](quickstart-managed-grafana-portal.md) if you don't have one.
+- An [Azure Monitor workspace with Prometheus data](../azure-monitor/containers/prometheus-metrics-enable.md).
+
+## Add a new role assignment
+
+In the Azure Monitor workspace, assign the *Monitoring Data Reader* role to the Azure Managed Grafana resource's managed identity, so that Grafana can collect data from the Azure Monitor workspace.
+
+> [!NOTE]
+> A system-assigned managed identity must be enabled in your Azure Managed Grafana resource. If needed, enable it by going to **Identity** and setting **Status** to **On**.
+
+To assign the Monitoring Data Reader role:
+
+1. Open the Azure Monitor workspace that holds Prometheus data.
+1. Go to **Access control (IAM)** > **Add** > **Add role assignment**.
+1. Select the **Monitoring Data Reader** role, then **Next**.
+1. For **Assign access to**, select **Managed identity**.
+1. Open **Select members** and select your Azure Managed Grafana resource.
+1. Select **Review + assign** to initiate the role assignment.
+
+## Add an Azure Monitor workspace
+
+1. Open your Azure Managed Grafana workspace.
+1. In the left menu, select **Integrations** > **Azure Monitor workspaces (Preview)**.
+
+ :::image type="content" source="media\monitor-integration\add-azure-monitor.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
+
+1. Select **Add**.
+1. In the pane that opens, select an Azure Monitor workspace from the list and confirm with **Add**.
+1. Once the operation is complete, Azure displays all the Azure Monitor workspaces added to the Azure Managed Grafana workspace. You can add more Azure Monitor workspaces by selecting **Add** again.
+
+## Display Prometheus data in Grafana
+
+When you added the Azure Monitor workspace to Azure Managed Grafana in the previous step, Azure added a new Prometheus data source to Grafana.
+
+To get a dashboard with Prometheus metrics, either use one of the prebuilt dashboards or build a brand new one.
+
+### Use a prebuilt dashboard
+
+In Grafana, go to **Dashboards** from the left menu and expand the **Managed Prometheus** data source. Review the list of prebuilt dashboards and open one that seems interesting to you.
+
+The following automatically generated dashboards are available, as of November 7, 2023:
+
+- Kubernetes / Compute Resources / Cluster
+- Kubernetes / Compute Resources / Cluster (Windows)
+- Kubernetes / Compute Resources / Namespace (Pods)
+- Kubernetes / Compute Resources / Namespace (Windows)
+- Kubernetes / Compute Resources / Namespace (Workloads)
+- Kubernetes / Compute Resources / Node (Pods)
+- Kubernetes / Compute Resources / Pod
+- Kubernetes / Compute Resources / Pod (Windows)
+- Kubernetes / Compute Resources / Workload
+- Kubernetes / Kubelet
+- Kubernetes / Networking
+- Kubernetes / USE Method / Cluster (Windows)
+- Kubernetes / USE Method / Node (Windows)
+- Node Exporter / Nodes
+- Node Exporter / USE Method / Node
+- Overview
+
+The following screenshot shows some of the panels from the "Kubernetes / Compute Resources / Cluster" dashboard.
++
+Edit the dashboard as desired. For more information about editing a dashboard, read [Edit a dashboard panel](./how-to-create-dashboard.md#edit-a-dashboard-panel).
+
+### Create a new dashboard
+
+To build a brand new dashboard with Prometheus metrics:
+
+1. Open Grafana and select **Connections** > **Your connections** from the left menu.
+1. Find the new Prometheus data source.
+
+ :::image type="content" source="media\monitor-integration\prometheus-data-source.png" alt-text="Screenshot of a new Prometheus data source displayed in the Grafana user interface.":::
+
+1. Select **Build a dashboard** to start creating a new dashboard with Prometheus metrics.
+1. Select **Add visualization** to start creating a new panel.
+1. Under **metrics**, select a metric and then **Run queries** to check that your dashboard can collect and display your Prometheus data.
+
+    :::image type="content" source="media\monitor-integration\new-dashboard.png" alt-text="Screenshot of the Grafana UI, showing a new dashboard displaying Prometheus data.":::
+
+ For more information about editing a dashboard, read [Edit a dashboard panel](./how-to-create-dashboard.md#edit-a-dashboard-panel).
+
+> [!TIP]
+> If you're unable to get Prometheus data in your dashboard, check if your Azure Monitor workspace is collecting Prometheus data. Go to [Troubleshoot collection of Prometheus metrics in Azure Monitor](../azure-monitor/containers/prometheus-metrics-troubleshoot.md) for more information.
+
+## Remove an Azure Monitor workspace
+
+If you no longer need it, you can remove an Azure Monitor workspace from your Azure Managed Grafana workspace:
+
+1. In your Azure Managed Grafana workspace, select **Integrations** > **Azure Monitor workspaces (Preview)** from the left menu.
+1. Select the row with the resource to delete and select **Delete** > **Yes**.
+
+Optionally also remove the role assignment that was previously added in the Azure Monitor workspace:
+
+1. In the Azure Monitor workspace resource, select **Access control (IAM)** > **Role assignments**.
+1. Under **Monitoring Data Reader**, select the row with the name of your Azure Managed Grafana resource and select **Remove** > **OK**.
+
+To learn more about Azure Monitor managed service for Prometheus, read the [Azure Monitor managed service for Prometheus guide](../azure-monitor/essentials/prometheus-metrics-overview.md).
+
+## Next steps
+
+In this how-to guide, you learned how to connect an Azure Monitor workspace to Grafana. To learn how to create and configure Grafana dashboards, go to [Create dashboards](how-to-create-dashboard.md).
migrate Least Privilege Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/least-privilege-credentials.md
Last updated 08/02/2023
-# Provision custom logins with least privileges for SQL Discovery and Assessment
+# Provision custom accounts with least privileges for SQL Discovery and Assessment
This article describes how to create a custom account with minimal permissions for Discovery and assessment.
-In preparation for discovery, the Azure Migrate appliance needs to be configured with the accounts for establishing connections with the SQL Server instances. If you prefer not to use an account with sysadmin privileges on the SQL instance for this purpose, the least privileged account provisioning utility can help create a custom account with the [minimal set of permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) required to obtain the necessary metadata for discovery and assessment. Once the custom account has been provisioned, add this account in the Appliance configuration for SQL Discovery and Assessment.
+In preparation for discovery, the Azure Migrate appliance needs to be configured with the accounts for establishing connections with the SQL Server instances. If you prefer to avoid using accounts with sysadmin privileges, you can create a custom account with the [minimal set of permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) required to obtain the necessary metadata for discovery and assessment. Add this custom account in the Appliance configuration for SQL Discovery and Assessment. The least privileged account provisioning utility can help provision these custom accounts.
## Prerequisites - A prepared CSV with the list of SQL Server instances. Ensure all SQL Servers listed have [TCP/IP protocol enabled](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol). -- An account with sysadmin permissions on all the SQL Server instances listed in the CSV.
+- Accounts with sysadmin permissions on all the SQL Server instances listed in the CSV.
> [!Note]
- > - This account is used only to provision the least privileged account. Once the least privileged account is created, it can be provided in the Appliance configuration for the actual discovery and assessment.
- > - If there are multiple admin-level accounts that you wish to use, the utility can be run any number of times with the same input values by changing only the admin-level credential.
+ > - The admin-level account is used only to provision the least privileged account. Once the least privileged account is created, provide it in the Appliance configuration for the actual discovery and assessment.
+ > - If multiple admin-level accounts are required, use the same CSV file to run the utility again with the next admin-level credential. The instances that have already been successfully updated are skipped. Repeat this with different admin-level credentials until all SQL instances have the *Status* field set to *Success*.
## Prepare the list of SQL Server instances
-The utility requires the SQL Server instances list to be created as a CSV with the following columns in the stated order:
-1. FqdnOrIpAddress (Mandatory): This field should contain the Fully Qualified Domain Name or IP Address of the server where the SQL Server instance is running.
+The utility requires the SQL Server instances list created as a CSV with the following columns in the stated order:
+1. FqdnOrIpAddress (Mandatory): This field should contain the Fully Qualified Domain Name (or optionally the IP Address for SQL Server authentication) of the server where the SQL Server instance is running.
2. InstanceName (Mandatory): This field should contain the instance name for a named instance or MSSQLSERVER for a default instance. 3. Port (Mandatory): The port that the SQL Server is listening on.
-4. Status (Optional/Output): This field is to be left blank initially. Any value here other than Success will allow the utility to attempt to provision the least privileged account against the corresponding instance. Success or failure is then updated in this field at the end of execution.
-5. ErrorSummary (Optional/Output): This field is updated by the utility to provide details of the errors (if any) that were encountered while provisioning the least privileged account.
-6. ErrorGuidance (Optional/Output): This field is used by the utility to provide details of the errors (if any) that were encountered while provisioning the least privileged account.
+4. Status (Optional/Output): This field can be left blank initially. Any value here other than Success allows the utility to attempt to provision the least privileged account against the corresponding instance. Success or failure is then updated in this field at the end of execution.
+5. ErrorSummary (Optional/Output): Leave blank. The utility updates this field with a summary of the errors (if any) that were encountered while provisioning the least privileged account.
+6. ErrorGuidance (Optional/Output): Leave blank. The utility updates this field with detailed error messages (if any) that were encountered while provisioning the least privileged account.
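For illustration only, a two-row input file that follows this column order might look like the following. The server names, instance names, and ports are hypothetical, and the last three columns are left blank for the utility to fill in:

```
sqlserver01.contoso.com,MSSQLSERVER,1433,,,
sqlserver02.contoso.com,SQLINST01,1434,,,
```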
## Provision the custom accounts
-1. Open a command prompt and navigate to the %ProgramFiles%\Microsoft Azure Appliance Configuration Manager\Tools folder.
-2. Launch the Least Privileged Account Provisioning utility using the command:
- `SQLMinPrivilege.exe`
-3. Provide the path to the CSV list of SQL Server instances.
-4. Provide the credentials of the account with admin-level permissions.
- 1. Select the credential type by entering 1 for SQL Account or 2 for Windows/Domain Account.
- 2. Provide the username and password for the admin-level account
-5. Now provide the credentials for the least privileged account that needs to be created.
- 1. Select the credential type by entering 1 for SQL Account or 2 for Windows/Domain Account.
- 2. If you chose to create a SQL Account in the previous step, you'll be notified that if an SQL Server instance in the list doesn't have SQL Authentication enabled, the script can optionally provision the account anyway and enable SQL Authentication. However, the instance needs to be restarted for the newly provisioned SQL Account to be used. If you don't want to proceed with SQL Account provisioning, enter *N* or *n* to go back to the previous step and choose the credential type again.
- 3. Provide the username and password for the least privileged account to be provisioned.
-6. If there are additional admin-level credentials to be used, start again at Step 2 with the same CSV file. The utility ignores instances, which have already been successfully configured.
+1. Open a command prompt and navigate to the %ProgramFiles%\Microsoft Azure Appliance Configuration Manager\Tools\SQLMinPrivilege folder.
+1. Launch the Least Privileged Account Provisioning utility using the command:
+ `MinimumPrivilegedUser.exe`
+1. Select the environment type by entering 1 if you're running it from AzureMigrate appliance or 2 otherwise.
+1. Provide the path to the CSV list of SQL Server instances.
+1. Provide a unique identifier (GUID) for creation of a custom security identifier (SID) for the custom account. We recommend that you use the same well-known GUID for all executions of the utility. For example, you can use the Azure subscription ID.
+1. Provide the credentials of the account with admin-level permissions.
+ 1. Select the credential type by entering 1 for *SQL Account* or 2 for *Windows/Domain Account*.
+ 1. Provide the username and password for the admin-level account
+1. Now provide the credentials for the least privileged account to be created.
+ 1. Select the credential type by entering 1 for *SQL Account* or 2 for *Windows/Domain Account*.
+ 1. If you chose *SQL Account* in the previous step, the SQL Server instances in the list should have SQL Server authentication (Mixed Mode) enabled. If a SQL Server instance in the list doesn't have SQL Authentication enabled, the script can optionally provision the account anyway and enable SQL Authentication. However, the instance should be restarted before the new SQL Account is used. If you don't want to proceed with SQL Account provisioning, enter *N* or *n* to go back to the previous step and choose the credential type again.
+ 1. Provide the username and password for the least privileged account to be provisioned.
+1. If there are more admin-level credentials to be used, start again with the same CSV file. The utility skips instances that are successfully configured.
> [!Note] > We recommend using the same least privileged account credentials to simplify the configuration of the Azure Migrate Appliance.
-### Use custom login for discovery and assessment
-Now that the custom login has been provisioned, provide this credential in the Appliance configuration.
+### Use custom account for discovery and assessment
+Now that the custom account is provisioned, provide this credential in the Appliance configuration.
## Next steps
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
Before you attempt to configure Key Vault, be sure to address the following requ
Before you attempt to configure the CMK, be sure to address the following requirements. -- The customer-managed key to encrypt the DEK can be only asymmetric, RSA 2048.
+- The customer-managed key to encrypt the DEK can be only asymmetric, RSA 2048, 3072, or 4096.
- The key activation date (if set) must be a date and time in the past. The expiration date must not be set.
- The key must be in the **Enabled** state.
- The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with retention period set to 90 days. This implicitly sets the required key attribute recoveryLevel: "Recoverable."
As you configure Key Vault to use data encryption using a customer-managed key,
- Keep a copy of the customer-managed key in a secure place or escrow it to the escrow service. - If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey).
-> [!NOTE]
+> [!NOTE]
> It is advised to use a key vault from the same region, but if necessary, you can use a key vault from another region by specifying the "enter key identifier" information.-
+> RSA keys stored in **Azure Key Vault Managed HSM** are currently not supported.
## Inaccessible customer-managed key condition When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 09/20/2023 Last updated : 11/16/2023 #CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can better monitor and optimize my network.
Pricing of NSG flow logs doesn't include the underlying costs of storage. Using
If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
-### User-defined inbound TCP rules
+### Non-default inbound TCP rules
-Network security groups are implemented as a [stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). But because of current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless way.
+Network security groups are implemented as a [stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). But because of current platform limitations, network security group non-default security rules that affect inbound TCP flows are implemented in a stateless way.
-Flows that user-defined inbound rules affect become non-terminating. Additionally, byte and packet counts aren't recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
+Flows affected by non-default inbound rules become non-terminating. Additionally, byte and packet counts aren't recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
You can resolve this difference by setting the `FlowTimeoutInMinutes` property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes. Use [Get-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to set `FlowTimeoutInMinutes` property:
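A minimal sketch follows, assuming the Az.Network PowerShell module and placeholder virtual network and resource group names:

```azurepowershell
# Retrieve the virtual network, set the flow timeout, and apply the change.
$virtualNetwork = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup"
$virtualNetwork.FlowTimeoutInMinutes = 4
Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
```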
notification-hubs Notification Hubs High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md
Previously updated : 10/02/2023 Last updated : 11/16/2023
New availability zones are being added regularly. The following regions currentl
| | Sweden Central | | Korea Central | | | Norway East | | | | | Germany West Central | | |
+| | Switzerland North | | |
### Enable availability zones
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
Near Zero Downtime Feature is enabled across all public regions and **no custome
> [!NOTE] > Near Zero Downtime Scaling process is the default operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near zero downtime scaling.
+#### Prerequisites
+- You must allow all inbound and outbound connections between the IPs in the delegated subnet. If these connections aren't allowed, the near zero downtime scaling process doesn't work, and scaling occurs through the standard scaling process, which results in more downtime.
+
#### Limitations - Near Zero Downtime Scaling will not work if there are regional capacity constraints or quota limits on customer subscriptions.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.0.0 | Used to parse an address into constituent elements. | > |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 3.0.0 | Address Standardizer US dataset example| > |[amcheck](https://www.postgresql.org/docs/12/amcheck.html) | 1.2 | functions for verifying relation integrity|
-> |[azure_ai](./generative-ai-azure-overview.md) | 0.1.0 | Azure OpenAI and Cognitive Services integration for PostgreSQL |
> |[azure_storage](../../postgresql/flexible-server/concepts-storage-extension.md) | 1.3 | extension to export and import data from Azure Storage| > |[bloom](https://www.postgresql.org/docs/12/bloom.html) | 1.0 | bloom access method - signature file based index| > |[btree_gin](https://www.postgresql.org/docs/12/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[address_standardizer](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Used to parse an address into constituent elements. | > |[address_standardizer_data_us](http://postgis.net/docs/manual-2.5/Address_Standardizer.html) | 2.5.1 | Address Standardizer US dataset example| > |[amcheck](https://www.postgresql.org/docs/11/amcheck.html) | 1.1 | functions for verifying relation integrity|
-> |[azure_ai](./generative-ai-azure-overview.md) | 0.1.0 | Azure OpenAI and Cognitive Services integration for PostgreSQL |
-> |[azure_storage](../../postgresql/flexible-server/concepts-storage-extension.md) | 1.3 | extension to export and import data from Azure Storage|
> |[bloom](https://www.postgresql.org/docs/11/bloom.html) | 1.0 | bloom access method - signature file based index| > |[btree_gin](https://www.postgresql.org/docs/11/btree-gin.html) | 1.3 | support for indexing common datatypes in GIN| > |[btree_gist](https://www.postgresql.org/docs/11/btree-gist.html) | 1.5 | support for indexing common datatypes in GiST|
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
--
-## Overview
Azure Database for PostgreSQL Flexible Server supports PostgreSQL versions 11, 12, 13, 14, and 15. The Postgres community releases a new major version containing new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible Server periodically updates the minor versions during the customer's maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that might not be backward-compatible with existing applications.
+## Overview
+ Azure Database for PostgreSQL Flexible Server now offers an in-place major version upgrade feature that upgrades the server with just a click. In-place major version upgrade simplifies the upgrade process, minimizing disruption to users and applications accessing the server. In-place upgrades are a simpler way to upgrade the major version of the instance, as they retain the server name and other settings of the current server after the upgrade, and don't require data migration or changes to the application connection strings. In-place upgrades are faster and involve shorter downtime than data migration.
Here are some of the important considerations with in-place major version upgrad
- Once the in-place major version upgrade is successful, there are no automated ways to revert to the earlier version. However, you can perform a Point-In-Time Recovery (PITR) to a time prior to the upgrade to restore the previous version of the database instance.
-## Limitations:
+## Limitations
If the in-place major version upgrade pre-check operations fail, the upgrade aborts with a detailed error message for any of the following limitations.
If in-place major version upgrade pre-check operations fail then it aborts with
- In-place major version upgrade doesn't support certain extensions and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, **orafce** and **postgres_fdw** are unsupported for all PostgreSQL versions. -- When upgrading servers with PostGIS extension installed, set the 'search_path' server parameter to explicitly include the schemas of the PostGIS extension, extensions that depend on PostGIS, and extensions that serve as dependencies for the below extensions.
-
- **e.g postgis,postgis_raster,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,address_standardizer,address_standardizer_data_us,fuzzystrmatch (required for postgis_tiger_geocoder).**
-
+- When upgrading servers with PostGIS extension installed, set the `search_path` server parameter to explicitly include the schemas of the PostGIS extension, extensions that depend on PostGIS, and extensions that serve as dependencies for the below extensions.
+  **For example: postgis, postgis_raster, postgis_sfcgal, postgis_tiger_geocoder, postgis_topology, address_standardizer, address_standardizer_data_us, fuzzystrmatch (required for postgis_tiger_geocoder).**
- Servers configured with logical replication slots aren't supported. ----
-## How to Perform in-place major version upgrade:
+## How to perform in-place major version upgrade
It's recommended to perform a dry run of the in-place major version upgrade in a non-production environment before upgrading the production server. It allows you to identify any application incompatibilities and validate that the upgrade completes successfully before upgrading the production environment. You can perform a Point-In-Time Recovery (PITR) of your production server and test the upgrade in the non-production environment. Addressing these issues before the production upgrade minimizes downtime and ensures a smooth upgrade process.
It's recommended to perform a dry run of the in-place major version upgrade in a
1. You can perform in-place major version upgrade using the Azure portal or CLI (command-line interface). Select the **Upgrade** button on the **Overview** blade.
- :::image type="content" source="media/concepts-major-version-upgrade/upgrade-tab.png" alt-text="Diagram of Upgrade tab to perform in-place major version upgrade.":::
+ :::image type="content" source="media/concepts-major-version-upgrade/upgrade-tab.png" alt-text="Diagram of Upgrade tab to perform in-place major version upgrade.":::
2. You see an option to select the major version of your choice, and you can skip versions to upgrade directly to a higher version. Choose the version and select **Upgrade**.
+ :::image type="content" source="media/concepts-major-version-upgrade/set-postgresql-version.png" alt-text="Diagram of PostgreSQL version to Upgrade.":::
3. During the upgrade, users have to wait for the process to complete. You can resume accessing the server once it's back online.
+ :::image type="content" source="media/concepts-major-version-upgrade/deployment-progress.png" alt-text="Diagram of deployment progress for major version upgrade.":::
4. Once the upgrade is successful, you can expand the **Deployment details** tab and select **Operation details** to see more information about the upgrade process, such as duration and provisioning state.
+ :::image type="content" source="media/concepts-major-version-upgrade/deployment-success.png" alt-text="Diagram of successful deployment of for major version upgrade.":::
5. You can select the **Go to resource** tab to validate your upgrade. Notice that the server name remained unchanged, and the PostgreSQL version is upgraded to the desired higher version with the latest minor version.
+ :::image type="content" source="media/concepts-major-version-upgrade/upgrade-verification.png" alt-text="Diagram of Upgraded version to Flexible Server after major version upgrade.":::
+## Post upgrade
---
-## Post Upgrade
-
-Run the **ANALYZE** operation to refresh the pg_statistic table. You should do this for every database on all your Flexible Server. Optimizer statistics aren't transferred during a major version upgrade, so you need to regenerate all statistics to avoid performance issues. Run the command without any parameters to generate statistics for all regular tables in the current database, as follows
+Run the **ANALYZE** operation to refresh the `pg_statistic` table. You should do this for every database on all your Flexible Server instances. Optimizer statistics aren't transferred during a major version upgrade, so you need to regenerate all statistics to avoid performance issues. Run the command without any parameters to generate statistics for all regular tables in the current database, as follows:
```
-ANALYZE VERBOSE
+VACUUM ANALYZE VERBOSE;
``` > [!NOTE] >
private-5g-core Azure Private 5G Core Release Notes 2308 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2308.md
Previously updated : 11/01/2023 Last updated : 11/16/2023 # Azure Private 5G Core 2308 release notes The following release notes identify the new features, critical open issues, and resolved issues for the 2308 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as theyΓÇÖre discovered. Before deploying this new version, review the information contained in these release notes.
-This article applies to the AP5GC 2308 release (2308.0-7). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2303 and ASE 2309 releases and by the 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+This article applies to the AP5GC 2308 release (2308.0-8). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2303 and ASE 2309 releases. It supports the 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
-For more details about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
+For more information about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
With this release, there's a new naming scheme and Packet Core versions are now called '2308.0-1' rather than 'PMN-2308'.
With this release, there's a new naming scheme and Packet Core versions are now
## Support lifetime
-Packet core versions are supported until two subsequent versions have been released (unless otherwise noted), which is typically two months after the release date. You should plan to upgrade your packet core in this time frame to avoid losing support.
+Packet core versions are supported until two subsequent versions are released. You should plan to upgrade your packet core in this time frame to avoid losing support.
### Currently supported packet core versions The following table shows the support status for different Packet Core releases. | Release | Support Status | ||-|
-| AP5GC 2308 | Supported until AP5GC 2311 released |
+| AP5GC 2308 | Supported until AP5GC 2401 released |
| AP5GC 2307 | Supported until AP5GC 2310 released |
| AP5GC 2306 and earlier | Out of Support |
The following table provides a summary of issues fixed in this release.
|No. |Feature | Issue |
|--|--|--|
| 1 | Packet Forwarding | If the packet forwarding component of the userplane crashes, it may not recover. If it does not, the system experiences an outage until manually recovered |
+ | 2 | Packet Forwarding | An intermittent fault at the network layer causes an outage of packet forwarding |
+ | 3 | Diagnosability | During packet capture, uplink userplane packets can be omitted from packet captures |
+ | 4 | Packet Forwarding | Errors in userplane packet detection rules can cause incorrect packet handling |
## Known issues in the AP5GC 2308 release
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
Title: Gather remote diagnostics
-description: In this how-to guide, you'll learn how to gather remote diagnostics for a site using the Azure portal.
+description: In this how-to guide, you'll learn how to gather remote diagnostics for a site using the Azure portal.
You must already have an AP5GC site deployed to collect diagnostics.
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. Navigate to the **Packet Core Control Pane** overview page of the site you want to gather diagnostics for. 1. Select **Diagnostics Collection** under the **Help** section on the left side. This will open a **Diagnostics Collection** view.
-1. Enter the **Storage account blob URL** that was configured for diagnostics storage and append the file name that you want to give the diagnostics. For example:
- `https://storageaccountname.blob.core.windows.net/diagscontainername/diagsPackageName.zip`
+1. Enter the **Storage account blob URL** that was configured for diagnostics storage and append the file name that you want to give the diagnostics. For example:
+ `https://storageaccountname.blob.core.windows.net/diagscontainername/diagsPackageName.zip`
> [!TIP] > The **Storage account blob URL** should have been noted during creation. If it wasn't: >
You must already have an AP5GC site deployed to collect diagnostics.
- If diagnostics file collection fails, an activity log will appear in the portal allowing you to troubleshoot via ARM:
  - If an invalid container URL was passed, the request will be rejected and report **400 Bad Request**. Repeat the process with the correct container URL.
  - If the asynchronous part of the operation fails, the asynchronous operation resource is set to **Failed** and reports a failure reason.
+- Check that the same user-assigned identity was added to both the site and storage account.
+- Check whether the storage container has an immutability policy configured. If so, either remove the policy or ensure that the storage account has version-level immutability support enabled, as described in [Set up a storage account](#set-up-a-storage-account). This is required because the diagnostics file is streamed to the storage account container, so the container must support blob updates. For more information, see [Time-based retention policies for immutable blob data](../storage/blobs/immutable-time-based-retention-policy-overview.md).
- If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation. See [How to open a support request for Azure Private 5G Core](open-support-request.md).

## Next steps

- [Perform packet capture on a packet core instance](data-plane-packet-capture.md)
- [Monitor Azure Private 5G Core with Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
+- [Monitor Azure Private 5G Core with packet core dashboards](packet-core-dashboards.md)
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Depending on the selected actions, the attribute might be found in different pla
> [!div class="mx-tableFixed"] > | Attribute source | Description | Code | > | | | |
-> | [Environment](#environment-attributes) | Indicates that the attribute is associated with the environment of the request, such as the network origin of the request or the current date and time.</br>***(Environment attributes are currently in preview.)*** | `@Environment` |
-> | [Principal](#principal-attributes) | Indicates that the attribute is a Microsoft Entra custom security attribute on the principal, such as a user, enterprise application (service principal), or managed identity. | `@Principal` |
-> | [Request](#request-attributes) | Indicates that the attribute is part of the action request, such as setting the blob index tag. | `@Request` |
-> | [Resource](#resource-attributes) | Indicates that the attribute is a property of the resource, such as a container name. | `@Resource` |
+> | [Environment](#environment-attributes) | Attribute is associated with the environment of the request, such as the network origin of the request or the current date and time.</br>***(Environment attributes are currently in preview.)*** | `@Environment` |
+> | [Principal](#principal-attributes) | Attribute is a custom security attribute assigned to the principal, such as a user or enterprise application (service principal). | `@Principal` |
+> | [Request](#request-attributes) | Attribute is part of the action request, such as setting the blob index tag. | `@Request` |
+> | [Resource](#resource-attributes) | Attribute is a property of the resource, such as a container name. | `@Resource` |
For a complete list of the storage attributes you can use in conditions, see:
The following table lists the supported environment attributes for conditions.
#### Principal attributes
-Principal attributes are Microsoft Entra custom security attributes associated with the principal requesting access to a resource. The security principal can be a user, an enterprise application (a service principal), or a managed identity.
+Principal attributes are custom security attributes assigned to the security principal requesting access to a resource. The security principal can be a user or an enterprise application (service principal).
To use principal attributes, you must have the following: -- Microsoft Entra permissions for signed-in user, such as the [Attribute Assignment Administrator](../active-directory/roles/permissions-reference.md#attribute-assignment-administrator) role
+- Microsoft Entra permissions for the signed-in user, such as the [Attribute Assignment Administrator](/entra/identity/role-based-access-control/permissions-reference#attribute-assignment-administrator) role
- Custom security attributes defined in Microsoft Entra ID For more information about custom security attributes, see: -- [Allow read access to blobs based on tags and custom security attributes (Preview)](conditions-custom-security-attributes.md)-- [Principal does not appear in Attribute source (Preview)](conditions-troubleshoot.md#symptomprincipal-does-not-appear-in-attribute-source)-- [Add or deactivate custom security attributes in Microsoft Entra ID (Preview)](../active-directory/fundamentals/custom-security-attributes-add.md)
+- [Add or deactivate custom security attributes in Microsoft Entra ID](/entra/fundamentals/custom-security-attributes-add)
+- [Allow read access to blobs based on tags and custom security attributes](conditions-custom-security-attributes.md)
+- [Principal does not appear in Attribute source](conditions-troubleshoot.md#symptomprincipal-does-not-appear-in-attribute-source)
#### Request attributes
search Hybrid Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-how-to-query.md
api-key: {{admin-api-key}}
+ The filter mode can affect the number of results available to the semantic reranker. As a best practice, it's smart to give the semantic ranker the maximum number of documents (50). If prefilters or postfilters are too selective, you might be underserving the semantic ranker by giving it fewer than 50 documents to work with.

+ Prefiltering is applied before query execution. If prefiltering reduces the search area to 100 documents, the vector query executes over the "contentVector" field for those 100 documents, returning the k=50 best matches. Those 50 matching documents then pass to RRF for merged results, and then to the semantic ranker.

+ Postfiltering is applied after query execution. If k=50 returns 50 matches on the vector query side, the postfilter is applied to those 50 matches, removing results that don't meet the filter criteria and possibly leaving you with fewer than 50 documents to pass to the semantic ranker.

## Configure a query response

When you're setting up the hybrid query, think about the response structure. The response is a flattened rowset. Parameters on the query determine which fields are in each row and how many rows are in the response. The search engine ranks the matching documents and returns the most relevant results.
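To make the filter modes concrete, here's a hedged sketch of a hybrid query body that sets `vectorFilterMode`. The filter expression, the `contentVector` field, the semantic configuration name, and the truncated three-value vector are placeholders; the body shape follows the Search POST REST API used elsewhere in this article:

```json
{
  "search": "historic hotel walk to restaurants and shopping",
  "filter": "ParkingIncluded eq true",
  "vectorFilterMode": "preFilter",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [ 0.01, 0.02, 0.03 ],
      "fields": "contentVector",
      "k": 50
    }
  ],
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config",
  "top": 10
}
```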
search Search Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-explorer.md
Previously updated : 11/15/2023 Last updated : 11/16/2023 - mode-ui - ignite-2023
Before you begin, have the following prerequisites in place:
1. Open Search explorer from the command bar:
- :::image type="content" source="medi2.png" alt-text="Search explorer command in portal" border="true":::
+ :::image type="content" source="medi2.png" alt-text="Screenshot of the Search explorer command in portal." border="true":::
Or use the embedded **Search explorer** tab on an open index:
- :::image type="content" source="media/search-explorer/search-explorer-tab.png" alt-text="Search explorer tab" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-tab.png" alt-text="Screenshot of the Search explorer tab." border="true":::
+
+1. To specify syntax and choose an API version, select **JSON view**. The examples in this article assume JSON view throughout.
+
+ :::image type="content" source="media/search-explorer/search-explorer-json-view.png" alt-text="Screenshot of the JSON view selector." border="true":::
## Unspecified query
-In Search explorer, POST requests are formulated internally using the [Search REST API](/rest/api/searchservice/search-documents), with responses returned as verbose JSON documents.
+In Search explorer, POST requests are formulated internally using the [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true), with responses returned as verbose JSON documents.
For a first look at content, execute an empty search by clicking **Search** with no terms provided. An empty search is useful as a first query because it returns entire documents so that you can review document composition. On an empty search, there's no search score and documents are returned in arbitrary order (`"@search.score": 1` for all documents). By default, 50 documents are returned in a search request.
-Equivalent syntax for an empty search is `*` or `search=*`.
-
- ```http
- search=*
+Equivalent syntax for an empty search is `*` or `"search": "*"`.
+
+ ```json
+ {
+ "search": "*"
+ }
``` **Results**
Equivalent syntax for an empty search is `*` or `search=*`.
## Free text search
-Free-form queries, with or without operators, are useful for simulating user-defined queries sent from a custom app to Azure AI Search. Only those fields attributed as **Searchable** in the index definition are scanned for matches.
+Free-form queries, with or without operators, are useful for simulating user-defined queries sent from a custom app to Azure AI Search. Only those fields attributed as "searchable" in the index definition are scanned for matches.
Notice that when you provide search criteria, such as query terms or expressions, search rank comes into play. The following example illustrates a free text search. The "@search.score" is a relevance score computed for the match using the [default scoring algorithm](index-ranking-similarity.md#default-scoring-algorithm).
- ```http
- Seattle apartment "Lake Washington" miele OR thermador appliance
+ ```json
+ {
+ "search": "Seattle townhouse `Lake Washington` miele OR thermador appliance"
+ }
``` **Results** You can use Ctrl-F to search within results for specific terms of interest.
- :::image type="content" source="media/search-explorer/search-explorer-example-freetext.png" alt-text="Free text query example" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-freetext.png" alt-text="Screenshot of a free text query example." border="true":::
## Count of matching documents
-Add **$count=true** to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input. Recall that the service returns the top 50 matches by default, so the count might indicate more matches in the index than what's returned in the results.
+Add `"count": true` to get the number of matches found in an index. On an empty search, count is the total number of documents in the index. On a qualified search, it's the number of documents matching the query input. Recall that the service returns the top 50 matches by default, so the count might indicate more matches in the index than what's returned in the results.
- ```http
- $count=true
+ ```json
+ {
+ "search": "Seattle townhouse `Lake Washington` miele OR thermador appliance",
+ "count": true
+ }
``` **Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-count.png" alt-text="Count of matching documents in index" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-count.png" alt-text="Screenshot of a count example." border="true":::
## Limit fields in search results
-Add [**$select**](search-query-odata-select.md) to limit results to the explicitly named fields for more readable output in **Search explorer**. To keep the previously mentioned parameters in the query, use **&** to separate each parameter.
+Add ["select"`](search-query-odata-select.md) to limit results to the explicitly named fields for more readable output in **Search explorer**. Only fields marked as "retrievable" in the search index can show up in results.
- ```http
- search=seattle condo&$select=listingId,beds,baths,description,street,city,price&$count=true
+ ```json
+ {
+ "search": "seattle condo",
+ "count": true,
+ "select": "listingId, beds, baths, description, street, city, price"
+ }
``` **Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-selectfield.png" alt-text="Restrict fields in search results" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-selectfield.png" alt-text="Screenshot of restrict fields in search results example." border="true":::
## Return next batch of results
-Azure AI Search returns the top 50 matches based on the search rank. To get the next set of matching documents, append **$top=100,&$skip=50** to increase the result set to 100 documents (default is 50, maximum is 1000), skipping the first 50 documents. You can check the document key (listingID) to identify a document.
+Azure AI Search returns the top 50 matches based on the search rank. To get the next set of matching documents, append `"top": 100` and `"skip": 50` to increase the result set to 100 documents (default is 50, maximum is 1000), skipping the first 50 documents. You can check the document key (listingID) to identify a document.
Recall that you need to provide search criteria, such as a query term or expression, to get ranked results. Notice that search scores decrease the deeper you reach into search results.
- ```http
- search=seattle condo&$select=listingId,beds,baths,description,street,city,price&$count=true&$top=100&$skip=50
+ ```json
+ {
+ "search": "seattle condo",
+ "count": true,
+ "select": "listingId, beds, baths, description, street, city, price",
+ "top": 100,
+ "skip": 50
+ }
``` **Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-topskip.png" alt-text="Return next batch of search results" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-topskip.png" alt-text="Screenshot of returning next batch of search results example." border="true":::
## Filter expressions (greater than, less than, equal to)
-Use the [**$filter**](search-query-odata-filter.md) parameter when you want to specify precise criteria rather than free text search. The field must be attributed as **Filterable** in the index. This example searches for bedrooms greater than 3:
+Use the [`filter`](search-query-odata-filter.md) parameter to specify inclusion or exclusion criteria. The field must be attributed as "filterable" in the index. This example searches for bedrooms greater than 3:
- ```http
- search=seattle condo&$filter=beds gt 3&$count=true
+ ```json
+ {
+ "search": "seattle condo",
+ "count": true,
+ "select": "listingId, beds, baths, description",
+ "filter": "beds gt 3"
+ }
``` **Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-filter.png" alt-text="Filter by criteria" border="true":::
+ :::image type="content" source="media/search-explorer/search-explorer-example-filter.png" alt-text="Screenshot of a filter example." border="true":::
## Sorting results
-Add [**$orderby**](search-query-odata-orderby.md) to sort results by another field besides search score. The field must be attributed as **Sortable** in the index. An example expression you can use to test this out is:
+Add [`orderby`](search-query-odata-orderby.md) to sort results by another field besides search score. The field must be attributed as "sortable" in the index. In situations where the filtered value is identical (for example, same price), the order is arbitrary, but you can add more criteria for deeper sorting. An example expression you can use to test this out is:
- ```http
- search=seattle condo&$select=listingId,beds,price&$filter=beds gt 3&$count=true&$orderby=price asc
+ ```json
+ {
+ "search": "seattle condo",
+ "count": true,
+ "select": "listingId, price, beds, baths, description",
+ "filter": "beds gt 3",
+ "orderby": "price asc"
+ }
``` **Results**
- :::image type="content" source="media/search-explorer/search-explorer-example-ordery.png" alt-text="Change the sort order" border="true":::
-
-Both **$filter** and **$orderby** expressions are OData constructions. For more information, see [Filter OData syntax](/rest/api/searchservice/odata-expression-syntax-for-azure-search).
-
-<a name="start-search-explorer"></a>
+ :::image type="content" source="media/search-explorer/search-explorer-example-orderby.png" alt-text="Screenshot of a sorting example." border="true":::
## Takeaways In this quickstart, you used **Search explorer** to query an index using the REST API.
-+ Results are returned as verbose JSON documents so that you can view document construction and content, in entirety. The **$select** parameter in a query expression can limit which fields are returned.
++ Results are returned as verbose JSON documents so that you can view document construction and content, in entirety. The `select` parameter in a query expression can limit which fields are returned.
-+ Search results are composed of all fields marked as **Retrievable** in the index. To view field attributes in the portal, select *realestate-us-sample* in the **Indexes** list on the search overview page, and then open the **Fields** tab.
++ Search results are composed of all fields marked as "retrievable" in the index. Select the adjacent **Fields** tab to review attributes. + Keyword searches, similar to what you might enter in a commercial web browser, are useful for testing an end-user experience. For example, assuming the built-in real estate sample index, you could enter "Seattle apartments lake washington", and then use Ctrl-F to find terms within the search results.
In this quickstart, you used **Search explorer** to query an index using the RES
## Clean up resources
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+When you're working in your own subscription, it's a good idea at the end of a project to decide whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
If you're using a free service, remember that you're limited to three indexes, i
## Next steps
-To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that use more parts of the API. The [Search REST API](/rest/api/searchservice/search-documents) is especially helpful for learning and exploration.
+To learn more about query structures and syntax, use Postman or an equivalent tool to create query expressions that use more parts of the API. The [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true) is especially helpful for learning and exploration.
> [!div class="nextstepaction"] > [Create a basic query in Postman](search-get-started-rest.md)
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
Azure AI Search is a billable resource. If it's no longer needed, delete it from
## Next steps
-This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for integrated vectorization. If you want to explore each step in detail, try the [integrated vectorization samples](https://github.com/Azure/cognitive-search-vector).
+This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for integrated vectorization. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-integrated-vectorization-sample.ipynb).
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Previously updated : 09/01/2023 Last updated : 11/16/2023 - mode-ui - ignite-2023
# Quickstart: Create a search index in the Azure portal
-In this Azure AI Search quickstart, you create your first _search index_ by using the [**Import data** wizard](search-import-data-portal.md) and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index to help you write interesting queries within minutes.
+In this Azure AI Search quickstart, create your first _search index_ by using the [**Import data** wizard](search-import-data-portal.md) and a built-in sample data source consisting of fictitious hotel data hosted by Microsoft. The wizard guides you through the creation of a no-code search index to help you write interesting queries within minutes.
-Search queries iterate over an index that contains searchable data, metadata, and other constructs that optimize certain search behaviors. An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically. In the Azure portal, you can create them through the **Import data** wizard. For more information, see [Indexes in Azure AI Search](search-what-is-an-index.md) and [Indexers in Azure AI Search](search-indexer-overview.md) .
+The wizard creates multiple objects on your search service: not just a [searchable index](search-what-is-an-index.md), but also an [indexer](search-indexer-overview.md) and a data source connection for automated data retrieval. At the end of this quickstart, we review each object.
> [!NOTE]
-> The **Import data** wizard includes options for AI enrichment that aren't reviewed in this quickstart. You can use these options to extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md).
+> The **Import data** wizard includes options for OCR, text translation, and other AI enrichments that aren't covered in this quickstart. For a similar walkthrough that focuses on AI enrichment, see [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md).
## Prerequisites
Search queries iterate over an index that contains searchable data, metadata, an
Many customers start with the free service. The free tier is limited to three indexes, three data sources, and three indexers. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
-Check the **Overview** page for the service to see how many indexes, indexers, and data sources you already have.
+Check the **Overview > Usage** tab for the service to see how many indexes, indexers, and data sources you already have.
-## Create and load an index
-
-Azure AI Search uses an indexer by using the **Import data** wizard. The hotels-sample data set is hosted on Microsoft on Azure Cosmos DB and accessed over an internal connection. You don't need your own Azure Cosmos DB account or source files to access the data.
-
-### Start the wizard
-
-To get started, browse to your Azure AI Search service in the Azure portal and open the **Import data** wizard.
+## Start the wizard
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure AI Search service.
-1. On the **Overview** page, select **Import data** to create and populate a search index.
+1. On the **Overview** page, select **Import data** to start the wizard.
:::image type="content" source="medi.png" alt-text="Screenshot that shows how to open the Import data wizard in the Azure portal.":::
- The **Import data** wizard opens.
+## Create and load an index
+
+In this section, create and load an index in four steps.
### Connect to a data source
-The next step is to connect to a data source to use for the search index.
+The wizard creates a data source connection to sample data hosted by Microsoft on Azure Cosmos DB. This sample data is accessed over an internal connection. You don't need your own Azure Cosmos DB account or source files to run this quickstart.
-1. In the **Import data** wizard on the **Connect to your data** tab, expand the **Data Source** dropdown list and select **Samples**.
+1. On **Connect to your data**, expand the **Data Source** dropdown list and select **Samples**.
1. In the list of built-in samples, select **hotels-sample**.
- :::image type="content" source="media/search-get-started-portal/import-hotels-sample.png" alt-text="Screenshot that shows how to select the hotels-sample data source in the Import data wizard." border="false":::
-
- In this quickstart, you use a built-in data source. If you want to create your own data source, you need to specify a name, type, and connection information. After you create a data source, it can be reused in other import operations.
+ :::image type="content" source="media/search-get-started-portal/import-hotels-sample.png" alt-text="Screenshot that shows how to select the hotels-sample data source in the Import data wizard.":::
1. Select **Next: Add cognitive skills (Optional)** to continue. ### Skip configuration for cognitive skills
-The **Import data** wizard supports the creation of an AI-enrichment pipeline for incorporating the Azure AI services algorithms into indexing. For more information, see [AI enrichment in Azure AI Search](cognitive-search-concept-intro.md).
+The **Import data** wizard supports the creation of a skillset for [AI enrichment](cognitive-search-concept-intro.md) during indexing.
1. For this quickstart, ignore the AI enrichment configuration options on the **Add cognitive skills** tab.
The **Import data** wizard supports the creation of an AI-enrichment pipeline fo
:::image type="content" source="media/search-get-started-portal/skip-cognitive-skills.png" alt-text="Screenshot that shows how to Skip to the Customize target index tab in the Import data wizard."::: > [!TIP]
-> If you want to try an AI-indexing example, see the following articles:
-> - [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md)
-> - [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
+> Interested in AI enrichment? Try this [Quickstart: Create a skillset in the Azure portal](cognitive-search-quickstart-blob.md)
### Configure the index
-The Azure AI Search service generates a schema for the built-in hotels-sample index. Except for a few advanced filter examples, queries in the documentation and samples that target the hotels-sample index run on this index definition. The definition is shown on the **Customize target index** tab in the **Import data** wizard:
-
+The wizard infers a schema for the built-in hotels-sample index. Follow these steps to configure the index:
-Typically, in a code-based exercise, index creation is completed prior to loading data. The **Import data** wizard condenses these steps by generating a basic index for any data source it can crawl.
+1. Accept the system-generated values for the **Index name** (_hotels-sample-index_) and **Key** field (_HotelId_).
-At a minimum, the index requires an **Index name** and a collection of **Fields**. One field must be marked as the _document key_ to uniquely identify each document. The index **Key** provides the unique document identifier. The value is always a string. If you want autocomplete or suggested queries, you can specify language **Analyzers** or **Suggesters**.
+1. Accept the system-generated values for all field attributes.
-Each field has a name, data type, and _attributes_ that control how to use the field in the search index. The **Customize target index** tab uses checkboxes to enable or disable the following attributes for all fields or specific fields:
+ > [!IMPORTANT]
+ > If you rerun the wizard and use an existing hotels-sample data source, the index isn't configured with default attributes.
+ > You have to manually select attributes on future imports.
-- **Retrievable**: Include the field contents in the search index.-- **Filterable**: Allow the field contents to be used as filters for the search index.-- **Sortable**: Make the field contents available for sorting the search index.-- **Facetable**: Use the field contents for faceted navigation structure.-- **Searchable**: Include the field contents in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
+1. Select **Next: Create an indexer** to continue.
-The storage requirements for the index can vary as a result of attribute selection. For example, enabling a field as **Filterable** requires more storage, but enabling a field as **Retrievable** doesn't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
-By default, the **Import data** wizard scans the data source for unique identifiers as the basis for the **Key** field. Strings are attributed as **Retrievable** and **Searchable**. Integers are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
+At a minimum, the index requires an **Index name** and a collection of **Fields**. One field must be marked as the _document key_ to uniquely identify each document. The value is always a string. The wizard scans for unique string fields and chooses one for the key.
-Follow these steps to configure the index:
+Each field has a name, data type, and _attributes_ that control how to use the field in the search index. Checkboxes enable or disable the following attributes:
-1. Accept the system-generated values for the **Index name** (_hotels-sample-index_) and **Key** field (_HotelId_).
+- **Retrievable**: Fields returned in a query response.
+- **Filterable**: Fields that accept a filter expression.
+- **Sortable**: Fields that accept an orderby expression.
+- **Facetable**: Fields used in a faceted navigation structure.
+- **Searchable**: Fields used in full text search. Strings are searchable. Numeric fields and Boolean fields are often marked as not searchable.
-1. Accept the system-generated values for all field attributes.
+Strings are attributed as **Retrievable** and **Searchable**. Integers are attributed as **Retrievable**, **Filterable**, **Sortable**, and **Facetable**.
- > [!IMPORTANT]
- > If you rerun the wizard and use an existing hotels-sample data source, the index isn't configured with default attributes.
- > You have to manually select attributes on future imports.
+Attributes affect storage. **Filterable** fields consume extra storage, but **Retrievable** fields don't. For more information, see [Example demonstrating the storage implications of attributes and suggesters](search-what-is-an-index.md#example-demonstrating-the-storage-implications-of-attributes-and-suggesters).
-1. Select **Next: Create an indexer** to continue.
+If you want autocomplete or suggested queries, specify language **Analyzers** or **Suggesters**.
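Outside the wizard, the same attributes appear as booleans on each field in the index definition. A minimal sketch of two fields, assuming the hotels-sample schema (the analyzer choice is illustrative):

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "retrievable": true, "filterable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "retrievable": true, "sortable": true, "analyzer": "en.lucene" }
  ]
}
```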
-### Configure the indexer
+### Configure and run the indexer
-The last step is to configure the indexer for the search index. This object defines an executable process. You can configure the indexer to run on a recurring schedule.
+The last step configures and runs the indexer. This object defines an executable process. The data source, index, and indexer are created in this step.
1. Accept the system-generated value for the **Indexer name** (_hotels-sample-indexer_).
-1. For this quickstart, use the default option to run the indexer once, immediately.
+1. For this quickstart, use the default option to run the indexer once, immediately. The hosted data is static so there's no change tracking enabled for it.
1. Select **Submit** to create and simultaneously run the indexer.
- :::image type="content" source="media/search-get-started-portal/hotels-sample-indexer.png" alt-text="Screenshot that shows how to configure the indexer for the hotels-sample data source in the Import data wizard." border="false":::
+ :::image type="content" source="media/search-get-started-portal/hotels-sample-indexer.png" alt-text="Screenshot that shows how to configure the indexer for the hotels-sample data source in the Import data wizard.":::
## Monitor indexer progress
-After you complete the **Import data** wizard, you can monitor creation of the indexer or index. The service **Overview** page provides links to the resources created in your Azure AI Search service.
+You can monitor creation of the indexer or index in the portal. The service **Overview** page provides links to the resources created in your Azure AI Search service.
-1. Go to the **Overview** page for your Azure AI Search service in the Azure portal.
+1. On the left, select **Indexers**.
-1. Select **Usage** to see the summary details for the service resources.
-
-1. In the **Indexers** box, select **View indexers**.
-
- :::image type="content" source="media/search-get-started-portal/view-indexers.png" alt-text="Screenshot that shows how to check the status of the indexer creation process in the Azure portal." lightbox="media/search-get-started-portal/view-indexers.png":::
+ :::image type="content" source="media/search-get-started-portal/indexers-status.png" alt-text="Screenshot that shows the creation of the indexer in progress in the Azure portal.":::
It can take a few minutes for the page results to update in the Azure portal. You should see the newly created indexer in the list with a status of _In progress_ or _Success_. The list also shows the number of documents indexed.
- :::image type="content" source="media/search-get-started-portal/indexers-status.png" alt-text="Screenshot that shows the creation of the indexer in progress in the Azure portal.":::
- ## Check search index results
-On the **Overview** page for the service, you can do a similar check for the index creation.
+1. On the left, select **Indexes**.
-1. In the **Indexes** box, select **View indexes**.
+1. Select **hotels-sample-index**.
Wait for the Azure portal page to refresh. You should see the index with a document count and storage size. :::image type="content" source="media/search-get-started-portal/indexes-list.png" alt-text="Screenshot of the Indexes list on the Azure AI Search service dashboard in the Azure portal.":::
-1. To view the schema for the new index, select the index name, **hotels-sample-index**.
+1. Select the **Fields** tab to view the index schema.
-1. On the **hotels-sample-index** index page, select the **Fields** tab to view the index schema.
-
- If you're writing queries and need to check whether a field is **Filterable** or **Sortable**, use this tab to see the attribute settings.
+ Check to see which fields are **Filterable** or **Sortable** so that you know what queries to write.
:::image type="content" source="media/search-get-started-portal/index-schema-definition.png" alt-text="Screenshot that shows the schema definition for an index in the Azure AI Search service in the Azure portal."::: ## Add or change fields
-On the **Fields** tab, you can create a new field for a schema definition with the **Add field** option. Specify the field name, the data type, and attribute settings.
-
-While you can always create a new field, in most cases, you can't change existing fields. Existing fields have a physical representation in your search service so they aren't modifiable, not even in code. To fundamentally change an existing field, you need to create a new index, which replaces the original. Other constructs, such as scoring profiles and CORS options, can be added at any time.
+On the **Fields** tab, you can create a new field using **Add field** with a name, a [supported data type](/rest/api/searchservice/supported-data-types), and attribute settings.
+
+Changing existing fields is harder. Existing fields have a physical representation in the index so they aren't modifiable, not even in code. To fundamentally change an existing field, you need to create a new field that replaces the original. Other constructs, such as scoring profiles and CORS options, can be added to an index at any time.
To clearly understand what you can and can't edit during index design, take a minute to view the index definition options. Grayed options in the field list indicate values that can't be modified or deleted.
-## <a name="query-index"></a> Query with Search explorer
+## Query with Search explorer
+
+You now have a search index that can be queried with [**Search explorer**](search-explorer.md). **Search explorer** sends REST calls that conform to the [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true). The tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
+
+1. On the **Search explorer** tab, enter text to search on.
-You now have a search index that can be queried with the **Search explorer** tool in Azure AI Search. **Search explorer** sends REST calls that conform to the [Search Documents REST API](/rest/api/searchservice/search-documents). The tool supports [simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and [full Lucene query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search).
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-string.png" alt-text="Screenshot that shows how to enter and run a query in the Search Explorer tool.":::
-You can access the tool from the **Search explorer** tab on the index page and from the **Overview** page for the service.
+1. Use the **Mini-map** to jump quickly to nonvisible areas of the output.
-1. Go to the **Overview** page for your Azure AI Search service in the Azure portal, and select **Search explorer**.
+ :::image type="content" source="media/search-get-started-portal/search-explorer-query-results.png" alt-text="Screenshot that shows long results for a query in the Search Explorer tool and the mini-map.":::
- :::image type="content" source="media/search-get-started-portal/open-search-explorer.png" alt-text="Screenshot that shows how to open the Search Explorer tool from the Overview page for the Azure AI Search service in the Azure portal.":::
+1. To specify syntax, switch to the JSON view.
-1. In the **Index** dropdown list, select the new index, **hotels-sample-index**.
+ :::image type="content" source="media/search-get-started-portal/search-explorer-change-view.png" alt-text="Screenshot of the JSON view selector.":::
- :::image type="content" source="media/search-get-started-portal/search-explorer-change-index.png" alt-text="Screenshot that shows how to select an index in the Search Explorer tool in the Azure portal.":::
+## Example queries for hotels sample index
- The **Request URL** box updates to show the link target with the selected index and API version.
+The following examples assume the JSON view and the 2023-11-01 REST API version.
-1. In the **Query string** box, enter a query string.
+### Filter examples
- For this quickstart, you can choose a query string from the examples provided in the [Run more example queries](#run-more-example-queries) section. The following example uses the query `search=beach &$filter=Rating gt 4`.
+Parking, tags, renovation date, rating, and location are filterable.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-string.png" alt-text="Screenshot that shows how to enter and run a query in the Search Explorer tool.":::
+```json
+{
+ "search": "beach OR spa",
+ "select": "HotelId, HotelName, Description, Rating",
+ "count": true,
+ "top": 10,
+ "filter": "Rating gt 4"
+}
+```
- To change the presentation of the query syntax, use the **View** dropdown menu to switch between **Query view** and **JSON view**.
+Boolean filters assume "true" by default.
-1. Select **Search** to run the query.
+```json
+{
+ "search": "beach OR spa",
+ "select": "HotelId, HotelName, Description, Rating",
+ "count": true,
+ "top": 10,
+ "filter": "ParkingIncluded"
+}
+```
- The **Results** box updates to show the query results. For long results, use the **Mini-map** for the **Results** box to jump quickly to nonvisible areas of the output.
+Geospatial search is filter-based. The `geo.distance` function filters all results for positional data based on the specified `Location` and `geography'POINT` coordinates. The query seeks hotels that are within 5 kilometers of the coordinates `-122.12 47.67` (longitude, latitude), which is "Redmond, Washington, USA." The query returns the total number of matches (`"count": true`) with the hotel names and address locations.
- :::image type="content" source="media/search-get-started-portal/search-explorer-query-results.png" alt-text="Screenshot that shows long results for a query in the Search Explorer tool and the mini-map.":::
+```json
+{
+ "search": "*",
+ "select": "HotelName, Address/City, Address/StateProvince",
+ "count": true,
+ "top": 10,
+ "filter": "geo.distance(Location, geography'POINT(-122.12 47.67)') le 5"
+}
+```
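Tags and renovation date, also listed as filterable, use collection and date syntax. Here's a hedged example assuming the `Tags` string collection and `LastRenovationDate` field from the built-in hotels sample:

```json
{
  "search": "*",
  "select": "HotelName, Tags, LastRenovationDate",
  "count": true,
  "filter": "Tags/any(t: t eq 'wifi') and LastRenovationDate ge 2015-01-01T00:00:00Z"
}
```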
-For more information, see [Quickstart: Use Search explorer to run queries in the Azure portal](search-explorer.md).
+### Full Lucene syntax examples
-## Run more example queries
+The default syntax is [simple syntax](query-simple-syntax.md), but if you want fuzzy search, term boosting, or regular expressions, specify the [full syntax](query-lucene-syntax.md).
-The queries in the following table are designed for searching the hotels-sample index with **Search Explorer**. The results are returned as verbose JSON documents. All fields marked as **Retrievable** in the index can appear in the results.
+```json
+{
+ "queryType": "full",
+ "search": "seatle~",
+ "select": "HotelId, HotelName,Address/City, Address/StateProvince",
+ "count": true
+}
+```
-| Query syntax | Query type | Description | Results |
-| | | | |
-| `search=spa` | Full text query | The `search=` parameter searches for specific keywords. | The query seeks hotel data that contains the keyword `spa` in any searchable field in the document. |
-| `search=beach &$filter=Rating gt 4` | Filtered query | The `filter` parameter filters on the supplied conditions. | The query seeks beach hotels with a rating value greater than four. |
-| `search=spa &$select=HotelName,Description,Tags &$count=true &$top=10` | Parameterized query | The ampersand symbol `&` appends search parameters, which can be specified in any order. <br> - The `$select` parameter returns a subset of fields for more concise search results. <br> - The `$count=true` parameter returns the total count of all documents that match the query. <br> - The `$top` parameter returns the specified number of highest ranked documents out of the total. | The query seeks the top 10 spa hotels and displays their names, descriptions, and tags. <br><br> By default, Azure AI Search returns the first 50 best matches. You can increase or decrease the amount by using this parameter. |
-| `search=* &facet=Category &$top=2` | Facet query on a string value | The `facet` parameter returns an aggregated count of documents that match the specified field. <br> - The specified field must be marked as **Facetable** in the index. <br> - On an empty or unqualified search, all documents are represented. | The query seeks the aggregated count for the `Category` field and displays the top 2. |
-| `search=spa &facet=Rating`| Facet query on a numeric value | The `facet` parameter returns an aggregated count of documents that match the specified field. <br> - Although the `Rating` field is a numeric value, it can be specified as a facet because it's marked as **Retrievable**, **Filterable**, and **Facetable** in the index. | The query seeks spa hotels for the `Rating` field data. The `Rating` field has numeric values (1 through 5) that are suitable for grouping results by each value. |
-| `search=beach &highlight=Description &$select=HotelName, Description, Category, Tags` | Hit highlighting | The `highlight` parameter applies highlighting to matching instances of the specified keyword in the document data. | The query seeks and highlights instances of the keyword `beach` in the `Description` field, and displays the corresponding hotel names, descriptions, category, and tags. |
-| Original: `search=seatle` <br><br> Adjusted: `search=seatle~ &queryType=full` | Fuzzy search | By default, misspelled query terms like `seatle` for `Seattle` fail to return matches in a typical search. The `queryType=full` parameter invokes the full Lucene query parser, which supports the tilde `~` operand. When these parameters are present, the query performs a fuzzy search for the specified keyword. The query seeks matching results along with results that are similar to but not an exact match to the keyword. | The original query returns no results because the keyword `seatle` is misspelled. <br><br> The adjusted query invokes the full Lucene query parser to match instances of the term `seatle~`. |
-| `$filter=geo.distance(Location, geography'POINT(-122.12 47.67)') le 5 &search=* &$select=HotelName, Address/City, Address/StateProvince &$count=true` | Geospatial search | The `$filter=geo.distance` parameter filters all results for positional data based on the specified `Location` and `geography'POINT` coordinates. | The query seeks hotels that are within 5 kilometers of the latitude longitude coordinates `-122.12 47.67`, which is "Redmond, Washington, USA." The query displays the total number of matches `&$count=true` with the hotel names and address locations. |
+By default, misspelled query terms like `seatle` for `Seattle` fail to return matches in a typical search. The `"queryType": "full"` parameter invokes the full Lucene query parser, which supports the tilde `~` operand. When this parameter and the `~` operand are present, the query performs a fuzzy search for the specified keyword. The query seeks matching results along with results that are similar to but not an exact match to the keyword.
-Take a minute to try a few of these example queries for your index. For more information about queries, see [Querying in Azure AI Search](search-query-overview.md).
+Take a minute to try a few of these example queries for your index. To learn more about queries, see [Querying in Azure AI Search](search-query-overview.md).
## Clean up resources
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
- ignite-2023 Previously updated : 07/25/2023 Last updated : 11/16/2023 # Import data wizard in Azure AI Search
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
This article explored full text search in the context of Azure AI Search. We hop
## Next steps
-+ Build the sample index, try out different queries and review results. For instructions, see [Build and query an index in the portal](search-get-started-portal.md#query-index).
++ Build the sample index, try out different queries and review results. For instructions, see [Build and query an index in the portal](search-get-started-portal.md). + Try other query syntax from the [Search Documents](/rest/api/searchservice/search-documents#bkmk_examples) example section or from [Simple query syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) in Search explorer in the portal.
search Search Query Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md
- ignite-2023 Previously updated : 10/09/2023 Last updated : 11/16/2023 # Create a full-text query in Azure AI Search
If you're building a query for [full text search](search-lucene-query-architectu
In Azure AI Search, a query is a read-only request against the docs collection of a single search index, with parameters that both inform query execution and shape the response coming back.
-A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition to the request. For example, `searchFields` scopes query execution to specific fields, `select` specifies which fields are returned in results, and `count` returns the number of matches found in the index.
+A full text query is specified in a `search` parameter and consists of terms, quote-enclosed phrases, and operators. Other parameters add more definition to the request.
-The following [Search Documents REST API](/rest/api/searchservice/search-documents) call illustrates a query request using the aforementioned parameters.
+The following [Search POST REST API](/rest/api/searchservice/documents/search-post) call illustrates a query request using the aforementioned parameters.
```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "NY +view", "queryType": "simple",
In the portal, when you open an index, you can work with Search Explorer alongsi
1. Open **Indexes** and select an index.
-1. An index opens to the [**Search explorer**](search-explorer.md) tab so that you can query right away. A query string can use simple or full syntax, with support for all query parameters (filter, select, searchFields, and so on).
+1. An index opens to the [**Search explorer**](search-explorer.md) tab so that you can query right away. Switch to **JSON view** to specify query syntax.
Here's a full text search query expression that works for the Hotels sample index:
- `search=pool spa +airport&$searchFields=Description,Tags&$select=HotelName,Description,Category&$count=true`
+ ```json
+ {
+ "search": "pool spa +airport",
+ "queryType": "simple",
+ "searchMode": "any",
+ "searchFields": "Description, Tags",
+ "select": "HotelName, Description, Tags",
+ "top": 10,
+ "count": true
+ }
+ ```
The following screenshot illustrates the query and response: :::image type="content" source="media/search-explorer/search-explorer-full-text-query-hotels.png" alt-text="Screenshot of Search Explorer with a full text query.":::
-Notice that you can change the REST API version if you require search behaviors from a specific version, or switch to **JSON view** if you want to paste in the JSON definition of a query. For more information about what a JSON definition looks like, see [Search Documents (REST)](/rest/api/searchservice/search-documents).
- ### [**REST API**](#tab/rest-text-query)
-[Postman app](https://www.postman.com/downloads/) is useful for working with the REST APIs, such as [Search Documents (REST)](/rest/api/searchservice/search-documents).
+[Postman app](https://www.postman.com/downloads/) is useful for working with the REST APIs, such as [Search Documents (REST)](/rest/api/searchservice/documents/search-post).
[Quickstart: Create a search index using REST and Postman](search-get-started-rest.md) has step-by-step instructions for setting up requests. The following example calls the REST API for full text search: ```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "NY +view", "queryType": "simple", "searchMode": "all", "searchFields": "HotelName, Description, Address/City, Address/StateProvince, Tags", "select": "HotelName, Description, Address/City, Address/StateProvince, Tags",
- "count": "true"
+ "count": true
} ```
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
Follow the procedure detailed below to use the alert details feature. These step
1. When you have finished customizing your alert details, if you're now creating the rule, continue to the next tab in the wizard. If you're editing an existing rule, select the **Review and create** tab. Once the rule validation is successful, select **Save**.
+ > [!NOTE]
+ >
+ > **Service limits**
+ > - The combined size limit for all alert details and [custom details](surface-custom-details-in-alerts.md), collectively, is **64 KB**.
+ ## Next steps In this document, you learned how to customize alert details in Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles:
+- Explore the other ways to enrich your alerts:
+ - [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md)
+ - [Surface custom event details in alerts in Microsoft Sentinel](surface-custom-details-in-alerts.md)
- Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).
sentinel Map Data Fields To Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/map-data-fields-to-entities.md
The procedure detailed below is part of the analytics rule creation wizard. It's
In this document, you learned how to map data fields to entities in Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles:
+- Explore the other ways to enrich your alerts:
+ - [Surface custom event details in alerts in Microsoft Sentinel](surface-custom-details-in-alerts.md)
+ - [Customize alert details in Microsoft Sentinel](customize-alert-details.md)
- Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).---
sentinel Surface Custom Details In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/surface-custom-details-in-alerts.md
The procedure detailed below is part of the analytics rule creation wizard. It's
> **Service limits** > - You can define **up to 20 custom details** in a single analytics rule. >
- > - The size limit for all custom details, collectively, is **2 KB**.
+ > - The combined size limit for all custom details and [alert details](customize-alert-details.md), collectively, is **64 KB**.
## Next steps In this document, you learned how to surface custom details in alerts using Microsoft Sentinel analytics rules. To learn more about Microsoft Sentinel, see the following articles:
+- Explore the other ways to enrich your alerts:
+ - [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md)
+ - [Customize alert details in Microsoft Sentinel](customize-alert-details.md)
- Get the complete picture on [scheduled query analytics rules](detect-threats-custom.md). - Learn more about [entities in Microsoft Sentinel](entities.md).
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
AzureDiagnostics:
```json { "ActivityId": "<activity id>",
- "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage",
+ "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
"ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>", "Time": "1/1/2021 8:40:06 PM +00:00", "Status": "Success | Failure", "Protocol": "AMQP | HTTP | SBMP", "AuthType": "SAS | AAD",
- "AuthId": "<AAD Application Name| SAS policy name>",
+ "AuthKey": "<AAD Application Name| SAS policy name>",
"NetworkType": "Public | Private", "ClientIp": "x.x.x.x", "Count": 1,
Resource specific table entry:
```json { "ActivityId": "<activity id>",
- "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage",
+ "ActivityName": "ConnectionOpen | Authorization | SendMessage | ReceiveMessage | PeekLockMessage",
"ResourceId": "/SUBSCRIPTIONS/xxx/RESOURCEGROUPS/<Resource Group Name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Service Bus namespace>/servicebus/<service bus name>", "TimeGenerated (UTC)": "1/1/2021 8:40:06 PM +00:00", "Status": "Success | Failure",
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
Title: Azure Service Bus messaging - queues, topics, and subscriptions
description: This article provides an overview of Azure Service Bus messaging entities (queue, topics, and subscriptions). Previously updated : 10/11/2022 Last updated : 11/15/2023 # Service Bus queues, topics, and subscriptions
Queues offer **First In, First Out** (FIFO) message delivery to one or more comp
:::image type="content" source="./media/service-bus-messaging-overview/about-service-bus-queue.png" alt-text="Image showing how Service Queues work.":::
-A key benefit of using queues is to achieve **temporal decoupling of application components**. In other words, the producers (senders) and consumers (receivers) don't have to send and receive messages at the same time. That's because messages are stored durably in the queue. Furthermore, the producer doesn't have to wait for a reply from the consumer to continue to process and send messages.
+A key benefit of using queues is to achieve **temporal decoupling of application components**. In other words, the producers (senders) and consumers (receivers) don't have to send and receive messages at the same time, because messages are stored durably in the queue. Furthermore, the producer doesn't have to wait for a reply from the consumer to continue to process and send messages.
A related benefit is **load-leveling**, which enables producers and consumers to send and receive messages at different rates. In many applications, the system load varies over time. However, the processing time required for each unit of work is typically constant. Intermediating message producers and consumers with a queue means that the consuming application only has to be able to handle average load instead of peak load. The depth of the queue grows and contracts as the incoming load varies. This capability directly saves money regarding the amount of infrastructure required to service the application load. As the load increases, more worker processes can be added to read from the queue. Each message is processed by only one of the worker processes. Furthermore, this pull-based load balancing allows for best use of the worker computers even if the worker computers with processing power pull messages at their own maximum rate. This pattern is often termed the **competing consumer** pattern.
You can specify two different modes in which consumers can receive messages from
If the application is unable to process the message for some reason, it can request the Service Bus service to **abandon** the message. Service Bus **unlocks** the message and makes it available to be received again, either by the same consumer or by another competing consumer. Secondly, there's a **timeout** associated with the lock. If the application fails to process the message before the lock timeout expires, Service Bus unlocks the message and makes it available to be received again.
- If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called **at-least once** processing. That is, each message is processed at least once. However, in certain situations the same message may be redelivered. If your scenario can't tolerate duplicate processing, add additional logic in your application to detect duplicates. For more information, see [Duplicate detection](duplicate-detection.md), which is known as **exactly once** processing.
+ If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called **at-least once** processing. That is, each message is processed at least once. However, in certain situations the same message might be redelivered. If your scenario can't tolerate duplicate processing, add extra logic in your application to detect duplicates. For more information, see [Duplicate detection](duplicate-detection.md), which is known as **exactly once** processing.
> [!NOTE] > For more information about these two modes, see [Settling receive operations](message-transfers-locks-settlement.md#settling-receive-operations).
A queue allows processing of a message by a single consumer. In contrast to queu
:::image type="content" source="./media/service-bus-messaging-overview/about-service-bus-topic.png" alt-text="Image showing a Service Bus topic with three subscriptions.":::
-The subscriptions can use additional filters to restrict the messages that they want to receive. Publishers send messages to a topic in the same way that they send messages to a queue. But, consumers don't receive messages directly from the topic. Instead, consumers receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Consumers receive messages from a subscription identically to the way they receive messages from a queue.
+The subscriptions can use more filters to restrict the messages that they want to receive. Publishers send messages to a topic in the same way that they send messages to a queue. But, consumers don't receive messages directly from the topic. Instead, consumers receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Consumers receive messages from a subscription identically to the way they receive messages from a queue.
The message-sending functionality of a queue maps directly to a topic and its message-receiving functionality maps to a subscription. Among other things, this feature means that subscriptions support the same patterns described earlier in this section regarding queues: competing consumer, temporal decoupling, load leveling, and load balancing.
Then, send messages to a topic and receive messages from subscriptions using cli
### Rules and actions
-In many scenarios, messages that have specific characteristics must be processed in different ways. To enable this processing, you can configure subscriptions to find messages that have desired properties and then perform certain modifications to those properties. While Service Bus subscriptions see all messages sent to the topic, it's possible to only copy a subset of those messages to the virtual subscription queue. This filtering is accomplished using subscription filters. Such modifications are called **filter actions**. When a subscription is created, you can supply a filter expression that operates on the properties of the message. The properties can be both the system properties (for example, **Label**) and custom application properties (for example, **StoreName**.) The SQL filter expression is optional in this case. Without a SQL filter expression, any filter action defined on a subscription will be done on all the messages for that subscription.
+In many scenarios, messages that have specific characteristics must be processed in different ways. To enable this processing, you can configure subscriptions to find messages that have desired properties and then perform certain modifications to those properties. While Service Bus subscriptions see all messages sent to the topic, it's possible to copy only a subset of those messages to the virtual subscription queue. This filtering is accomplished using subscription filters. Such modifications are called **filter actions**. When a subscription is created, you can supply a filter expression that operates on the properties of the message. The properties can be both the system properties (for example, **Label**) and custom application properties (for example, **StoreName**). The SQL filter expression is optional in this case. Without a SQL filter expression, any filter action defined on a subscription is done on all the messages for that subscription.
For a full working example, see the [TopicFilters sample](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples/TopicFilters) on GitHub. For more information about filters, see [Topic filters and actions](topic-filters.md).
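The linked sample remains the authoritative reference; as a hedged illustration, a SQL filter on a custom application property might be created with the Python administration client roughly as follows (the topic, subscription, rule name, and property value are made up for this sketch):

```python
import os
from azure.servicebus.management import ServiceBusAdministrationClient, SqlRuleFilter

connection_string = os.environ["SERVICEBUS_CONNECTION_STRING"]  # assumed name

with ServiceBusAdministrationClient.from_connection_string(connection_string) as admin:
    # Only messages whose StoreName application property equals 'Store1' are copied
    # into the 'store1' subscription's virtual queue.
    admin.create_rule(
        topic_name="orders",
        subscription_name="store1",
        rule_name="store1-only",
        filter=SqlRuleFilter("StoreName = 'Store1'"),
    )
```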
Learn more about the [JMS 2.0 entities](java-message-service-20-entities.md) and
## Next steps
-Try the samples in the language of your choice to explore Azure Service Bus features.
+Try the samples in the language of your choice:
- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) - [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
Try the samples in the language of your choice to explore Azure Service Bus feat
- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
-Find samples for the older .NET and Java client libraries below:
+For samples that use the older .NET and Java client libraries, use the following links:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Last updated 10/26/2023
# Integrate Azure App Configuration with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure App Configuration to other cloud services using Service Connector. You might still be able to connect to App Configuration using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure App Configuration to other cloud services using Service Connector. You might still be able to connect to App Configuration using other methods. This page also shows default environment variable names and values you get when you create the service connection.
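For example, a minimal Python sketch that reads a setting after Service Connector has injected the store endpoint might look like the following; the environment variable name and key are assumptions for this illustration, so check the tables later in the article for the actual defaults.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.appconfiguration import AzureAppConfigurationClient

endpoint = os.environ["AZURE_APPCONFIGURATION_ENDPOINT"]  # assumed variable name

client = AzureAppConfigurationClient(endpoint, DefaultAzureCredential())
setting = client.get_configuration_setting(key="greeting")  # assumed key
print(setting.value)
```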
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure App Configuration stores. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+Use the connection details below to connect compute services to Azure App Configuration stores. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned managed identity
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
# Integrate Azure Cosmos DB for Cassandra with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Cassandra using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection and sample code showing how to use them. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Cosmos DB for Apache Cassandra to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and Sample code
-Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect your compute services to Azure Cosmos DB for Apache Cassandra.
+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect your compute services to Azure Cosmos DB for Apache Cassandra. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-### Connect with System-assigned Managed Identity
+### System-assigned Managed Identity
| Default environment variable name | Description | Example value | |--|--|--|
Reference the connection details and sample code in the following tables, accord
Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a system-assigned managed identity. [!INCLUDE [code sample for cassandra](./includes/code-cosmoscassandra-me-id.md)]
-### Connect with User-assigned Managed Identity
+### User-assigned Managed Identity
| Default environment variable name | Description | Example value | |--|--|--|
Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra us
Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a user-assigned managed identity. [!INCLUDE [code sample for cassandra](./includes/code-cosmoscassandra-me-id.md)]
-### Connect with Connection String
+### Connection String
#### SpringBoot client type
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Previously updated : 09/19/2022 Last updated : 10/31/2023 # Integrate Azure Cosmos DB for MongoDB with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB for MongoDB using Service Connector. You might still be able to connect to the Azure Cosmos DB for MongoDB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for MongoDB to other cloud services using Service Connector. You might still be able to connect to Azure Cosmos DB for MongoDB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute services
This page shows the supported authentication types and client types for the Azur
Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
- | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps, and Azure
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information.
+Use the connection details below to connect compute services to Azure Cosmos DB. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, as well as sample code. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-### Azure App Service and Azure Container Apps
-
-#### Secret / Connection string
-
-| Default environment variable name | Description | Example value |
-|--|-|-|
-| AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
-#### System-assigned managed identity
+### System-assigned managed identity
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Azure Cosmos DB.
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
-#### User-assigned managed identity
+#### Sample code
+Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a system-assigned managed identity.
+
+### User-assigned managed identity
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Azure Cosmos DB.
| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
-#### Service principal
+#### Sample code
+Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a user-assigned managed identity.
+
+### Connection string
+
+#### SpringBoot client type
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| spring.data.mongodb.database | Your database | `<database-name>` |
+| spring.data.mongodb.uri | Your database URI | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+
+#### Other client types
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+
+#### Sample code
+Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a connection string.
+
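As a minimal Python sketch that uses the `AZURE_COSMOS_CONNECTIONSTRING` variable from the table above (the database name is a placeholder):

```python
import os
from pymongo import MongoClient

# Service Connector injects the MongoDB API connection string shown above.
client = MongoClient(os.environ["AZURE_COSMOS_CONNECTIONSTRING"])
database = client["<database-name>"]
print(database.list_collection_names())
```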
+### Service principal
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Azure Cosmos DB.
| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
-### Azure Spring Apps
-
-| Default environment variable name | Description | Example value |
-|--|-|-|
-| spring.data.mongodb.database | Your database | `<database-name>` |
-| spring.data.mongodb.uri | Your database URI | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+#### Sample code
+Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a service principal.
## Next steps
service-connector How To Integrate Cosmos Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md
# Integrate the Azure Cosmos DB for Gremlin with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for Apache Gremlin to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Gremlin in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for Apache Gremlin to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Gremlin in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection.
## Supported compute services
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
# Integrate the Azure Cosmos DB for NoSQL with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB for NoSQL using Service Connector. You might still be able to connect to Azure Cosmos DB for NoSQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Cosmos DB for NoSQL to other cloud services using Service Connector. You might still be able to connect to Azure Cosmos DB for NoSQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and Sample code
-Use the connection details below to connect your compute services to the Azure Cosmos DB for NoSQL. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`,`<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information.
+Use the connection details below to connect your compute services to the Azure Cosmos DB for NoSQL. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`,`<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned managed identity
Using a system-assigned managed identity as the authentication type is only avai
#### Sample code
-Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL using a system-assigned managed identity.
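Conceptually, connecting with a managed identity and the Python SDK looks roughly like the following sketch; the endpoint variable name, database, and container are assumptions for this illustration, and the linked include remains the documented sample.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

endpoint = os.environ["AZURE_COSMOS_RESOURCEENDPOINT"]  # assumed variable name

client = CosmosClient(endpoint, credential=DefaultAzureCredential())
container = client.get_database_client("<database-name>").get_container_client("<container-name>")
for item in container.read_all_items():
    print(item)
```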
[!INCLUDE [code for cosmos sql me id](./includes/code-cosmossql-me-id.md)] ### User-assigned managed identity
Using a user-assigned managed identity as the authentication type is only availa
#### Sample code
-Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL using a user-assigned managed identity.
[!INCLUDE [code for cosmos sql me id](./includes/code-cosmossql-me-id.md)] ### Connection string
Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
#### Sample code
-Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL using a connection string.
[!INCLUDE [code for cosmos sql](./includes/code-cosmossql-secret.md)] #### Service principal
Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
#### Sample code
-Refer to the steps and code below to Connect to Azure Cosmos DB for NoSQL.
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL using a service principal.
[!INCLUDE [code for cosmos sql me id](./includes/code-cosmossql-me-id.md)] ## Next steps
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
# Integrate the Azure Cosmos DB for Table with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for Table to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Table in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect the Azure Cosmos DB for Table to other cloud services using Service Connector. You might still be able to connect to the Azure Cosmos DB for Table in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection.
## Supported compute services
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
# Integrate Azure Event Hubs with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Event Hubs to other cloud services using Service Connector. You might still be able to connect to Event Hubs in other programming languages without using Service Connector. This page also shows default environment variable names and values or Spring Boot configuration you get when you create service connections.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Event Hubs to other cloud services using Service Connector. You might still be able to connect to Event Hubs in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create service connections.
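For illustration, a minimal Python sketch that publishes an event using a connection string injected by Service Connector might look like the following; the environment variable name and event hub name are assumptions, so check the tables later in the article for the actual defaults.

```python
import os
from azure.eventhub import EventHubProducerClient, EventData

connection_string = os.environ["AZURE_EVENTHUB_CONNECTIONSTRING"]  # assumed variable name

producer = EventHubProducerClient.from_connection_string(connection_string, eventhub_name="<event-hub-name>")
with producer:
    batch = producer.create_batch()
    batch.add(EventData("hello from Service Connector"))
    producer.send_batch(batch)
```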
## Supported compute services
Use the connection details below to connect compute services to Event Hubs. For
| spring.cloud.azure.eventhubs.credential.managed-identity-enabled | Whether to enable managed identity | `true` |
-#### SpringBoot Kafka client type
+#### Kafka-SpringBoot client type
| Default environment variable name | Description | Sample value | ||||
Refer to the steps and code below to connect to Azure Event Hubs using a system-
| spring.cloud.azure.eventhubs.credential.managed-identity-enabled | Whether to enable managed identity | `true` |
-#### SpringBoot Kafka client type
+#### Kafka-SpringBoot client type
| Default environment variable name | Description | Sample value | ||||
Refer to the steps and code below to connect to Azure Event Hubs using a user-as
> | spring.cloud.azure.storage.connection-string | Event Hubs connection string | `Endpoint=sb://servicelinkertesteventhub.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` | > | spring.cloud.azure.eventhubs.connection-string| Event Hubs connection string for Spring Cloud Azure version above 4.0| `Endpoint=sb://servicelinkertesteventhub.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
-#### SpringBoot Kafka client type
+#### Kafka-SpringBoot client type
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Refer to the steps and code below to connect to Azure Event Hubs using a connect
| spring.cloud.azure.eventhubs.credential.client-secret | Your client secret for Spring Cloud Azure version above 4.0 | `<client-secret>` | | spring.cloud.azure.eventhubs.profile.tenant-id | Your tenant ID for Spring Cloud Azure version above 4.0 | `<tenant-id>` |
-#### SpringBoot Kafka client type
+#### Kafka-SpringBoot client type
| Default environment variable name | Description | Sample value | ||||
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
> [!NOTE] > When you use Service Connector to connect your Key Vault or manage Key Vault connections, Service Connector uses your token to perform the corresponding operations.
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Key Vault to other cloud services using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, as well as sample code.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Key Vault to other cloud services using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
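For illustration, a minimal Python sketch that reads a secret once the connection is configured might look like the following; the environment variable name and secret name are assumptions for this example.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = os.environ["AZURE_KEYVAULT_RESOURCEENDPOINT"]  # assumed variable name

client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
secret = client.get_secret("<secret-name>")
print(secret.name)
```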
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
# Integrate Azure Database for MySQL with Service Connector
-This page shows the supported authentication types, client types and sample code of Azure Database for MySQL - Flexible Server using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. Also detail steps with sample code about how to make connection to the database. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
-
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Database for MySQL - Flexible Server to other cloud services using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
[!INCLUDE [Azure-database-for-mysql-single-server-deprecation](../mysql/includes/azure-database-for-mysql-single-server-deprecation.md)]
-## Supported compute service
+## Supported compute services
- Azure App Service. You can get the configurations from Azure App Service configurations. - Azure Container Apps. You can get the configurations from Azure Container Apps environment variables.
Supported authentication and clients for App Service, Container Apps, and Azure
> [!NOTE] > System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI.
-## Default environment variable names or application properties and Sample code
+## Default environment variable names or application properties and sample code
-Reference the connection details and sample code in following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for MySQL.
+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for MySQL. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned Managed Identity
Reference the connection details and sample code in following tables, according
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | |||--|
Reference the connection details and sample code in following tables, according
#### Sample code
-Refer to the steps and code below to connect to Azure Database for MySQL.
+Refer to the steps and code below to connect to Azure Database for MySQL using a system-assigned managed identity.
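The include below is the documented sample; as a hedged Python sketch of the same passwordless pattern, the app acquires a Microsoft Entra token and uses it in place of a password (the host and user variable names are assumptions for this illustration).

```python
import os
import mysql.connector
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Standard token scope for Microsoft Entra authentication to Azure Database for MySQL.
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

connection = mysql.connector.connect(
    host=os.environ["AZURE_MYSQL_HOST"],   # assumed variable name
    user=os.environ["AZURE_MYSQL_USER"],   # assumed variable name
    password=token.token,                  # access token used in place of a password
    database="<database-name>",
)  # additional TLS options may be required depending on your server configuration
```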
[!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ### User-assigned Managed Identity
Refer to the steps and code below to connect to Azure Database for MySQL.
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | |--|--||
Refer to the steps and code below to connect to Azure Database for MySQL.
#### Sample code
-Refer to the steps and code below to connect to Azure Database for MySQL.
+Refer to the steps and code below to connect to Azure Database for MySQL using a user-assigned managed identity.
[!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ### Connection String
Refer to the steps and code below to connect to Azure Database for MySQL.
| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>&password=<Uri.EscapeDataString(<MySQL-DB-password>)` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | ||-|--|
After created a `springboot` client type connection, Service Connector service w
#### Sample code
-Refer to the steps and code below to connect to Azure Database for MySQL.
+Refer to the steps and code below to connect to Azure Database for MySQL using a connection string.
[!INCLUDE [code sample for mysql secrets](./includes/code-mysql-secret.md)] ### Service Principal
Refer to the steps and code below to connect to Azure Database for MySQL.
| `AZURE_MYSQL_CONNECTIONSTRING` | JDBC MySQL connection string | `jdbc:mysql://<MySQL-DB-name>.mysql.database.azure.com:3306/<MySQL-DB-name>?sslmode=required&user=<MySQL-DB-username>` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | |--|--||
Refer to the steps and code below to connect to Azure Database for MySQL.
#### Sample code
-Refer to the steps and code below to connect to Azure Database for MySQL.
+Refer to the steps and code below to connect to Azure Database for MySQL using a service principal.
[!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ## Next steps
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
# Integrate Azure Database for PostgreSQL with Service Connector
-This page shows the supported authentication types and client types of Azure Database for PostgreSQL using Service Connector. You might still be able to connect to Azure Database for PostgreSQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection and the sample code of how to use them. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Database for PostgreSQL to other cloud services using Service Connector. You might still be able to connect to Azure Database for PostgreSQL in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure App Configuration
Supported authentication and clients for App Service, Container Apps, and Azure
## Default environment variable names or application properties and sample code
-Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for PostgreSQL.
+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for PostgreSQL. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned Managed Identity
Reference the connection details and sample code in the following tables, accord
|-|--|--| | `AZURE_POSTGRESQL_CONNECTIONSTRING` | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | |-|-|| | `spring.datasource.azure.passwordless-enabled` | Enable passwordless authentication | `true` |
Reference the connection details and sample code in the following tables, accord
#### Sample code
-Refer to the steps and code below to connect to Azure Database for PostgreSQL.
+Refer to the steps and code below to connect to Azure Database for PostgreSQL using a system-assigned managed identity.
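The include below is the documented sample; a hedged Python sketch of the same idea with `psycopg2` follows, where the host, user, and database variable names are assumptions for this illustration.

```python
import os
import psycopg2
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Standard token scope for Microsoft Entra authentication to Azure Database for PostgreSQL.
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

connection = psycopg2.connect(
    host=os.environ["AZURE_POSTGRESQL_HOST"],  # assumed variable name
    user=os.environ["AZURE_POSTGRESQL_USER"],  # assumed variable name
    password=token.token,                      # access token used in place of a password
    dbname="<database-name>",
    sslmode="require",
)
```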
[!INCLUDE [code sample for postgresql system mi](./includes/code-postgres-me-id.md)]
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
| `AZURE_POSTGRESQL_CLIENTID` | Your client ID | `<identity-client-ID>` | | `AZURE_POSTGRESQL_CONNECTIONSTRING` | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | ||-||
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
#### Sample code
-Refer to the steps and code below to connect to Azure Database for PostgreSQL.
+Refer to the steps and code below to connect to Azure Database for PostgreSQL using a user-assigned managed identity.
[!INCLUDE [code sample for postgresql user mi](./includes/code-postgres-me-id.md)] ### Connection String
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
|--|--|-| | `AZURE_POSTGRESQL_CONNECTIONSTRING` | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>&password=<password>` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | ||-||
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
#### Sample code
-Refer to the steps and code below to connect to Azure Database for PostgreSQL.
+Refer to the steps and code below to connect to Azure Database for PostgreSQL using a connection string.
[!INCLUDE [code sample for postgresql secrets](./includes/code-postgres-secret.md)] ### Service Principal
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
| `AZURE_POSTGRESQL_TENANTID` | Your tenant ID | `<tenant-ID>` | | `AZURE_POSTGRESQL_CONNECTIONSTRING` | JDBC PostgreSQL connection string | `jdbc:postgresql://<PostgreSQL-server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require&user=<username>` |
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | ||-||
Refer to the steps and code below to connect to Azure Database for PostgreSQL.
#### Sample code
-Refer to the steps and code below to connect to Azure Database for PostgreSQL.
+Refer to the steps and code below to connect to Azure Database for PostgreSQL using a service principal.
[!INCLUDE [code sample for postgresql service principal](./includes/code-postgres-me-id.md)]
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
# Integrate Azure Cache for Redis with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Cache for Redis to other cloud services using Service Connector. You might still be able to connect to Azure Cache for Redis in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, as well as sample code.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Cache for Redis to other cloud services using Service Connector. You might still be able to connect to Azure Cache for Redis in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
Use the environment variable names and application properties listed below to co
|--|-|-| | AZURE_REDIS_CONNECTIONSTRING | Jedis connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
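The value above is documented for the Jedis (Java) client; assuming the variable your connection injects holds the same `rediss://` URL, a rough Python sketch with `redis-py` looks like this:

```python
import os
import redis

# redis-py accepts rediss:// URLs directly through from_url.
client = redis.from_url(os.environ["AZURE_REDIS_CONNECTIONSTRING"])
client.set("greeting", "hello")
print(client.get("greeting"))
```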
-#### [SpringBoot](#tab/spring)
+#### [SpringBoot](#tab/springBoot)
| Application properties | Description | Example value | ||-|--|
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Last updated 08/11/2022
# Integrate Service Bus with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Service Bus to other cloud services using Service Connector. You might still be able to connect to Service Bus in other programming languages without using Service Connector.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Service Bus to other cloud services using Service Connector. You might still be able to connect to Service Bus in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create service connections.
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties
-Use the connection details below to connect compute services to Service Bus. This page also shows default environment variable names and values or Spring Boot configuration you get when you create service connections, as well as sample code. For each example below, replace the placeholder texts `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>` `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret and tenant ID. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+Use the connection details below to connect compute services to Service Bus. For each example below, replace the placeholder texts `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret, and tenant ID. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned managed identity
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
# Integrate Azure SignalR Service with Service Connector
-This article supported authentication methods and clients, and shows sample code you can use to connect Azure SignalR Service to other cloud services using Service Connector. This article also shows default environment variable name and value or Spring Boot configuration that you get when you create the service connection. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This article shows supported authentication methods and clients, and shows sample code you can use to connect Azure SignalR Service to other cloud services using Service Connector. This article also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
## Supported compute service
Supported authentication and clients for App Service and Container Apps:
## Default environment variable names or application properties Use environment variable names listed below to connect compute services to Azure SignalR Service. For each example below, replace the placeholder texts
-`<SignalR-name>`, `<access-key>`, `<client-ID>`, `<tenant-ID>`, and `<client-secret>` with your own SignalR name, access key, client ID, tenant ID and client secret.
+`<SignalR-name>`, `<access-key>`, `<client-ID>`, `<tenant-ID>`, and `<client-secret>` with your own SignalR name, access key, client ID, tenant ID and client secret. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned Managed Identity
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
# Integrate Azure SQL Database with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect compute services to Azure SQL Database using Service Connector. You might still be able to connect to Azure SQL Database using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect compute services to Azure SQL Database using Service Connector. You might still be able to connect to Azure SQL Database using other methods. This page also shows default environment variable names and values you get when you create the service connection.
## Supported compute services
Supported authentication and clients for App Service, Container Apps, and Azure
## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure SQL Database. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+Use the connection details below to connect compute services to Azure SQL Database. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
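For illustration, a minimal Python sketch with `pyodbc` might look like the following; the environment variable name is an assumption, and it presumes the injected value is an ODBC-style connection string.

```python
import os
import pyodbc

connection_string = os.environ["AZURE_SQL_CONNECTIONSTRING"]  # assumed variable name

with pyodbc.connect(connection_string) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT 1")
    print(cursor.fetchone())
```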
### System-assigned Managed Identity
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
Last updated 10/20/2023
# Integrate Azure Blob Storage with Service Connector
-This page shows the supported authentication types, client types and sample code of Azure Blob Storage using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, as well as sample code. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Blob Storage to other cloud services using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and sample code
-Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Blob Storage.
+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Blob Storage. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned managed identity For default environment variables and sample code for other authentication types, select the relevant section from the beginning of this article.
For default environment variables and sample code of other authentication type,
| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
-#### other client types
+#### Other client types
| Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
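As a minimal Python sketch that uses the `AZURE_STORAGEBLOB_CONNECTIONSTRING` variable from the table above:

```python
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGEBLOB_CONNECTIONSTRING"])
for container in service.list_containers():
    print(container.name)
```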
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure File Storage to other cloud services using Service Connector. You might still be able to connect to Azure File Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Queue Storage to other cloud services using Service Connector. You might still be able to connect to Azure Queue Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
# Integrate Azure Table Storage with Service Connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Table Storage to other cloud services using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Table Storage to other cloud services using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values you get when you create the service connection.
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure Table Storage. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code.
+Use the connection details below to connect compute services to Azure Table Storage. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
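For illustration, a minimal Python sketch using `azure-data-tables` and a managed identity might look like the following; the endpoint variable name and table name are assumptions for this example.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.data.tables import TableServiceClient

endpoint = os.environ["AZURE_STORAGETABLE_RESOURCEENDPOINT"]  # assumed variable name

service = TableServiceClient(endpoint=endpoint, credential=DefaultAzureCredential())
table = service.get_table_client("<table-name>")
for entity in table.list_entities():
    print(entity)
```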
### System-assigned managed identity
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Last updated 10/26/2023
# Integrate Azure Web PubSub with service connector
-This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Web PubSub to other cloud services using Service Connector. You might still be able to connect to App Configuration using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Web PubSub to other cloud services using Service Connector. You might still be able to connect to Azure Web PubSub using other methods. This page also shows default environment variable names and values you get when you create the service connection.
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and sample code
-Use the environment variable names and application properties listed below, according to your connection's authentication type and client type, to connect compute services to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access-key, and tenant ID.
+Use the environment variable names and application properties listed below, according to your connection's authentication type and client type, to connect compute services to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret>`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access key, and tenant ID. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
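For illustration, a minimal Python sketch that broadcasts a message to a hub might look like the following; the environment variable name and hub name are assumptions for this example.

```python
import os
from azure.messaging.webpubsubservice import WebPubSubServiceClient

client = WebPubSubServiceClient.from_connection_string(
    os.environ["AZURE_WEBPUBSUB_CONNECTIONSTRING"],  # assumed variable name
    hub="<hub-name>",
)
client.send_to_all({"message": "hello"})
```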
### System-assigned managed identity
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
For the default environment variable names, see the following articles:
* [Azure Cosmos DB for Table](../service-connector/how-to-integrate-cosmos-table.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code) * [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
-* [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
* [Azure Cosmos DB for Gremlin](../service-connector/how-to-integrate-cosmos-gremlin.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code) * [Azure Cosmos DB for Cassandra](../service-connector/how-to-integrate-cosmos-cassandra.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
description: An overview of Azure Container Storage Preview, a service built nat
Previously updated : 11/06/2023 Last updated : 11/16/2023 - references_regions
Azure Container Storage offers persistent volume support with ReadWriteOnce acce
## What's new in Azure Container Storage
-Based on feedback from customers, we've included the following capabilities in the Azure Container Storage Preview:
+Based on feedback from customers, we've included the following capabilities with the latest preview update:
-- Improve stateful application availability by using [multi-zone storage pools and ZRS disks](enable-multi-zone-redundancy.md)-- Enable server-side encryption with [customer-managed keys](use-container-storage-with-managed-disks.md#enable-server-side-encryption-with-customer-managed-keys) (Azure Disks only)-- Scale up by [resizing volumes](resize-volume.md) backed by Azure Disks and NVMe storage pools without downtime-- [Clone persistent volumes](clone-volume.md) within a storage pool
+- Improve stateful application availability by using [multi-zone storage pools and ZRS disks](enable-multi-zone-redundancy.md).
+- Enable server-side encryption with [customer-managed keys](use-container-storage-with-managed-disks.md#enable-server-side-encryption-with-customer-managed-keys) (Azure Disks only).
+- Scale up by dynamically [resizing volumes](resize-volume.md) backed by Azure Disks and NVMe storage pools without downtime.
+- [Clone persistent volumes](clone-volume.md) within a storage pool.
+- Optimize applications with Azure Linux Container Host.
+- Increase resiliency for applications using local NVMe volumes with replication. [Sign up here](https://aka.ms/NVMeReplication).
For more information on these features, email the Azure Container Storage team at azcontainerstorage@microsoft.com.
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Azure Virtual Desktop for Azure Stack HCI supports the same [Remote Desktop clie
- Windows Server 2022 - Windows Server 2019
-You must license and activate the virtual machines you use for your session hosts on Azure Stack HCI before you use them with Azure Virtual Desktop. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, you need to enable [Azure Benefits on Azure Stack HCI](/azure-stack/hci/manage/azure-benefits). Once Azure Benefits is enabled on Azure Stack HCI 23H2, Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition are activated automatically. For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
+You must license and activate the virtual machines you use for your session hosts on Azure Stack HCI before you use them with Azure Virtual Desktop. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, use [Azure verification for VMs](/azure-stack/hci/deploy/azure-verification). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
## Licensing and pricing
virtual-desktop Create Host Pools User Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-user-profile.md
Title: Azure Virtual Desktop FSLogix profile container share - Azure
-description: How to set up an FSLogix profile container for a Azure Virtual Desktop host pool using a virtual machine-based file share.
+description: How to set up an FSLogix profile container for an Azure Virtual Desktop host pool using a virtual machine-based file share.
Last updated 04/08/2022
virtual-desktop Create Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-netapp-files.md
To start using Azure NetApp Files:
1. Set up your Azure NetApp Files account by following the instructions in [Set up your Azure NetApp Files account](create-fslogix-profile-container.md#set-up-your-azure-netapp-files-account). 2. Create a capacity pool by following the instructions in [Set up a capacity pool](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). 3. Join an Active Directory connection by following the instructions in [Join an Active Directory connection](create-fslogix-profile-container.md#join-an-active-directory-connection).
-4. Create a new volume by following the instructions to [create an SMB volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md). Ensure select **Enable Continuous Availability**.
+4. Create a new volume by following the instructions to [create an SMB volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md). Ensure you select **Enable Continuous Availability**.
5. Make sure your connection to the Azure NetApp Files share works by following the instructions in [Make sure users can access the Azure NetApp Files share](create-fslogix-profile-container.md#make-sure-users-can-access-the-azure-netapp-files-share). ## Upload an MSIX image to the Azure NetApp file share
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
You can change the display name for a published remote desktop by setting a frie
## Next steps
-Now that you've customized the feed for users, you can sign in to a Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
+Now that you've customized the feed for users, you can sign in to an Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
* [Connect with Windows](./users/connect-windows.md) * [Connect with the web client](./users/connect-web.md)
virtual-desktop Delegated Access Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/delegated-access-virtual-desktop.md
Title: Delegated access in Azure Virtual Desktop - Azure
-description: How to delegate administrative capabilities on a Azure Virtual Desktop deployment, including examples.
+description: How to delegate administrative capabilities on an Azure Virtual Desktop deployment, including examples.
Last updated 04/30/2020
For a more complete list of PowerShell cmdlets each role can use, see the [Power
For a complete list of roles supported in Azure RBAC, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
-For guidelines for how to set up a Azure Virtual Desktop environment, see [Azure Virtual Desktop environment](environment-setup.md).
+For guidelines for how to set up an Azure Virtual Desktop environment, see [Azure Virtual Desktop environment](environment-setup.md).
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-office-on-wvd-master-image.md
Title: Install Office on a custom VHD image - Azure
-description: How to install and customize Office on a Azure Virtual Desktop custom image to Azure.
+description: How to install and customize Office on an Azure Virtual Desktop custom image to Azure.
Last updated 05/02/2019
This article assumes you've already created a virtual machine (VM). If not, see
This article also assumes you have elevated access on the VM, whether it's provisioned in Azure or Hyper-V Manager. If not, see [Elevate access to manage all Azure subscription and management groups](../role-based-access-control/elevate-access-global-admin.md). >[!NOTE]
->These instructions are for a Azure Virtual Desktop-specific configuration that can be used with your organization's existing processes.
+>These instructions are for an Azure Virtual Desktop-specific configuration that can be used with your organization's existing processes.
## Install Office in shared computer activation mode
virtual-desktop Key Distribution Center Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/key-distribution-center-proxy.md
Title: Set up Kerberos Key Distribution Center proxy Azure Virtual Desktop - Azure
-description: How to set up a Azure Virtual Desktop host pool to use a Kerberos Key Distribution Center proxy.
+description: How to set up an Azure Virtual Desktop host pool to use a Kerberos Key Distribution Center proxy.
Last updated 05/04/2021
# Configure a Kerberos Key Distribution Center proxy
-Security-conscious customers, such as financial or government organizations, often sign in using Smartcards. Smartcards make deployments more secure by requiring multifactor authentication (MFA). However, for the RDP portion of a Azure Virtual Desktop session, Smartcards require a direct connection, or "line of sight," with an Active Directory (AD) domain controller for Kerberos authentication. Without this direct connection, users can't automatically sign in to the organization's network from remote connections. Users in a Azure Virtual Desktop deployment can use the KDC proxy service to proxy this authentication traffic and sign in remotely. The KDC proxy allows for authentication for the Remote Desktop Protocol of a Azure Virtual Desktop session, letting the user sign in securely. This makes working from home much easier, and allows for certain disaster recovery scenarios to run more smoothly.
+Security-conscious customers, such as financial or government organizations, often sign in using Smartcards. Smartcards make deployments more secure by requiring multifactor authentication (MFA). However, for the RDP portion of an Azure Virtual Desktop session, Smartcards require a direct connection, or "line of sight," with an Active Directory (AD) domain controller for Kerberos authentication. Without this direct connection, users can't automatically sign in to the organization's network from remote connections. Users in an Azure Virtual Desktop deployment can use the KDC proxy service to proxy this authentication traffic and sign in remotely. The KDC proxy allows for authentication for the Remote Desktop Protocol of an Azure Virtual Desktop session, letting the user sign in securely. This makes working from home much easier, and allows for certain disaster recovery scenarios to run more smoothly.
However, setting up the KDC proxy typically involves assigning the Windows Server Gateway role in Windows Server 2016 or later. How do you use a Remote Desktop Services role to sign in to Azure Virtual Desktop? To answer that, let's take a quick look at the components.
This article will show you how to configure the feed in the Azure Virtual Deskto
## Requirements
-To configure a Azure Virtual Desktop session host with a KDC proxy, you'll need the following things:
+To configure an Azure Virtual Desktop session host with a KDC proxy, you'll need the following things:
- Access to the Azure portal and an Azure administrator account. - The remote client machines must be running at least Windows 10 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed. The web client isn't currently supported.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You can deploy virtual machines (VMs) to be used as session hosts from these i
If your license entitles you to use Azure Virtual Desktop, you don't need to install or apply a separate license, however if you're using per-user access pricing for external users, you need to [enroll an Azure Subscription](remote-app-streaming/per-user-access-pricing.md). You need to make sure the Windows license used on your session hosts is correctly assigned in Azure and the operating system is activated. For more information, see [Apply Windows license to session host virtual machines](apply-windows-license.md).
-For Azure Stack HCI, you must license and activate the virtual machines you use for your session hosts before you use them with Azure Virtual Desktop. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, you need to enable [Azure Benefits on Azure Stack HCI](/azure-stack/hci/manage/azure-benefits). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
+For session hosts on Azure Stack HCI, you must license and activate the virtual machines you use before you use them with Azure Virtual Desktop. For activating Windows 10 and Windows 11 Enterprise multi-session, and Windows Server 2022 Datacenter: Azure Edition, use [Azure verification for VMs](/azure-stack/hci/deploy/azure-verification). For all other OS images (such as Windows 10 and Windows 11 Enterprise, and other editions of Windows Server), you should continue to use existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
> [!TIP] > To simplify user access rights during initial development and testing, Azure Virtual Desktop supports [Azure Dev/Test pricing](https://azure.microsoft.com/pricing/dev-test/). If you deploy Azure Virtual Desktop in an Azure Dev/Test subscription, end users may connect to that deployment without separate license entitlement in order to perform acceptance tests or provide feedback.
virtual-desktop Troubleshoot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-powershell.md
Title: Azure Virtual Desktop PowerShell - Azure
-description: How to troubleshoot issues with PowerShell when you set up a Azure Virtual Desktop environment.
+description: How to troubleshoot issues with PowerShell when you set up an Azure Virtual Desktop environment.
Last updated 06/05/2020
virtual-desktop Troubleshoot Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-service-connection.md
Title: Troubleshoot service connection Azure Virtual Desktop - Azure
-description: How to resolve issues while setting up service connections in a Azure Virtual Desktop tenant environment.
+description: How to resolve issues while setting up service connections in an Azure Virtual Desktop tenant environment.
Last updated 10/15/2020
This could also happen if a CSP Provider created the subscription and then trans
## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).-- To troubleshoot issues while creating a Azure Virtual Desktop environment and host pool in a Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while creating an Azure Virtual Desktop environment and host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md). - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
Title: Azure Virtual Desktop environment host pool creation - Azure
-description: How to troubleshoot and resolve tenant and host pool issues during setup of a Azure Virtual Desktop environment.
+description: How to troubleshoot and resolve tenant and host pool issues during setup of an Azure Virtual Desktop environment.
virtual-desktop Configure Host Pool Personal Desktop Assignment Type 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-host-pool-personal-desktop-assignment-type-2019.md
Title: Azure Virtual Desktop (classic) personal desktop assignment type - Azure
-description: How to configure the assignment type for a Azure Virtual Desktop (classic) personal desktop host pool.
+description: How to configure the assignment type for an Azure Virtual Desktop (classic) personal desktop host pool.
Last updated 05/22/2020
If you need to add the session host back into the personal desktop host pool, un
## Next steps
-Now that you've configured the personal desktop assignment type, you can sign in to a Azure Virtual Desktop client to test it as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice:
+Now that you've configured the personal desktop assignment type, you can sign in to an Azure Virtual Desktop client to test it as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice:
- [Connect with the Windows Desktop client](connect-windows-2019.md) - [Connect with the web client](connect-web-2019.md)
virtual-desktop Configure Vm Gpu 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-vm-gpu-2019.md
Azure Virtual Desktop supports GPU-accelerated rendering and encoding for improved app performance and scalability. GPU acceleration is particularly crucial for graphics-intensive apps.
-Follow the instructions in this article to create a GPU optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding. This article assumes you already have a Azure Virtual Desktop tenant configured.
+Follow the instructions in this article to create a GPU optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding. This article assumes you already have an Azure Virtual Desktop tenant configured.
## Select a GPU optimized Azure virtual machine size
virtual-desktop Create Host Pools Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-arm-template.md
Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop tenant environments. Each host pool can contain an application group that users can interact with as they would on a physical desktop.
-Follow this section's instructions to create a host pool for a Azure Virtual Desktop tenant with an Azure Resource Manager template provided by Microsoft. This article will tell you how to create a host pool in Azure Virtual Desktop, create a resource group with VMs in an Azure subscription, join those VMs to the AD domain, and register the VMs with Azure Virtual Desktop.
+Follow this section's instructions to create a host pool for an Azure Virtual Desktop tenant with an Azure Resource Manager template provided by Microsoft. This article will tell you how to create a host pool in Azure Virtual Desktop, create a resource group with VMs in an Azure subscription, join those VMs to the AD domain, and register the VMs with Azure Virtual Desktop.
## What you need to run the Azure Resource Manager template
Make sure you know the following things before running the Azure Resource Manage
- Your domain join credentials. - Your Azure Virtual Desktop credentials.
-When you create a Azure Virtual Desktop host pool with the Azure Resource Manager template, you can create a virtual machine from the Azure gallery, a managed image, or an unmanaged image. To learn more about how to create VM images, see [Prepare a Windows VHD or VHDX to upload to Azure](../../virtual-machines/windows/prepare-for-upload-vhd-image.md) and [Create a managed image of a generalized VM in Azure](../../virtual-machines/windows/capture-image-resource.md).
+When you create an Azure Virtual Desktop host pool with the Azure Resource Manager template, you can create a virtual machine from the Azure gallery, a managed image, or an unmanaged image. To learn more about how to create VM images, see [Prepare a Windows VHD or VHDX to upload to Azure](../../virtual-machines/windows/prepare-for-upload-vhd-image.md) and [Create a managed image of a generalized VM in Azure](../../virtual-machines/windows/capture-image-resource.md).
## Run the Azure Resource Manager template for provisioning a new host pool
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
To successfully domain-join, do the following things on each virtual machine:
## Register the virtual machines to the Azure Virtual Desktop host pool
-Registering the virtual machines to a Azure Virtual Desktop host pool is as simple as installing the Azure Virtual Desktop agents.
+Registering the virtual machines to an Azure Virtual Desktop host pool is as simple as installing the Azure Virtual Desktop agents.
To register the Azure Virtual Desktop agents, do the following on each virtual machine:
virtual-desktop Customize Feed Virtual Desktop Users 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-feed-virtual-desktop-users-2019.md
Set-RdsRemoteDesktop -TenantName <tenantname> -HostPoolName <hostpoolname> -AppG
## Next steps
-Now that you've customized the feed for users, you can sign in to a Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
+Now that you've customized the feed for users, you can sign in to an Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
- [Connect from the Windows Desktop client](connect-windows-2019.md) - [Connect from a web browser](connect-web-2019.md)
virtual-desktop Customize Rdp Properties 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-rdp-properties-2019.md
Set-RdsHostPool -TenantName <tenantname> -Name <hostpoolname> -CustomRdpProperty
## Next steps
-Now that you've customized the RDP properties for a given host pool, you can sign in to a Azure Virtual Desktop client to test them as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice:
+Now that you've customized the RDP properties for a given host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. These next two How-tos will tell you how to connect to a session using the client of your choice:
- [Connect with the Windows Desktop client](connect-windows-2019.md) - [Connect with the web client](connect-web-2019.md)
virtual-desktop Delegated Access Virtual Desktop 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/delegated-access-virtual-desktop-2019.md
Title: Delegated access in Azure Virtual Desktop (classic) - Azure
-description: How to delegate administrative capabilities on a Azure Virtual Desktop (classic) deployment, including examples.
+description: How to delegate administrative capabilities on an Azure Virtual Desktop (classic) deployment, including examples.
Last updated 03/30/2020
You can modify the basic three cmdlets with the following parameters:
For a more complete list of PowerShell cmdlets each role can use, see the [PowerShell reference](/powershell/windows-virtual-desktop/overview).
-For guidelines for how to set up a Azure Virtual Desktop environment, see [Azure Virtual Desktop environment](environment-setup-2019.md).
+For guidelines for how to set up an Azure Virtual Desktop environment, see [Azure Virtual Desktop environment](environment-setup-2019.md).
virtual-desktop Expand Existing Host Pool 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/expand-existing-host-pool-2019.md
Follow the instructions in [Run the Azure Resource Manager template for provisio
## Next steps
-Now that you've expanded your existing host pool, you can sign in to a Azure Virtual Desktop client to test them as part of a user session. You can connect to a session with any of the following clients:
+Now that you've expanded your existing host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. You can connect to a session with any of the following clients:
- [Connect with the Windows Desktop client](connect-windows-2019.md) - [Connect with the web client](connect-web-2019.md)
virtual-desktop Host Pool Load Balancing 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/host-pool-load-balancing-2019.md
Title: Azure Virtual Desktop (classic) host pool load-balancing - Azure
-description: Host pool load-balancing methods for a Azure Virtual Desktop environment.
+description: Host pool load-balancing methods for an Azure Virtual Desktop environment.
Last updated 03/30/2020
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
-# Deploy a Azure Virtual Desktop (classic) management tool with PowerShell
+# Deploy an Azure Virtual Desktop (classic) management tool with PowerShell
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
virtual-desktop Manage Resources Using Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui.md
Last updated 03/30/2020
-# Deploy a Azure Virtual Desktop (classic) management tool with an Azure Resource Manager template
+# Deploy an Azure Virtual Desktop (classic) management tool with an Azure Resource Manager template
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
virtual-desktop Troubleshoot Service Connection 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-service-connection-2019.md
Title: Troubleshoot service connection Azure Virtual Desktop (classic) - Azure
-description: How to resolve issues when you set up client connections in a Azure Virtual Desktop (classic) tenant environment.
+description: How to resolve issues when you set up client connections in an Azure Virtual Desktop (classic) tenant environment.
Last updated 05/20/2020
If the web client is being used, confirm that there are no cached credentials is
## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md).-- To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
+- To troubleshoot issues while creating a tenant and host pool in an Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-desktop Troubleshoot Set Up Issues 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md
Title: Azure Virtual Desktop (classic) tenant host pool creation - Azure
-description: How to troubleshoot and resolve tenant and host pool issues during setup of a Azure Virtual Desktop (classic) tenant environment.
+description: How to troubleshoot and resolve tenant and host pool issues during setup of an Azure Virtual Desktop (classic) tenant environment.
virtual-desktop Troubleshoot Set Up Overview 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md
Title: Azure Virtual Desktop (classic) troubleshooting overview - Azure
-description: An overview for troubleshooting issues while setting up a Azure Virtual Desktop (classic) tenant environment.
+description: An overview for troubleshooting issues while setting up an Azure Virtual Desktop (classic) tenant environment.
Last updated 03/30/2020
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../troubleshoot-set-up-overview.md).
-This article provides an overview of the issues you may encounter when setting up a Azure Virtual Desktop tenant environment and provides ways to resolve the issues.
+This article provides an overview of the issues you may encounter when setting up an Azure Virtual Desktop tenant environment and provides ways to resolve the issues.
## Provide feedback
Use the following table to identify and resolve issues you may encounter when se
| **Issue** | **Suggested Solution** | |-|-|
-| Creating a Azure Virtual Desktop tenant | If there's an Azure outage, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/); otherwise [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **Deployment** for the problem type, then select **Issues creating a Azure Virtual Desktop tenant** for the problem subtype.|
+| Creating an Azure Virtual Desktop tenant | If there's an Azure outage, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/); otherwise [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **Deployment** for the problem type, then select **Issues creating an Azure Virtual Desktop tenant** for the problem subtype.|
| Accessing Marketplace templates in Azure portal | If there's an Azure outage, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/). <br> <br> Azure Marketplace Azure Virtual Desktop templates are freely available.| | Accessing Azure Resource Manager templates from GitHub | See the [Creating Azure Virtual Desktop session host VMs](troubleshoot-set-up-issues-2019.md#creating-azure-virtual-desktop-session-host-vms) section of [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md). If the problem is still unresolved, contact the [GitHub support team](https://github.com/contact). <br> <br> If the error occurs after accessing the template in GitHub, contact [Azure Support](https://azure.microsoft.com/support/create-ticket/).| | Session host pool Azure Virtual Network (VNET) and Express Route settings | [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), then select the appropriate service (under the Networking category). |
Use the following table to identify and resolve issues you may encounter when se
## Next steps -- To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
+- To troubleshoot issues while creating a tenant and host pool in an Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md)
virtual-desktop Troubleshoot Vm Configuration 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-vm-configuration-2019.md
Examine the registry entries listed below and confirm that their values match. I
3. Install the side-by-side stack using [Create a host pool with PowerShell](create-host-pools-powershell-2019.md).
-## How to fix a Azure Virtual Desktop side-by-side stack that malfunctions
+## How to fix an Azure Virtual Desktop side-by-side stack that malfunctions
There are known circumstances that can cause the side-by-side stack to malfunction:
To learn more about this policy, see [Allow log on through Remote Desktop Servic
## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md).-- To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
+- To troubleshoot issues while creating a tenant and host pool in an Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md)
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 10/05/2023 Last updated : 11/16/2023
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version | |--|--|
-| Production | 1.0.7539.8300 |
-| Validation | 1.0.7755.1100 |
+| Production | 1.0.7755.1800 |
+| Validation | 1.0.7909.1200 |
The agent is automatically installed when adding session hosts in most scenarios. If you need to download the agent, you find it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
-## Version 1.0.7755.1100 (validation)
+## Version 1.0.7909.1200 (validation)
+
+This update was released in November 2023 and includes the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.7755.1800
+
+This update was released in November 2023 and includes the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.7755.1100
This update was released at the end of September 2023 and includes the following changes:
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
Previously updated : 11/01/2023 Last updated : 11/10/2023
Azure Image Builder is enabling Isolated Image Builds using Azure Container Inst
You might observe a different set of transient Azure resources appear temporarily in the staging resource group but that does not impact your actual builds or the way you interact with Azure Image Builder. For more information, please see [Isolated Image Builds](./security-isolated-image-builds-image-builder.md).
+> [!IMPORTANT]
+> Make sure your subscription is registered for the `Microsoft.ContainerInstance` provider.
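For reference, the registration can be checked and performed with the same Azure CLI commands used elsewhere in the Image Builder documentation:

```azurecli-interactive
# Check whether the Azure Container Instances provider is registered for the current subscription.
az provider show -n Microsoft.ContainerInstance -o json | grep registrationState

# Register the provider if the state isn't "Registered".
az provider register -n Microsoft.ContainerInstance
```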
+ ### April 2023 New portal functionality has been added for Azure Image Builder. Search "Image Templates" in Azure portal, then click "Create". You can also [get started here](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
virtual-machines Image Builder Triggers How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-triggers-how-to.md
Previously updated : 10/16/2023 Last updated : 11/10/2023
Before setting up your first trigger, ensure you're using Azure Image Builder AP
## How to set up a trigger in Azure Image Builder
-### Register the features
+### Register the providers
-To use VM Image Builder with triggers, you need to register the below features. Check your registration by running the following commands:
+To use VM Image Builder with triggers, you need to register the providers listed below. Check your registration by running the following commands:
```azurecli-interactive az provider show -n Microsoft.VirtualMachineImages -o json | grep registrationState
az provider show -n Microsoft.KeyVault -o json | grep registrationState
az provider show -n Microsoft.Compute -o json | grep registrationState az provider show -n Microsoft.Storage -o json | grep registrationState az provider show -n Microsoft.Network -o json | grep registrationState
+az provider show -n Microsoft.ContainerInstance -o json | grep registrationState
``` If the output doesn't say registered, run the following commands:
az provider register -n Microsoft.Compute
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+az provider register -n Microsoft.ContainerInstance
``` Register the auto image build triggers feature:
additionalregion=eastus2
acgName=ibTriggersGallery # Name of the image definition to be created - ibTriggersImageDef in this example imageDefName=ibTriggersImageDef
+# Name of the Trigger to be created - ibTrigger in this example
+ibTriggerName=ibTrigger
# Name of the image template to be created - ibTriggersImageTemplate in this example imageTemplateName=ibTriggersImageTemplate # Reference name in the image distribution metadata
Image template requirements:
After configuring your template use the following command to submit the image configuration to the Azure Image Builder service: ```azurecli-interactive
-az resource create --api-version 2022-07-01 --resource-group $resourceGroupName --properties @helloImageTemplateforTriggers.json --is-full-object --resource-type Microsoft.VirtualMachineImages/imageTemplates --name $imageTemplateName
+az image builder create -g $resourceGroupName -n $imageTemplateName --image-template helloImageTemplateforTriggers.json
``` You can use the following command to check to make sure the image template was created successfully: ```azurecli-interactive
-az resource show --api-version 2022-07-01 --ids /subscriptions/$subscriptionID/resourcegroups/$resourceGroupName/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName
+az image builder show --name $imageTemplateName --resource-group $resourceGroupName
``` > [!NOTE] > When running the command above the `provisioningState` should say "Succeeded", which means the template was created without any issues. If the `provisioningState` does not say succeeded, you will not be able to make a trigger use the image template.
Trigger requirements:
Use the following command to add the trigger to your resource group. ```azurecli-interactive
-az resource create --api-version 2022-07-01 --resource-group $resourceGroupName --properties @trigger.json --is-full-object --namespace Microsoft.VirtualMachineImages --parent imageTemplates/$imageTemplateName --resource-type triggers --name source
+az image builder trigger create --name $ibTriggerName --resource-group $resourceGroupName --image-template-name $imageTemplateName --kind SourceImage
``` You can also use the following command to check that the trigger was created successfully: ```azurecli
-az resource show --api-version 2022-07-01 --ids /subscriptions/$subscriptionID/resourcegroups/$resourceGroupName/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName/triggers/source
+az image builder trigger show --name $ibTriggerName --image-template-name $imageTemplateName --resource-group $resourceGroupName
``` > [!NOTE] > When running the command above the `provisioningState` should say `Succeeded`, which means the trigger was created without any issues. In `status`, the code should say `Healthy` and the message should say `Trigger is active.`
az resource show --api-version 2022-07-01 --ids /subscriptions/$subscriptionID/r
Use the following command to delete the trigger: ```azurecli-interactive
-az resource delete --api-version 2022-07-01 --ids /subscriptions/$subscriptionID/resourcegroups/$resourceGroupName/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName/triggers/source
+az image builder trigger delete --name $ibTriggerName --image-template-name $imageTemplateName --resource-group $resourceGroupName
``` #### Deleting the image template Use the following command to delete the image template: ```azurecli-interactive
-az resource delete --api-version 2022-07-01 --ids /subscriptions/$subscriptionID/resourcegroups/$resourceGroupName/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName
+az image builder delete --name $imageTemplateName --resource-group $resourceGroupName
``` ## Next steps
virtual-machines Image Builder Devops Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-devops-task.md
Before you begin, you must:
* Have an Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) account, and a Build Pipeline created. * Register and enable the VM Image Builder feature requirements in the subscription that's used by the pipelines:
- * [Azure PowerShell](../windows/image-builder-powershell.md#register-features)
- * [The Azure CLI](../windows/image-builder.md#register-the-features)
+ * [Azure PowerShell](../windows/image-builder-powershell.md#register-providers)
+ * [The Azure CLI](../windows/image-builder.md#register-the-providers)
* Create a standard Azure storage account in the source image resource group. You can use other resource groups or storage accounts. The storage account is used to transfer the build artifacts from the DevOps task to the image.
virtual-machines Image Builder Gallery Update Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-gallery-update-image-version.md
Title: Create a new VM image version from an existing image version by using Azu
description: In this article, you'll learn how to create a new VM image version from an existing image version by using VM Image Builder in Linux. - Previously updated : 03/02/2020+ Last updated : 11/10/2020
In this article, you learn how to update an existing image version in an [Azure
To configure the image, you use a sample JSON template, [helloImageTemplateforSIGfromSIG.json](https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/2_Creating_a_Custom_Linux_Shared_Image_Gallery_Image_from_SIG/helloImageTemplateforSIGfromSIG.json).
-## Register the features
+## Register the providers
-To use VM Image Builder, you need to register the features.
+To use VM Image Builder, you need to register the providers.
1. Check your provider registrations. Make sure that each one returns *Registered*.
To use VM Image Builder, you need to register the features.
az provider show -n Microsoft.Compute | grep registrationState az provider show -n Microsoft.Storage | grep registrationState az provider show -n Microsoft.Network | grep registrationState
+ az provider show -n Microsoft.ContainerInstance | grep registrationState
``` 1. If they don't return *Registered*, register the providers by running the following commands:
To use VM Image Builder, you need to register the features.
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+ az provider register -n Microsoft.ContainerInstance
``` ## Set variables and permissions
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-gallery.md
Title: Use Azure Image Builder & Azure Compute Gallery for Linux VMs
description: Learn how to use the Azure Image Builder, and the Azure CLI, to create an image version in an Azure Compute Gallery, and then distribute the image globally. - Previously updated : 04/11/2023+ Last updated : 11/10/2023
This article shows you how you can use the Azure Image Builder, and the Azure CLI, to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery), then distribute the image globally. You can also do this using [Azure PowerShell](../windows/image-builder-gallery.md).
-We will be using a sample .json template to configure the image. The .json file we are using is here: [helloImageTemplateforSIG.json](https://github.com/azure/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
+We'll be using a sample .json template to configure the image. The .json file we're using is here: [helloImageTemplateforSIG.json](https://github.com/azure/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template.
-## Register the features
+## Register the providers
To use Azure Image Builder, you need to register the providers.
az provider show -n Microsoft.KeyVault | grep registrationState
az provider show -n Microsoft.Compute | grep registrationState az provider show -n Microsoft.Storage | grep registrationState az provider show -n Microsoft.Network | grep registrationState
+az provider show -n Microsoft.ContainerInstance | grep registrationState
```
-If they do not say registered, run the following:
+If they don't say registered, run the following:
```azurecli-interactive az provider register -n Microsoft.VirtualMachineImages
az provider register -n Microsoft.Compute
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+az provider register -n Microsoft.ContainerInstance
``` ## Set variables and permissions
-We will be using some pieces of information repeatedly, so we will create some variables to store that information.
+We'll be using some pieces of information repeatedly, so we'll create some variables to store that information.
Image Builder only supports creating custom images in the same Resource Group as the source managed image. Update the resource group name in this example to be the same resource group as your source managed image.
az group create -n $sigResourceGroup -l $location
## Create a user-assigned identity and set permissions on the resource group
-Image Builder will use the [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) provided to inject the image into the Azure Compute Gallery. In this example, you will create an Azure role definition that has the granular actions to perform distributing the image to the gallery. The role definition will then be assigned to the user-identity.
+Image Builder uses the [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) provided to inject the image into the Azure Compute Gallery. In this example, you'll create an Azure role definition with the granular actions needed to distribute the image to the gallery. The role definition is then assigned to the user-identity.
```azurecli-interactive # create user assigned identity for image builder to access the storage account where the script is located
az role assignment create \
## Create an image definition and gallery
-To use Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. Image Builder will not create the gallery and image definition for you.
+To use Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. Image Builder won't create the gallery and image definition for you.
If you don't already have a gallery and image definition to use, start by creating them. First, create a gallery.
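A minimal Azure CLI sketch of those two steps, reusing the `$sigResourceGroup` and `$location` variables set earlier; the `$sigName` and `$imageDefName` variables and the publisher/offer/SKU values are hypothetical placeholders for illustration:

```azurecli-interactive
# Create the Azure Compute Gallery ($sigName is a hypothetical variable name).
az sig create \
    --resource-group $sigResourceGroup \
    --gallery-name $sigName \
    --location $location

# Create an image definition inside the gallery; publisher, offer, and SKU values are illustrative.
az sig image-definition create \
    --resource-group $sigResourceGroup \
    --gallery-name $sigName \
    --gallery-image-definition $imageDefName \
    --publisher myIbPublisher \
    --offer myOffer \
    --sku 20_04-lts \
    --os-type Linux
```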
You should see the image was customized with a *Message of the Day* as soon as y
## Clean up resources
-If you want to now try re-customizing the image version to create a new version of the same image, skip the next steps and go on to [Use Azure Image Builder to create another image version](image-builder-gallery-update-image-version.md).
+If you want to now try recustomizing the image version to create a new version of the same image, skip the next steps and go on to [Use Azure Image Builder to create another image version](image-builder-gallery-update-image-version.md).
-This will delete the image that was created, along with all of the other resource files. Make sure you are finished with this deployment before deleting the resources.
+This deletes the image that was created, along with all of the other resource files. Make sure you're finished with this deployment before deleting the resources.
When deleting gallery resources, you need to delete all of the image versions before you can delete the image definition used to create them. To delete a gallery, you first need to delete all of the image definitions in the gallery.
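A sketch of that deletion order with the Azure CLI, using the same hypothetical variable names as the sketch above and an illustrative `1.0.0` version number:

```azurecli-interactive
# 1. Delete every image version in the definition (1.0.0 is an illustrative version).
az sig image-version delete \
    --resource-group $sigResourceGroup \
    --gallery-name $sigName \
    --gallery-image-definition $imageDefName \
    --gallery-image-version 1.0.0

# 2. Delete the now-empty image definition.
az sig image-definition delete \
    --resource-group $sigResourceGroup \
    --gallery-name $sigName \
    --gallery-image-definition $imageDefName

# 3. Delete the gallery itself.
az sig delete \
    --resource-group $sigResourceGroup \
    --gallery-name $sigName
```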
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
description: This article helps you troubleshoot common problems and errors you
Previously updated : 11/01/2023 Last updated : 11/10/2023
The cause might be a timing issue because of the D1_V2 VM size. If customization
To avoid the timing issue, you can increase the VM size or you can add a 60-second PowerShell sleep customization.
+### Unregistered Azure Container Instances provider
+
+#### Error
+```text
+Azure Container Instances provider not registered for your subscription.
+```
+
+#### Cause
+Your template subscription doesn't have the Azure Container Instances provider registered.
+
+#### Solution
+Register the Azure Container Instances provider for your template subscription by running one of the following Azure CLI or PowerShell commands:
+
+- Azure CLI: `az provider register -n Microsoft.ContainerInstance`
+- PowerShell: `Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`
+++ ### Azure Container Instances quota exceeded #### Error
-"Azure Container Instances quota exceeded"
+```text
+Azure Container Instances quota exceeded
+```
#### Cause Your subscription doesn't have enough Azure Container Instances (ACI) quota for Azure Image Builder to successfully build an image.
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder.md
Title: Use Azure VM Image Builder with an Azure Compute Gallery for Linux VMs
description: Create Linux VM images with Azure VM Image Builder and Azure Compute Gallery. - Previously updated : 04/11/2023+ Last updated : 11/10/2023
This article uses a sample JSON template to configure the image. The JSON file i
To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template.
-## Register the features
+## Register the providers
-To use VM Image Builder, you need to register the feature. Check your registration by running the following commands:
+To use VM Image Builder, you need to register the providers. Check your registration by running the following commands:
```azurecli-interactive az provider show -n Microsoft.VirtualMachineImages -o json | grep registrationState
az provider show -n Microsoft.KeyVault -o json | grep registrationState
az provider show -n Microsoft.Compute -o json | grep registrationState az provider show -n Microsoft.Storage -o json | grep registrationState az provider show -n Microsoft.Network -o json | grep registrationState
+az provider show -n Microsoft.ContainerInstance -o json | grep registrationState
``` If the output doesn't say *registered*, run the following commands:
az provider register -n Microsoft.Compute
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+az provider register -n Microsoft.ContainerInstance
``` ## Set variables and permissions
virtual-machines Security Isolated Image Builds Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-isolated-image-builds-image-builder.md
Title: Isolated Image Builds for Azure VM Image Builder description: Isolated Image Builds is achieved by transitioning core process of VM image customization/validation from shared infrastructure to dedicated Azure Container Instances resources in your subscription providing compute and network isolation. Previously updated : 11/01/2023 Last updated : 11/10/2023
Isolated Image Builds enable defense-in-depth by limiting network access of your
1. **Compute Isolation:** Isolated Image Builds perform major portion of image building processing in Azure Container Instances resources in your subscription instead of on AIB's shared platform resources. ACI provides hypervisor isolation for each container group to ensure containers run in isolation without sharing a kernel. 2. **Network Isolation:** Isolated Image Builds remove all direct network WinRM/ssh communication between your build VM and Image Builder service.
- - If you are provisioning an Image Builder template without your own Virtual Network then a Public IP Address resource will no more be provisioned in your staging resource group at image build time.
- - If you are provisioning an Image Builder template with an existing Virtual Network in your subscription then a Private Link based communication channel will no more be setup between your Build VM and AIB's backend platform resources. Instead, the communication channel will be setup between the Azure Container Instance and the Build VM resources - both of which reside in the staging resource group in your subscription.
+ - If you're provisioning an Image Builder template without your own Virtual Network, then a Public IP Address resource will no longer be provisioned in your staging resource group at image build time.
+ - If you're provisioning an Image Builder template with an existing Virtual Network in your subscription, then a Private Link based communication channel will no longer be set up between your Build VM and AIB's backend platform resources. Instead, the communication channel is set up between the Azure Container Instance and the Build VM resources - both of which reside in the staging resource group in your subscription.
3. **Transparency:** AIB is built on HashiCorp [Packer](https://www.packer.io/). Isolated Image Builds executes Packer in the ACI in your subscription, which allows you to inspect the ACI resource and its containers. Similarly, having the entire network communication pipeline in your subscription allows you to inspect all the network resources, their settings, and their allowances.
-4. **Better viewing of live logs:** AIB writes customization logs to a storage account in the staging resource group in your subscription. Isolated Image Builds provides with another way to follow the same logs directly in the Azure portal which can be done by navigating to Image Builder's container in the ACI resource.
+4. **Better viewing of live logs:** AIB writes customization logs to a storage account in the staging resource group in your subscription. Isolated Image Builds also lets you follow the same logs directly in the Azure portal by navigating to Image Builder's container in the ACI resource, or from the command line, as sketched after this list.
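As a sketch of what that inspection can look like from the Azure CLI (the staging resource group and container group names below are placeholders you'd read from your own deployment):

```azurecli-interactive
# List the container group(s) Image Builder created in the staging resource group during the build.
az container list --resource-group <staging-resource-group> -o table

# Stream the build container's logs live; <container-group-name> comes from the previous command.
az container logs --resource-group <staging-resource-group> --name <container-group-name> --follow
```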
## Backward compatibility
-This is a platform level change and doesn't affect AIB's interfaces. So, your existing Image Template and Trigger resources continue to function and there's no change in the way you'll deploy new resources of these types. Similarly, customization logs continue to be available in the storage account.
+This is a platform level change and doesn't affect AIB's interfaces. So, your existing Image Template and Trigger resources continue to function and there's no change in the way you deploy new resources of these types. Similarly, customization logs continue to be available in the storage account.
-You might observe a few new resources temporarily appear in the staging resource group (for example, Azure Container Instance, and Private Endpoint) while some other resource will no longer appear (for example, Public IP Address). Just as earlier, these temporary resources will exist only for the duration of the build and will be deleted by Image Builder thereafter.
+You might observe a few new resources temporarily appear in the staging resource group (for example, Azure Container Instance and Private Endpoint), while some other resources no longer appear (for example, Public IP Address). As before, these temporary resources exist only for the duration of the build and are deleted by Image Builder afterward.
-Your image builds will automatically be migrated to Isolated Image Builds and you need to take no action to opt-in.
+Your image builds will automatically be migrated to Isolated Image Builds and you need to take no action to opt in.
> [!NOTE] > Image Builder is in the process of rolling this change out to all locations and customers. Some of these details might change as the process is fine-tuned based on service telemetry and feedback. Please refer to the [troubleshooting guide](./linux/image-builder-troubleshoot.md#troubleshoot-build-failures) for more information.
+> [!IMPORTANT]
+> Make sure your subscription is registered for the `Microsoft.ContainerInstance` provider:
+> - Azure CLI: `az provider register -n Microsoft.ContainerInstance`
+> - PowerShell: `Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`
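As a quick check before starting a build, the registration state can be queried directly; registration is asynchronous and can take a few minutes to reach *Registered*.

```azurecli
# Returns 'Registered' once the Microsoft.ContainerInstance provider is ready.
az provider show -n Microsoft.ContainerInstance --query registrationState -o tsv
```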
++ ## Next steps - [Azure VM Image Builder overview](./image-builder-overview.md)
virtual-machines Image Builder Gallery Update Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-gallery-update-image-version.md
description: Create a new Windows VM image version from an existing image versio
Previously updated : 07/21/2023 Last updated : 11/10/2023
In this article, you learn how to update an existing Windows image version in an
To configure the image, you use a sample JSON template, [helloImageTemplateforSIGfromWinSIG.json](https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/2_Creating_a_Custom_Win_Shared_Image_Gallery_Image_from_SIG/helloImageTemplateforSIGfromWinSIG.json).
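After you register the providers (next section) and replace the template's placeholder values, the template is typically submitted and started with the generic resource commands, roughly as in the following sketch; the resource group and template resource name here are hypothetical.

```azurecli
# Hypothetical resource group and template resource name; the downloaded JSON must
# already have its placeholders (subscription, gallery, image definition) replaced.
az resource create \
    --resource-group myIbRg \
    --resource-type Microsoft.VirtualMachineImages/imageTemplates \
    --name helloImageTemplateforSIG01 \
    --is-full-object \
    --properties @helloImageTemplateforSIGfromWinSIG.json

# Kick off the image build.
az resource invoke-action \
    --resource-group myIbRg \
    --resource-type Microsoft.VirtualMachineImages/imageTemplates \
    --name helloImageTemplateforSIG01 \
    --action Run
```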
-## Register the features
+## Register the providers
-To use VM Image Builder, you need to register the features.
+To use VM Image Builder, you need to register the providers.
1. Check your provider registrations. Make sure that each one returns *Registered*.
To use VM Image Builder, you need to register the features.
az provider show -n Microsoft.Compute | grep registrationState az provider show -n Microsoft.Storage | grep registrationState az provider show -n Microsoft.Network | grep registrationState
+ az provider show -n Microsoft.ContainerInstance | grep registrationState
``` 1. If they don't return *Registered*, register the providers by running the following commands:
To use VM Image Builder, you need to register the features.
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+ az provider register -n Microsoft.ContainerInstance
```
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-gallery.md
description: Create Azure Shared Gallery image versions using VM Image Builder a
Previously updated : 06/30/2023 Last updated : 11/10/2023
VM Image Builder automatically runs `Sysprep` to generalize the image. The comma
Be aware of the number of times you layer customizations. You can run the `Sysprep` command a limited number of times on a single Windows image. After you've reached the `Sysprep` limit, you must re-create your Windows image. For more information, see [Limits on how many times you can run Sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation#limits-on-how-many-times-you-can-run-sysprep).
-## Register the features
+## Register the providers
-To use VM Image Builder, you need to register the features.
+To use VM Image Builder, you need to register the providers.
1. Check your provider registrations. Make sure that each one returns *Registered*.
To use VM Image Builder, you need to register the features.
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute | Format-table -Property ResourceTypes,RegistrationState Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault | Format-table -Property ResourceTypes,RegistrationState Get-AzResourceProvider -ProviderNamespace Microsoft.Network | Format-table -Property ResourceTypes,RegistrationState
+ Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance | Format-table -Property ResourceTypes,RegistrationState
``` 1. If they don't return *Registered*, register the providers by running the following commands:
To use VM Image Builder, you need to register the features.
Register-AzResourceProvider -ProviderNamespace Microsoft.Storage Register-AzResourceProvider -ProviderNamespace Microsoft.Compute Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
- Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+ Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
``` 1. Install PowerShell modules:
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Title: Create a Windows VM with Azure VM Image Builder by using PowerShell
description: In this article, you create a Windows VM by using the VM Image Builder PowerShell module. - Previously updated : 09/12/2022+ Last updated : 11/10/2022
should be billed. Select a specific subscription by using the
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 ```
-### Register features
+### Register providers
If you haven't already done so, register the following resource providers to use with your Azure subscription:
If you haven't already done so, register the following resource providers to use
- Microsoft.Network - Microsoft.VirtualMachineImages - Microsoft.ManagedIdentity
+- Microsoft.ContainerInstance
```azurepowershell-interactive Get-AzResourceProvider -ProviderNamespace Microsoft.Compute, Microsoft.KeyVault, Microsoft.Storage, Microsoft.VirtualMachineImages, Microsoft.Network, Microsoft.ManagedIdentity |
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-virtual-desktop.md
description: Create an Azure VM image of Azure Virtual Desktop by using VM Image
Previously updated : 06/20/2023 Last updated : 11/10/2023
Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
Get-AzResourceProvider -ProviderNamespace Microsoft.Storage Get-AzResourceProvider -ProviderNamespace Microsoft.Compute Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
# If they don't show as 'Registered', run the following commented-out code
Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
## Register-AzResourceProvider -ProviderNamespace Microsoft.Storage ## Register-AzResourceProvider -ProviderNamespace Microsoft.Compute ## Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+## Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
``` ## Set up the environment and variables
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
description: In this article, you learn how to create a Windows VM by using VM I
Previously updated : 06/12/2023 Last updated : 11/10/2023
Use the following sample JSON template to configure the image: [helloImageTempla
> [!NOTE] > Windows users can run the following Azure CLI examples on [Azure Cloud Shell](https://shell.azure.com) by using Bash.
-## Register the features
+## Register the providers
To use VM Image Builder, you need to register the feature. Check your registration by running the following commands:
az provider show -n Microsoft.KeyVault | grep registrationState
az provider show -n Microsoft.Compute | grep registrationState az provider show -n Microsoft.Storage | grep registrationState az provider show -n Microsoft.Network | grep registrationState
+az provider show -n Microsoft.ContainerInstance -o json | grep registrationState
``` If the output doesn't say *registered*, run the following commands:
az provider register -n Microsoft.Compute
az provider register -n Microsoft.KeyVault az provider register -n Microsoft.Storage az provider register -n Microsoft.Network
+az provider register -n Microsoft.ContainerInstance
``` ## Set variables
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
You can associate the following resources to a static public IP address from a p
|||| |Virtual machines| Associating public IPs from a prefix to your virtual machines in Azure reduces management overhead when adding IP addresses to an allowlist in the firewall. You can add an entire prefix with a single firewall rule. As you scale with virtual machines in Azure, you can associate IPs from the same prefix saving cost, time, and management overhead.| To associate IPs from a prefix to your virtual machine: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. [Associate the IP to your virtual machine's network interface.](./virtual-network-network-interface-addresses.md#add-ip-addresses) </br> You can also [associate the IPs to a Virtual Machine Scale Set](https://azure.microsoft.com/resources/templates/vmss-with-public-ip-prefix/). | Standard load balancers | Associating public IPs from a prefix to your frontend IP configuration or outbound rule of a load balancer ensures simplification of your Azure public IP address space. Simplify your scenario by grooming outbound connections from a range of contiguous IP addresses. | To associate IPs from a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When creating the load balancer, select or update the IP created in step 2 above as the frontend IP of your load balancer. |
-| Azure Firewall | You can use a public IP from a prefix for outbound SNAT. All outbound virtual network traffic is translated to the [Azure Firewall](../../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) public IP. | To associate an IP from a prefix to your firewall: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Azure firewall](../../firewall/tutorial-firewall-deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-the-firewall), be sure to select the IP you previously gave from the prefix.|
+| Azure Firewall | You can use a public IP from a prefix for outbound SNAT. All outbound virtual network traffic is translated to the [Azure Firewall](../../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) public IP. | To associate an IP from a prefix to your firewall: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Azure firewall](../../firewall/tutorial-firewall-deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#create-a-virtual-network), be sure to select the IP you previously gave from the prefix.|
| VPN Gateway (AZ SKU), Application Gateway v2, NAT Gateway | You can use a public IP from a prefix for your gateway | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md), [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), or [NAT Gateway](../nat-gateway/quickstart-create-nat-gateway-portal.md), be sure to select the IP you previously gave from the prefix.| The following resources utilize a public IP address prefix:
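As a rough CLI sketch of steps 1 and 2 from the table above (the resource names and prefix length are hypothetical; adjust them to your environment):

```azurecli
# Create a public IP address prefix; /28 yields 16 contiguous addresses.
az network public-ip prefix create \
    --name myPublicIpPrefix \
    --resource-group myResourceGroup \
    --length 28

# Allocate a single Standard SKU public IP address from that prefix.
az network public-ip create \
    --name myPublicIp \
    --resource-group myResourceGroup \
    --public-ip-prefix myPublicIpPrefix \
    --sku Standard
```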
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
The following metrics are available for Azure ExpressRoute gateways:
| Metric | Description| | | |
-| **BitsInPerSecond** | Bits per second ingressing Azure via ExpressRoute gateway that can be further split for specific connections. |
-| **BitsOutPerSecond** | Bits per second egressing Azure via ExpressRoute gateway that can be further split for specific connections. |
+| **BitsInPerSecond** | Bits per second ingressing Azure via ExpressRoute that can be further split for specific connections. |
+| **BitsOutPerSecond** | Bits per second egressing Azure via ExpressRoute that can be further split for specific connections. |
| **Bits Received Per Second** | Total Bits received on ExpressRoute gateway per second. | | **CPU Utilization** | CPU Utilization of the ExpressRoute gateway.| | **Packets per second** | Total Packets received on ExpressRoute gateway per second.|
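To spot-check any of these values outside the portal, a minimal CLI sketch follows; the gateway resource ID is a placeholder, and the exact metric names your gateway emits can be confirmed with `list-definitions` first.

```azurecli
# Placeholder resource ID of a Virtual WAN ExpressRoute gateway.
gatewayId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/expressRouteGateways/<gateway-name>"

# List the metric definitions the gateway emits, then query one of them.
az monitor metrics list-definitions --resource $gatewayId --output table
az monitor metrics list --resource $gatewayId --metric "BitsInPerSecond" --aggregation Average --interval PT5M
```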
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
Last updated 10/03/2023
This article provides configuration best practices for monitoring Virtual WAN and the different components that can be deployed with it. The recommendations presented in this article are mostly based on existing Azure Monitor metrics and logs generated by Azure Virtual WAN. For a list of metrics and logs collected for Virtual WAN, see the [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md).
-Most of the recommendations in this article suggest creating Azure Monitor alerts. Azure Monitor alerts are meant to proactively notify you when there's an important event in the monitoring data to help you address the root cause quicker and ultimately reduce downtime. To learn how to create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md). To learn how to create a log query alert, see [Tutorial: Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md).
+Most of the recommendations in this article suggest creating Azure Monitor alerts. Azure Monitor alerts are meant to proactively notify you when there is an important event in the monitoring data to help you address the root cause quicker and ultimately reduce downtime. To learn how to create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md). To learn how to create a log query alert, see [Tutorial: Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md).
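For example, a metric alert on a Virtual WAN VPN gateway might be created with the Azure CLI along the following lines; this is a sketch only, and the scope, action group, metric name (`TunnelAverageBandwidth` here), and threshold are assumptions to replace with your own values.

```azurecli
# Hypothetical scope, action group, metric, and threshold.
az monitor metrics alert create \
    --name vpngw-bandwidth-alert \
    --resource-group myResourceGroup \
    --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name>" \
    --condition "avg TunnelAverageBandwidth > 100000000" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action <action-group-resource-id>
```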
## Virtual WAN gateways
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
In this section, you create a connection between your virtual hub and your VNet.
## <a name="viewwan"></a>Point to site sessions dashboard
-To view your active point to site sessions, click on **Point-to-site Sessions**. This will show you all active point to site users that are connected to your User VPN gateway.
+1. To view your active point-to-site sessions, select **Point-to-site Sessions**. This shows all active point-to-site users that are connected to your User VPN gateway.
:::image type="content" source="../../includes/media/virtual-wan-p2s-sessions-dashboard/point-to-site-sessions-button.png" alt-text="Screenshot shows point to site blade in Virtual WAN." lightbox="../../includes/media/virtual-wan-p2s-sessions-dashboard/point-to-site-sessions-button.png":::
+1. To disconnect a user from the User VPN gateway, select the **...** context menu for that session, and then select **Disconnect**.
+
+ :::image type="content" source="../../includes/media/virtual-wan-p2s-sessions-dashboard/point-to-site-sessions-disconnect.png" alt-text="Screenshot shows point to site sessions dashboard." lightbox="../../includes/media/virtual-wan-p2s-sessions-dashboard/point-to-site-sessions-disconnect.png":::
+ ## Modify settings ### <a name="address-pool"></a>Modify client address pool
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
A VPN gateway is a type of virtual network gateway. A VPN gateway sends encrypte
### Why can't I specify policy-based and route-based VPN types?
-As of Oct 1, 2023, you no longer need to specify VPN type. All new VPN gateways will automatically be created as route-based gateways. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based.
+As of Oct 1, 2023, you can't create a policy-based VPN gateway through the Azure portal. All new VPN gateways created in the portal are automatically created as route-based. If you already have a policy-based gateway, you don't need to upgrade it to route-based. You can still create policy-based gateways by using PowerShell or the Azure CLI.
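For instance, a minimal Azure CLI sketch of creating a policy-based gateway follows; the names are placeholders, and it assumes an existing virtual network with a GatewaySubnet and a public IP address (policy-based gateways are limited to the Basic SKU).

```azurecli
# Hypothetical names; the virtual network, GatewaySubnet, and public IP must already exist.
az network vnet-gateway create \
    --name myPolicyBasedGateway \
    --resource-group myResourceGroup \
    --vnet myVnet \
    --public-ip-address myGatewayIp \
    --gateway-type Vpn \
    --vpn-type PolicyBased \
    --sku Basic
```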
Previously, the older gateway SKUs didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2.