Updates from: 06/29/2024 01:09:10
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/language-support.md
# Language support for Azure AI Content Safety

> [!IMPORTANT]
-> Azure AI Content Safety features not listed in this article, such as Prompt Shields, Protected material detection, Groundedness detection, and Custom categories (rapid) only support English.
-
-## Text moderation
-
-The Azure AI Content Safety text moderation feature supports many languages, but it has been specially trained and tested on a smaller set of languages.
+> Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, and Portuguese. The service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
> [!NOTE]
> **Language auto-detection**
>
> You don't need to specify a language code for text moderation. The service automatically detects your input language.
-| Language name | Language code | Text moderation | Specially trained |
+| Language name | Language code | Supported languages | Specially trained languages |
|--|--|--|--|
| Afrikaans | `af` | ✔️ | |
| Albanian | `sq` | ✔️ | |
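Because the service detects the input language automatically, a moderation request carries no language field. Here's a minimal sketch, assuming the public `text:analyze` REST route and an `Ocp-Apim-Subscription-Key` header; the endpoint, key, and `api-version` value are placeholders to adapt to your resource:

```python
# Minimal sketch of a text moderation call; no language code is needed because
# the service auto-detects the input language. Endpoint, key, and api-version
# are illustrative placeholders -- substitute your resource's values.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
response = requests.post(
    f"{endpoint}/contentsafety/text:analyze",
    params={"api-version": "2023-10-01"},
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"text": "Texte à analyser, dans n'importe quelle langue."},
)
response.raise_for_status()
print(response.json())  # per-category severity results
```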
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The following regions are supported for Speech service features such as speech t
| Asia Pacific | Japan West | `japanwest` <sup>3</sup> |
| Asia Pacific | Korea Central | `koreacentral` <sup>2</sup> |
| Canada | Canada Central | `canadacentral` <sup>1</sup> |
-| Europe | North Europe | `northeurope` <sup>1,2,4,5,7</sup> |
+| Europe | North Europe | `northeurope` <sup>1,2,4,5,7,10</sup> |
| Europe | West Europe | `westeurope` <sup>1,2,4,5,7,9,10</sup> |
| Europe | France Central | `francecentral` |
| Europe | Germany West Central | `germanywestcentral` |
| Europe | Norway East | `norwayeast` |
-| Europe | Sweden Central | `swedencentral`<sup>8</sup> |
+| Europe | Sweden Central | `swedencentral`<sup>8,10</sup> |
| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup> |
| Europe | Switzerland West | `switzerlandwest` <sup>3</sup> |
| Europe | UK South | `uksouth` <sup>1,2,4,7</sup> |
The following regions are supported for Speech service features such as speech t
| US | East US | `eastus` <sup>1,2,4,5,7,9,11</sup> |
| US | East US 2 | `eastus2` <sup>1,2,4,5</sup> |
| US | North Central US | `northcentralus` <sup>4,6</sup> |
-| US | South Central US | `southcentralus` <sup>1,2,4,5,6,7</sup> |
+| US | South Central US | `southcentralus` <sup>1,2,4,5,6,7,10</sup> |
| US | West Central US | `westcentralus` <sup>3,5</sup> |
| US | West US | `westus` <sup>2,5</sup> |
| US | West US 2 | `westus2` <sup>1,2,4,5,7,10</sup> |
ai-services Avatar Gestures With Ssml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/avatar-gestures-with-ssml.md
In this example, the avatar will start waving their hand at the left after the w
:::image type="content" source="./media/gesture.png" alt-text="Screenshot of displaying the prebuilt avatar waving their hand at the left." lightbox="./media/gesture.png":::
-## Supported pre-built avatar characters, styles, and gestures
+## Supported prebuilt avatar characters, styles, and gestures
The full list of gestures supported by prebuilt avatars, provided here, can also be found in the text to speech avatar portal.

| Characters | Styles | Gestures |
|--|--|--|
+| Harry | business | 123<br>calm-down<br>come-on<br>five-star-reviews<br>good<br>hello<br>introduce<br>invite<br>thanks<br>welcome |
+| Harry | casual | 123<br>come-on<br>five-star-reviews<br>gong-xi-fa-cai<br>good<br>happy-new-year<br>hello<br>please<br>welcome |
+| Harry | youthful | 123<br>come-on<br>down<br>five-star<br>good<br>hello<br>invite<br>show-right-up-down<br>welcome |
+| Jeff | business | 123<br>come-on<br>five-star-reviews<br>hands-up<br>here<br>meddle<br>please2<br>show<br>silence<br>thanks |
+| Jeff | formal | 123<br>come-on<br>five-star-reviews<br>lift<br>please<br>silence<br>thanks<br>very-good |
| Lisa | casual-sitting | numeric1-left-1<br>numeric2-left-1<br>numeric3-left-1<br>thumbsup-left-1<br>show-front-1<br>show-front-2<br>show-front-3<br>show-front-4<br>show-front-5<br>think-twice-1<br>show-front-6<br>show-front-7<br>show-front-8<br>show-front-9 |
| Lisa | graceful-sitting | wave-left-1<br>wave-left-2<br>thumbsup-left<br>show-left-1<br>show-left-2<br>show-left-3<br>show-left-4<br>show-left-5<br>show-right-1<br>show-right-2<br>show-right-3<br>show-right-4<br>show-right-5 |
| Lisa | graceful-standing | |
| Lisa | technical-sitting | wave-left-1<br>wave-left-2<br>show-left-1<br>show-left-2<br>point-left-1<br>point-left-2<br>point-left-3<br>point-left-4<br>point-left-5<br>point-left-6<br>show-right-1<br>show-right-2<br>show-right-3<br>point-right-1<br>point-right-2<br>point-right-3<br>point-right-4<br>point-right-5<br>point-right-6 |
-| Lisa | technical-standing | |
+| Lisa | technical-standing | |
+| Lori | casual | 123-left<br>a-little<br>beg<br>calm-down<br>come-on<br>five-star-reviews<br>good<br>hello<br>open<br>please<br>thanks |
+| Lori | graceful | 123-left<br>applaud<br>come-on<br>introduce<br>nod<br>please<br>show-left<br>show-right<br>thanks<br>welcome |
+| Lori | formal | 123<br>come-on<br>come-on-left<br>down<br>five-star<br>good<br>hands-triangle<br>hands-up<br>hi<br>hopeful<br>thanks |
+| Max | business | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-right<br>good-01<br>good-02<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>lower-left<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>push-forward<br>raise-ones-hand<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>thanks<br>the-front<br>top-middle-and-bottom-left<br>top-middle-and-bottom-right<br>upper-left<br>upper-right<br>welcome |
+| Max | casual | a-little-bit<br>applaud<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>good-1<br>good-2<br>hello<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>introduction-to-products-4<br>left<br>length<br>nodding<br>number-one<br>press-both-hands-down<br>raise-ones-hand<br>right<br>right-front<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>thanks<br>the-front<br>upper-left<br>upper-right<br>welcome |
+| Max | formal | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>lower-left<br>lower-right<br>press-both-hands-down<br>push-forward<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>the-front<br>top-middle-and-bottom-right<br>upper-left<br>upper-right |
+| Meg | formal | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>hands-forward<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>the-front<br>upper-left<br>upper-right |
+| Meg | casual | a-little-bit<br>click-the-link<br>cross-hand<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>handclap<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>length<br>lower-left<br>lower-right<br>number-one<br>press-both-hands-down<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-right-to-left<br>slide-to-the-left<br>spread-hands<br>the-front<br>top-middle-and-bottom-left<br>top-middle-and-bottom-right<br>upper-left<br>upper-right |
+| Meg | business | a-little-bit<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>length<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>raise-ones-hand<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>spread-hands<br>thanks<br>the-front<br>upper-left |
Only the `casual-sitting` style is supported via the real-time text to speech API. Gestures are only supported with the batch synthesis API and aren't supported via the real-time API.
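To connect the gesture names above to the batch synthesis API, here's a minimal sketch in Python that assembles the SSML; it assumes gestures are inserted with the `bookmark` element using a `gesture.` prefix, and the voice name is an illustrative stand-in:

```python
# Build SSML that asks the avatar to perform a gesture mid-sentence.
# Assumptions: gestures are inserted with a bookmark element using the
# "gesture." prefix, and "en-US-JennyNeural" is an illustrative voice.
# The gesture name must appear in the table above for the chosen
# character and style.
gesture = "wave-left-1"  # listed for Lisa in the graceful-sitting style
ssml = (
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>"
    "<voice name='en-US-JennyNeural'>"
    f"Hello! <bookmark mark='gesture.{gesture}'/> Nice to meet you."
    "</voice>"
    "</speak>"
)
```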
ai-services Batch Synthesis Avatar Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar-properties.md
The following table describes the avatar properties.
| Property | Description |
|--|--|
-| avatarConfig.talkingAvatarCharacter | The character name of the talking avatar.<br/><br/>The supported avatar characters can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).<br/><br/>This property is required.|
-| avatarConfig.talkingAvatarStyle | The style name of the talking avatar.<br/><br/>The supported avatar styles can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).<br/><br/>This property is required for prebuilt avatar, and optional for customized avatar.|
+| avatarConfig.talkingAvatarCharacter | The character name of the talking avatar.<br/><br/>The supported avatar characters can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).<br/><br/>This property is required.|
+| avatarConfig.talkingAvatarStyle | The style name of the talking avatar.<br/><br/>The supported avatar styles can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).<br/><br/>This property is required for prebuilt avatar, and optional for customized avatar.|
| avatarConfig.customized | A bool value indicating whether the avatar to use is a customized avatar. True for a customized avatar, and false for a prebuilt avatar.<br/><br/>This property is optional, and the default value is `false`.|
| avatarConfig.videoFormat | The format of the output video file, which can be mp4 or webm.<br/><br/>The `webm` format is required for a transparent background.<br/><br/>This property is optional, and the default value is mp4.|
| avatarConfig.videoCodec | The codec for the output video, which can be h264, hevc, or vp9.<br/><br/>Vp9 is required for a transparent background. Synthesis is slower with the vp9 codec, because vp9 encoding is slower.<br/><br/>This property is optional, and the default value is hevc.|
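To make the table concrete, here's a minimal sketch of the `avatarConfig` portion of a batch synthesis request body; the property names come from the table above, and the values are illustrative:

```python
# Illustrative avatarConfig fragment for a batch synthesis request body.
# Property names come from the table above; values are examples only.
avatar_config = {
    "talkingAvatarCharacter": "lisa",        # required
    "talkingAvatarStyle": "casual-sitting",  # required for a prebuilt avatar
    "customized": False,                     # False selects a prebuilt avatar
    "videoFormat": "webm",                   # webm is required for a transparent background
    "videoCodec": "vp9",                     # vp9 is required for a transparent background
}
```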
ai-services Batch Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar.md
To submit a batch synthesis request, construct the HTTP POST request body follow
- Set the required `inputKind` property.
- If the `inputKind` property is set to `PlainText`, you must also set the `voice` property in the `synthesisConfig`. In the example below, the `inputKind` is set to `SSML`, so the `speechSynthesis` isn't set.
- Set the required `SynthesisId` property. Choose a unique `SynthesisId` for the same speech resource. The `SynthesisId` can be a string of 3 to 64 characters, including letters, numbers, '-', or '_', with the condition that it must start and end with a letter or number.
-- Set the required `talkingAvatarCharacter` and `talkingAvatarStyle` properties. You can find supported avatar characters and styles [here](./avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).
+- Set the required `talkingAvatarCharacter` and `talkingAvatarStyle` properties. You can find supported avatar characters and styles [here](./avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).
- Optionally, you can set the `videoFormat`, `backgroundColor`, and other properties. For more information, see [batch synthesis properties](batch-synthesis-avatar-properties.md).

> [!NOTE]
ai-services Real Time Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md
The default voice is the first voice returned per locale from the [voice list AP
## Select avatar character and style
-The supported avatar characters and styles can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).
+The supported avatar characters and styles can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).
The following code snippet shows how to set avatar character and style:
ai-services What Is Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
Text to speech avatar converts text into a digital video of a photorealistic hum
With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to [responsible AI practices](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context).
-> [!NOTE]
-> The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia.
-
Azure AI text to speech avatar feature capabilities include:

- Converts text into a digital video of a photorealistic human speaking with natural-sounding voices powered by Azure AI text to speech.
Sample code for text to speech avatar is available on [GitHub](https://github.co
- When utilizing the text-to-speech avatar feature, charges will be incurred based on the minutes of video output. However, with the real-time avatar, charges are based on the minutes of avatar activation, irrespective of whether the avatar is actively speaking or remaining silent. To optimize costs for real-time avatar usage, refer to the provided tips in the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample) (search "Use Local Video for Idle").
- Throughout an avatar real-time session or batch content creation, the text-to-speech, speech-to-text, Azure OpenAI, or other Azure services are charged separately.
-- For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that avatar pricing will only be visible for service regions where the feature is available, including West US 2, West Europe, and Southeast Asia.
+- For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that avatar pricing will only be visible for service regions where the feature is available, including Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, and West US 2.
## Available locations
-The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia.
+The text to speech avatar feature is only available in the following service regions: Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, and West US 2.
### Responsible AI
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/install-run.md
Previously updated : 06/21/2024 Last updated : 06/27/2024 recommendations: false keywords: on-premises, Docker, container, identify
In this article, learn how to install and run the Translator container online wi
* **🆕 Text Transliteration**. Convert text from one language script or writing system to another language script or writing system in real time. For more information, *see* [Container: transliterate text](transliterate-text-parameters.md).
-* **🆕 Document translation (preview)**. Synchronously translate documents while preserving structure and format in real time. For more information, *see* [Container:translate documents](translate-document-parameters.md).
+* **🆕 Document translation**. Synchronously translate documents while preserving structure and format in real time. For more information, *see* [Container: translate documents](translate-document-parameters.md).
## Prerequisites
docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
```
-The above command:
+The Docker command:
* Creates a running Translator container from a downloaded container image.
* Allocates 12 gigabytes (GB) of memory and four CPU cores.
ai-services Translate Document Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translate-document-parameters.md
Previously updated : 04/29/2024 Last updated : 06/27/2024
-# Container: Translate Documents (preview)
-
-> [!IMPORTANT]
->
-> * Azure AI Translator public preview releases provide early access to features that are in active development.
-> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+# Container: Translate Documents
**Translate document with source language specified**.
Example:

```bash
-curl -i -X POST "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview" -F "document=@C:\Test\test-file.md;type=text/markdown" -o "C:\translation\translated-file.md"
+curl -i -X POST "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2024-05-01" -F "document=@C:\Test\test-file.md;type=text/markdown" -o "C:\translation\translated-file.md"
```

## Synchronous request headers and parameters
For this project, you need a source document to translate. You can download our
Here's an example cURL HTTP request using localhost:5000:

```bash
-curl -v "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=es&api-version=2023-11-01-preview" -F "document=@document-translation-sample-docx" -o "C:\translation\translated-file.md"
+curl -v "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=es&api-version=2024-05-01" -F "document=@document-translation-sample-docx" -o "C:\translation\translated-file.md"
```

***Upon successful completion***:
ai-services Client Library Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/client-library-sdks.md
Previously updated : 06/19/2024 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-document-sdk
Document Translation is a cloud-based feature of the [Azure AI Translator](../..
> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
> * Document Translation is supported in paid tiers. The Language Studio supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Azure AI services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
> * Document Translation public preview releases provide early access to features that are in active development. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
-> * The public preview version of Document Translation client libraries default to REST API version [**2024-05-01**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true).
+> * The public preview version of Document Translation client libraries default to REST API version **2024-05-01**.
## Prerequisites
ai-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-documents-status.md
Previously updated : 02/09/2024 Last updated : 06/27/2024

# Get status for all documents
ai-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/rest-api-guide.md
Previously updated : 06/19/2024 Last updated : 06/27/2024
Document Translation is a cloud-based feature of the Azure AI Translator service
| Request|Method| Description|API path|
|--|:-|--|--|
|***Single*** |***Synchronous***|***Document***|***Translation***|
-|[**Translate document**](translate-document.md)|POST|Synchronously translate a single document.|`{document-translation-endpoint}/translator/document:translate?sourceLanguage={source language}&targetLanguage={target language}&api-version=2023-11-01-preview" -H "Ocp-Apim-Subscription-Key:{your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -F "glossary={path-to-your-glossary-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"`|
+|[**Translate document**](translate-document.md)|POST|Synchronously translate a single document.|`{document-translation-endpoint}/translator/document:translate?sourceLanguage={source language}&targetLanguage={target language}&api-version=2024-05-01" -H "Ocp-Apim-Subscription-Key:{your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -F "glossary={path-to-your-glossary-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"`|
|||||
|***Batch***|***Asynchronous***|***Documents***| ***Translation***|
|[**Start translation**](start-translation.md)|POST|Start a batch document translation job.|`{document-translation-endpoint}.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`|
ai-services Translate Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/translate-document.md
Previously updated : 02/12/2024 Last updated : 06/27/2024 recommendations: false
Query string parameters:
|Query parameter | Description |
| --- | --- |
-|**api-version** | _Required parameter_.<br>Version of the API requested by the client. Current value is `2023-11-01-preview`. |
+|**api-version** | _Required parameter_.<br>Version of the API requested by the client. Current value is `2024-05-01`. |
|**targetLanguage**|_Required parameter_.<br>Specifies the language of the output document. The target language must be one of the supported languages included in the translation scope.|
|&bull; **document=**<br> &bull; **type=**|_Required parameters_.<br>&bull; Path to the file location for your source document and file format type.</br> &bull; Ex: **"document=@C:\Test\Test-file.txt;type=text/html**|
|**--output**|_Required parameter_.<br> &bull; File path for the target file location. Your translated file is printed to the output file.</br> &bull; Ex: **"C:\Test\Test-file-output.txt"**. The file extension should be the same as the source file.|
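The parameters in this table map directly onto an HTTP request. Here's a minimal Python sketch of the synchronous call, mirroring the earlier cURL examples and assuming a container listening on localhost:5000; the file names are illustrative:

```python
# Minimal sketch of a synchronous document translation call against a local
# container. The endpoint mirrors the cURL examples; file names are illustrative.
import requests

params = {
    "sourceLanguage": "en",
    "targetLanguage": "hi",
    "api-version": "2024-05-01",
}
with open("test-file.md", "rb") as f:
    response = requests.post(
        "http://localhost:5000/translator/document:translate",
        params=params,
        files={"document": ("test-file.md", f, "text/markdown")},
    )
response.raise_for_status()
with open("translated-file.md", "wb") as out:
    out.write(response.content)  # the translated document bytes
```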
ai-studio Deploy Jais Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-jais-models.md
You can find the JAIS model in the [Model Catalog](model-catalog.md) by filterin
### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions will not work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for JAIS is only available with hubs created in these regions:
- > [!IMPORTANT]
- > For JAIS models, the serverless API model deployment offering is only available with hubs created in East US 2 or Sweden Central region.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).

### JAIS 30b Chat
-JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is finetuned with both Arabic and English prompt-response pairs. The finetuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic.
+JAIS 30b Chat is an auto-regressive bilingual LLM for **Arabic** and **English**. The tuned versions use supervised fine-tuning (SFT). The model is fine-tuned with both Arabic and English prompt-response pairs. The fine-tuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks, including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset and translated some open-source English instructions into Arabic.
*Context length:* JAIS supports a context length of 8K.
Models deployed as a service with pay-as-you-go billing are protected by [Azure
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md
The previously mentioned Cohere models can be deployed as a service with pay-as-
### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
-
- > [!IMPORTANT]
- > For Cohere family models, the serverless API model deployment offering is only available with hubs created in **EastUS2** or **Sweden Central** region.
+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Command is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
Models deployed as a serverless API with pay-as-you-go billing are protected by
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-embed.md
The previously mentioned Cohere models can be deployed as a service with pay-as-
### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
-
- > [!IMPORTANT]
- > For Cohere family models, the serverless API model deployment offering is only available with hubs created in **EastUS2** or **Sweden Central** region.
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Embed is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
Models deployed as a serverless API are protected by [Azure AI Content Safety](.
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Jamba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jamba.md
To get started with Jamba Instruct deployed as a serverless API, explore our int
### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Jamba Instruct is only available with hubs created in **East US 2** and **Sweden Central**.
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Jamba Instruct is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An Azure [AI Studio project](../how-to/create-projects.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
Models deployed as a serverless API are protected by Azure AI content safety. Wi
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 3](#tab/llama-three)

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
-
- > [!IMPORTANT]
- > For Meta Llama 3 models, the pay-as-you-go model deployment offering is only available with hubs created in **East US 2** and **Sweden Central** regions.
-
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3 is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 2](#tab/llama-two)

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
-
- > [!IMPORTANT]
- > For Meta Llama 2 models, the pay-as-you-go model deployment offering is only available with hubs created in **East US 2** and **West US 3** regions.
-
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 2 is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
Models deployed as a serverless API with pay-as-you-go are protected by Azure AI
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3.md
Certain models in the model catalog can be deployed as a serverless API with pay
### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).
+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Phi-3 is only available with hubs created in these regions:
- > [!IMPORTANT]
- > For Phi-3 family models, the serverless API model deployment offering is only available with hubs created in **East US 2** and **Sweden Central** regions.
+ * East US 2
+ * Sweden Central
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
- An [Azure AI Studio project](../how-to/create-projects.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
## Related content
+
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Timegen 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-timegen-1.md
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azur
Previously updated : 03/05/2024 Last updated : 06/28/2024
The following table includes parameters you can use to define a custom storage c
| --- | --- | --- | --- | --- |
|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
-|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
+|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] (PremiumV2_LRS and UltraSSD_LRS support only the `None` caching mode) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
|resourceGroup | Specify the resource group for the Azure Disks | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster|
|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`|
|DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] Throughput Capability (minimum: 0.032/GiB) | 1~2000 | No | `100`|
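To see how these parameters combine, here's a minimal custom storage class sketch for a Premium SSD v2 disk; note `cachingMode: None` (the only mode `PremiumV2_LRS` supports), and treat the class name and IOPS/throughput values as illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: premium2-disk-sc       # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  cachingMode: None            # PremiumV2_LRS supports only None
  DiskIOPSReadWrite: "4000"    # illustrative IOPS value
  DiskMBpsReadWrite: "125"     # illustrative throughput value
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```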
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azur
Previously updated : 03/05/2024 Last updated : 06/28/2024
Kubernetes needs credentials to access the file share created in the previous st
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
- volumeHandle: unique-volumeid # make sure this volumeid is unique for every identical share in the cluster
+ volumeHandle: "{resource-group-name}#{account-name}#{file-share-name}" # make sure this volumeid is unique for every identical share in the cluster
    volumeAttributes:
      resourceGroup: resourceGroupName  # optional, only set this when storage account is not in the same resource group as node
      shareName: aksshare
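For example, with the illustrative names `myResourceGroup` and `mystorageaccount` and the `aksshare` share used above, the handle would be:

```yaml
    volumeHandle: "myResourceGroup#mystorageaccount#aksshare"  # unique per share in the cluster
```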
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
To use Azure Linux, you specify the OS SKU by setting `os-sku` to `AzureLinux` d
  name: nvidia-device-plugin-ds
spec:
  tolerations:
- - key: nvidia.com/gpu
- operator: Exists
- effect: NoSchedule
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
  # Mark this pod as a critical add-on; when enabled, the critical add-on
  # scheduler reserves resources for critical add-on pods so that they can
  # be rescheduled after a failure.
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Create a private cluster with default basic networking using the [`az aks create
```azurecli-interactive
az aks create \
    --name <private-cluster-name> \
- --resource-group-name <private-cluster-resource-group> \
+ --resource-group <private-cluster-resource-group> \
    --load-balancer-sku standard \
    --enable-private-cluster \
    --generate-ssh-keys
aks Use Node Taints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-taints.md
This article assumes you have an existing AKS cluster. If you need an AKS cluste
az aks nodepool update \
    --cluster-name $CLUSTER_NAME \
    --name $NODE_POOL_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
    --node-taints "sku=gpu:NoSchedule"
```
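Once the node pool carries the `sku=gpu:NoSchedule` taint, only pods with a matching toleration are scheduled onto it. A minimal sketch, with an illustrative pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload   # illustrative name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine  # illustrative image
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```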
This article assumes you have an existing AKS cluster. If you need an AKS cluste
```azurecli-interactive
az aks nodepool update \
    --cluster-name $CLUSTER_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
    --name $NODE_POOL_NAME \
    --node-taints ""
```
When you remove all initialization taint occurrences from node pool replicas, th
az aks update \ --resource-group $RESOURCE_GROUP_NAME \ --name $CLUSTER_NAME \
- --node-init-taints "sku=gpu:NoSchedule"
+ --node-init-taints ""
```

## Check that the taint has been removed from the node
aks Use Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-trusted-launch.md
In this article, you learned how to enable trusted launch. Learn more about [tru
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update
[azure-generation-two-virtual-machines]: ../virtual-machines/generation-2.md
-[verify-secure-boot-failures]: ../virtual-machines/trusted-launch-faq.md#verifying-secure-boot-failures
+[verify-secure-boot-failures]: ../virtual-machines/trusted-launch-faq.md#verify-secure-boot-failures
[tusted-launch-ephemeral-os-sizes]: ../virtual-machines/ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks
[skip-gpu-driver-install]: gpu-cluster.md#skip-gpu-driver-installation-preview
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
# Tutorial: Debug your APIs using request tracing

This tutorial describes how to inspect (trace) request processing in Azure API Management. Tracing helps you debug and troubleshoot your API.
In this tutorial, you learn how to:
:::image type="content" source="media/api-management-howto-api-inspector/api-inspector-002.png" alt-text="Screenshot showing the API inspector." lightbox="media/api-management-howto-api-inspector/api-inspector-002.png":::
-
## Prerequisites
+
Learn the [Azure API Management terminology](api-management-terminology.md).
api-management Api Management Howto Setup Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md
Previously updated : 08/07/2023 Last updated : 06/24/2024

# How to delegate user registration and product subscription

Delegation enables your website to own the user data and perform custom validation. With delegation, you can handle developer sign-in/sign-up (and related account management operations) and product subscription using your existing website, instead of the developer portal's built-in functionality.
The final workflow will be:
### Set up API Management to route requests via delegation endpoint
-1. In the Azure portal, search for **Developer portal** in your API Management resource.
-1. Click the **Delegation** item.
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Delegation**.
1. Select the checkbox to enable **Delegate sign-in & sign-up**.

   :::image type="content" source="media/api-management-howto-setup-delegation/api-management-delegation-signin-up.png" alt-text="Screenshot showing delegation of sign-in and sign-up in the portal.":::
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
The `trace` policy adds a custom trace into the request tracing output in the te
[!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
-
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]

## Policy statement
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
The following API Management capabilities are currently unavailable in the v2 ti
* CA Certificates

**Developer portal**
-* Delegation of user registration and product subscription
* Reports
* Custom HTML code widget and custom widget
* Self-hosted developer portal
The following API Management capabilities are currently unavailable in the v2 ti
* Cipher configuration
* Client certificate renegotiation
* Free, managed TLS certificate
-* Request tracing in the test console
* Requests to the gateway over localhost

## Resource limits
app-service App Service Web Configure Tls Mutual Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md
ms.assetid: cd1d15d3-2d9e-4502-9f11-a306dac4453a Previously updated : 12/11/2020 Last updated : 06/21/2024 ms.devlang: csharp
-+
# Configure TLS mutual authentication for Azure App Service
az webapp update --set clientCertEnabled=true --name <app-name> --resource-group
```

### [Bicep](#tab/bicep)
-For Bicep, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sampe Bicep snippet is provided for you:
+For Bicep, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample Bicep snippet is provided for you:
```bicep
resource appService 'Microsoft.Web/sites@2020-06-01' = {
resource appService 'Microsoft.Web/sites@2020-06-01' = {
### [ARM](#tab/arm)
-For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sampe ARM template snippet is provided for you:
+For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample ARM template snippet is provided for you:
```ARM
{
public class ClientCertValidator {
}
```
+## Python sample
+
+The following Flask and Django Python code samples implement a decorator named `authorize_certificate` that can be used on a view function to permit access only to callers that present a valid client certificate. It expects a PEM formatted certificate in the `X-ARR-ClientCert` header and uses the Python [cryptography](https://pypi.org/project/cryptography/) package to validate the certificate based on its fingerprint (thumbprint), subject common name, issuer common name, and beginning and expiration dates. If validation fails, the decorator ensures that an HTTP response with status code 403 (Forbidden) is returned to the client.
+
+### [Flask](#tab/flask)
+
+```python
+from functools import wraps
+from datetime import datetime, timezone
+from flask import abort, request
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+from cryptography.hazmat.primitives import hashes
++
+def validate_cert(request):
+
+ try:
+ cert_value = request.headers.get('X-ARR-ClientCert')
+ if cert_value is None:
+ return False
+
+        cert_data = ''.join(['-----BEGIN CERTIFICATE-----\n', cert_value, '\n-----END CERTIFICATE-----\n',])
+ cert = x509.load_pem_x509_certificate(cert_data.encode('utf-8'))
+
+ fingerprint = cert.fingerprint(hashes.SHA1())
+ if fingerprint != b'12345678901234567890':
+ return False
+
+ subject = cert.subject
+ subject_cn = subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
+ if subject_cn != "contoso.com":
+ return False
+
+ issuer = cert.issuer
+ issuer_cn = issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
+ if issuer_cn != "contoso.com":
+ return False
+
+ current_time = datetime.now(timezone.utc)
+
+ if current_time < cert.not_valid_before_utc:
+ return False
+
+ if current_time > cert.not_valid_after_utc:
+ return False
+
+ return True
+
+ except Exception as e:
+ # Handle any errors encountered during validation
+ print(f"Encountered the following error during certificate validation: {e}")
+ return False
+
+def authorize_certificate(f):
+ @wraps(f)
+ def decorated_function(*args, **kwargs):
+ if not validate_cert(request):
+ abort(403)
+ return f(*args, **kwargs)
+ return decorated_function
+```
+
+The following code snippet shows how to use the decorator on a Flask view function.
+
+```python
+@app.route('/hellocert')
+@authorize_certificate
+def hellocert():
+ print('Request for hellocert page received')
+    return render_template('index.html')
+```
+
+### [Django](#tab/django)
+
+```python
+from functools import wraps
+from datetime import datetime, timezone
+from django.core.exceptions import PermissionDenied
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+from cryptography.hazmat.primitives import hashes
++
+def validate_cert(request):
+
+ try:
+ cert_value = request.headers.get('X-ARR-ClientCert')
+ if cert_value is None:
+ return False
+
+        cert_data = ''.join(['-----BEGIN CERTIFICATE-----\n', cert_value, '\n-----END CERTIFICATE-----\n',])
+ cert = x509.load_pem_x509_certificate(cert_data.encode('utf-8'))
+
+ fingerprint = cert.fingerprint(hashes.SHA1())
+ if fingerprint != b'12345678901234567890':
+ return False
+
+ subject = cert.subject
+ subject_cn = subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
+ if subject_cn != "contoso.com":
+ return False
+
+ issuer = cert.issuer
+ issuer_cn = issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
+ if issuer_cn != "contoso.com":
+ return False
+
+ current_time = datetime.now(timezone.utc)
+
+ if current_time < cert.not_valid_before_utc:
+ return False
+
+ if current_time > cert.not_valid_after_utc:
+ return False
+
+ return True
+
+ except Exception as e:
+ # Handle any errors encountered during validation
+ print(f"Encountered the following error during certificate validation: {e}")
+ return False
+
+def authorize_certificate(view):
+ @wraps(view)
+ def _wrapped_view(request, *args, **kwargs):
+ if not validate_cert(request):
+ raise PermissionDenied
+ return view(request, *args, **kwargs)
+ return _wrapped_view
+```
+
+The following code snippet shows how to use the decorator on a Django view function.
+
+```python
+@authorize_certificate
+def hellocert(request):
+ print('Request for hellocert page received')
+    return render(request, 'hello_azure/index.html')
+```
```
+++
[exclusion-paths]: ./media/app-service-web-configure-tls-mutual-auth/exclusion-paths.png
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 6/26/2024 Last updated : 6/28/2024

# Migration to App Service Environment v3 using the side-by-side migration feature
App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
-The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md).
+The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md).
> [!IMPORTANT]
> If you fail to complete all steps described in this tutorial, you'll experience downtime. For example, if you don't update all dependent resources with the new IP addresses or you don't allow access to/from your new subnet, such as the case for your custom domain suffix key vault, you'll experience downtime until that's addressed.
Once you're ready to redirect traffic, you can complete the final step of the mi
> You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support.
>
-If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, you can revert all changes and return to your old App Service Environment v2. The revert process takes 3 to 6 hours to complete. Once the revert process completes, your old App Service Environment is back online and your new App Service Environment v3 is deleted. You can then attempt the migration again once you resolve any issues.
+If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, contact support.
## Use the side-by-side migration feature
This step is your opportunity to test and validate your new App Service Environm
Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support.
-If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to revert the migration. Don't run the DNS change command if you need to revert the migration. For more information, see [Revert migration](#redirect-customer-traffic-validate-your-app-service-environment-v3-and-complete-migration).
+If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to discuss your options. Don't run the DNS change command since that command completes the migration.
```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=DnsChange&api-version=2022-03-01"
The App Service plan SKUs available for App Service Environment v3 run on the Is
- **What properties of my App Service Environment will change?** You're on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when you use the side-by-side migration feature. Note that for an ELB App Service Environment, there was previously a single IP for both inbound and outbound; for App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). - **What happens if migration fails or there is an unexpected issue during the migration?**
- If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side-by-side migration feature, you can revert all changes if there's any issues.
+ If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads.
- **What happens to my old App Service Environment?** If you decide to migrate an App Service Environment using the side-by-side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it are shut down and deleted. Your old environment is no longer accessible. A revert to the old environment at this point isn't possible. - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 02/20/2024 Last updated : 06/28/2024
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscripti
### Use runbook authentication with Hybrid Worker Credentials
+**Prerequisite**
+- The Hybrid Worker must be deployed and the machine must be in a running state before you execute a runbook.
+
+**Hybrid Worker Credentials**
Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources, such as certificate stores. All runbooks run under these credentials on a Hybrid Runbook Worker in the group. - The user name for the credential must be in one of the following formats:
automation Manage Runtime Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runtime-environment.md
Title: Manage Runtime environment and associated runbooks in Azure Automation
description: This article tells how to manage runbooks in Runtime environment and associated runbooks Azure Automation Previously updated : 01/17/2024 Last updated : 06/28/2024
An Azure Automation account in supported public region (except Central India, Ge
> [!NOTE] > - When you import a package, it might take several minutes. 100 MB is the maximum total size of the files that you can import.
- > - Use *.zip* files for PowerShell runbook types.
+ > - Use *.zip* files for PowerShell runbook types, as described in [Understanding a Windows PowerShell module](https://learn.microsoft.com/powershell/scripting/developer/module/understanding-a-windows-powershell-module?view=powershell-7.4).
> - For Python 3.8 packages, use .tar.gz or .whl files targeting cp38-amd64. > - For Python 3.10 (preview) packages, use .whl files targeting cp310 on Linux.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### PureStorage
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
-| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | 1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |
-| Portworx Enterprise 2.9 | 1.22.5 | 1.1.0_2021-11-02 | 15.0.2195.191 | 12.3 (Ubuntu 12.3-1) |
+|[Portworx Enterprise 3.1](https://www.purestorage.com/products/cloud-native-applications/portworx.html)|1.28.7|1.30.0_2024-06-11|16.0.5349.20214|Not validated|
+|Portworx Enterprise 2.7 |1.20.7 |1.1.0_2021-11-02 |15.0.2148.140 |Not validated |
+|Portworx Enterprise 2.9 |1.22.5 |1.1.0_2021-11-02 |15.0.2195.191 |12.3 (Ubuntu 12.3-1) |
### Red Hat
azure-cache-for-redis Cache Best Practices Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md
Although we don't own or support any client libraries, we do recommend some libr
| ioredis | Node.js | [Link](https://github.com/luin/ioredis) | [More information here](https://ioredis.readthedocs.io/en/stable/API/) | > [!NOTE]
-> Your application can to connect and use your Azure Cache for Redis instance with any client library that can also communicate with open-source Redis.
+> Your application can use any client library that is compatible with open-source Redis to connect to your Azure Cache for Redis instance.
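For example, with StackExchange.Redis, a widely used open-source .NET client, a minimal connection sketch looks like the following. The host name and access key are placeholders for your own cache's values:

```csharp
using System;
using StackExchange.Redis;

// Connect to an Azure Cache for Redis instance over TLS (port 6380).
// "contoso.redis.cache.windows.net" and "<access-key>" are placeholders.
ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect(
    "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

IDatabase db = muxer.GetDatabase();
db.StringSet("greeting", "hello");
Console.WriteLine(db.StringGet("greeting")); // prints "hello"
```

Reuse a single `ConnectionMultiplexer` for the lifetime of your application instead of creating one per request; the connection is designed to be shared.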
## Client library-specific guidance
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
There are several changes you can make to mitigate high server load:
- Investigate what's causing high server load, such as [long-running commands](#long-running-commands) (noted in this article) or high memory pressure. - [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).-- If your production workload on a _C1_ cache is negatively affected by extra latency from virus scanning, you can reduce the effect by to pay for a higher tier offering with multiple CPU cores, such as _C2_.
+- If your production workload on a _C1_ cache is negatively affected by extra latency from internal Defender scans, you can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
#### Spikes in server load
-On _C0_ and _C1_ caches, you might see short spikes in server load not caused by an increase in requests a couple times a day while virus scanning is running on the VMs. You see higher latency for requests while virus scanning is happening on these tiers. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving virus scanning and Redis requests.
+On _C0_ and _C1_ caches, you might see short spikes in server load a couple of times a day that aren't caused by an increase in requests; these occur while internal Defender scanning runs on the VMs. You see higher latency for requests while internal Defender scans happen on these tiers. Caches on the _C0_ and _C1_ tiers have only a single core to multitask, dividing the work between internal Defender scanning and serving Redis requests.
### High memory usage
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Migration is a complex task. Start planning your migration to Azure Monitor Agen
> [!IMPORTANT] > The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date.
-> - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until Febuary 1 2025.
-> - **Installation:** The ability to install the legacy agents will be removed from the Azure Portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perfrom offline installations.
+> - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until February 1, 2025.
+> - **Installation:** The ability to install the legacy agents will be removed from the Azure portal, and installation policies for legacy agents will be removed. You can still install the MMA agent extension and perform offline installations.
> - **Customer Support:** You will not be able to get support for legacy agent issues.
-> - **OS Support:** Support for new Linux or Windows distros, incluing service packs won't be added after the deprecation of the legacy agents.
+> - **OS Support:** Support for new Linux or Windows distros, including service packs, won't be added after the deprecation of the legacy agents.
## Before you begin
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The table created in the script has two columns:
- `TimeGenerated` (datetime) [Required] - `RawData` (string) [Optional if table schema provided]-- 'FilePath' (string) [Optional]
+- `FilePath` (string) [Optional]
+- `Computer` (string) [Optional]
- `YourOptionalColumn` (string) [Optional]
-The default table schema for log data collected from text files is 'TimeGenerated' and 'RawData'. Adding the 'FilePath' to either team is optional. If you know your final schema or your source is a JSON log, you can add the final columns in the script before creating the table. You can always [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column) later.
+The default table schema for log data collected from text files is `TimeGenerated` and `RawData`. Adding `FilePath` or `Computer` to either stream is optional. If you know your final schema or your source is a JSON log, you can add the final columns in the script before creating the table. You can always [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column) later.
Your column names and JSON attributes must match exactly to be parsed automatically into the table. Both column names and JSON attributes are case sensitive. For example, `Rawdata` won't collect the event data; it must be `RawData`. Ingestion drops JSON attributes that don't have a corresponding column.
$tableParams = @'
"name": "FilePath", "type": "String" },
- {
+ {
+ "name": "Computer",
+ "type": "String"
+ },
+ {
"name": "YourOptionalColumn", "type": "String" }
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
You can write code to filter, modify, or enrich your telemetry before it's sent from the SDK. The processing includes data that's sent from the standard telemetry modules, such as HTTP request collection and dependency collection.
-* [Filtering](./api-filtering-sampling.md#filtering) can modify or discard telemetry before it's sent from the SDK by implementing `ITelemetryProcessor`. For example, you could reduce the volume of telemetry by excluding requests from robots. Unlike sampling, You have full control what is sent or discarded, but it will affect any metrics based on aggregated logs. Depending on how you discard items, you might also lose the ability to navigate between related items.
+* [Filtering](./api-filtering-sampling.md#filtering) can modify or discard telemetry before it's sent from the SDK by implementing `ITelemetryProcessor`. For example, you could reduce the volume of telemetry by excluding requests from robots. Unlike sampling, you have full control over what is sent or discarded, but it affects any metrics based on aggregated logs. Depending on how you discard items, you might also lose the ability to navigate between related items.
* [Add or Modify properties](./api-filtering-sampling.md#add-properties) to any telemetry sent from your app by implementing an `ITelemetryInitializer`. For example, you could add calculated values or version numbers by which to filter the data in the portal.
You can write code to filter, modify, or enrich your telemetry before it's sent
> [!NOTE] > [The SDK API](./api-custom-events-metrics.md) is used to send custom events and metrics.
-Before you start:
+## Prerequisites
-* Install the appropriate SDK for your application: [ASP.NET](asp-net.md), [ASP.NET Core](asp-net-core.md), [Non HTTP/Worker for .NET/.NET Core](worker-service.md), or [JavaScript](javascript.md).
-
-<a name="filtering"></a>
+Install the appropriate SDK for your application: [ASP.NET](asp-net.md), [ASP.NET Core](asp-net-core.md), [Non-HTTP/Worker for .NET/.NET Core](worker-service.md), or [JavaScript](javascript.md).
## Filtering
To filter telemetry, you write a telemetry processor and register it with `Telem
> Filtering the telemetry sent from the SDK by using processors can skew the statistics that you see in the portal and make it difficult to follow related items. > > Instead, consider using [sampling](./sampling.md).
->
->
-### Create a telemetry processor
+### .NET applications
-### C#
-
-1. To create a filter, implement `ITelemetryProcessor`.
+1. Implement `ITelemetryProcessor`.
Telemetry processors construct a chain of processing. When you instantiate a telemetry processor, you're given a reference to the next processor in the chain. When a telemetry data point is passed to the process method, it does its work and then calls (or doesn't call) the next telemetry processor in the chain.
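   As a minimal sketch, here's what such a processor can look like. The configuration examples that follow reference a `SuccessfulDependencyFilter`; assuming its purpose is to drop successful dependency calls so only failures are reported, an implementation might be:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch: drop successful dependency telemetry; forward everything else.
public class SuccessfulDependencyFilter : ITelemetryProcessor
{
    // Reference to the next processor in the chain.
    private ITelemetryProcessor Next { get; set; }

    // The SDK supplies the next processor when it builds the chain.
    public SuccessfulDependencyFilter(ITelemetryProcessor next)
    {
        this.Next = next;
    }

    public void Process(ITelemetry item)
    {
        var dependency = item as DependencyTelemetry;
        if (dependency?.Success == true)
        {
            return; // Drop the item by not calling the next processor.
        }

        this.Next.Process(item); // Forward everything else.
    }
}
```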
To filter telemetry, you write a telemetry processor and register it with `Telem
2. Add your processor.
-ASP.NET **apps**
-
-Insert this snippet in ApplicationInsights.config:
-
-```xml
-<TelemetryProcessors>
- <Add Type="WebApplication9.SuccessfulDependencyFilter, WebApplication9">
- <!-- Set public property -->
- <MyParamFromConfigFile>2-beta</MyParamFromConfigFile>
- </Add>
-</TelemetryProcessors>
-```
-
-You can pass string values from the .config file by providing public named properties in your class.
-
-> [!WARNING]
-> Take care to match the type name and any property names in the .config file to the class and property names in the code. If the .config file references a nonexistent type or property, the SDK may silently fail to send any telemetry.
->
-
-Alternatively, you can initialize the filter in code. In a suitable initialization class, for example, AppStart in `Global.asax.cs`, insert your processor into the chain:
-
-```csharp
-var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
-builder.Use((next) => new SuccessfulDependencyFilter(next));
-
-// If you have more processors:
-builder.Use((next) => new AnotherProcessor(next));
-
-builder.Build();
-```
-
-Telemetry clients created after this point will use your processors.
-
-ASP.NET **Core/Worker service apps**
-
-> [!NOTE]
-> Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-
-For apps written by using [ASP.NET Core](asp-net-core.md#add-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
-
-```csharp
+ #### [ASP.NET](#tab/dotnet)
+
+ Insert this snippet in ApplicationInsights.config:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="WebApplication9.SuccessfulDependencyFilter, WebApplication9">
+ <!-- Set public property -->
+ <MyParamFromConfigFile>2-beta</MyParamFromConfigFile>
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+ You can pass string values from the .config file by providing public named properties in your class.
+
+ > [!WARNING]
+ > Take care to match the type name and any property names in the .config file to the class and property names in the code. If the .config file references a nonexistent type or property, the SDK may silently fail to send any telemetry.
+ >
+
+ Alternatively, you can initialize the filter in code. In a suitable initialization class, for example, AppStart in `Global.asax.cs`, insert your processor into the chain:
+
+ ```csharp
+ var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
+ builder.Use((next) => new SuccessfulDependencyFilter(next));
+
+ // If you have more processors:
+ builder.Use((next) => new AnotherProcessor(next));
+
+ builder.Build();
+ ```
+
+ Telemetry clients created after this point use your processors.
+
+ #### [ASP.NET Core/Worker service](#tab/dotnetcore)
+
+ > [!NOTE]
+ > Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
+
+ For apps written by using [ASP.NET Core](asp-net-core.md#add-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
+
+ ```csharp
public void ConfigureServices(IServiceCollection services) { // ... services.AddApplicationInsightsTelemetry(); services.AddApplicationInsightsTelemetryProcessor<SuccessfulDependencyFilter>();-
+
// If you have more processors: services.AddApplicationInsightsTelemetryProcessor<AnotherProcessor>(); }
-```
-
-To register telemetry processors that need parameters in ASP.NET Core, create a custom class implementing **ITelemetryProcessorFactory**. Call the constructor with the desired parameters in the **Create** method and then use **AddSingleton<ITelemetryProcessorFactory, MyTelemetryProcessorFactory>()**.
+ ```
+
+ To register telemetry processors that need parameters in ASP.NET Core, create a custom class implementing **ITelemetryProcessorFactory**. Call the constructor with the desired parameters in the **Create** method and then use **AddSingleton<ITelemetryProcessorFactory, MyTelemetryProcessorFactory>()**.
+
+
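As an illustrative sketch (`MyParameterizedFilter` and `MyFilterFactory` are hypothetical names, not SDK types), a factory for a processor that needs a constructor parameter might look like this:

```csharp
using Microsoft.ApplicationInsights.AspNetCore; // assumed home of ITelemetryProcessorFactory
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical processor that requires a constructor parameter.
public class MyParameterizedFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    private readonly int _threshold;

    public MyParameterizedFilter(ITelemetryProcessor next, int threshold)
    {
        _next = next;
        _threshold = threshold;
    }

    public void Process(ITelemetry item)
    {
        // ... use _threshold to decide whether to forward the item ...
        _next.Process(item);
    }
}

// Factory that supplies the parameter when the SDK builds the chain.
public class MyFilterFactory : ITelemetryProcessorFactory
{
    private readonly int _threshold;

    public MyFilterFactory(int threshold) => _threshold = threshold;

    public ITelemetryProcessor Create(ITelemetryProcessor next)
        => new MyParameterizedFilter(next, _threshold);
}

// In Startup.ConfigureServices:
// services.AddSingleton<ITelemetryProcessorFactory>(new MyFilterFactory(threshold: 100));
```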
### Example filters #### Synthetic requests
Filter out bots and web tests. Although Metrics Explorer gives you the option to
```csharp public void Process(ITelemetry item) {
- if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource)) {return;}
-
- // Send everything else:
- this.Next.Process(item);
+ if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource)) {return;}
+
+ // Send everything else:
+ this.Next.Process(item);
} ```
public void Process(ITelemetry item)
<a name="add-properties"></a>
-### Java
+### Java applications
To learn more about telemetry processors and their implementation in Java, reference the [Java telemetry processors documentation](./java-standalone-telemetry-processors.md). ### JavaScript web applications
-**Filter by using ITelemetryInitializer**
+You can filter telemetry from JavaScript web applications by using `ITelemetryInitializer`.
1. Create a telemetry initializer callback function. The callback function takes `ITelemetryItem` as a parameter, which is the event that's being processed. Returning `false` from this callback results in the telemetry item being filtered out.
- ```JS
- var filteringFunction = (envelope) => {
- if (envelope.data.someField === 'tobefilteredout') {
- return false;
- }
-
- return true;
- };
- ```
+ ```js
+ var filteringFunction = (envelope) => {
+ if (envelope.data.someField === 'tobefilteredout') {
+ return false;
+ }
+ return true;
+ };
+ ```
2. Add your telemetry initializer callback:
- ```JS
+ ```js
appInsights.addTelemetryInitializer(filteringFunction); ```
To learn more about telemetry processors and their implementation in Java, refer
Use telemetry initializers to enrich telemetry with additional information or to override telemetry properties set by the standard telemetry modules.
-For example, Application Insights for a web package collects telemetry about HTTP requests. By default, it flags as failed any request with a response code >=400. But if you want to treat 400 as a success, you can provide a telemetry initializer that sets the success property.
-
-If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. This initializer includes `Track()` methods called by the standard telemetry modules. By convention, these modules don't set any property that was already set by an initializer. Telemetry initializers are called before calling telemetry processors. So any enrichments done by initializers are visible to processors.
+For example, the Application Insights web package collects telemetry about HTTP requests. By default, it flags any request with a response code >=400 as failed. If instead you want to treat 400 as a success, you can provide a telemetry initializer that sets the success property.
-**Define your initializer**
+If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. This initializer includes `Track()` methods called by the standard telemetry modules. By convention, these modules don't set any property that was already set by an initializer. Telemetry initializers are called before calling telemetry processors, so any enrichments done by initializers are visible to processors.
-*C#*
+### .NET applications
-```csharp
-using System;
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.DataContracts;
-using Microsoft.ApplicationInsights.Extensibility;
+1. Define your initializer.
-namespace MvcWebRole.Telemetry
-{
- /*
- * Custom TelemetryInitializer that overrides the default SDK
- * behavior of treating response codes >= 400 as failed requests
- *
- */
- public class MyTelemetryInitializer : ITelemetryInitializer
- {
- public void Initialize(ITelemetry telemetry)
+ ```csharp
+ using System;
+ using Microsoft.ApplicationInsights.Channel;
+ using Microsoft.ApplicationInsights.DataContracts;
+ using Microsoft.ApplicationInsights.Extensibility;
+
+ namespace MvcWebRole.Telemetry
{
- var requestTelemetry = telemetry as RequestTelemetry;
- // Is this a TrackRequest() ?
- if (requestTelemetry == null) return;
- int code;
- bool parsed = Int32.TryParse(requestTelemetry.ResponseCode, out code);
- if (!parsed) return;
- if (code >= 400 && code < 500)
+ /*
+ * Custom TelemetryInitializer that overrides the default SDK
+ * behavior of treating response codes >= 400 as failed requests
+ *
+ */
+ public class MyTelemetryInitializer : ITelemetryInitializer
{
- // If we set the Success property, the SDK won't change it:
- requestTelemetry.Success = true;
-
- // Allow us to filter these requests in the portal:
- requestTelemetry.Properties["Overridden400s"] = "true";
+ public void Initialize(ITelemetry telemetry)
+ {
+ var requestTelemetry = telemetry as RequestTelemetry;
+ // Is this a TrackRequest() ?
+ if (requestTelemetry == null) return;
+ int code;
+ bool parsed = Int32.TryParse(requestTelemetry.ResponseCode, out code);
+ if (!parsed) return;
+ if (code >= 400 && code < 500)
+ {
+ // If we set the Success property, the SDK won't change it:
+ requestTelemetry.Success = true;
+
+ // Allow us to filter these requests in the portal:
+ requestTelemetry.Properties["Overridden400s"] = "true";
+ }
+ // else leave the SDK to set the Success property
+ }
}
- // else leave the SDK to set the Success property
}
- }
-}
-```
-
-ASP.NET **apps: Load your initializer**
-
-In ApplicationInsights.config:
-
-```xml
-<ApplicationInsights>
- <TelemetryInitializers>
- <!-- Fully qualified type name, assembly name: -->
- <Add Type="MvcWebRole.Telemetry.MyTelemetryInitializer, MvcWebRole"/>
- ...
- </TelemetryInitializers>
-</ApplicationInsights>
-```
-
-Alternatively, you can instantiate the initializer in code, for example, in Global.aspx.cs:
-
-```csharp
-protected void Application_Start()
-{
- // ...
- TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
-}
-```
-
-See more of [this sample](https://github.com/MohanGsk/ApplicationInsights-Home/tree/master/Samples/AzureEmailService/MvcWebRole).
-
-ASP.NET **Core/Worker service apps: Load your initializer**
-
-> [!NOTE]
-> Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
+ ```
-For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
+2. Load your initializer.
+
+ #### [ASP.NET](#tab/dotnet)
+
+ In ApplicationInsights.config:
+
+ ```xml
+ <ApplicationInsights>
+ <TelemetryInitializers>
+ <!-- Fully qualified type name, assembly name: -->
+ <Add Type="MvcWebRole.Telemetry.MyTelemetryInitializer, MvcWebRole"/>
+ ...
+ </TelemetryInitializers>
+ </ApplicationInsights>
+ ```
+
+ Alternatively, you can instantiate the initializer in code, for example, in Global.aspx.cs:
+
+ ```csharp
+ protected void Application_Start()
+ {
+ // ...
+ TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
+ }
+ ```
+
+ See more of [this sample](https://github.com/MohanGsk/ApplicationInsights-Home/tree/master/Samples/AzureEmailService/MvcWebRole).
+
+ #### [ASP.NET Core/Worker service](#tab/dotnetcore)
+
+ > [!NOTE]
+ > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
+
+ For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.Extensibility;
+ using CustomInitializer.Telemetry;
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
+ }
+ ```
+
+
-```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- using CustomInitializer.Telemetry;
- public void ConfigureServices(IServiceCollection services)
-{
- services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
-}
-```
### JavaScript telemetry initializers Insert a JavaScript telemetry initializer, if needed. For more information on the telemetry initializers for the Application Insights JavaScript SDK, see [Telemetry initializers](https://github.com/microsoft/ApplicationInsights-JS#telemetry-initializers).
Insert a telemetry initializer by adding the onInit callback function in the [Ja
<!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\javascript-feature-extensions.md --> ```html <script type="text/javascript">
-!(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var 
n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({
+!(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var 
n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({
src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
-crossOrigin: "anonymous",
+crossOrigin: "anonymous", // When supplied, this value is added as the cross origin attribute on the script tag
onInit: function (sdk) {
- sdk.addTelemetryInitializer(function (envelope) {
+ sdk.addTelemetryInitializer(function (envelope) {
envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer';
- });
+ });
}, // Once the application insights instance has loaded and initialized this method will be called
+// sri: false, // Optional: specifies whether to resolve the snippet URL from the integrity file and perform an integrity check
cfg: { // Application Insights Configuration connectionString: "YOUR_CONNECTION_STRING" }});
cfg: { // Application Insights Configuration
#### [npm package](#tab/npmpackage)
- ```js
- import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
- const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING'
- /* ...Other Configuration Options... */
- } });
- appInsights.loadAppInsights();
- // To insert a telemetry initializer, uncomment the following code.
- /** var telemetryInitializer = (envelope) => { envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer';
- };
- appInsights.addTelemetryInitializer(telemetryInitializer); **/
- appInsights.trackPageView();
- ```
+```js
+import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+
+const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING'
+ /* ...Other Configuration Options... */
+} });
+appInsights.loadAppInsights();
+// To insert a telemetry initializer, uncomment the following code.
+/** var telemetryInitializer = (envelope) => { envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer';
+ };
+appInsights.addTelemetryInitializer(telemetryInitializer); **/
+appInsights.trackPageView();
+```
The following sample initializer adds a custom property to every tracked telemet
```csharp public void Initialize(ITelemetry item) {
- var itemProperties = item as ISupportProperties;
- if(itemProperties != null && !itemProperties.Properties.ContainsKey("customProp"))
+ var itemProperties = item as ISupportProperties;
+ if(itemProperties != null && !itemProperties.Properties.ContainsKey("customProp"))
{ itemProperties.Properties["customProp"] = "customValue"; }
public void Initialize(ITelemetry telemetry)
#### Control the client IP address used for geolocation mappings
-The following sample initializer sets the client IP which will be used for geolocation mapping, instead of the client socket IP address, during telemetry ingestion.
+The following sample initializer sets the client IP that's used for geolocation mapping during telemetry ingestion, instead of the client socket IP address.
```csharp public void Initialize(ITelemetry telemetry)
What's the difference between telemetry processors and telemetry initializers?
## <a name="next"></a>Next steps * [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) * [sampling](./sampling.md)-
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 10/11/2023- Last updated : 06/28/2024+ # Migrate to workspace-based Application Insights resources
Workspace-based Application Insights resources allow you to take advantage of th
* [Customer-managed keys](../logs/customer-managed-keys.md) provide encryption at rest for your data with encryption keys that only you have access to. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link the Azure platform as a service (PaaS) to your virtual network by using private endpoints.
-* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over:
+* [Profiler and Snapshot Debugger Bring your own storage (BYOS)](./profiler-bring-your-own-storage.md) gives you full control over:
- Encryption-at-rest policy. - Lifetime management policy. - Network access for all data associated with Application Insights Profiler and Snapshot Debugger.
If you don't need to migrate an existing resource, and instead want to create a
## Prerequisites -- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:
+- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:
- - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode).
- - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
+ - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode).
+ - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
- **Continuous export** isn't compatible with workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.
- > [!CAUTION]
- > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics.
- > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export).
-
+ > [!CAUTION]
+ > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics.
+ > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export).
- Check your current retention settings under **Settings** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.
- > [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level).
- > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period.
- > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
-
+ > [!NOTE]
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level).
+ > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period.
+ > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
- Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
+## Find your classic Application Insights resources
+
+You can use one of the following methods to find classic Application Insights resources within your subscription:
+
+#### Application Insights resource in the Azure portal
+
+In the Overview pane of an Application Insights resource, classic resources don't have a linked workspace, and the classic Application Insights retirement warning banner appears. Workspace-based resources show a linked workspace in the Overview section.
+
+Classic resource:
+
+Workspace-based resource:
+
+#### Azure Resource Graph
+
+You can use the Azure Resource Graph (ARG) Explorer and run a query on the `resources` table to pull this information:
+
+```kusto
+resources
+| where subscriptionId == 'Replace with your own subscription ID'
+| where type contains 'microsoft.insights/components'
+| distinct resourceGroup, name, tostring(properties['IngestionMode']), tostring(properties['WorkspaceResourceId'])
+```
+> [!NOTE]
+> Classic resources are identified by 'ApplicationInsights', 'N/A', or *Empty* values.
+#### Azure CLI
+
+Run the following script from Cloud Shell in the portal, where authentication is built in, or from anywhere else after authenticating with `az login`:
+
+```azurecli
+$resources = az resource list --resource-type 'microsoft.insights/components' | ConvertFrom-Json
+
+$resources | Sort-Object -Property Name | Format-Table -Property @{Label="App Insights Resource"; Expression={$_.name}; width = 35}, @{Label="Ingestion Mode"; Expression={$mode = az resource show --name $_.name --resource-group $_.resourceGroup --resource-type microsoft.insights/components --query "properties.IngestionMode" -o tsv; $mode}; width = 45}
+```
+> [!NOTE]
+> Classic resources are identified by 'ApplicationInsights', 'N/A', or *Empty* values.
+Alternatively, you can run the following PowerShell script, which uses the Az PowerShell module:
+
+```azurepowershell
+$subscription = "SUBSCRIPTION ID GOES HERE"
+$token = (Get-AZAccessToken).Token
+$header = @{Authorization = "Bearer $token"}
+$uri = "https://management.azure.com/subscriptions/$subscription/providers/Microsoft.Insights/components?api-version=2015-05-01"
+$RestResult = Invoke-RestMethod -Method GET -Uri $uri -Headers $header -ContentType "application/json" -ErrorAction Stop -Verbose
+$list = @()
+$ClassicList = @()
+foreach ($app in $RestResult.value)
+ {
+ #"processing: " + $app.properties.WorkspaceResourceId ## Classic Application Insights do not have a workspace.
+ if ($app.properties.WorkspaceResourceId)
+ {
+ $Obj = New-Object -TypeName PSObject
+ #$app.properties.WorkspaceResourceId
+ $Obj | Add-Member -Type NoteProperty -Name Name -Value $app.name
+ $Obj | Add-Member -Type NoteProperty -Name WorkspaceResourceId -Value $app.properties.WorkspaceResourceId
+ $list += $Obj
+ }
+ else
+ {
+ $Obj = New-Object -TypeName PSObject
+        # $app.properties.WorkspaceResourceId ## Classic Application Insights resources don't have a workspace.
+ $Obj | Add-Member -Type NoteProperty -Name Name -Value $app.name
+ $ClassicList += $Obj
+ }
+ }
+$list |Format-Table -Property Name, WorkspaceResourceId -Wrap
+"";"Classic:"
+$ClassicList | Format-Table
+```
+ ## Migrate your resource To migrate a classic Application Insights resource to a workspace-based resource:
-1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left.
+1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left.
- :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configure heading.":::
+    :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configure heading.":::
1. Select **Migrate to Workspace-based**.
From within the Application Insights resource pane, select **Properties** > **Ch
This section provides answers to common questions.
-### What will happen if I don't migrate my Application Insights classic resource to a workspace-based resource?
+### What happens if I don't migrate my Application Insights classic resource to a workspace-based resource?
-Microsoft will begin an automatic phased approach to migrating classic resources to workspace-based resources beginning in May 2024 and this migration will span the course of several months. We can't provide approximate dates that specific resources, subscriptions, or regions will be migrated.
+Microsoft began a phased approach to migrating classic resources to workspace-based resources in May 2024, and the migration will continue over several months. We can't provide approximate dates when specific resources, subscriptions, or regions will be migrated.
-We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the retirement notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first.
+We strongly encourage manual migration to workspace-based resources. To start, select the retirement notice banner in the Overview pane of your classic Application Insights resource in the Azure portal. Migration typically involves a single step: choosing which Log Analytics workspace stores your application data. If you use continuous export, you also need to migrate to diagnostic settings or disable the feature first.
-If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you may delete or manually migrate the resource.
+If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you can delete or manually migrate the resource.
### Is there any implication on the cost from migration? There's usually no difference, with two exceptions. -- Application Insights resources that were receiving 1 GB per month free via legacy Application Insights pricing model will no longer receive the free data.-- Application Insights resources that were in the basic pricing tier prior to April 2018 continue to be billed at the same non-regional price point as before April 2018. Application Insights resources created after that time, or those converted to be workspace-based, will receive the current regional pricing. For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/).
+- Application Insights resources that were receiving 1 GB per month free via the legacy Application Insights pricing model no longer receive the free data.
+- Application Insights resources that were in the basic pricing tier before April 2018 continue to be billed at the same nonregional price point as before April 2018. Application Insights resources created after that time, or resources converted to workspace-based, receive the current regional pricing. For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/).
-The migration to workspace-based Application Insights offers a number of options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs).
+The migration to workspace-based Application Insights offers many options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs).
### How will telemetry capping work?
No. We merge data during query time.
Yes, they continue to work.
-### Will my dashboards that have pinned metric and log charts continue to work after migration?
+### Will my dashboards with pinned metric and log charts continue to work after migration?
Yes, they continue to work.
No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-
### What happens with continuous export after migration?
-To continue with automated exports, you'll need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights.
+To continue with automated exports, you need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to a workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights.
### How do I ensure a successful migration of my App Insights resource using Terraform?
-If you're using Terraform to manage your Azure resources, it's important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Using an older version of the provider, such as version 3.12, may result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. It can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values.
+If you're using Terraform to manage your Azure resources, it's important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Use of an older version of the provider, such as version 3.12, can result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. It can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values.
-To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher, which performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values.
+To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher. It performs the proper migration steps by issuing the appropriate Azure Resource Manager (ARM) call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values.
### Can I still use the old API to create Application Insights resources programmatically?
-For backwards compatibility, calls to the old API for creating Application Insights resources will continue to work. Each of these calls will eventually create both a workspace-based Application Insights resource and a Log Analytics workspace to store the data.
+For backwards compatibility, calls to the old API for creating Application Insights resources continue to work. Each of these calls creates both a workspace-based Application Insights resource and a Log Analytics workspace to store the data.
We strongly encourage updating to the [new API](create-workspace-resource.md) for better control over resource creation.
Yes, we recommend migrating diagnostic settings on classic Application Insights
## Troubleshooting
-This section offers troubleshooting tips for common issues.
+This section provides troubleshooting tips.
### Access mode
-**Error message:** "The selected workspace is configured with workspace-based access mode. Some Application Performance Monitoring (APM) features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
+**Error message:** "The selected workspace is configured with workspace-based access mode. Some Application Performance Monitoring (APM) features can be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience remains blocked.
If you can't change the access control mode for security reasons for your curren
The legacy **Continuous export** functionality isn't supported for workspace-based resources. Before migrating, you need to enable diagnostic settings and disable continuous export.

1. [Enable Diagnostic Settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) on your classic Application Insights resource.
-1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**.
+1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**.
:::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item.":::
The legacy **Continuous export** functionality isn't supported for workspace-bas
- After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings aren't saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export.
- - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
+ - After migrating your Application Insights resource, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md).
### Retention settings
-**Warning message:** "Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately."
+**Warning message:** "Your customized Application Insights retention settings don't apply to data sent to the workspace. You need to reconfigure them separately."
-You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data.
+You don't have to make any changes before migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace before migrating and starting to ingest new data.
You can check your current retention settings for Log Analytics under **Settings** > **Usage and estimated costs** > **Data Retention** in the Log Analytics UI. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.

## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration. You can analyze data across multiple solutions more easily and use the capabilities of workspaces.
+Before the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration. You can analyze data across multiple solutions more easily and use the capabilities of workspaces.
### Classic data structure
Legacy table: availabilityResults
|customMeasurements|dynamic|Measurements|Dynamic|
|duration|real|DurationMs|real|
|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|String|
Legacy table: browserTimings
|cloud_RoleName|string|AppRoleName|string|
|customDimensions|dynamic|Properties|Dynamic|
|customMeasurements|dynamic|Measurements|Dynamic|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|string|
Legacy table: dependencies
|data|string|Data|string|
|duration|real|DurationMs|real|
|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|String|
Legacy table: customEvents
|cloud_RoleName|string|AppRoleName|string|
|customDimensions|dynamic|Properties|Dynamic|
|customMeasurements|dynamic|Measurements|Dynamic|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|string|
Legacy table: customMetrics
|cloud_RoleInstance|string|AppRoleInstance|string|
|cloud_RoleName|string|AppRoleName|string|
|customDimensions|dynamic|Properties|Dynamic|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemId|string|\_ItemId|string|
|itemType|string|Type|string|
|name|string|Name|string|
Legacy table: pageViews
|customMeasurements|dynamic|Measurements|Dynamic|
|duration|real|DurationMs|real|
|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|String|
Legacy table: performanceCounters
|cloud_RoleName|string|AppRoleName|string|
|counter|string|(removed)||
|customDimensions|dynamic|Properties|Dynamic|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|instance|string|Instance|string|
|itemId|string|\_ItemId|string|
|itemType|string|Type|string|
Legacy table: requests
|customMeasurements|dynamic|Measurements|Dynamic|
|duration|real|DurationMs|Real|
|`id`|string|`Id`|String|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|String|
Legacy table: exceptions
|customMeasurements|dynamic|Measurements|dynamic|
|details|dynamic|Details|dynamic|
|handledAt|string|HandledAt|string|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|innermostAssembly|string|InnermostAssembly|string|
|innermostMessage|string|InnermostMessage|string|
|innermostMethod|string|InnermostMethod|string|
Legacy table: traces
|cloud_RoleName|string|AppRoleName|string|
|customDimensions|dynamic|Properties|dynamic|
|customMeasurements|dynamic|Measurements|dynamic|
-|iKey|string|IKey|string|
+|`iKey`|string|`IKey`|string|
|itemCount|int|ItemCount|int|
|itemId|string|\_ItemId|string|
|itemType|string|Type|string|
Legacy table: traces
* [Explore metrics](../essentials/metrics-charts.md)
* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web
#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
- <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\api-filtering-sampling.md -->
- ```html
- <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
- <script type="text/javascript">
- var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
- // Click Analytics configuration
- var clickPluginConfig = {
- autoCapture : true,
- dataTags: {
- useDefaultContentNameOrId: true
- }
- }
- // Application Insights configuration
- var configObj = {
- connectionString: "YOUR_CONNECTION_STRING",
- // Alternatively, you can pass in the instrumentation key,
- // but support for instrumentation key ingestion will end on March 31, 2025.
- // instrumentationKey: "YOUR INSTRUMENTATION KEY",
- extensions: [
- clickPluginInstance
- ],
- extensionConfig: {
- [clickPluginInstance.identifier] : clickPluginConfig
- },
- };
- // Application Insights JavaScript (Web) SDK Loader Script code
- !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var 
n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({
- src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
- crossOrigin: "anonymous",
- cfg: configObj // configObj is defined above.
- });
- </script>
- ```
-
-1. To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
+Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
+<!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\api-filtering-sampling.md -->
+
+```html
+<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
+<script type="text/javascript">
+var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
+// Click Analytics configuration
+var clickPluginConfig = {
+ autoCapture : true,
+ dataTags: {
+ useDefaultContentNameOrId: true
+ }
+}
+// Application Insights configuration
+var configObj = {
+ connectionString: "YOUR_CONNECTION_STRING",
+ // Alternatively, you can pass in the instrumentation key,
+ // but support for instrumentation key ingestion will end on March 31, 2025.
+ // instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [
+ clickPluginInstance
+ ],
+ extensionConfig: {
+ [clickPluginInstance.identifier] : clickPluginConfig
+ },
+};
+// Application Insights JavaScript (Web) SDK Loader Script code
+!(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var 
n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({
+ src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
+ crossOrigin: "anonymous",
+ // sri: false, // Optional value that specifies whether to fetch the snippet from the integrity file and perform an integrity check
+ cfg: configObj // configObj is defined above.
+});
+</script>
+```
+
+To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
#### [npm package](#tab/npmpackage)
If you have the [`contentName` callback function](#ivaluecallback) in advanced c
- For a clicked HTML `<img>` or `<area>` element, the plugin collects the value of its `alt` attribute.
- For all other clicked HTML elements, `contentName` is populated based on the following rules, which are listed in order of precedence (a short sketch follows the list):
- 1. The value of the `value` attribute for the element
- 1. The value of the `name` attribute for the element
- 1. The value of the `alt` attribute for the element
- 1. The value of the innerText attribute for the element
- 1. The value of the `id` attribute for the element
+ 1. The value of the `value` attribute for the element
+ 1. The value of the `name` attribute for the element
+ 1. The value of the `alt` attribute for the element
+ 1. The value of the `innerText` attribute for the element
+ 1. The value of the `id` attribute for the element
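
A hedged, markup-only sketch of these rules (the elements and attribute values below are hypothetical, shown in the same JSX-style used by the examples later in this article):

```javascript
// Hypothetical elements illustrating the precedence rules above.
<input type="button" id="btn1" name="saveButton" value="Save" />
// contentName = "Save": the value attribute outranks name, innerText, and id.

<button id="btn2" name="cancelButton">Cancel</button>
// contentName = "cancelButton": no value attribute, so name is used before innerText.
```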
### `parentId` key
Three different `behaviorValidator` callback functions are exposed as part of th
#### Passing in string vs. numerical values
-To reduce the bytes you pass, pass in the number value instead of the full text string. If cost isn't an issue, you can pass in the full text string (e.g. NAVIGATIONBACK).
+To reduce the bytes you pass, pass in the number value instead of the full text string. If cost isn't an issue, you can pass in the full text string (for example, NAVIGATIONBACK).
#### Sample usage with behaviorValidator
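
A minimal sketch of this approach, assuming the plug-in's `BehaviorMapValidator` helper; the map names and numeric codes below are illustrative, not built-in values:

```javascript
import { BehaviorMapValidator } from '@microsoft/applicationinsights-clickanalytics-js';

// Hypothetical behavior map: short numeric codes keep the reported payload small.
const behaviorMap = {
    UNDEFINED: 0,      // default; no operation
    NAVIGATIONBACK: 1, // user navigated back
    NAVIGATION: 2      // user navigated to a specific position
};

// Click Analytics configuration that validates behaviors against the map.
const clickPluginConfig = {
    autoCapture: true,
    behaviorValidator: BehaviorMapValidator(behaviorMap),
    dataTags: {
        useDefaultContentNameOrId: true
    }
};
```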
In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*-
```javascript
export const clickPluginConfigWithUseDefaultContentNameOrId = {
- dataTags : {
- customDataPrefix: "",
- parentDataTag: "",
- dntDataTag: "ai-dnt",
- captureAllMetaDataContent:false,
- useDefaultContentNameOrId: true,
- autoCapture: true
- },
+ dataTags : {
+ customDataPrefix: "",
+ parentDataTag: "",
+ dntDataTag: "ai-dnt",
+ captureAllMetaDataContent:false,
+ useDefaultContentNameOrId: true,
+ autoCapture: true
+ },
}; <div className="test1" data-id="test1parent">
- <div>Test1</div>
- <div>with id, data-id, parent data-id defined</div>
- <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button>
+ <div>Test1</div>
+ <div>with id, data-id, parent data-id defined</div>
+ <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button>
</div>
```
-For clicked element `<Button>` the value of `parentId` is `not_specified`, because no `parentDataTag` details are defined and no parent element id is provided within the current element.
+For clicked element `<Button>` the value of `parentId` is `not_specified`, because no `parentDataTag` details are defined and no parent element ID is provided within the current element.
### Example 2
-In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent id details are collected.
+In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent ID details are collected.
```javascript
export const clickPluginConfigWithParentDataTag = {
- dataTags : {
- customDataPrefix: "",
- parentDataTag: "group",
- ntDataTag: "ai-dnt",
- captureAllMetaDataContent:false,
- useDefaultContentNameOrId: false,
- autoCapture: true
- },
+ dataTags : {
+ customDataPrefix: "",
+ parentDataTag: "group",
+      dntDataTag: "ai-dnt",
+ captureAllMetaDataContent:false,
+ useDefaultContentNameOrId: false,
+ autoCapture: true
+ },
};
- <div className="test2" data-group="buttongroup1" data-id="test2parent">
- <div>Test2</div>
- <div>with data-id, parentid, parent data-id defined</div>
- <Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button>
- </div>
+<div className="test2" data-group="buttongroup1" data-id="test2parent">
+ <div>Test2</div>
+ <div>with data-id, parentid, parent data-id defined</div>
+ <Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button>
+</div>
```
-For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent ids or id details defined in its parent elements.
+For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent IDs or ID details defined in its parent elements.
### Example 3
In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-par
```javascript
export const clickPluginConfigWithParentDataTag = {
- dataTags : {
- customDataPrefix: "",
- parentDataTag: "group",
- dntDataTag: "ai-dnt",
- captureAllMetaDataContent:false,
- useDefaultContentNameOrId: false,
- autoCapture: true
- },
+ dataTags : {
+ customDataPrefix: "",
+ parentDataTag: "group",
+ dntDataTag: "ai-dnt",
+ captureAllMetaDataContent:false,
+ useDefaultContentNameOrId: false,
+ autoCapture: true
+ },
}; <div className="test6" data-group="buttongroup1" data-id="test6grandparent">
export const clickPluginConfigWithParentDataTag = {
</div>
</div>
```
-For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared. This declaration allows the plugin to traverse the current element tree and therefore the id of its closest parent will be used when parent id details are not directly provided within the current element. With the `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently.
+
+For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared. This declaration allows the plug-in to traverse the current element tree, so the ID of its closest parent is used when parent ID details aren't directly provided within the current element. With the `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently.
If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
The Application Insights JavaScript SDK has a base SDK and several plugins for m
We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics Auto-Collection plug-in](./javascript-feature-extensions.md):

-- If you're adding a [framework extension](./javascript-framework-extensions.md), which you can [add](#optional-add-advanced-sdk-configuration) after you follow the steps to get started below, you can optionally add Click Analytics when you add the framework extension.
+- If you're adding a [framework extension](./javascript-framework-extensions.md), which you can [add](#optional-add-advanced-sdk-configuration) after you follow the steps to [get started](#get-started), you can optionally add Click Analytics when you add the framework extension.
- If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md) after you follow the steps to get started.

We provide the [Debug plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/main/extensions/applicationinsights-debugplugin-js/README.md) and [Performance plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/main/extensions/applicationinsights-perfmarkmeasure-js/README.md) for debugging/testing. In rare cases, it's possible to build your own extension by adding a [custom plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/e4be62c0aa9318b540157118b729bb0c4d8b6c6e/API-reference.md#custom-extension).
Two methods are available to add the code to enable Application Insights via the
1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
- Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies.
-
- If Internet Explorer 8 is detected, JavaScript SDK v2.x is automatically loaded.
- <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-feature-extensions.md and 2) articles\azure-monitor\app\api-filtering-sampling.md -->
- ```html
- <script type="text/javascript">
- !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var 
n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({
- src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
- // name: "appInsights",
- // ld: 0,
- // useXhr: 1,
- crossOrigin: "anonymous",
- // onInit: null,
- // cr: 0,
- cfg: { // Application Insights Configuration
- connectionString: "YOUR_CONNECTION_STRING"
- }});
- </script>
- ```
-
-1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors.
-
- :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png":::
+ Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies.
+
+ If Internet Explorer 8 is detected, JavaScript SDK v2.x is automatically loaded.
+ <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-feature-extensions.md and 2) articles\azure-monitor\app\api-filtering-sampling.md -->
+ ```html
+ <script type="text/javascript">
+ !(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var 
n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({
+ src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
+ // name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied
+ // ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout,
+ // useXhr: 1, // Use XHR instead of fetch to report failures (if available),
+ // dle: true, // Prevent the SDK from reporting load failure log
+ crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
+ // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DON'T ADD anything to the sdk.queue -- As they won't get called)
+ // sri: false, // Optional value that specifies whether to fetch the snippet from the integrity file and perform an integrity check
+ cfg: { // Application Insights Configuration
+ connectionString: "YOUR_CONNECTION_STRING"
+ }});
+ </script>
+ ```
+
+1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), if you need to optimize the loading of your web page or resolve loading errors.
+
+ :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png":::
#### JavaScript (Web) SDK Loader Script configuration
- | Name | Type | Required? | Description
- |||--|
- | src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
- | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated.
- | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the `<head>` region of the page and blocks the page load event until the script is loaded or fails.
- | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the JavaScript (Web) SDK Loader Script isn't loading successfully.
- | crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation.
- | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). |
- | cr | boolean | Optional | If the SDK fails to load and the endpoint value defined for `src` is the public CDN location, this configuration option attempts to immediately load the SDK from one of the following backup CDN endpoints:<ul><li>js.monitor.azure.com</li><li>js.cdn.applicationinsights.io</li><li>js.cdn.monitor.azure.com</li><li>js0.cdn.applicationinsights.io</li><li>js0.cdn.monitor.azure.com</li><li>js2.cdn.applicationinsights.io</li><li>js2.cdn.monitor.azure.com</li><li>az416426.vo.msecnd.net</li></ul>NOTE: az416426.vo.msecnd.net is partially supported, so it's not recommended.<br><br>If the SDK successfully loads from a backup CDN endpoint, it loads from the first available one, which is determined when the server performs a successful load check. If the SDK fails to load from any of the backup CDN endpoints, the SDK Failure error message appears.<br><br>When not defined, the default value is `true`. If you don't want to load the SDK from the backup CDN endpoints, set this configuration option to `false`.<br><br>If you're loading the SDK from your own privately hosted CDN endpoint, this configuration option is not applicable.
+| Name | Type | Required? | Description
+|||--|
+| src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
+| name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance is assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated.
+| ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the `<head>` region of the page and blocks the page load event until the script is loaded or fails.
+| useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is necessary only in environments where fetch cannot transmit failure events, for example, when the JavaScript (Web) SDK Loader Script fails to load successfully.
+| crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation.
+| onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script is successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). |
+| cr | boolean | Optional | If the SDK fails to load and the endpoint value defined for `src` is the public CDN location, this configuration option attempts to immediately load the SDK from one of the following backup CDN endpoints:<ul><li>js.monitor.azure.com</li><li>js.cdn.applicationinsights.io</li><li>js.cdn.monitor.azure.com</li><li>js0.cdn.applicationinsights.io</li><li>js0.cdn.monitor.azure.com</li><li>js2.cdn.applicationinsights.io</li><li>js2.cdn.monitor.azure.com</li><li>az416426.vo.msecnd.net</li></ul>NOTE: az416426.vo.msecnd.net is partially supported, so it's not recommended.<br><br>If the SDK successfully loads from a backup CDN endpoint, it loads from the first available one, which is determined when the server performs a successful load check. If the SDK fails to load from any of the backup CDN endpoints, the SDK Failure error message appears.<br><br>When not defined, the default value is `true`. If you don't want to load the SDK from the backup CDN endpoints, set this configuration option to `false`.<br><br>If you're loading the SDK from your own privately hosted CDN endpoint, this configuration option isn't applicable.
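
As an illustration of the `onInit` callback, here's a minimal sketch of a loader script configuration object that registers a telemetry initializer as soon as the SDK finishes loading. The custom property name is hypothetical, and `YOUR_CONNECTION_STRING` is a placeholder:

```javascript
// Passed as the configuration object of the loader script shown earlier.
{
    src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
    crossOrigin: "anonymous",
    onInit: function (sdk) {
        // Stamp every telemetry item with a custom property.
        sdk.addTelemetryInitializer(function (envelope) {
            envelope.data = envelope.data || {};
            envelope.data.appVersion = "1.0.0"; // hypothetical custom property
        });
    },
    cfg: {
        connectionString: "YOUR_CONNECTION_STRING"
    }
}
```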
#### [npm package](#tab/npmpackage)

1. Use the following command to install the Microsoft Application Insights JavaScript SDK - Web package.
- ```sh
- npm i --save @microsoft/applicationinsights-web
- ```
+ ```sh
+ npm i --save @microsoft/applicationinsights-web
+ ```
- *Typings are included with this package*, so you do *not* need to install a separate typings package.
+ *Typings are included with this package*, so you *don't* need to install a separate typings package.
1. Add the following JavaScript to your application's code.
- Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it.
+ Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it.
+
+ ```js
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web'
- ```js
- import { ApplicationInsights } from '@microsoft/applicationinsights-web'
-
- const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING'
- /* ...Other Configuration Options... */
- } });
- appInsights.loadAppInsights();
- appInsights.trackPageView();
- ```
+ const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING'
+ /* ...Other Configuration Options... */
+ } });
+ appInsights.loadAppInsights();
+ appInsights.trackPageView();
+ ```
Two methods are available to add the code to enable Application Insights via the
To paste the connection string in your environment, follow these steps:
- 1. Navigate to the **Overview** pane of your Application Insights resource.
- 1. Locate the **Connection String**.
- 1. Select the **Copy to clipboard** icon to copy the connection string to the clipboard.
+1. Navigate to the **Overview** pane of your Application Insights resource.
+1. Locate the **Connection String**.
+1. Select the **Copy to clipboard** icon to copy the connection string to the clipboard.
- :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
- 1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your [connection string](./sdk-connection-string.md) copied to the clipboard.
+1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your [connection string](./sdk-connection-string.md) copied to the clipboard.
- The `connectionString` format must follow "InstrumentationKey=xxxx;....". If the string provided does not meet this format, the SDK load process fails.
-
- The connection string isn't considered a security token or key. For more information, see [Do new Azure regions require the use of connection strings?](./sdk-connection-string.md#do-new-azure-regions-require-the-use-of-connection-strings).
+ The `connectionString` format must follow "InstrumentationKey=xxxx;....". If the string provided doesn't meet this format, the SDK load process fails.
+
+ The connection string isn't considered a security token or key. For more information, see [Do new Azure regions require the use of connection strings?](./sdk-connection-string.md#do-new-azure-regions-require-the-use-of-connection-strings).
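
   For reference, a connection string has this general shape; the GUID and endpoint below are placeholders, not working values:

   ```js
   // Illustrative shape only -- copy the real value from the Overview pane.
   const connectionString =
     'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/';
   ```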
### (Optional) Add SDK configuration
If you want to use the extra features provided by plugins for specific framework
1. Open the **Event types** dropdown menu and select **Select all** to clear the checkboxes in the menu.

1. From the **Event types** dropdown menu, select:
- - **Page View** for Azure Monitor Application Insights Real User Monitoring
- - **Custom Event** for the Click Analytics Auto-Collection plug-in.
-
- It might take a few minutes for data to show up in the portal. If the only data you see showing up is a load failure exception, see [Troubleshoot SDK load failure for JavaScript web apps](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting#troubleshoot-sdk-load-failure-for-javascript-web-apps).
-
- In some cases, if multiple instances of different versions of Application Insights are running on the same page, errors can occur during initialization. For these cases and the error message that appears, see [Running multiple versions of the Application Insights JavaScript SDK in one session](https://github.com/microsoft/ApplicationInsights-JS/blob/main/versionConflict.md). If you've encountered one of these errors, try changing the namespace by using the `name` setting. For more information, see [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration).
-
- :::image type="content" source="media/javascript-sdk/confirm-data-flowing.png" alt-text="Screenshot of the Application Insights Transaction search pane in the Azure portal with the Page View option selected. The page views are highlighted." lightbox="media/javascript-sdk/confirm-data-flowing.png":::
+ - **Page View** for Azure Monitor Application Insights Real User Monitoring
+ - **Custom Event** for the Click Analytics Auto-Collection plug-in.
+
+ It might take a few minutes for data to show up in the portal. If the only data you see showing up is a load failure exception, see [Troubleshoot SDK load failure for JavaScript web apps](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting#troubleshoot-sdk-load-failure-for-javascript-web-apps).
+
+ In some cases, if multiple instances of different versions of Application Insights are running on the same page, errors can occur during initialization. For these cases and the error message that appears, see [Running multiple versions of the Application Insights JavaScript SDK in one session](https://github.com/microsoft/ApplicationInsights-JS/blob/main/versionConflict.md). If you've encountered one of these errors, try changing the namespace by using the `name` setting. For more information, see [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration).
+
+ :::image type="content" source="media/javascript-sdk/confirm-data-flowing.png" alt-text="Screenshot of the Application Insights Transaction search pane in the Azure portal with the Page View option selected. The page views are highlighted." lightbox="media/javascript-sdk/confirm-data-flowing.png":::
1. If you want to query data to confirm data is flowing:
- 1. Select **Logs** in the left pane.
-
- When you select Logs, the [Queries dialog](../logs/queries.md#queries-dialog) opens, which contains sample queries relevant to your data.
-
- 1. Select **Run** for the sample query you want to run.
-
- 1. If needed, you can update the sample query or write a new query by using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
-
- For essential KQL operators, see [Learn common KQL operators](/azure/data-explorer/kusto/query/tutorials/learn-common-operators).
+ 1. Select **Logs** in the left pane.
+
+ When you select Logs, the [Queries dialog](../logs/queries.md#queries-dialog) opens, which contains sample queries relevant to your data.
+
+ 1. Select **Run** for the sample query you want to run.
+
+ 1. If needed, you can update the sample query or write a new query by using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
+
+ For essential KQL operators, see [Learn common KQL operators](/azure/data-explorer/kusto/query/tutorials/learn-common-operators).
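
   As a starting point, a query along these lines confirms page view telemetry is arriving; a minimal sketch against the standard Application Insights `pageViews` table:

   ```kusto
   // Count page views per page name over the last hour.
   pageViews
   | where timestamp > ago(1h)
   | summarize views = count() by name
   | order by views desc
   ```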
## Frequently asked questions
azure-monitor Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry.md
The **.NET** OpenTelemetry implementation uses logging, metrics, and activity AP
**Azure Monitor pipeline at edge** is a powerful solution designed to facilitate high-scale data ingestion and routing from edge environments to seamlessly enable observability across cloud, edge, and multicloud. It uses the OpenTelemetry Collector. Currently, in public preview, it can be deployed on a single Arc-enabled Kubernetes cluster, and it can collect OpenTelemetry Protocol (OTLP) logs.

-- [Accelerate your observability journey with Azure Monitor pipeline (preview)](https://devblogs.microsoft.com/dotnet/introducing-dotnet-aspire-simplifying-cloud-native-development-with-dotnet-8/)
-- [Configure Azure Monitor pipeline for edge and multicloud](/dotnet/aspire/fundamentals/dashboard/overview)
+- [Accelerate your observability journey with Azure Monitor pipeline (preview)](https://techcommunity.microsoft.com/t5/azure-observability-blog/accelerate-your-observability-journey-with-azure-monitor/ba-p/4124852)
+- [Configure Azure Monitor pipeline for edge and multicloud](../essentials/edge-pipeline-configure.md)
**OpenTelemetry Collector Azure Data Explorer Exporter** is a data exporter component that can be plugged into the OpenTelemetry Collector. It supports ingestion of data from many receivers into Azure Data Explorer, Azure Synapse Data Explorer, and Real-Time Analytics in Fabric.
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
A daily cap on a Log Analytics workspace allows you to avoid unexpected increase
## How the daily cap works

Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
+The data size used for the daily cap is the size after customer-defined data transformations. (Learn more about data [transformations in Data Collection Rules](../essentials/data-collection-transformations.md).)
+ Data collection resumes at the reset time, which is a different hour of the day for each workspace. This reset hour can't be configured. You can optionally create an alert rule to send an alert when this event is created.

> [!NOTE]
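
An alert rule for this event can be backed by a query along the following lines. Treat it as a sketch: verify the exact `Detail` text that daily-cap events carry in your workspace before relying on it.

```kusto
// Surface daily-cap events recorded in the Operation table
// (LogManagement category); the Detail filter is an assumption to verify.
Operation
| where Detail has "OverQuota"
| sort by TimeGenerated desc
```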
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config
description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 02/16/2024 Last updated : 06/28/2024 # Add module settings in the Bicep config file
For a template spec, use:
module stgModule 'ts/CoreSpecs:storage:v1' = {
```
-An alias has been predefined for the [public module registry](./modules.md#path-to-module). To reference a public module, you can use the format:
+An alias has been predefined for [public modules](./modules.md#file-in-registry). To reference a public module, you can use the format:
```bicep
br/public:<file>:<tag>
```
-You can override the public module registry alias definition in the bicepconfig.json file:
+> [!NOTE]
+> Non-AVM (Azure Verified Modules) modules are retired from the public module registry; most of them are available as AVM modules.
+
+You can override the public module registry alias definition in the [bicepconfig.json file](./bicep-config.md):
```json
{
azure-resource-manager Bicep Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-using.md
Title: Using statement
description: Describes how to use the using statement in Bicep. Previously updated : 10/11/2023 Last updated : 06/28/2024 # Using statement
The `using` statement in [Bicep parameter files](./parameter-files.md) ties the
using '<path>/<file-name>.json'
```

-- To use public module:
+- To use [public modules](./modules.md#path-to-module):
```bicep
using 'br/public:<file-path>:<tag>'
```
The `using` statement in [Bicep parameter files](./parameter-files.md) ties the
For example:

```bicep
- using 'br/public:storage/storage-account:3.0.1'
+ using 'br/public:avm/res/storage/storage-account:0.9.0'
param name = 'mystorage'
```
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
Title: Bicep modules
description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 02/02/2024 Last updated : 06/28/2024 # Bicep modules
-Bicep enables you to organize deployments into modules. A module is a Bicep file (or an ARM JSON template) that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments.
+Bicep enables you to organize deployments into modules. A module is a Bicep file (or an Azure Resource Manager JSON template) that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments.
-To share modules with other people in your organization, create a [template spec](../bicep/template-specs.md), [public registry](https://github.com/Azure/bicep-registry-modules), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
+To share modules with other people in your organization, create a [template spec](../bicep/template-specs.md), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions.
> [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two:
To share modules with other people in your organization, create a [template spec
> - Content in the Bicep module registry can only be deployed from another Bicep file. Template specs can be deployed directly from the API, Azure PowerShell, Azure CLI, and the Azure portal. You can even use [`UiFormDefinition`](../templates/template-specs-create-portal-forms.md) to customize the portal deployment experience.
> - Bicep has some limited capabilities for embedding other project artifacts (including non-Bicep and non-ARM-template files, such as PowerShell scripts, CLI scripts, and other binaries) by using the [`loadTextContent`](./bicep-functions-files.md#loadtextcontent) and [`loadFileAsBase64`](./bicep-functions-files.md#loadfileasbase64) functions. Template specs can't package these artifacts.
-Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template). For more information about how Bicep resolves configuration files and how Bicep merge user-defined configuration file with the default configuration file, see [Configuration file resolution process](./bicep-config.md#understand-the-file-resolution-process) and [Configuration file merge process](./bicep-config.md#understand-the-merge-process).
+Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template). For more information about how Bicep resolves configuration files and how Bicep merges the user-defined configuration file with the default configuration file, see [Configuration file resolution process](./bicep-config.md#understand-the-file-resolution-process) and [Configuration file merge process](./bicep-config.md#understand-the-merge-process).
### Training resources
Like resources, modules are deployed in parallel unless they depend on other mod
## Path to module
-The file for the module can be either a local file or an external file. The external file can be in template spec or a Bicep module registry. All of these options are shown below.
+The file for the module can be either a local file or an external file. The external file can be in template spec or a Bicep module registry.
### Local file
For example, to deploy a file that is up one level in the directory from your ma
#### Public module registry
-The public module registry is hosted in a Microsoft container registry (MCR). The source code and the modules are stored in [GitHub](https://github.com/azure/bicep-registry-modules). To view the available modules and their versions, see [Bicep registry Module Index](https://aka.ms/br-module-index).
+> [!NOTE]
+> Non-AVM (Azure Verified Modules) modules are retired from the public module registry.
+
+[Azure Verified Modules](https://azure.github.io/Azure-Verified-Modules/) are prebuilt, pretested, and preverified modules for deploying resources on Azure. Created and owned by Microsoft employees, these modules are designed to simplify and accelerate the deployment process for common Azure resources and configurations while also aligning with best practices, such as the Well-Architected Framework.
+Browse to the [Azure Verified Modules Bicep Index](https://azure.github.io/Azure-Verified-Modules/indexes/bicep/) to see the list of available modules. Select the highlighted numbers in the following screenshot to go directly to that filtered view:
-Select the versions to see the available versions. You can also select **Source code** to see the module source code, and open the Readme files.
-There are only a few published modules currently. More modules are coming. If you like to contribute to the registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
+The module list shows the latest version. Select the version number to see a list of available versions:
-To link to a public registry module, specify the module path with the following syntax:
+
+To link to a public module, specify the module path with the following syntax:
```bicep
module <symbolic-name> 'br/public:<file-path>:<tag>' = {}
```

-- **br/public** is the alias for the public module registry. This alias is predefined in your configuration.
+- **br/public** is the alias for public modules. You can customize this alias in the [Bicep configuration file](./bicep-config-modules.md).
- **file path** can contain segments separated by the `/` character.
- **tag** is used for specifying a version for the module.

For example:
+```bicep
+module storage 'br/public:avm/res/storage/storage-account:0.9.0' = {
+ name: 'myStorage'
+ params: {
+ name: 'store${resourceGroup().name}'
+ }
+}
+```
> [!NOTE]
-> **br/public** is the alias for the public registry. It can also be written as
+> **br/public** is the alias for public modules. It can also be written as:
>
> ```bicep
> module <symbolic-name> 'br:mcr.microsoft.com/bicep/<file-path>:<tag>' = {}
> ```
The full path for a module in a registry can be long. Instead of providing the f
An alias for the public module registry has been predefined:
+```bicep
+module storage 'br/public:avm/res/storage/storage-account:0.9.0' = {
+ name: 'myStorage'
+ params: {
+ name: 'store${resourceGroup().name}'
+ }
+}
+```
You can override the public alias in the bicepconfig.json file.
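
As a sketch, an override that simply mirrors the default values looks like the following; swap in your own registry and module path as needed:

```json
{
  "moduleAliases": {
    "br": {
      "public": {
        "registry": "mcr.microsoft.com",
        "modulePath": "bicep"
      }
    }
  }
}
```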
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file. Previously updated : 04/01/2024 Last updated : 06/28/2024 # Create parameters files for Bicep deployment
using './azuredeploy.json'
``` ```bicep
-using 'br/public:storage/storage-account:3.0.1'
+using 'br/public:avm/res/storage/storage-account:0.9.0'
... ```
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module
description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 05/10/2024 Last updated : 06/28/2024 # Create private registry for Bicep modules
-To share [modules](modules.md) within your organization, you can create a private module registry. You publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files. To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
+To share [modules](modules.md) within your organization, you can create a private module registry. You can then publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files. To use public modules, see [Bicep Modules](./modules.md#file-in-registry).
To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0** or later.
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r
## Publish files to registry
-After setting up the container registry, you can publish files to it. Use the [publish](bicep-cli.md#publish) command and provide any Bicep files you intend to use as modules. Specify the target location for the module in your registry. The publish command will create an ARM template which will be stored in the registry. This means if publishing a Bicep file that references other local modules, these modules will be fully expanded as one JSON file and published to the registry.
+After setting up the container registry, you can publish files to it. Use the [publish](bicep-cli.md#publish) command and provide any Bicep files you intend to use as modules. Specify the target location for the module in your registry. The publish command creates an ARM template, which is stored in the registry. This means that if you publish a Bicep file that references other local modules, those modules are fully expanded as one JSON file and published to the registry.
# [PowerShell](#tab/azure-powershell)
az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bic
-With the with source switch, you see an additional layer in the manifest:
+With the with-source switch, you see another layer in the manifest:
:::image type="content" source="./media/private-module-registry/bicep-module-with-source-manifest.png" lightbox="./media/private-module-registry/bicep-module-with-source-manifest.png" alt-text="Screenshot of bicep module registry with source.":::
-Note that if the Bicep module references a module in a Private Registry, the ACR endpoint will be visible. To hide the full endpoint, you can configure an alias for the private registry.
+If the Bicep module references a module in a Private Registry, the ACR endpoint is visible. To hide the full endpoint, you can configure an alias for the private registry.
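
For example, a `bicepconfig.json` alias along these lines hides the endpoint behind a short name (the alias, registry, and module path are illustrative):

```json
{
  "moduleAliases": {
    "br": {
      "ContosoModules": {
        "registry": "contosoregistry.azurecr.io",
        "modulePath": "bicep/modules"
      }
    }
  }
}
```

A module can then be referenced as `br/ContosoModules:<file>:<tag>` instead of by its full endpoint.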
## View files in registry
To see the published module in the portal:
1. Search for **container registries**. 1. Select your registry. 1. Select **Services** -> **Repositories** from the left menu.
-1. Select the module path (repository). In the preceding example, the module path name is **bicep/modules/storage**.
+1. Select the module path (repository). In the preceding example, the module path name is **bicep/modules/storage**.
1. Select the tag. In the preceding example, the tag is **v1**.
-1. The **Artifact reference** matches the reference you'll use in the Bicep file.
+1. The **Artifact reference** matches the reference you use in the Bicep file.
![Bicep module registry artifact reference](./media/private-module-registry/bicep-module-registry-artifact-reference.png)
You're now ready to reference the file in the registry from a Bicep file. For ex
## Working with Bicep registry files
-When leveraging bicep files that are hosted in a remote registry, it's important to understand how your local machine will interact with the registry. When you first declare the reference to the registry, your local editor will try to communicate with the Azure Container Registry and download a copy of the registry to your local cache.
+When using bicep files that are hosted in a remote registry, it's important to understand how your local machine interacts with the registry. When you first declare the reference to the registry, your local editor tries to communicate with the Azure Container Registry and download a copy of the registry to your local cache.
The local cache is found in:
The local cache is found in:
~/.bicep
```
-Any changes made to the remote registry will not be recognized by your local machine until a `restore` has been ran with the specified file that includes the registry reference.
+Your local machine doesn't recognize changes made to the remote registry until you run a `restore` with the specified file that includes the registry reference.
```azurecli
az bicep restore --file <bicep-file> [--force]
```
-For more information refer to the [`restore` command.](bicep-cli.md#restore)
-
+For more information, see the [`restore` command](bicep-cli.md#restore).
## Next steps
-* To learn about modules, see [Bicep modules](modules.md).
-* To configure aliases for a module registry, see [Add module settings in the Bicep config file](bicep-config-modules.md).
-* For more information about publishing and restoring modules, see [Bicep CLI commands](bicep-cli.md).
+- To learn about modules, see [Bicep modules](modules.md).
+- To configure aliases for a module registry, see [Add module settings in the Bicep config file](bicep-config-modules.md).
+- For more information about publishing and restoring modules, see [Bicep CLI commands](bicep-cli.md).
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
Title: Publish modules to private module registry description: Publish Bicep modules to private module registry and use the modules. Previously updated : 06/20/2024 Last updated : 06/28/2024 #Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry.
# Quickstart: Publish Bicep modules to private module registry
-Learn how to publish Bicep modules to private modules registry, and how to call the modules from your Bicep files. Private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md). To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md).
+Learn how to publish Bicep modules to a private module registry, and how to call the modules from your Bicep files. A private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md).
## Prerequisites
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 06/11/2024 Last updated : 06/27/2024
You can get the value of these service-defined variables to make adjustments tha
| $UsableNodeCount | The number of usable compute nodes. |
| $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. |
-> [!WARNING]
-> Select service-defined variables will be retired after **31 March 2024** as noted in the table above. After the retirement
-> date, these service-defined variables will no longer be populated with sample data. Please discontinue use of these variables
-> before this date.
-
> [!NOTE]
> Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run.
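
For instance, a queue-depth formula along these lines is a common starting point. It's a sketch only: the sampling window, tasks-per-node ratio, and node cap are assumptions to tune for your workload.

```
// Average the queued-task samples observed over the last 15 minutes.
$tasks = avg($ActiveTasks.GetSample(TimeInterval_Minute * 15));
// Aim for one dedicated node per four queued tasks, capped at 20 nodes.
$TargetDedicatedNodes = min(20, $tasks / 4);
// Let running tasks finish before any node is removed.
$NodeDeallocationOption = taskcompletion;
```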
In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cl
- [AutoScaleRun.Results](/dotnet/api/microsoft.azure.batch.autoscalerun.results) - [AutoScaleRun.Error](/dotnet/api/microsoft.azure.batch.autoscalerun.error)
-In the REST API, the [Get information about a pool request](/rest/api/batchservice/get-information-about-a-pool) returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property.
+In the REST API, [information about a pool](/rest/api/batchservice/get-information-about-a-pool) includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property.
The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool *myPool*.
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 05/31/2024 Last updated : 06/27/2024
This article discusses best practices and useful tips for using the Azure Batch
- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).

-- **`virtualMachineConfiguration` or `cloudServiceConfiguration`:** While you can currently create pools using either
-configuration, new pools should be configured using `virtualMachineConfiguration` and not `cloudServiceConfiguration`.
-All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Service Configuration
-pools don't support all features and no new capabilities are planned. You won't be able to create new
-`cloudServiceConfiguration` pools or add new nodes to existing pools
-[after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
-For more information, see
-[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
- - **`classic` or `simplified` node communication mode:** Pools can be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md). In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes also require communicating to Azure Storage. In the simplified
Before you recreate or resize your pool, you should download any node agent logs
#### Operating system updates

It's recommended that the VM image selected for a Batch pool should be up-to-date with the latest publisher provided security updates.
-Some images may perform automatic updates upon boot (or shortly thereafter), which may interfere with certain user directed actions such
+Some images may perform automatic package updates upon boot (or shortly thereafter), which may interfere with certain user directed actions such
as retrieving package repository updates (for example, `apt update`) or installing packages during actions such as a [StartTask](jobs-and-tasks.md#start-task).
+It's recommended to enable [Auto OS upgrade for Batch pools](batch-upgrade-policy.md), which allows the underlying
+Azure infrastructure to coordinate updates across the pool. This option can be configured to be nondisrupting for task
+execution. Automatic OS upgrade doesn't support all operating systems that Batch supports. For more information, see the
+[Virtual Machine Scale Sets Auto OS upgrade Support Matrix](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#supported-os-images).
+For Windows operating systems, ensure that you aren't enabling the property
+`virtualMachineConfiguration.windowsConfiguration.enableAutomaticUpdates` when using Auto OS upgrade on the Batch pool.
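
As a rough sketch, these pieces fit together in the pool definition as follows. The property names are taken from the Batch upgrade policy documentation referenced above; treat them as assumptions to verify against the API version you target.

```json
{
  "upgradePolicy": {
    "mode": "automatic",
    "automaticOSUpgradePolicy": {
      "enableAutomaticOSUpgrade": true
    }
  },
  "virtualMachineConfiguration": {
    "windowsConfiguration": {
      "enableAutomaticUpdates": false
    }
  }
}
```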
+ Azure Batch doesn't verify or guarantee that images allowed for use with the service have the latest security updates. Updates to images are under the purview of the publisher of the image, and not that of Azure Batch. For certain images published under `microsoft-azure-batch`, there's no guarantee that these images are kept up-to-date with their upstream derived image.
Pools can be created using third-party images published to Azure Marketplace. Wi
### Container pools
-When specifying a Batch pool with a [virtual network](batch-virtual-network.md), there can be interaction
+When you create a Batch pool with a [virtual network](batch-virtual-network.md), there can be interaction
side effects between the specified virtual network and the default Docker bridge. Docker, by default, will create a network bridge with a subnet specification of `172.17.0.0/16`. Ensure that there are no conflicting IP ranges between the Docker network bridge and your virtual network.
Tasks that only run for one to two seconds aren't ideal. Try to do a significant
### Use pool scope for short tasks on Windows nodes
-When scheduling a task on Batch nodes, you can choose whether to run it with task scope or pool scope. If the task will only run for a short time, task scope can be inefficient due to the resources needed to create the auto-user account for that task. For greater efficiency, consider setting these tasks to pool scope. For more information, see [Run a task as an auto-user with pool scope](batch-user-accounts.md#run-a-task-as-an-auto-user-with-pool-scope).
+When scheduling a task on Batch nodes, you can choose whether to run it with task scope or pool scope. If the task will only run for a short time, task scope can be inefficient due to the resources needed to create the autouser account for that task. For greater efficiency, consider setting these tasks to pool scope. For more information, see [Run a task as an autouser with pool scope](batch-user-accounts.md#run-a-task-as-an-auto-user-with-pool-scope).
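
In the task definition, that choice is expressed through the `userIdentity` setting; a minimal sketch per the Batch REST API:

```json
{
  "userIdentity": {
    "autoUser": {
      "scope": "pool",
      "elevationLevel": "nonadmin"
    }
  }
}
```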
## Nodes
promotion into production use.
If you notice a problem involving the behavior of a node or tasks running on a node, collect the Batch agent logs prior to deallocating the nodes in question. The Batch agent logs can be collected using the Upload Batch service logs API. These logs can be supplied as part of a support ticket to Microsoft and will help with issue troubleshooting and resolution.
-### Manage OS upgrades
-
-For user subscription mode Batch accounts, automated OS upgrades can interrupt task progress, especially if the tasks are long-running. [Building idempotent tasks](#build-durable-tasks) can help to reduce errors caused by these interruptions. We also recommend [scheduling OS image upgrades for times when tasks aren't expected to run](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#manually-trigger-os-image-upgrades).
-
-For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but you can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
- ## Batch API ### Timeout Failures
Ensure that your Batch service clients have appropriate retry policies in place
Typically, virtual machines in a Batch pool are accessed through public IP addresses that can change over the lifetime of the pool. This dynamic nature can make it difficult to interact with a database or other external service that limits access to certain IP addresses. To address this concern, you can create a pool using a set of static public IP addresses that you control. For more information, see [Create an Azure Batch pool with specified public IP addresses](create-pool-public-ip.md).
-### Testing connectivity with Cloud Services configuration
-
-You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol isn't permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.yml#can-i-ping-a-cloud-service-).
- ## Batch node underlying dependencies Consider the following dependencies and restrictions when designing your Batch solutions.
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md
Title: Batch security and compliance best practices description: Learn best practices and useful tips for enhancing security with your Azure Batch solutions. Previously updated : 09/13/2023 Last updated : 06/27/2024
This article provides guidance and best practices for enhancing security when using Azure Batch.
-By default, Azure Batch accounts have a public endpoint and are publicly accessible. When an Azure Batch pool is created, the pool is provisioned in a specified subnet of an Azure virtual network. Virtual machines in the Batch pool are accessed through public IP addresses that are created by Batch. Compute nodes in a pool can communicate with each other when needed, such as to run multi-instance tasks, but nodes in a pool can't communicate with virtual machines outside of the pool.
+By default, Azure Batch accounts have a public endpoint and are publicly accessible. When an Azure Batch pool is created,
+the pool is provisioned in a specified subnet of an Azure virtual network. Virtual machines in the Batch pool are accessed,
+by default, through public IP addresses that Batch creates. Compute nodes in a pool can communicate with each other when needed,
+such as to run multi-instance tasks, but nodes in a pool can't communicate with virtual machines outside of the pool.
:::image type="content" source="media/security-best-practices/typical-environment.png" alt-text="Diagram showing a typical Batch environment.":::
Many features are available to help you create a more secure Azure Batch deploym
### Pool configuration
-Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), whenever possible.
-
-Pools can also be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md).
+Pools can be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md).
In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes also require communicating to Azure Storage. In the simplified node communication model, compute nodes initiate communication with the Batch service. Due to the reduced scope of inbound/outbound connections required, and not requiring Azure Storage
node communication model will be
Batch account access supports two methods of authentication: Shared Key and [Microsoft Entra ID](batch-aad-auth.md).
-We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the [allowedAuthenticationModes](/rest/api/batchmanagement/batch-account/create) property. When this property is set, API calls using Shared Key authentication will be rejected.
+We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the [allowedAuthenticationModes](/rest/api/batchmanagement/batch-account/create) property. When this property is set, API calls using Shared Key authentication are rejected.
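
For example, in a Batch Management API create or update request, restricting the account to Microsoft Entra ID authentication looks roughly like this request-body fragment:

```json
{
  "properties": {
    "allowedAuthenticationModes": [ "AAD" ]
  }
}
```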
### Batch account pool allocation mode

When creating a Batch account, you can choose between two [pool allocation modes](accounts.md#batch-accounts):

-- **Batch service**: The default option, where the underlying Cloud Service or Virtual Machine Scale Set resources used to allocate and manage pool nodes are created on Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
-- **User subscription**: The underlying Cloud Service or Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
+- **Batch service**: The default option, where the underlying Virtual Machine Scale Set resources used to allocate and manage pool nodes are created on Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.
+- **User subscription**: The underlying Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.
With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on Virtual Machine Scale Set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault.
Batch supports both Linux and Windows operating systems. Batch supports Linux wi
distributions. It's recommended that the operating system is kept up-to-date with the latest patches provided by the OS publisher.
+It's recommended to enable [Auto OS upgrade for Batch pools](batch-upgrade-policy.md), which allows the underlying
+Azure infrastructure to coordinate updates across the pool. This option can be configured to be nondisrupting for task
+execution. Automatic OS upgrade doesn't support all operating systems that Batch supports. For more information, see the
+[Virtual Machine Scale Sets Auto OS upgrade Support Matrix](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#supported-os-images).
+For Windows operating systems, ensure that you aren't enabling the property
+`virtualMachineConfiguration.windowsConfiguration.enableAutomaticUpdates` when using Auto OS upgrade on the Batch pool.
+ Batch support for images and node agents phase out over time, typically aligned with publisher support timelines. It's recommended to avoid using images with impending end-of-life (EOL) dates or images that are past their EOL date. It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads
at any time. EOL dates can be discovered via the
The Batch node agent doesn't modify operating system level defaults for SSL/TLS versions or cipher suite ordering. In Windows, SSL/TLS versions and cipher suite order is controlled at the operating system level, and therefore the Batch node agent adopts the settings set by the image used by each compute node. Although the Batch node agent attempts to utilize the
-most secure settings available when possible, it can still be limited by operating system level settings. We recommend that
+most secure settings available when possible, it can still be limited by operating system level settings. We recommend that
you review your OS level defaults and set them appropriately for the most secure mode that is amenable for your workflow and organizational requirements. For more information, please visit [Manage TLS](/windows-server/security/tls/manage-tls) for cipher suite order enforcement and
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md
Title: Deploy Azure Cloud Services (extended support) - Azure portal
-description: Deploy an Azure Cloud Service (extended support) using the Azure portal
-
+description: Deploy Azure Cloud Services (extended support) by using the Azure portal.
+ Previously updated : 10/13/2020 Last updated : 06/18/2024
-# Deploy Azure Cloud Services (extended support) using the Azure portal
-This article explains how to use the Azure portal to create a Cloud Service (extended support) deployment.
+# Deploy Cloud Services (extended support) by using the Azure portal
-## Before you begin
+This article shows you how to use the Azure portal to create an Azure Cloud Services (extended support) deployment.
-Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
+## Prerequisites
+
+Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources.
+
+## Deploy Cloud Services (extended support)
+
+To deploy Cloud Services (extended support) by using the portal:
-## Deploy a Cloud Services (extended support)
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Using the search bar located at the top of the Azure portal, search for and select **Cloud Services (extended support)**.
+1. In the search bar, enter **Cloud Services (extended support)**, and then select it in the search results.
+
+ :::image type="content" source="media/deploy-portal-1.png" alt-text="Screenshot that shows a Cloud Services (extended support) search in the Azure portal, and selecting the result.":::
+
+1. On the **Cloud services (extended support)** services pane, select **Create**.
+
+ :::image type="content" source="media/deploy-portal-2.png" alt-text="Screenshot that shows selecting Create in the menu to create a new instance of Cloud Services (extended support).":::
+
+ The **Create a cloud service (extended support)** pane opens.
+
+1. On the **Basics** tab, select or enter the following information:
+
+ - **Subscription**: Select a subscription to use for the deployment.
+ - **Resource group**: Select an existing resource group, or create a new one.
+ - **Cloud service name**: Enter a name for your Cloud Services (extended support) deployment.
+ - The DNS name of the cloud service is separate and is specified by the DNS name label of the public IP address. You can modify the DNS name in **Public IP** on the **Configuration** tab.
+ - **Region**: Select the region to deploy the service to.
+
+   :::image type="content" source="media/deploy-portal-3.png" alt-text="Screenshot that shows the Basics tab for creating a Cloud Services (extended support) deployment.":::
+
+1. On the **Basics** tab under **Cloud service configuration, package, and service definition**, add your package (.cspkg or .zip) file, configuration (.cscfg) file, and definition (.csdef) file for the deployment. You can add existing files from blob storage or upload the files from your local machine. If you upload the files from your local machine, the files are then stored in a storage account in Azure.
- :::image type="content" source="media/deploy-portal-1.png" alt-text="Image shows the all resources blade in the Azure portal.":::
-
-3. In the Cloud Services (extended support) pane select **Create**.
+ :::image type="content" source="media/deploy-portal-4.png" alt-text="Screenshot that shows the section of the Basics tab where you upload files and select storage.":::
- :::image type="content" source="media/deploy-portal-2.png" alt-text="Image shows purchasing a cloud service from the marketplace.":::
+1. Select the **Configuration** tab, and then select or enter the following information:
-4. The Cloud Services (extended support) creation window will open to the **Basics** tab.
- - Select a Subscription.
- - Choose a resource group or create a new one.
- - Enter the desired name for your Cloud Service (extended support) deployment.
- - The DNS name of the cloud service is separate and specified by the DNS name label of the public IP address and can be modified in the public IP section in the configuration tab.
- - Select the region to deploy to.
+ - **Virtual network**: Select a virtual network to associate with the cloud service, or create a new virtual network.
- :::image type="content" source="media/deploy-portal-3.png" alt-text="Image shows the Cloud Services (extended support) home blade.":::
+ - Cloud Services (extended support) deployments *must* be in a virtual network.
+   - The virtual network *must* also be referenced in the configuration (.cscfg) file under `NetworkConfiguration` (see the sketch after these steps).
-5. Add your cloud service configuration, package and definition files. You can add existing files from blob storage or upload these from your local machine. If uploading from your local machine, these will be then be stored in a storage account.
+ - **Public IP**: Select an existing public IP address to associate with the cloud service, or create a new one.
- :::image type="content" source="media/deploy-portal-4.png" alt-text="Image shows the upload section of the basics tab during creation.":::
+ - If you have IP input endpoints defined in your definition (.csdef) file, create a public IP address for your cloud service.
+ - Cloud Services (extended support) supports only a Basic SKU public IP address.
+ - If your configuration (.cscfg) file contains a reserved IP address, set the allocation type for the public IP address to **Static**.
+ - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address that's associated with the cloud service.
+ - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it's deployed.
+ - **Key vault**: Select a key vault.
+    - A key vault is required when you specify one or more certificates in your configuration (.cscfg) file. When you select a key vault, we attempt to find the selected certificates that are defined in your configuration (.cscfg) file based on the certificate thumbprints. If any certificates are missing from your key vault, you can upload them now, and then select **Refresh**.
-6. Once all fields have been completed, move to and complete the **Configuration** tab.
- - Select a virtual network to associate with the Cloud Service or create a new one.
- - Cloud Service (extended support) deployments **must** be in a virtual network. The virtual network **must** also be referenced in the Service Configuration (.cscfg) file under the `NetworkConfiguration` section.
- - Select an existing public IP address to associate with the Cloud Service or create a new one.
- - If you have **IP Input Endpoints** defined in your Service Definition (.csdef) file, a public IP address will need to be created for your Cloud Service.
- - Cloud Services (extended support) only supports the Basic IP address SKU.
- - If your Service Configuration (.cscfg) contains a reserved IP address, the allocation type for the public IP must be set tp **Static**.
- - Optionally, assign a DNS name for your cloud service endpoint by updating the DNS label property of the Public IP address that is associated with the cloud service.
- - (Optional) Start Cloud Service. Choose start or not start the service immediately after creation.
- - Select a Key Vault
- - Key Vault is required when you specify one or more certificates in your Service Configuration (.cscfg) file. When you select a key vault we will try to find the selected certificates from your Service Configuration (.cscfg) file based on their thumbprints. If any certificates are missing from your key vault you can upload them now and click **Refresh**.
+ :::image type="content" source="media/deploy-portal-5.png" alt-text="Screenshot that shows the Configuration tab in the Azure portal when you create a Cloud Services (extended support) deployment.":::
- :::image type="content" source="media/deploy-portal-5.png" alt-text="Image shows the configuration blade in the Azure portal when creating a Cloud Services (extended support).":::
+1. When all information is entered or selected, select the **Review + Create** tab to validate your deployment configuration and create your Cloud Services (extended support) deployment.
-7. Once all fields have been completed, move to the **Review and Create** tab to validate your deployment configuration and create your Cloud Service (extended support).
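
For reference, the `NetworkConfiguration` section that the virtual network step depends on has roughly this shape in the configuration (.cscfg) file; the network, role, and subnet names are illustrative:

```xml
<!-- Ties the cloud service to a virtual network and assigns a subnet
     per role; replace the names with values from your own deployment. -->
<NetworkConfiguration>
  <VirtualNetworkSite name="my-vnet" />
  <AddressAssignments>
    <InstanceAddress roleName="WebRole1">
      <Subnets>
        <Subnet name="default" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```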
+## Related content
-## Next steps
- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Deploy Cloud Services (extended support) by using [Azure PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-powershell.md
Title: Deploy a Cloud Service (extended support) - PowerShell
-description: Deploy a Cloud Service (extended support) using PowerShell
-
+ Title: Deploy Azure Cloud Services (extended support) - Azure PowerShell
+description: Deploy Azure Cloud Services (extended support) by using Azure PowerShell.
+ Previously updated : 10/13/2020 Last updated : 06/18/2024
-# Deploy a Cloud Service (extended support) using Azure PowerShell
+# Deploy Cloud Services (extended support) by using Azure PowerShell
-This article shows how to use the `Az.CloudService` PowerShell module to deploy Cloud Services (extended support) in Azure that has multiple roles (WebRole and WorkerRole).
+This article shows you how to use the Az.CloudService Azure PowerShell module to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole).
-## Pre-requisites
+## Prerequisites
-1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
-2. Install Az.CloudService PowerShell module.
+Complete the following steps as prerequisites to creating your deployment by using Azure PowerShell.
+
+1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources.
+
+1. Install the Az.CloudService PowerShell module:
   ```azurepowershell-interactive
   Install-Module -Name Az.CloudService
   ```
-3. Create a new resource group. This step is optional if using an existing resource group.
+1. Create a new resource group. This step is optional if you use an existing resource group.
   ```azurepowershell-interactive
   New-AzResourceGroup -ResourceGroupName "ContosOrg" -Location "East US"
   ```
-4. Create a storage account and container, which will be used to store the Cloud Service package (.cspkg) and Service Configuration (.cscfg) files. A unique name for storage account name is required. This step is optional if using an existing storage account.
+1. Create a storage account and container in Azure to store the package (.cspkg or .zip) file and configuration (.cscfg) file for the Cloud Services (extended support) deployment. You must use a unique name for the storage account name. This step is optional if you use an existing storage account.
   ```azurepowershell-interactive
   $storageAccount = New-AzStorageAccount -ResourceGroupName "ContosOrg" -Name "contosostorageaccount" -Location "East US" -SkuName "Standard_RAGRS" -Kind "StorageV2"
   $container = New-AzStorageContainer -Name "contosocontainer" -Context $storageAccount.Context -Permission Blob
   ```
-
-## Deploy a Cloud Services (extended support)
-Use any of the following PowerShell cmdlets to deploy Cloud Services (extended support):
+## Deploy Cloud Services (extended support)
+
+Use any of the following PowerShell cmdlet options to deploy Cloud Services (extended support):
+
+- Quick-create a deployment by using a [storage account](#quick-create-a-deployment-by-using-a-storage-account)
+
+ - This parameter set inputs the package (.cspkg or .zip) file, the configuration (.cscfg) file, and the definition (.csdef) file for the deployment as inputs with the storage account.
+ - The Cloud Services (extended support) role profile, network profile, and OS profile are created by the cmdlet with minimal input.
+ - To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment.
-1. [**Quick Create Cloud Service using a Storage Account**](#quick-create-cloud-service-using-a-storage-account)
+- Quick-create a deployment by using a [shared access signature URI](#quick-create-a-deployment-by-using-a-sas-uri)
- - This parameter set inputs the .cscfg, .cspkg and .csdef files as inputs along with the storage account.
- - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
- - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file.
-
- 2. [**Quick Create Cloud Service using a SAS URI**](#quick-create-cloud-service-using-a-sas-uri)
-
- - This parameter set inputs the SAS URI of the .cspkg along with the local paths of .csdef and .cscfg files. There is no storage account input required.
- - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user.
- - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file.
-
-3. [**Create Cloud Service with role, OS, network and extension profile and SAS URIs**](#create-cloud-service-using-profile-objects--sas-uris)
+ - This parameter set inputs the shared access signature (SAS) URI of the package (.cspkg or .zip) file with the local paths to the configuration (.cscfg) file and definition (.csdef) file. No storage account input is required.
+ - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input.
+ - To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment.
- - This parameter set inputs the SAS URIs of the .cscfg and .cspkg files.
- - The role, network, OS, and extension profile must be specified by the user and must match the values in the .cscfg and .csdef.
+- Create a deployment by using a [role profile, OS profile, network profile, and extension profile with shared access signature URIs](#create-a-deployment-by-using-profile-objects-and-sas-uris)
-### Quick Create Cloud Service using a Storage Account
+ - This parameter set inputs the SAS URIs of the package (.cspkg or .zip) file and configuration (.cscfg) file.
+ - You must specify profile objects: role profile, network profile, OS profile, and extension profile. The profiles must match the values that you set in the configuration (.cscfg) file and definition (.csdef) file.
-Create Cloud Service deployment using .cscfg, .csdef and .cspkg files.
+### Quick-create a deployment by using a storage account
+
+Create a Cloud Services (extended support) deployment by using the package (.cspkg or .zip) file, configuration (.cscfg) file, and definition (.csdef) file:
```azurepowershell-interactive
-$cspkgFilePath = "<Path to cspkg file>"
-$cscfgFilePath = "<Path to cscfg file>"
-$csdefFilePath = "<Path to csdef file>"
+$cspkgFilePath = "<Path to .cspkg file>"
+$cscfgFilePath = "<Path to .cscfg file>"
+$csdefFilePath = "<Path to .csdef file>"
-# Create Cloud Service
+# Create a Cloud Services (extended support) deployment
New-AzCloudService -Name "ContosoCS" ` -ResourceGroupName "ContosOrg" `
New-AzCloudService
[-KeyVaultName <string>] ```
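The call above is truncated. A complete quick-create invocation, assuming the file paths and resources from the preceding steps, might look like the following sketch; the parameter names reflect the Az.CloudService quick-create parameter set and should be verified against your module version.

```azurepowershell-interactive
# A minimal sketch; add -KeyVaultName "<key-vault-name>" if your .cscfg references certificates.
New-AzCloudService -Name "ContosoCS" `
    -ResourceGroupName "ContosOrg" `
    -Location "East US" `
    -ConfigurationFile $cscfgFilePath `
    -DefinitionFile $csdefFilePath `
    -PackageFile $cspkgFilePath `
    -StorageAccount $storageAccount
```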
-### Quick Create Cloud Service using a SAS URI
+### Quick-create a deployment by using a SAS URI
-1. Upload your Cloud Service package (cspkg) to the storage account.
+1. Upload the package (.cspkg or .zip) file for the deployment to the storage account:
```azurepowershell-interactive $tokenStartTime = Get-Date
New-AzCloudService
$csdefFilePath = "<Path to csdef file>" ```
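The upload commands are truncated above. A minimal sketch of the upload and SAS-token generation, reusing the storage account and container from the prerequisites and assuming the Contoso file names, follows:

```azurepowershell-interactive
# Assumed names: contosocontainer and $storageAccount come from the prerequisite steps.
$tokenStartTime = Get-Date
$tokenEndTime = $tokenStartTime.AddYears(1)
$cspkgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "contosocontainer" -Blob "ContosoApp.cspkg" -Context $storageAccount.Context
$cspkgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
$cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken
$cscfgFilePath = "<Path to .cscfg file>"
$csdefFilePath = "<Path to .csdef file>"
```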
- 2. Create Cloud Service deployment using .cscfg, .csdef and .cspkg SAS URI.
+1. Create the Cloud Services (extended support) deployment by using the package (.cspkg or .zip) file SAS URI and the local paths to the configuration (.cscfg) and definition (.csdef) files:
```azurepowershell-interactive New-AzCloudService
New-AzCloudService
-PackageURL $cspkgUrl ` [-KeyVaultName <string>] ```
-
-### Create Cloud Service using profile objects & SAS URIs
-1. Upload your cloud service configuration (cscfg) to the storage account.
+### Create a deployment by using profile objects and SAS URIs
+
+1. Upload your Cloud Services (extended support) configuration (.cscfg) file to the storage account:
```azurepowershell-interactive
$cscfgBlob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cscfg" -Container contosocontainer -Blob "ContosoApp.cscfg" -Context $storageAccount.Context
$cscfgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cscfgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
$cscfgUrl = $cscfgBlob.ICloudBlob.Uri.AbsoluteUri + $cscfgToken
```
-2. Upload your Cloud Service package (cspkg) to the storage account.
+
+1. Upload your Cloud Services (extended support) package (.cspkg or .zip) file to the storage account:
```azurepowershell-interactive $tokenStartTime = Get-Date
New-AzCloudService
$cspkgToken = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context
$cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken
```
-
-3. Create a virtual network and subnet. This step is optional if using an existing network and subnet. This example uses a single virtual network and subnet for both cloud service roles (WebRole and WorkerRole).
+
+1. Create a virtual network and subnet. This step is optional if you use an existing network and subnet. This example uses a single virtual network and subnet for both Cloud Services (extended support) roles (WebRole and WorkerRole).
```azurepowershell-interactive
$subnet = New-AzVirtualNetworkSubnetConfig -Name "ContosoWebTier1" -AddressPrefix "10.0.0.0/24" -WarningAction SilentlyContinue
$virtualNetwork = New-AzVirtualNetwork -Name "ContosoVNet" -Location "East US" -ResourceGroupName "ContosOrg" -AddressPrefix "10.0.0.0/24" -Subnet $subnet
```
-
-4. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
-If you are using a Static IP, you need to reference it as a Reserved IP in Service Configuration (.cscfg) file.
+
+1. Create a public IP address and set a DNS label value for the public IP address. Cloud Services (extended support) supports only a [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU public IP address. Standard SKU public IP addresses don't work with Cloud Services (extended support).
+
+ If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file for the deployment.
```azurepowershell-interactive
$publicIp = New-AzPublicIpAddress -Name "ContosIp" -ResourceGroupName "ContosOrg" -Location "East US" -AllocationMethod Dynamic -IpAddressVersion IPv4 -DomainNameLabel "contosoappdns" -Sku Basic
```
-5. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in Azure Resource Manager. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef).
+1. Create a network profile object, and then associate the public IP address with the front end of the load balancer. The Azure platform automatically creates a Classic SKU load balancer resource in the same subscription as the Cloud Services (extended support) resource. The load balancer is a read-only resource in Azure Resource Manager. You can update the resource only via the Cloud Services (extended support) configuration (.cscfg) file and definition (.csdef) file.
```azurepowershell-interactive
$publicIP = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp
If you are using a Static IP, you need to reference it as a Reserved IP in Servi
$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
```
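The snippet references `$feIpConfig`, but its creation is elided above. A minimal sketch of the front-end IP configuration object, assuming the public IP address from the earlier step and a hypothetical object name, might look like this:

```azurepowershell-interactive
# Hypothetical name 'ContosoFe'; builds the front-end IP configuration from the public IP created earlier.
$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name 'ContosoFe' -PublicIPAddressId $publicIp.Id
```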
-
-6. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+
+1. Create a key vault. The key vault stores certificates that are associated with Cloud Services (extended support) roles. The key vault must be in the same region and subscription as the Cloud Services (extended support) deployment and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
```azurepowershell-interactive
New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
```
-7. Update the Key Vault access policy and grant certificate permissions to your user account.
+1. Update the key vault access policy and grant certificate permissions to your user account:
```azurepowershell-interactive
Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -EnabledForDeployment
Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
```
- Alternatively, set access policy via ObjectId (which can be obtained by running `Get-AzADUser`).
-
+ Alternatively, set the access policy by using the `ObjectId` value. To get the `ObjectId` value, run `Get-AzADUser`:
+ ```azurepowershell-interactive
+ Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete
+ ```
-
-8. In this example, we will add a self-signed certificate to a Key Vault. The certificate thumbprint needs to be added in Cloud Service Configuration (.cscfg) file for deployment on cloud service roles.
+1. The following example adds a self-signed certificate to a key vault. You must add the certificate thumbprint via the configuration (.cscfg) file for Cloud Services (extended support) roles.
```azurepowershell-interactive
$Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy
```
-
-9. Create an OS Profile in-memory object. OS Profile specifies the certificates, which are associated to cloud service roles. This will be the same certificate created in the previous step.
+
+1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. This is the certificate that you created in the preceding step.
```azurepowershell-interactive
$keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault
If you are using a Static IP, you need to reference it as a Reserved IP in Servi
$osProfile = @{secret = @($secretGroup)}
```
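The middle of this step is elided above. A sketch of the likely intervening lines, assuming the ContosCert certificate from the earlier step and the Az.CloudService helper cmdlet, follows:

```azurepowershell-interactive
# Retrieve the certificate and wrap it in a vault secret group for the OS profile.
$certificate = Get-AzKeyVaultCertificate -VaultName ContosKeyVault -Name ContosCert
$secretGroup = New-AzCloudServiceVaultSecretGroupObject -Id $keyVault.ResourceId -CertificateUrl $certificate.SecretId
```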
-10. Create a Role Profile in-memory object. Role profile defines a role sku specific properties such as name, capacity, and tier. In this example, we have defined two roles: frontendRole and backendRole. Role profile information should match the role configuration defined in configuration (cscfg) file and service definition (csdef) file.
+1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration that's defined in the deployment configuration (.cscfg) file and definition (.csdef) file.
```azurepowershell-interactive
$frontendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
If you are using a Static IP, you need to reference it as a Reserved IP in Servi
$roleProfile = @{role = @($frontendRole, $backendRole)}
```
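The backend role definition is elided above. A sketch, assuming it mirrors the frontend role's SKU values, might be:

```azurepowershell-interactive
# Assumed SKU values; adjust to match your configuration (.cscfg) and definition (.csdef) files.
$backendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoBackend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
```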
-11. (Optional) Create an Extension Profile in-memory object that you want to add to your cloud service. For this example we will add RDP extension.
+1. (Optional) Create an extension profile in-memory object to add to your Cloud Services (extended support) deployment. This example adds a Remote Desktop Protocol (RDP) extension:
```azurepowershell-interactive
$credential = Get-Credential
If you are using a Static IP, you need to reference it as a Reserved IP in Servi
$wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true
$extensionProfile = @{extension = @($rdpExtension, $wadExtension)}
```
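The `$rdpExtension` object is referenced but its creation is elided above. A sketch using the Az.CloudService RDP helper cmdlet, with an assumed one-year expiration, follows:

```azurepowershell-interactive
# Builds the RDP extension from the credential prompted for above; the expiration value is an assumption.
$expiration = (Get-Date).AddYears(1)
$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name 'RDPExtension' -Credential $credential -Expiration $expiration -TypeHandlerVersion '1.2.1'
```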
-
- ConfigFile should have only PublicConfig tags and should contain a namespace as following:
-
+
+ The diagnostics configuration file (referenced as `$configFile` in the preceding example) should have only `PublicConfig` tags and should contain a namespace, as shown in the following example:
```xml
<?xml version="1.0" encoding="utf-8"?>
<PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
...............
</PublicConfig>
```
-
-12. (Optional) Define Tags as PowerShell hash table that you want to add to your cloud service.
+
+1. (Optional) In a PowerShell hash table, you can define tags to add to your deployment:
```azurepowershell-interactive
$tag = @{"Owner" = "Contoso"}
```
-13. Create Cloud Service deployment using profile objects & SAS URLs.
+1. Create the Cloud Services (extended support) deployment by using the profile objects and SAS URIs that you defined:
```azurepowershell-interactive $cloudService = New-AzCloudService `
If you are using a Static IP, you need to reference it as a Reserved IP in Servi
-Tag $tag ```
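The full call is truncated above. A sketch of the complete invocation, assuming the variables built in the preceding steps, might look like this:

```azurepowershell-interactive
# A minimal sketch; parameter names follow the Az.CloudService module.
$cloudService = New-AzCloudService `
    -Name "ContosoCS" `
    -ResourceGroupName "ContosOrg" `
    -Location "East US" `
    -PackageUrl $cspkgUrl `
    -ConfigurationUrl $cscfgUrl `
    -RoleProfile $roleProfile `
    -NetworkProfile $networkProfile `
    -OSProfile $osProfile `
    -ExtensionProfile $extensionProfile `
    -Tag $tag
```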
-## Next steps
+## Related content
+ - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md
Title: Prerequisites for deploying Azure Cloud Services (extended support)
-description: Prerequisites for deploying Azure Cloud Services (extended support)
+ Title: Prerequisites for deploying Cloud Services (extended support)
+description: Learn about the prerequisites for deploying Azure Cloud Services (extended support).
 Previously updated : 10/13/2020 Last updated : 06/16/2024
# Prerequisites for deploying Azure Cloud Services (extended support)
-To ensure a successful Cloud Services (extended support) deployment review the below steps and complete each item prior to attempting any deployments.
+To help ensure a successful Azure Cloud Services (extended support) deployment, review the following steps. Complete each prerequisite before you begin to create a deployment.
-## Required Service Configuration (.cscfg) file updates
+## Required configuration file updates
-### 1) Virtual Network
-Cloud Service (extended support) deployments must be in a virtual network. Virtual network can be created through [Azure portal](../virtual-network/quick-create-portal.md), [PowerShell](../virtual-network/quick-create-powershell.md), [Azure CLI](../virtual-network/quick-create-cli.md) or [ARM Template](../virtual-network/quick-create-template.md). The virtual network and subnets must also be referenced in the Service Configuration (.cscfg) under the [NetworkConfiguration](schema-cscfg-networkconfiguration.md) section.
+Use the information in the following sections to make required updates to the configuration (.cscfg) file for your Cloud Services (extended support) deployment.
-For a virtual networks belonging to the same resource group as the cloud service, referencing only the virtual network name in the Service Configuration (.cscfg) file is sufficient. If the virtual network and cloud service are in two different resource groups, then the complete Azure Resource Manager ID of the virtual network needs to be specified in the Service Configuration (.cscfg) file.
+### Virtual network
+
+Cloud Services (extended support) deployments must be in a virtual network. You can create a virtual network by using the [Azure portal](../virtual-network/quick-create-portal.md), [Azure PowerShell](../virtual-network/quick-create-powershell.md), the [Azure CLI](../virtual-network/quick-create-cli.md), or an [Azure Resource Manager template (ARM template)](../virtual-network/quick-create-template.md). The virtual network and subnets must be referenced in the [NetworkConfiguration](schema-cscfg-networkconfiguration.md) section of the configuration (.cscfg) file.
+
+For a virtual network that is in the same resource group as the cloud service, referencing only the virtual network name in the configuration (.cscfg) file is sufficient. If the virtual network and Cloud Services (extended support) are in two different resource groups, specify the complete Azure Resource Manager ID of the virtual network in the configuration (.cscfg) file.
> [!NOTE]
-> Virtual Network and cloud service located in a different resource groups is not supported in Visual Studio 2019. Please consider using the ARM template or Portal for successful deployments in such scenarios
-
-#### Virtual Network located in same resource group
+> If the virtual network and Cloud Services (extended support) are located in different resource groups, you can't use Visual Studio 2019 for your deployment. For this scenario, consider using an ARM template or the Azure portal to create your deployment.
+
+#### Virtual network in the same resource group
+ ```xml <VirtualNetworkSite name="<vnet-name>"/> <AddressAssignments>
For a virtual networks belonging to the same resource group as the cloud service
</AddressAssignments> ```
-#### Virtual network located in different resource group
+#### Virtual network in a different resource group
+ ```xml <VirtualNetworkSite name="/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"/> <AddressAssignments>
For a virtual networks belonging to the same resource group as the cloud service
</InstanceAddress> </AddressAssignments> ```
-### 2) Remove the old plugins
-Remove old remote desktop settings from the Service Configuration (.cscfg) file.
+### Remove earlier versions of plugins
+
+Remove earlier versions of remote desktop settings from the configuration (.cscfg) file:
```xml
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
Remove old remote desktop settings from the Service Configuration (.cscfg) file.
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2021-12-17T23:59:59.0000000+05:30" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
```
-Remove old diagnostics settings for each role in the Service Configuration (.cscfg) file.
+
+Remove earlier versions of diagnostics settings for each role in the configuration (.cscfg) file:
```xml
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
```
-## Required Service Definition file (.csdef) updates
+## Required definition file updates
> [!NOTE]
-> Changes in service definition file (.csdef) requires the package file (.cspkg) to be generated again. Please build and repackage your .cspkg post making the following changes in the .csdef file to get the latest settings for your cloud service
+> If you make changes to the definition (.csdef) file, you must generate the package (.cspkg or .zip) file again. Build and repackage your package (.cspkg or .zip) file after you make the following changes in the definition (.csdef) file to get the latest settings for your cloud service.
-### 1) Virtual Machine sizes
-The sizes listed in the left column below are deprecated in Azure Resource Manager. However, if you want to continue to use them update the `vmsize` name with the associated Azure Resource Manager naming convention.
+### Virtual machine sizes
-| Previous size name | Updated size name |
+The following table lists deprecated virtual machine sizes and the updated size names that replace them.
+
+The sizes listed in the left column of the table are deprecated in Azure Resource Manager. To continue to use equivalent sizes, update the `vmsize` value to the new name from the right column.
+
+| Previous size name | Updated size name |
|||
-| ExtraSmall | Standard_A1_v2 |
+| ExtraSmall | Standard_A1_v2 |
| Small | Standard_A1_v2 |
-| Medium | Standard_A2_v2 |
-| Large | Standard_A4_v2 |
-| ExtraLarge | Standard_A8_v2 |
-| A5 | Standard_A2m_v2 |
-| A6 | Standard_A4m_v2 |
+| Medium | Standard_A2_v2 |
+| Large | Standard_A4_v2 |
+| ExtraLarge | Standard_A8_v2 |
+| A5 | Standard_A2m_v2 |
+| A6 | Standard_A4m_v2 |
| A7 | Standard_A8m_v2 |
-| A8 | Deprecated |
+| A8 | Deprecated |
| A9 | Deprecated |
-| A10 | Deprecated |
-| A11 | Deprecated |
-| MSODSG5 | Deprecated |
+| A10 | Deprecated |
+| A11 | Deprecated |
+| MSODSG5 | Deprecated |
+
+For example, `<WorkerRole name="WorkerRole1" vmsize="Medium">` becomes `<WorkerRole name="WorkerRole1" vmsize="Standard_A2_v2">`.
- For example, `<WorkerRole name="WorkerRole1" vmsize="Medium"` would become `<WorkerRole name="WorkerRole1" vmsize="Standard_A2"`.
-
> [!NOTE]
-> To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters: <br>
-`ResourceType = virtualMachines ` <br>
-`VMDeploymentTypes = PaaS `
+> To retrieve a list of available sizes, see the [list of resource SKUs](/rest/api/compute/resourceskus/list). Apply the following filters:
+>
+> `ResourceType = virtualMachines`
+> `VMDeploymentTypes = PaaS`
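For convenience, a rough Azure PowerShell equivalent of the `ResourceType` filter is sketched below; note that the `VMDeploymentTypes = PaaS` filter applies to the REST API call itself, so this client-side filter is only an approximation.

```azurepowershell-interactive
# Lists virtual machine SKUs in a region; the PaaS deployment-type filter must be applied via the REST API.
Get-AzComputeResourceSku -Location "eastus" | Where-Object { $_.ResourceType -eq "virtualMachines" }
```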
+### Remove earlier versions of remote desktop plugins
-### 2) Remove old remote desktop plugins
-Deployments that utilized the old remote desktop plugins need to have the modules removed from the Service Definition (.csdef) file and any associated certificates.
+For deployments that use earlier versions of remote desktop plugins, remove the modules from the definition (.csdef) file, along with any associated certificates:
```xml
<Imports>
Deployments that utilized the old remote desktop plugins need to have the module
<Import moduleName="RemoteForwarder" />
</Imports>
```
-Deployments that utilized the old diagnostics plugins need the settings removed for each role from the Service Definition (.csdef) file
+
+For deployments that use earlier versions of diagnostics plugins, remove the settings for each role from the definition (.csdef) file:
```xml
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
```
-## Access Control
-The subscription containing networking resources needs to have [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) access or above for Cloud Services (extended support). For more details on please refer to [RBAC built in roles](../role-based-access-control/built-in-roles.md)
+## Access control
+
+The subscription that contains networking resources must be assigned the [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or a role with greater permissions for Cloud Services (extended support). For more information, see [RBAC built-in roles](../role-based-access-control/built-in-roles.md).
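A sketch of granting that role by using Azure PowerShell follows; the object ID and subscription ID are placeholders.

```azurepowershell-interactive
# Placeholders: supply the principal's object ID and your subscription ID.
New-AzRoleAssignment -ObjectId "<principal-object-id>" -RoleDefinitionName "Network Contributor" -Scope "/subscriptions/<subscription-id>"
```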
+
+## Key vault creation
-## Key Vault creation
+Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificates that are stored as secrets in the key vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
-Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault 'Access policies' (in portal) for 'Azure Virtual Machines for deployment' so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
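A minimal Azure PowerShell sketch of creating such a vault and enabling it for deployment, reusing the Contoso names from the companion deployment articles:

```azurepowershell-interactive
# Create the vault, then allow the compute platform to retrieve its certificates during deployment.
New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
Set-AzKeyVaultAccessPolicy -VaultName "ContosKeyVault" -ResourceGroupName "ContosOrg" -EnabledForDeployment
```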
+## Related content
-## Next steps
-- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [Azure PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-sdk.md
Title: Deploy Cloud Services (extended support) - SDK
-description: Deploy Cloud Services (extended support) by using the Azure SDK
-
+ Title: Deploy Azure Cloud Services (extended support) - SDK
+description: Deploy Azure Cloud Services (extended support) by using the Azure SDK.
+ Previously updated : 10/13/2020 Last updated : 06/18/2024
# Deploy Cloud Services (extended support) by using the Azure SDK
-This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to deploy a Cloud Services (extended support) instance that has multiple roles (web role and worker role) and the remote desktop extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager.
+This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole) and the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager.
-## Before you begin
+## Prerequisites
-Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create associated resources.
+Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources.
## Deploy Cloud Services (extended support)
-1. Install the [Azure Compute SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Compute/43.0.0-preview) and initialize the client by using a standard authentication mechanism.
+
+To deploy Cloud Services (extended support) by using the SDK:
+
+1. Install the [Azure Compute SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Compute/43.0.0-preview) and initialize the client by using a standard authentication method:
```csharp public class CustomLoginCredentials : ServiceClientCredentials
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
m_SrpClient.SubscriptionId = m_subId; ```
-2. Create a new resource group by installing the Azure Resource Manager NuGet package.
+1. Create a new resource group by installing the Azure Resource Manager NuGet package:
- ```csharp
+ ```csharp
var resourceGroups = m_ResourcesClient.ResourceGroups;
var m_location = "East US";
var resourceGroupName = "ContosoRG"; // provide existing resource group name, if created already
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup); ```
-3. Create a storage account and container where you'll store the service package (.cspkg) and service configuration (.cscfg) files. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique.
+1. Create a storage account and container where you'll store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique.
```csharp
string storageAccountName = "ContosoSAS";
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
sasConstraints.Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write;
```
-4. Upload the service package (.cspkg) file to the storage account. The package URL can be a shared access signature (SAS) URI from any storage account.
+1. Upload the package (.cspkg or .zip) file to the storage account. The package URL can be a shared access signature (SAS) URI from any storage account.
- ```csharp
- CloudBlockBlob cspkgblockBlob = container.GetBlockBlobReference(ΓÇ£ContosoApp.cspkgΓÇ¥);
- cspkgblockBlob.UploadFromFileAsync(ΓÇ£./ContosoApp/ContosoApp.cspkgΓÇ¥). Wait();
+ ```csharp
+ CloudBlockBlob cspkgblockBlob = container.GetBlockBlobReference("ContosoApp.cspkg");
+ cspkgblockBlob.UploadFromFileAsync("./ContosoApp/ContosoApp.cspkg").Wait();
- //Generate the shared access signature on the blob, setting the constraints directly on the signature.
- string cspkgsasContainerToken = cspkgblockBlob.GetSharedAccessSignature(sasConstraints);
+ //Generate the shared access signature on the blob, setting the constraints directly on the signature.
+ string cspkgsasContainerToken = cspkgblockBlob.GetSharedAccessSignature(sasConstraints);
- //Return the URI string for the container, including the SAS token.
- string cspkgSASUrl = cspkgblockBlob.Uri + cspkgsasContainerToken;
- ```
+ //Return the URI string for the container, including the SAS token.
+ string cspkgSASUrl = cspkgblockBlob.Uri + cspkgsasContainerToken;
+ ```
-5. Upload your service configuration (.cscfg) file to the storage account. Specify service configuration as either string XML or URL format.
+1. Upload the configuration (.cscfg) file to the storage account. Specify the service configuration as either string XML or URL format.
```csharp
CloudBlockBlob cscfgblockBlob = container.GetBlockBlobReference("ContosoApp.cscfg");
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
string cscfgSASUrl = cscfgblockBlob.Uri + sasCscfgContainerToken;
```
-6. Create a virtual network and subnet. Install the [Azure Network NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Network/). This step is optional if you're using an existing network and subnet.
+1. Create a virtual network and subnet. Install the [Azure Network NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Network/). This step is optional if you're using an existing network and subnet.
```csharp VirtualNetwork vnet = new VirtualNetwork(name: vnetName)
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
m_NrpClient.VirtualNetworks.CreateOrUpdate(resourceGroupName, "ContosoVNet", vnet);
```
-7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
-If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file
+1. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) supports only a [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU public IP address. Standard SKU public IP addresses do not work with Cloud Services (extended support).
- ```csharp
+ If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file.
+
+ ```csharp
PublicIPAddress publicIPAddressParams = new PublicIPAddress(name: "ContosIp") { Location = m_location,
If you are using a Static IP you need to reference it as a Reserved IP in Servic
PublicIPAddress publicIpAddress = m_NrpClient.PublicIPAddresses.CreateOrUpdate(resourceGroupName, publicIPAddressName, publicIPAddressParams);
```
-8. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef)
+1. Create a network profile object and associate the public IP address with the front end of the load balancer. The Azure platform automatically creates a Classic SKU load balancer resource in the same subscription as the deployment. The load balancer resource is read-only in Azure Resource Manager. You can update the resource only via the Cloud Services (extended support) configuration (.cscfg) file and definition (.csdef) file.
```csharp LoadBalancerFrontendIPConfiguration feipConfiguration = new LoadBalancerFrontendIPConfiguration()
If you are using a Static IP you need to reference it as a Reserved IP in Servic
```
-9. Create a key vault. This key vault will be used to store certificates that are associated with the Cloud Services (extended support) roles. The key vault must be located in the same region and subscription as the Cloud Services (extended support) instance and have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+1. Create a key vault. This key vault stores certificates that are associated with the Cloud Services (extended support) roles. The key vault must be in the same region and subscription as the Cloud Services (extended support) resource and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
- ```powershell
- New-AzKeyVault -Name "ContosKeyVaultΓÇ¥ -ResourceGroupName ΓÇ£ContosoOrgΓÇ¥ -Location ΓÇ£East USΓÇ¥
- ```
+ ```powershell
+ New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosoOrg" -Location "East US"
+ ```
-10. Update the key vault's access policy and grant certificate permissions to your user account.
+1. Update the key vault's access policy and grant certificate permissions to your user account:
- ```powershell
- Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosoOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
- ```
+ ```powershell
+ Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosoOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
+ ```
- Alternatively, set the access policy via object ID (which you can get by running `Get-AzADUser`).
+ Alternatively, set the access policy via object ID (which you can get by running `Get-AzADUser`):
```powershell
- Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' - ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete
+ Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete
```
-11. In this example, we'll add a self-signed certificate to a key vault. The certificate thumbprint needs to be added in the service configuration (.cscfg) file for deployment on Cloud Services (extended support) roles.
+1. The following example adds a self-signed certificate to a key vault. The certificate thumbprint must be added in the configuration (.cscfg) file for Cloud Services (extended support) roles.
```powershell
- $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" - SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
- Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" - CertificatePolicy $Policy
+ $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal
+ Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy
```
-12. Create an OS profile object. The OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. Here, it's the same certificate that we created in the previous step.
+1. Create an OS profile object. The OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. You use the same certificate that you created in the preceding step.
```csharp CloudServiceOsProfile cloudServiceOsProfile =
If you are using a Static IP you need to reference it as a Reserved IP in Servic
}; ```
-13. Create a role profile object. A role profile defines role-specific properties for a SKU, such as name, capacity, and tier.
+1. Create a role profile object. A role profile defines role-specific properties for a SKU, such as name, capacity, and tier.
- In this example, we define two roles: ContosoFrontend and ContosoBackend. Role profile information should match the role configuration defined in the service configuration (.cscfg) file and the service definition (.csdef) file.
+ This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role that's defined in the configuration (.cscfg) file and definition (.csdef) file.
```csharp CloudServiceRoleProfile cloudServiceRoleProfile = new CloudServiceRoleProfile()
If you are using a Static IP you need to reference it as a Reserved IP in Servic
} ```
-14. (Optional) Create an extension profile object that you want to add to your Cloud Services (extended support) instance. In this example, we add an RDP extension.
+1. (Optional) Create an extension profile object to add to your Cloud Services (extended support) deployment. This example adds a Remote Desktop Protocol (RDP) extension:
```csharp string rdpExtensionPublicConfig = "<PublicConfig>" +
If you are using a Static IP you need to reference it as a Reserved IP in Servic
}; ```
-15. Create the deployment of the Cloud Services (extended support) instance.
+1. Create the Cloud Services (extended support) deployment:
```csharp CloudService cloudService = new CloudService
If you are using a Static IP you need to reference it as a Reserved IP in Servic
CloudService createOrUpdateResponse = m_CrpClient.CloudServices.CreateOrUpdate("ContosOrg", "ContosoCS", cloudService);
```
-## Next steps
+## Related content
+ - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
-- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
-- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [Azure PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md
Title: Deploy Azure Cloud Services (extended support) - Templates
-description: Deploy Azure Cloud Services (extended support) by using ARM templates
-
+ Title: Deploy Azure Cloud Services (extended support) - ARM template
+description: Deploy Azure Cloud Services (extended support) by using an ARM template.
+ Previously updated : 10/13/2020 Last updated : 06/18/2024
-# Deploy a Cloud Service (extended support) using ARM templates
+# Deploy Cloud Services (extended support) by using an ARM template
-This tutorial explains how to create a Cloud Service (extended support) deployment using [ARM templates](../azure-resource-manager/templates/overview.md).
+This article shows you how to use an [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) to create an Azure Cloud Services (extended support) deployment.
-## Before you begin
+## Prerequisites
-1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources.
+Complete the following steps as prerequisites to creating your deployment by using ARM templates.
-2. Create a new resource group using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you are using an existing resource group.
-
-3. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account.
+1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources.
-4. Upload your Package (.cspkg) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial.
+1. Create a new resource group by using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [Azure PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you use an existing resource group.
-5. (Optional) Create a key vault and upload the certificates.
+1. Create a new storage account by using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you use an existing storage account.
- - Certificates can be attached to cloud services to enable secure communication to and from the service. In order to use certificates, their thumbprints must be specified in your Service Configuration (.cscfg) file and uploaded to a key vault. A key vault can be created through the [Azure portal](../key-vault/general/quick-create-portal.md) or [PowerShell](../key-vault/general/quick-create-powershell.md).
- - The associated key vault must be located in the same region and subscription as cloud service.
- - The associated key vault for must be enabled appropriate permissions so that Cloud Services (extended support) resource can retrieve certificates from Key Vault. For more information, see [Certificates and Key Vault](certificates-and-key-vault.md)
- - The key vault needs to be referenced in the OsProfile section of the ARM template shown in the below steps.
+1. Upload the package (.cspkg or .zip) file and configuration (.cscfg) file to the storage account by using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob) or [Azure PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Save the shared access signature (SAS) URIs for both files to add to the ARM template in a later step. (An Azure PowerShell sketch follows this list.)
-## Deploy a Cloud Service (extended support)
+1. (Optional) Create a key vault and upload the certificates.
+
+ - You can attach certificates to your deployment for secure communication to and from the service. If you use certificates, the certificate thumbprints must be specified in your configuration (.cscfg) file and be uploaded to a key vault. You can create a key vault by using the [Azure portal](../key-vault/general/quick-create-portal.md) or [Azure PowerShell](../key-vault/general/quick-create-powershell.md).
+ - The associated key vault must be in the same region and subscription as your Cloud Services (extended support) deployment.
+ - The associated key vault must have the relevant permissions so that Cloud Services (extended support) resources can retrieve certificates from the key vault. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
+ - The key vault must be referenced in the `osProfile` section of the ARM template as shown in a later step.
+
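As noted in the upload step, a minimal Azure PowerShell sketch of uploading the package file and capturing its SAS URI, assuming the Contoso names used in the companion deployment articles, follows; repeat the pattern for the configuration (.cscfg) file.

```azurepowershell-interactive
# Assumed names; the resulting $packageSasUri maps to the template's packageSasUri parameter.
$ctx = (Get-AzStorageAccount -ResourceGroupName "ContosOrg" -Name "contosostorageaccount").Context
$blob = Set-AzStorageBlobContent -File "./ContosoApp/ContosoApp.cspkg" -Container "contosocontainer" -Blob "ContosoApp.cspkg" -Context $ctx
$sas = New-AzStorageBlobSASToken -Container "contosocontainer" -Blob $blob.Name -Permission r -StartTime (Get-Date) -ExpiryTime (Get-Date).AddYears(1) -Context $ctx
$packageSasUri = $blob.ICloudBlob.Uri.AbsoluteUri + $sas
```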
+## Deploy Cloud Services (extended support)
+
+To deploy Cloud Services (extended support) by using a template:
> [!NOTE]
-> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via PowerShell
-
-1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template.
+> An easier and faster way to generate your ARM template and parameter file is by using the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) in the portal to create your Cloud Services (extended support) deployment via Azure PowerShell.
+
+1. Create a virtual network. The name of the virtual network must match virtual network references in the configuration (.cscfg) file. If you use an existing virtual network, omit this section from the ARM template.
```json "resources": [
This tutorial explains how to create a Cloud Service (extended support) deployme
} ] ```
-
- If creating a new virtual network, add the following to the `dependsOn` section to ensure the platform creates the virtual network prior to creating the cloud service.
+
+ If you create a new virtual network, add the following lines to the `dependsOn` section to ensure that the platform creates the virtual network before it creates the Cloud Services (extended support) instance:
```json "dependsOn": [ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]" ] ```
-
-2. Create a public IP address and (optionally) set the DNS label property of the public IP address. If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file. If using an existing IP address, skip this step and add the IP address information directly into the load balancer configuration settings of your ARM template.
-
+
+1. Create a public IP address and (optionally) set the DNS label property of the public IP address. If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file. If you use an existing IP address, skip this step and add the IP address information directly in the load balancer configuration settings in your ARM template.
+ ```json "resources": [ {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ] ```
-
- If creating a new IP address, add the following to the `dependsOn` section to ensure the platform creates the IP address prior to creating the cloud service.
-
+
+ If you create a new IP address, add the following lines to the `dependsOn` section to ensure that the platform creates the IP address before it creates the Cloud Services (extended support) instance:
+ ```json "dependsOn": [ "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]" ] ```
-
-3. Create a Cloud Service (Extended Support) object, adding appropriate `dependsOn` references if you are deploying Virtual Networks or Public IP within your template.
+
+1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you are deploying virtual networks or public IP addresses in your template.
```json {
This tutorial explains how to create a Cloud Service (extended support) deployme
} } ```
-4. Create a Network Profile Object for your Cloud Service and associate the public IP address to the frontend of the load balancer. A Load balancer is automatically created by the platform.
+
+1. Create a network profile object for your deployment, and associate the public IP address with the front end of the load balancer. The Azure platform automatically creates a load balancer.
```json "networkProfile": {
This tutorial explains how to create a Cloud Service (extended support) deployme
] } ```
-
-5. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration (.cscfg) file. You also need to enable Key Vault 'Access policies' for 'Azure Virtual Machines for deployment'(on portal) so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. The key vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
-
+1. Add your key vault reference in the `osProfile` section of the ARM template. A key vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to the key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file. Also, set the key vault access policy for **Azure Virtual Machines for deployment** in the Azure portal so that the Cloud Services (extended support) resource can retrieve the certificates that are stored as secrets in the key vault. The key vault must be in the same region and subscription as your Cloud Services (extended support) resource and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
+ ```json "osProfile": { "secrets": [
This tutorial explains how to create a Cloud Service (extended support) deployme
``` > [!NOTE]
- > SourceVault is the ARM Resource ID to your key vault. You can find this information by locating the Resource ID in the properties section of your key vault.
- > - certificateUrl can be found by navigating to the certificate in the key vault labeled as **Secret Identifier**.ΓÇ»
- > - certificateUrl should be of the form https://{keyvault-endpoin}/secrets/{secretname}/{secret-id}
+ > `sourceVault` in the ARM template is the value of the resource ID for your key vault. You can get this information by finding **Resource ID** in the **Properties** section of your key vault.
+ > - You can get the value for `certificateUrl` by going to the certificate in the key vault that's labeled **Secret Identifier**.
+ > - `certificateUrl` should be of the form `https://{keyvault-endpoint}/secrets/{secret-name}/{secret-id}`.
-6. Create a Role Profile. Ensure that the number of roles, role names, number of instances in each role and sizes are the same across the Service Configuration (.cscfg), Service Definition (.csdef) and role profile section in ARM template.
-
+1. Create a role profile. Ensure that the number of roles, the number of instances in each role, role names, and role sizes are the same across the configuration (.cscfg) file, the definition (.csdef) file, and the `roleProfile` section in the ARM template.
+
```json "roleProfile": { "roles": {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-7. (Optional) Create an extension profile to add extensions to your cloud service. For this example, we are adding the remote desktop and Windows Azure diagnostics extension.
- > [!Note]
- > The password for remote desktop must be between 8-123 characters long and must satisfy at least 3 of password complexity requirements from the following: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed
+1. (Optional) Create an extension profile to add extensions to your Cloud Services (extended support) deployment. The following example adds the Remote Desktop Protocol (RDP) extension and the Azure Diagnostics extension.
+
+ > [!NOTE]
+ > The password for RDP must be 8 to 123 characters long and must satisfy at least *three* of the following password-complexity requirements:
+ >
+ > - Contains an uppercase character.
+ > - Contains a lowercase character.
+ > - Contains a numeric digit.
+ > - Contains a special character.
+ > - Does not contain a control character.
```json "extensionProfile": {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-8. Review the full template.
+1. Review the full template:
```json {
This tutorial explains how to create a Cloud Service (extended support) deployme
"packageSasUri": { "type": "securestring", "metadata": {
- "description": "SAS Uri of the CSPKG file to deploy"
+ "description": "SAS URI of the package (.cspkg) file to deploy"
} }, "configurationSasUri": { "type": "securestring", "metadata": {
- "description": "SAS Uri of the service configuration (.cscfg)"
+ "description": "SAS URI of the configuration (.cscfg) file"
} }, "roles": {
This tutorial explains how to create a Cloud Service (extended support) deployme
"wadPublicConfig_WebRole1": { "type": "string", "metadata": {
- "description": "Public configuration of Windows Azure Diagnostics extension"
+ "description": "Public configuration of the Azure Diagnostics extension"
} }, "wadPrivateConfig_WebRole1": { "type": "securestring", "metadata": {
- "description": "Private configuration of Windows Azure Diagnostics extension"
+ "description": "Private configuration of the Azure Diagnostics extension"
} }, "vnetName": {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-9. Deploy the template and parameter file (defining parameters in template file) to create the Cloud Service (extended support) deployment. Please refer these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support) as required.
+1. Deploy the template and parameter file (to define parameters in the template file) to create the Cloud Services (extended support) deployment. You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support).
```powershell New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file" ```
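
   A minimal parameter file for this command might look like the following sketch, supplying the `packageSasUri` and `configurationSasUri` parameters defined in the template. The angle-bracket values are placeholders.

   ```json
   {
     "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
     "contentVersion": "1.0.0.0",
     "parameters": {
       "packageSasUri": {
         "value": "<SAS URI of the package (.cspkg) file>"
       },
       "configurationSasUri": {
         "value": "<SAS URI of the configuration (.cscfg) file>"
       }
     }
   }
   ```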
-## Next steps
+## Related content
- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [Azure PowerShell](deploy-powershell.md), or [Visual Studio](deploy-visual-studio.md).
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support).
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
These are top scenarios involving combinations of resources, features and Cloud
| Migration of deployments containing both production and staging slot deployment using Reserved IP addresses | Not supported. | | Migration of production and staging deployment in different virtual network|Migration of a two slot cloud service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent cloud service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with swappable property enabled. Deployments files of the old staging slot deployment can be reused to create this new swappable deployment. | | Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. |
-| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).|
+| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).|
| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. | Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)| | Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. |
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Last updated 2/08/2021
-# Post migration changes
-The Cloud Services (classic) deployment is converted to a Cloud Service (extended support) deployment. For more information, see [Cloud Services (extended support) documentation](deploy-prerequisite.md).
+# Post-migration changes
+
+The Cloud Services (classic) deployment is converted to a Cloud Services (extended support) deployment. For more information, see [Cloud Services (extended support) documentation](deploy-prerequisite.md).
## Changes to deployment files
Minor changes are made to customer's .csdef and .cscfg file to make the deploy
- Virtual Network uses full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name. -- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates)
+- Classic sizes like Small, Large, and ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in the .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-definition-file-updates).
- Use the Get API to get the latest copy of the deployment files. - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [REST API](/rest/api/resources/resourcegroups/exporttemplate)
Customers need to update their tooling and automation to start using the new API
- As part of migration, the names of a few resources, such as the Cloud Service and public IP addresses, change. These changes might need to be reflected in deployment files before you update the Cloud Service. [Learn more about the names of resources changing](in-place-migration-technical-details.md#translation-of-resources-and-naming-convention-post-migration). - Recreate rules and policies required to manage and scale cloud services
- - [Auto Scale rules](configure-scaling.md) are not migrated. After migration, recreate the auto scale rules.
- - [Alerts](enable-alerts.md) are not migrated. After migration, recreate the alerts.
+ - [Auto Scale rules](configure-scaling.md) aren't migrated. After migration, recreate the auto scale rules.
+ - [Alerts](enable-alerts.md) aren't migrated. After migration, recreate the alerts.
- The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates are visible under **Settings** on the **Secrets** tab.
Customers need to update their tooling and automation to start using the new API
As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, PowerShell or REST API.
-Currently, Azure Portal does a validation for you to check if all the required Certificates are uploaded in certificate store in Key Vault and warns if a certificate is not found. However, if you are planning to use Certificates as secrets, then these certificates cannot be validated for their thumbprint and any update operation which involves addition of secrets would fail via Portal. Customers are reccomended to use PowerShell or RestAPI to continue updates involving Secrets.
+Currently, the Azure portal validates that all the required certificates are uploaded to the certificate store in Key Vault and warns if a certificate isn't found. However, if you're planning to use certificates as secrets, these certificates can't be validated for their thumbprint, and any update operation that involves adding secrets fails via the portal. We recommend using PowerShell or the REST API for updates that involve secrets.
## Changes for Update via Visual Studio
-If you were publishing updates via Visual Studio directly, then you would need to first download the latest CSCFG file from your deployment post migration. Use this file as reference to add Network Configuration details to your current CSCFG file in Visual Studio project. Then build the solution and publish it. You may have to choose the Key Vault and Resource Group for this update.
+If you were publishing updates directly via Visual Studio, first download the latest .cscfg file from your deployment after migration. Use this file as a reference to add the network configuration details to the current .cscfg file in your Visual Studio project. Then build the solution and publish it. You might have to choose the key vault and resource group for this update.
## Next steps
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 Previously updated : 06/06/2024 Last updated : 06/28/2024
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **June 27, 2024**
+The June Guest OS has released.
+ ###### **June 1, 2024** The May Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.42_202406-01 | June 27, 2024 | Post 7.45 |
| WA-GUEST-OS-7.41_202405-01 | June 1, 2024 | Post 7.44 | | WA-GUEST-OS-7.40_202404-01 | April 19, 2024 | Post 7.43 |
-| WA-GUEST-OS-7.39_202403-02 | April 9, 2024 | Post 7.42 |
+|~~WA-GUEST-OS-7.39_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-7.38_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-7.37_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-7.36_202312-01~~| January 16, 2024 | April 9, 2024 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.72_202406-01 | June 27, 2024 | Post 6.75 |
| WA-GUEST-OS-6.71_202405-01 | June 1, 2024 | Post 6.74 | | WA-GUEST-OS-6.70_202404-01 | April 19, 2024 | Post 6.73 |
-| WA-GUEST-OS-6.69_202403-02 | April 9, 2024 | Post 6.72 |
+|~~WA-GUEST-OS-6.69_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-6.68_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-6.67_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-6.66_202312-01~~| January 16, 2024 | April 9, 2024 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.96_202406-01 | June 27, 2024 | Post 5.99 |
| WA-GUEST-OS-5.95_202405-01 | June 1, 2024 | Post 5.98 | | WA-GUEST-OS-5.94_202404-01 | April 19, 2024 | Post 5.97 |
-| WA-GUEST-OS-5.93_202403-02 | April 9, 2024 | Post 5.96 |
+|~~WA-GUEST-OS-5.93_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-5.92_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-5.91_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-5.90_202312-01~~| January 16, 2024 | April 9, 2024 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.132_202406-01 | June 27, 2024 | Post 4.135 |
| WA-GUEST-OS-4.131_202405-01 | June 1, 2024 | Post 4.134 | | WA-GUEST-OS-4.130_202404-01 | April 19, 2024 | Post 4.133 |
-| WA-GUEST-OS-4.129_202403-02 | April 9, 2024 | Post 4.132 |
+|~~WA-GUEST-OS-4.129_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-4.128_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-4.127_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-4.126_202312-01~~| January 16, 2024 | April 9, 2024 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.140_202406-01 | June 27, 2024 | Post 3.143 |
| WA-GUEST-OS-3.139_202405-01 | June 1, 2024 | Post 3.142 | | WA-GUEST-OS-3.138_202404-01 | April 19, 2024 | Post 3.141 |
-| WA-GUEST-OS-3.137_202403-02 | April 9, 2024 | Post 3.140 |
+|~~WA-GUEST-OS-3.137_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-3.136_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-3.135_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-3.134_202312-01~~| January 16, 2024 | April 9, 2024 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.152_202406-01 | June 27, 2024 | Post 2.155 |
| WA-GUEST-OS-2.151_202405-01 | June 1, 2024 | Post 2.154 | | WA-GUEST-OS-2.150_202404-01 | April 19, 2024 | Post 2.153 |
-| WA-GUEST-OS-2.149_202403-02 | April 9, 2024 | Post 2.152 |
+|~~WA-GUEST-OS-2.149_202403-02~~| April 9, 2024 | June 27, 2024 |
|~~WA-GUEST-OS-2.148_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-2.147_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-2.146_202312-01~~| January 16, 2024 | April 9, 2024 |
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
Title: Incoming call concepts description: Learn about Azure Communication Services IncomingCall notification-+ Last updated 09/26/2022-+ # Incoming call concepts
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
Tenant configurations are organization-wide settings that impact everyone in the
|Setting name | Description| Tenant configuration |Property | |--|--|--|--|
-|Enable federation with Azure Communication Services| If enabled, Azure Communication Services users can join Teams meeting as Communication Services users even if Teams anonymous users are not allowed| [CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| EnableAcsUsers|
-|List federated Azure Communication Services resources | Users from listed Azure Communication Services resources can join Teams meeting if Teams anonymous users are not allowed to join. |[CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| AllowedAcsResources |
|[Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | If disabled, Teams external users can't join Teams meetings. | [CsTeamsMeetingConfiguration](/PowerShell/module/skype/set-csteamsmeetingconfiguration) | DisableAnonymousJoin | Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Azure Communication Services provides a concept of a room for developers who are
Here are the main scenarios where rooms are useful: - **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.-- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call.
+- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This allows only a subset of users with assigned Communication Services identities to join a room call.
- **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference. - **Add PSTN participants.** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC).
Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in
Use the [Calling SDKs](../voice-video-calling/calling-sdk-features.md) to join the room call. Room calls can be joined using the Web, iOS or Android Calling SDKs. You can find quick start samples for joining room calls [here](../../quickstarts/rooms/join-rooms-call.md).
-Rooms can also be accessed using the [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/docs/rooms--page). The UI Library enables developers to add a call client that is Rooms enabled into their application with only a couple lines of code.
+Rooms can also be accessed using the [Azure Communication Services UI Library](../../concepts/ui-library/ui-library-overview.md). The UI Library enables developers to add a call client that is Rooms enabled into their application with only a couple lines of code.
## Predefined participant roles and permissions
communication-services Classification Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/classification-concepts.md
Title: Job classification concepts for Azure Communication Services description: Learn about the Azure Communication Services Job Router classification concepts.-+ -+ Last updated 10/14/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
Title: Job Router overview for Azure Communication Services description: Learn about the Azure Communication Services Job Router.-+ -+ Last updated 10/14/2021
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md
Title: Job Router rule engines description: Learn about the Azure Communication Services Job Router rules engine concepts.-+ --
+
+ Last updated 10/14/2021
communication-services Escalate Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/escalate-job.md
Title: Escalate a Job in Job Router description: Use Azure Communication Services SDKs to escalate a Job--++ Last updated 10/14/2021
communication-services Job Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md
Title: Classify a Job description: Use Azure Communication Services SDKs to change the properties of a job--++ Last updated 10/14/2021
communication-services Manage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/manage-queue.md
Title: Manage a queue in Job Router description: Use Azure Communication Services SDKs to manage the behavior of a queue--++ Last updated 10/14/2021
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
Title: Subscribe to events in Job Router description: Use Azure Communication Services SDKs to subscribe to Job Router events from Event Grid--++ Last updated 10/14/2021
communication-services Theming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md
The Azure Communication Services UI Library is a set of components, icons, and c
In this article, you learn how to change the theme for UI Library components as you configure an application.
-The UI Library is fully documented for developers on a separate site. The documentation is interactive and helps you understand how the APIs work by giving you the ability to try them directly from a webpage. For more information, see the [UI Library documentation](https://azure.github.io/communication-ui-library/?path=/docs/overview--page).
+The UI Library is fully documented for developers on a separate site. The documentation is interactive and helps you understand how the APIs work by giving you the ability to try them directly from a webpage. For more information, see the [UI Library documentation](../../concepts/ui-library/ui-library-overview.md).
## Prerequisites
communication-services Get Started Router https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/router/get-started-router.md
Title: Quickstart - Submit a Job for queuing and routing description: In this quickstart, you'll learn how to create a Job Router client, Distribution Policy, Queue, and Job within your Azure Communication Services resource.-+ -+ Last updated 10/18/2021
communication-services Understanding Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/general-troubleshooting-strategies/understanding-error-codes.md
There are different explanations for why a call ended. Here are the meanings of
||||--|--| | 0 | 0 | Call ended successfully by local participant. | Success | | | 0 | 487 | Call ended successfully as caller canceled the call. | Success | |
-| 0 | 603 | Call ended successfully as it was declined from callee. | Success | |
+| 0 | 603 | Call ended successfully as it was declined by the callee. | Success | |
+| 3100 | 410 | Call setup failed due to an unexpected network problem on the client. Check the client's network and retry. | UnexpectedClientError | - Ensure that you're using the latest SDK in a supported environment.<br> |
+| 3101 | 410 | Call dropped due to an unexpected network problem on the client. Check the client's network and retry. | UnexpectedClientError | |
+| 3112 | 410 | Call setup failed due to a network configuration problem on the client side. Check the client's network configuration and retry. | ExpectedError | |
| 4097 | 0 | Call ended for all users by the meeting organizer. | Success | |
-| 4507 | 495 | Call ended as application didn't provide valid Azure Communication Services token. | UnexpectedClientError |- Ensure that your application implements token refresh mechanism correctly. |
+| 4507 | 495 | Call ended as application didn't provide a valid Azure Communication Services token. | UnexpectedClientError |- Ensure that your application implements token refresh mechanism correctly. |
+| 4521 | 0 | Call ended because the user disconnected from the call abruptly. This might be the result of the user closing the application that hosted the call, for example, terminating the application or closing the browser or browser tab without a proper hang-up. | ExpectedError | |
| 5000 | 0 | Call ended for this participant as it was removed from the conversation by another participant. | Success | | | 5003 | 0 | Call ended successfully, as all callee endpoints declined the call. | Success | | | 5300 | 0 | Call ended for this participant as it was removed from the conversation by another participant. | Success | | | 7000 | 0 | Call ended by Azure Communication Services platform. | Success | | | 10003 | 487 | Call was accepted elsewhere, by another endpoint of this user. | Success | | | 10004 | 487 | Call was canceled on timeout, no callee endpoint accepted on time. Ensure that user saw the notification and try to initiate that call again. | ExpectedError | |
-| 10024 | 487 | Call ended successfully as it was declined by all callee endpoint. | Success | - Try to place the call again. |
+| 10024 | 487 | Call ended successfully as it was declined by all callee endpoints. | Success | - Try to place the call again. |
+| 10057 | 408 | Call failed because the callee failed to finalize the call setup, most likely because the callee lost network connectivity or terminated the application abruptly. Ensure clients are connected and available. | ExpectedError | |
| 301005 | 410 | Participant was removed from the call by the Azure Communication Services infrastructure due to loss of media connectivity with Azure Communication Services infrastructure. This usually happens if the participant leaves the call abruptly or loses network connectivity. If the participant wants to continue the call, they should reconnect. | UnexpectedClientError | - Ensure that you're using the latest SDK in a supported environment.<br> | | 510403 | 403 | Call ended, as it has been marked as spam and got blocked. | ExpectedError | - Ensure that your Communication Services token is valid and not expired.<br> - Ensure to pass in AlternateId in the call options.<br> | | 540487 | 487 | Call ended successfully as caller canceled the call. | Success | | | 560000 | 0 | Call ended successfully by remote PSTN participant. | Success |Possible causes:<br> - User ended the call.<br> - Call was ended by media agent.<br> |
-| 560486 | 486 | Call ended because remote PSTN participant was busy. The number called was already in a call or having technical iss
+| 560486 | 486 | Call ended because the remote PSTN participant was busy. The number called was already in a call or having technical issues. | Success | - For Direct Routing calls, check your Session Border Controller logs, settings, and timeout configuration.<br> Possible causes: <br> - The number called was already in a call or having technical issues.<br> |
## Azure Communication Services Calling SDK client error codes and subcodes For client errors, if the resultCategories property is `ExpectedError`, the error is expected from the SDK's perspective. Such errors are commonly encountered in precondition failures, such as incorrect arguments passed by the app, or when the current system state doesn't allow the API call. The application should check the error reason and the logic for invoking API.
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Title: Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services
-description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform.
+description: Learn how to use Microsoft Teams, Graph, and Azure Communication Services to build a custom event management platform.
The goal of this document is to reduce the time it takes for Event Management Pl
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars, and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
To get started, event organizers must schedule and configure the event. This pro
### 2. Attendee experience
-For event attendees, they are presented with an experience that enables them to attend, participate, and engage with an eventΓÇÖs content. This experience might include capabilities like watching content, sharing their camera stream, asking questions, responding to polls, and more. Microsoft provides two options for attendees to consume events powered by Teams and Azure Communication
+Event attendees are presented with an experience that enables them to attend, participate, and engage with an event's content. This experience might include capabilities like watching content, sharing their camera stream, asking questions, responding to polls, and more. Microsoft provides two options for attendees to consume events powered by Teams and Azure Communication
- Teams Client (Web or Desktop): Attendees can directly join events using a Teams Client by using a provided join link. They get access to the full Teams experience.
Event hosts and organizers require the ability to present content, manage attend
## Building a custom solution for event management with Azure Communication Services and Microsoft Graph
-Throughout the rest of this tutorial, we will focus on how using Azure Communication Services and Microsoft Graph to build a custom event management platform. We will be using the sample architecture below. Based on that architecture we will be focusing on setting up scheduling and registration flows and embedding the attendee experience right on the event platform to join the event.
+Throughout the rest of this tutorial, we'll focus on how to use Azure Communication Services and Microsoft Graph to build a custom event management platform. We'll be using the following sample architecture. Based on that architecture, we'll focus on setting up scheduling and registration flows and embedding the attendee experience for joining the event directly in the event platform.
:::image type="content" source="./media/event-management-platform-architecture.svg" alt-text="Diagram showing sample architecture for event management platform":::
Microsoft Graph enables event management platforms to empower organizers to sche
1. Create an account that will own the meetings and is branded appropriately. This is the account that creates the events and receives notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
- 2. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](/entra/identity-platform/access-tokens). and [refresh tokens](/entra/identity-platform/refresh-tokens).
+ 2. As part of the application setup, the service account is used to sign in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](/entra/identity-platform/access-tokens) and [refresh tokens](/entra/identity-platform/refresh-tokens).
3. The application will require "on behalf of" permissions with the [offline scope](/entra/identity-platform/permissions-consent-overview#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
Through Azure Communication Services, developers can use SMS and Email capabilit
>[!NOTE] > Limitations when using Azure Communication Services as part of a Teams Webinar experience. Please visit our [documentation for more details.](../concepts/join-teams-meeting.md#limitations-and-known-issues)
-Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs which support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios.
+The attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs that support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library, which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios.
1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta&preserve-view=true). Alternatively, it can be [requested for a given meeting](/graph/api/onlinemeeting-get?tabs=http&view=graph-rest-beta&preserve-view=true).
Attendee experience can be directly embedded into an application or platform usi
3. Once a resource is created, developers must [generate access tokens](../quickstarts/identity/access-tokens.md?pivots=programming-language-javascript&preserve-view=true) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](../concepts/client-and-server-architecture.md).
-4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below:
+4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](../concepts/ui-library/ui-library-overview.md) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below:
|Headless SDKs | UI Library | |-||
-| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript&preserve-view=true) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript&preserve-view=true) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.|
+| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript&preserve-view=true) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript&preserve-view=true) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](../concepts/ui-library/ui-library-overview.md) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](../concepts/ui-library/ui-library-use-cases.md) to build a custom Teams interop experience.|
>[!NOTE]
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
To get a token for a resource, make an HTTP `GET` request to the endpoint, inclu
-## Use managed identity for scale rules
+## <a name="scale-rules"></a>Use managed identity for scale rules
-Starting in API version `2024-02-02-preview`, you can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity in your scale rule, use the `identity` property instead of the `auth` property in your scale rule. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity, or `system` to use a system-assigned identity
+You can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity in your scale rule, use the `identity` property instead of the `auth` property in your scale rule. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity, or `system` to use a system-assigned identity.
-The following example shows how to use a managed identities with an Azure Queue Storage scale rule. The queue storage account uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You do not need to use the `auth` property.
+> [!NOTE]
+> Managed identity authentication in scale rules is in public preview. It's available in API version `2024-02-02-preview`.
+
+The following ARM template example shows how to use a managed identity with an Azure Queue Storage scale rule.
+
+The scale rule uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You do not need to use the `auth` property.
```json "scale": {
The following example shows how to use a managed identities with an Azure Queue
}] } ```
+To learn more about using managed identity with scale rules, see [Set scaling rules in Azure Container Apps](scale-app.md?pivots=azure-portal#authentication-2).
## Control managed identity availability
-Container Apps allow you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. However, in some situations only the init container or the main container require access tokens for a managed identity. Other times, you may use a managed identity only to access your Azure Container Registry to pull the container image, and your application itself doesn't need to have access to your Azure Container Registry.
+Container Apps allows you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. However, in some situations, only the init container or the main container requires access tokens for a managed identity. Other times, you might use a managed identity only to access your Azure Container Registry to pull the container image, and your application itself doesn't need to have access to your Azure Container Registry.
Starting in API version `2024-02-02-preview`, you can control which managed identities are available to your container app during the init and main phases to follow the security principle of least privilege. The following options are available: -- `Init`: available only to init containers. Use this when you want to perform some intilization work that requires a managed identity, but you no longer need the managed identity in the main container. This option is currently only supported in [workload profile consumption environments](environment.md#types)-- `Main`: available only to main containers. Use this if your init container does not need managed identity.-- `All`: available to all containers. This is the default setting.-- `None`: not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers.
+- `Init`: Available only to init containers. Use this when you want to perform some initialization work that requires a managed identity, but you no longer need the managed identity in the main container. This option is currently only supported in [workload profile consumption environments](environment.md#types).
+- `Main`: Available only to main containers. Use this if your init container does not need managed identity.
+- `All`: Available to all containers. This value is the default setting.
+- `None`: Not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers.
-The following example shows how to configure a container app on a workload profile consumption environment that:
+The following ARM template example shows how to configure a container app on a workload profile consumption environment that:
- Restricts the container app's system-assigned identity to main containers only. - Restricts a specific user-assigned identity to init containers only.
This approach limits the resources that can be accessed if a malicious actor wer
"identitySettings":[ { "identity": "ACR_IMAGEPULL_IDENTITY_RESOURCE_ID",
- "lifecycle": "none"
+ "lifecycle": "None"
}, { "identity": "<IDENTITY1_RESOURCE_ID>",
- "lifecycle": "init"
+ "lifecycle": "Init"
}, { "identity": "system",
- "lifecycle": "main"
+ "lifecycle": "Main"
}] }, "template": {
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
The following example demonstrates how to create a custom scale rule.
This example shows how to convert an [Azure Service Bus scaler](https://keda.sh/docs/latest/scalers/azure-service-bus/) to a Container Apps scale rule, but you use the same process for any other [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/)-based [KEDA scaler](https://keda.sh/docs/latest/scalers/) specification.
-For authentication, KEDA scaler authentication parameters convert into [Container Apps secrets](manage-secrets.md).
+For authentication, KEDA scaler authentication parameters map to [Container Apps secrets](manage-secrets.md) or [managed identity](managed-identity.md#scale-rules).
::: zone pivot="azure-resource-manager"
First, you define the type and metadata of the scale rule.
### Authentication
-A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule.
+Container Apps scale rules support secrets-based authentication. Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app.
-> [!NOTE]
-> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
+#### Use secrets
+
+To use secrets for authentication, create a secret in the container app's `secrets` array, and then reference it by name in the `auth` array of the scale rule.
+
+KEDA scalers can use secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule.
1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification.
A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s
Refer to the [considerations section](#considerations) for more security related information.
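
As a sketch of the end result, an Azure Service Bus scale rule that authenticates with a connection-string secret might look like this in an ARM template. The secret name `service-bus-connection-string`, the rule name, and the metadata values are assumed examples; the secret must exist in the container app's `secrets` array, and `connection` is the KEDA trigger parameter being mapped.

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 5,
  "rules": [
    {
      "name": "service-bus-rule",
      "custom": {
        "type": "azure-servicebus",
        "metadata": {
          "queueName": "my-queue",
          "namespace": "my-servicebus-namespace",
          "messageCount": "5"
        },
        "auth": [
          {
            "secretRef": "service-bus-connection-string",
            "triggerParameter": "connection"
          }
        ]
      }
    }
  ]
}
```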
+#### Use managed identity
+
+Container Apps scale rules can use managed identity to authenticate with Azure services. The following ARM template example uses the system-assigned managed identity to authenticate for an Azure Queue Storage scaler.
+
+```json
+"scale": {
+ "minReplicas": 0,
+ "maxReplicas": 4,
+ "rules": [
+ {
+ "name": "azure-queue",
+ "custom": {
+ "type": "azure-queue",
+ "metadata": {
+ "accountName": "apptest123",
+ "queueName": "queue1",
+ "queueLength": "1"
+ },
+ "identity": "system"
+ }
+ }
+ ]
+}
+```
+
+To learn more about using managed identity with scale rules, see [Managed identity](managed-identity.md#scale-rules).
+ ::: zone-end ::: zone pivot="azure-cli"
A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s
### Authentication
-A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the authenticationRef property. You can map the TriggerAuthentication object to the Container Apps scale rule.
+Container Apps scale rules support secrets-based authentication. Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app.
-> [!NOTE]
-> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
+#### Use secrets
+
+To configure secrets-based authentication for a Container Apps scale rule, you configure the secrets in the container app and reference them in the scale rule.
+
+A KEDA scaler supports secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) object that the `authenticationRef` property references. You can map the `TriggerAuthentication` object to the Container Apps scale rule.
1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification. Identify each `secretTargetRef` of the `TriggerAuthentication` object.
A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s
1. Create an authentication entry with the `--scale-rule-auth` parameter. If there are multiple entries, separate them with a space. :::code language="bash" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-cli.bash" highlight="8,14":::
+
+#### Use managed identity
+
+Container Apps scale rules can use managed identity to authenticate with Azure services. The following command creates a container app with a user-assigned managed identity and uses it to authenticate for an Azure Queue Storage scaler.
+
+```bash
+az containerapp create \
+ --resource-group <RESOURCE_GROUP> \
+ --name <APP_NAME> \
+ --environment <ENVIRONMENT_ID> \
+ --user-assigned <USER_ASSIGNED_IDENTITY_ID> \
+ --scale-rule-name azure-queue \
+ --scale-rule-type azure-queue \
+ --scale-rule-metadata "accountName=<AZURE_STORAGE_ACCOUNT_NAME>" "queueName=queue1" "queueLength=1" \
+ --scale-rule-identity <USER_ASSIGNED_IDENTITY_ID>
+```
+
+Replace placeholders with your values.
::: zone-end
A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s
### Authentication
-A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the authenticationRef property. You can map the TriggerAuthentication object to the Container Apps scale rule.
+Container Apps scale rules support secrets-based authentication. Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app.
-> [!NOTE]
-> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
+#### Use secrets
1. In your container app, create the [secrets](./manage-secrets.md) that you want to reference.
A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s
1. In the *Authentication* section, select **Add** to create an entry for each KEDA `secretTargetRef` parameter.
+#### Use managed identity
+
+Managed identity authentication isn't supported in the Azure portal. Use the [Azure CLI](scale-app.md?pivots=azure-cli#authentication) or [Azure Resource Manager](scale-app.md?pivots=azure-resource-manager#authentication) to authenticate using managed identity.
+ ::: zone-end ## Default scale rule
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md
This feature is available in the **Premium** container registry service tier. Fo
Agent pool tiers provide the following resources per instance in the pool.
-|Tier | Type | CPU |Memory (GB) |
-|||||
-|S1 | standard | 2 | 3 |
-|S2 | standard | 4 | 8 |
-|S3 | standard | 8 | 16 |
-|I6 | isolated | 64 | 216 |
+| Tier | Type | CPU | Memory (GB) |
+| - | -- | | -- |
+| S1 | standard | 2 | 3 |
+| S2 | standard | 4 | 8 |
+| S3 | standard | 8 | 16 |
+| I6 | isolated | 64 | 216 |
## Create and manage a task agent pool
az acr agentpool update \
Task agent pools require access to the following Azure services. The following firewall rules must be added to any existing network security groups or user-defined routes. | Direction | Protocol | Source | Source Port | Destination | Dest Port | Used |
-|--|-|-|-|-|--||
+| | -- | -- | -- | -- | | - |
| Outbound | TCP | VirtualNetwork | Any | AzureKeyVault | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | Storage | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | EventHub | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | AzureActiveDirectory | 443 | Default |
-| Outbound | TCP | VirtualNetwork | Any | AzureMonitor | 443 | Default |
+| Outbound | TCP | VirtualNetwork | Any | AzureMonitor | 443,12000 | Default |
> [!NOTE] > If your tasks require additional resources from the public internet, add the corresponding rules. For example, additional rules are needed to run a docker build task that pulls the base images from Docker Hub, or restores a NuGet package.
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
Unlike standalone large language models (LLMs) or rule-based software/hardware s
- [Planning](#reasoning-and-planning). AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities. - [Tool usage](#frameworks). Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling. - [Perception](#frameworks). AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.-- [Memory](#agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
+- [Memory](#ai-agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
> [!NOTE] > The usage of the term "memory" in the context of AI agents should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
For advanced and autonomous planning and execution workflows, [AutoGen](https://
> [!TIP] > See the implementation sample section at the end of this article for tutorial on building a simple multi-agent system using one of the popular frameworks and a unified agent memory system.
-### Agent memory system
+### AI agent memory system
The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management. However, this practice of using a complex web of standalone databases can hurt an AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
-**In-memory databases** are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
+#### In-memory databases
+In-memory databases are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
-**Relational databases** are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
+#### Relational databases
+Relational databases are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
-**Pure vector databases** tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer
+#### Pure vector databases
+Pure vector databases tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer
- no guarantee on reads & writes
- limited ingestion throughput
- low availability (below 99.9%, or annualized outage of almost 9 hours or more)
However, this practice of using a complex web of standalone databases can hurt A
The next section dives deeper into what makes a robust AI agent memory system.
-## Memory can make or break AI agents
+## Memory can make or break agents
Just as efficient database management systems are critical to software applications' performance, it is critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing different kinds of information that the agents can retrieve at inference time.
Currently, LLM-powered applications often use [retrieval-augmented generation](v
For example, if the task is to write code, vector search may not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search may not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data.
-Weaving together [a web of standalone in-memory, relational, and vector databases](#agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+Weaving together [a web of standalone in-memory, relational, and vector databases](#ai-agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
Therefore, a robust memory system should have the following characteristics:
At the macro level, memory systems should enable multiple AI agents to collabora
Not only are memory systems critical to AI agents; they are also important for the humans who develop, maintain, and use these agents. For example, humans may need to supervise agents' planning and execution workflows in near real-time. While supervising, humans may interject with guidance or make in-line edits of agents' dialogues or monologues. Humans may also need to audit the reasoning and actions of agents to verify the validity of the final output. Human-agent interactions are likely in natural or programming languages, while agents "think," "learn," and "remember" through embeddings. This modality difference poses another requirement on memory systems: consistency across data modalities.
-## Infastructure for a robust memory system
+## Building a robust AI agent memory system
-The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#ai-agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB enjoy speed, scale, and simplicity.
The five available [consistency levels](consistency-levels.md) (from strong to e
This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a CruiseLine travel application.
-Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language, traditionally requiring coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-ai-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#infastructure-for-a-robust-memory-system). It operates within a Python FastAPI backend and support user interactions through a React JS user interface.
+Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language, traditionally requiring coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#building-a-robust-ai-agent-memory-system). It operates within a Python FastAPI backend and supports user interactions through a React JS user interface.
### Prerequisites
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.agents import AgentExecutor, create_openai_tools_agent
from service import TravelAgentTools as agent_tools
-load_dotenv(override=True)
+load_dotenv(override=False)
chat: ChatOpenAI | None = None
def LLM_init():
LLM_init()
```
-The **init.py** file commences by initiating the loading of environment variables from a **.env** file utilizing the ```load_dotenv(override=True)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI Chat object is instantiated using the GPT-3.5 model, incorporating specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI Travel Agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (MongoDBChatMessageHistory), enabling it to maintain a complete conversation history via Azure Cosmos DB.
+The **init.py** file begins by loading environment variables from a **.env** file via the ```load_dotenv(override=False)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI Chat object is instantiated using the GPT-3.5 model, incorporating specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI Travel Agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (MongoDBChatMessageHistory), enabling it to maintain a complete conversation history via Azure Cosmos DB.
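Because the excerpt above is abbreviated, here is a minimal self-contained sketch of the same wiring under stated assumptions: the tool, prompt text, and the `AZURE_COSMOS_CONNSTR` environment variable name are illustrative placeholders, not the sample's actual values.

```python
import os

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.chat_message_histories import MongoDBChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def find_cruises(destination: str) -> str:
    """Hypothetical stand-in for the sample's TravelAgentTools helpers."""
    return f"Found 2 cruises to {destination}."


tools = [find_cruises]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a travel agent for a cruise line."),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# GPT-3.5 chat model with explicit model name and temperature, as described above.
chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Chat object + tools list + prompt template combine into the executable agent.
agent_executor = AgentExecutor(
    agent=create_openai_tools_agent(chat, tools, prompt), tools=tools
)

# Each session's messages persist in Azure Cosmos DB through its MongoDB-compatible
# endpoint; the AZURE_COSMOS_CONNSTR variable name is an assumption.
agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: MongoDBChatMessageHistory(
        connection_string=os.environ["AZURE_COSMOS_CONNSTR"],
        session_id=session_id,
    ),
    input_messages_key="input",
    history_messages_key="chat_history",
)
```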
#### Prompt
from model.prompt import PromptResponse
import time
from dotenv import load_dotenv
-load_dotenv(override=True)
+load_dotenv(override=False)
def agent_chat(input: str, session_id: str) -> str:
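The `agent_chat` function is truncated in this excerpt. Under the same assumptions as the earlier sketch, its body typically reduces to invoking `agent_with_chat_history` and routing the session ID through the `configurable` field, which is how `RunnableWithMessageHistory` selects the correct conversation history in Azure Cosmos DB:

```python
def agent_chat(input: str, session_id: str) -> str:
    # The session_id keys into the MongoDBChatMessageHistory factory, so each
    # traveler's conversation is loaded from and saved to Azure Cosmos DB.
    result = agent_with_chat_history.invoke(
        {"input": input},
        config={"configurable": {"session_id": session_id}},
    )
    return result["output"]
```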
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Today's applications are required to be highly responsive and always online. The
The surge of AI-powered applications created another layer of complexity, because many of these applications integrate a multitude of data stores. For example, some organizations built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications.
-Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like AI, digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
+Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like [AI agents](ai-agents.md), digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
## An AI database providing industry-leading capabilities...
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter
## Example using an IVF Index
+Inverted File (IVF) Indexing is a method that organizes vectors into clusters. During a vector search, the query vector is first compared against the centers of these clusters. The search is then conducted within the cluster whose center is closest to the query vector.
+
+The `numLists` parameter determines the number of clusters to be created. A single cluster implies that the search is conducted against all vectors in the database, akin to a brute-force or kNN search. This setting provides the highest accuracy but also the highest latency.
+
+Increasing the `numLists` value results in more clusters, each containing fewer vectors. For instance, if `numLists=2`, each cluster contains more vectors than if `numLists=3`, and so on. Fewer vectors per cluster speed up the search (lower latency, higher queries per second). However, this increases the likelihood of missing the most similar vector in your database to the query vector. This is due to the imperfect nature of clustering, where the search might focus on one cluster while the actual "closest" vector resides in a different cluster.
+
+The `nProbes` parameter controls the number of clusters to be searched. By default, it's set to 1, meaning it searches only the cluster with the center closest to the query vector. Increasing this value allows the search to cover more clusters, improving accuracy but also increasing latency (thus decreasing queries per second) as more clusters and vectors are being searched.
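Before the article's full examples below, here is a hedged sketch of where `numLists` and `nProbes` sit, using `pymongo` against an Azure Cosmos DB for MongoDB vCore collection; the collection name, vector field, and dimension count are assumptions:

```python
from pymongo import MongoClient

client = MongoClient("<your-vcore-connection-string>")  # placeholder
db = client["exampleDb"]

# numLists controls how many IVF clusters the index builds (the trade-off above).
db.command({
    "createIndexes": "exampleCollection",
    "indexes": [
        {
            "name": "vectorSearchIndex",
            "key": {"contentVector": "cosmosSearch"},
            "cosmosSearchOptions": {
                "kind": "vector-ivf",
                "numLists": 3,
                "similarity": "COS",
                "dimensions": 1536,
            },
        }
    ],
})

# nProbes controls how many clusters each query searches:
# higher values improve recall at the cost of latency.
results = db["exampleCollection"].aggregate(
    [
        {
            "$search": {
                "cosmosSearch": {
                    "vector": [0.1] * 1536,  # stand-in query embedding
                    "path": "contentVector",
                    "k": 5,
                    "nProbes": 2,
                },
                "returnStoredSource": True,
            }
        }
    ]
)
for doc in results:
    print(doc)
```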
+
The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration.

### Create a vector index
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
This article walks you through how to create your account, limits, and upgrading
If you decide that Azure Cosmos DB is right for you, you can receive up to 63% discount on [Azure Cosmos DB prices through Reserved Capacity](reserved-capacity.md).
+<br>
+
+> [!VIDEO https://www.youtube.com/embed/7EFcxFGRB5Y?si=e7BiJ-JGK7WH79NG]
+
## Limits to free account

### [NoSQL / Cassandra/ Gremlin / Table](#tab/nosql+cassandra+gremlin+table)
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
There are two common types of vector database implementations - pure vector data
A pure vector database is designed to efficiently store and manage vector embeddings, along with a small amount of metadata; it is separate from the data source from which the embeddings are derived.
-A vector database that is integrated in a highly performant NoSQL or relational database provides additional capabilities. The integrated vector database in a NoSQL or relational database can store, index, and query embeddings alongside the corresponding original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, keeping the vector embeddings and original data together better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance.
+A vector database that is integrated in a highly performant NoSQL or relational database provides additional capabilities. The integrated vector database in a NoSQL or relational database can store, index, and query embeddings alongside the corresponding original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, keeping the vector embeddings and original data together better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance. A highly performant database with schema flexibility and integrated vector database is especially optimal for [AI agents](ai-agents.md).
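To make "embeddings alongside the original data" concrete, here is a minimal sketch: a single document in Azure Cosmos DB for MongoDB vCore carries both the operational fields and the vector field that the integrated index searches. The names and the `embed()` helper are assumptions for illustration:

```python
from pymongo import MongoClient


def embed(text: str) -> list[float]:
    # Stand-in for a real embedding call (for example, an Azure OpenAI embedding model).
    return [0.0] * 1536


client = MongoClient("<your-vcore-connection-string>")  # placeholder
collection = client["travel"]["cruises"]

description = "Seven-night Alaska cruise departing from Seattle."
collection.insert_one({
    "name": "Alaska Glacier Route",
    "description": description,
    # The embedding lives in the same document as its source text, so there is
    # no separate vector store to keep in sync with the operational data.
    "descriptionVector": embed(description),
})
```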
### Vector database use cases
cost-management-billing Manage Billing Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-across-tenants.md
Previously updated : 03/21/2024 Last updated : 06/28/2024 # Manage billing across multiple tenants using associated billing tenants
-You can simplify billing management for your organization by creating multi-tenant billing relationships using associated billing tenants. A multi-tenant billing relationship lets you securely share your organization's billing account with other tenants, while maintaining control over your billing data. You can move subscriptions in different tenants and provide users in those tenants with access to your organization's billing account. This relationship lets users on those tenants do billing activities like viewing and downloading invoices or managing licenses.
+You can simplify billing management for your organization by creating multitenant billing relationships using associated billing tenants. A multitenant billing relationship lets you securely share your organization's billing account with other tenants, while maintaining control over your billing data. You can move subscriptions in different tenants and provide users in those tenants with access to your organization's billing account. This relationship lets users on those tenants do billing activities like viewing and downloading invoices or managing licenses.
## Understand tenant types
Primary billing tenant: The primary billing tenant is the tenant used when the b
Associated billing tenants: An associated billing tenant is a tenant that is linked to your primary billing tenant's billing account. You can move Microsoft 365 subscriptions to these tenants. You can also assign billing account roles to users in associated billing tenants.
-> [!IMPORTANT]
-> Adding associated billing tenants, moving subscriptions and assigning roles to users in associated billing tenants are only available for billing accounts of type Microsoft Customer Agreement that are created by working with a Microsoft sales representative. To learn more about types of billing accounts, see [Billing accounts and scopes in the Azure portal](view-all-accounts.md).
+## Prerequisites
+
+You must have a Microsoft Customer Agreement - enterprise billing account to use associated billing tenants. An enterprise billing account is a billing account that is created by working with a Microsoft sales representative.
+
+If you don't have one, you don't see the **Associated billing tenants** option in the Azure portal. You also can't move subscriptions to other tenants or assign roles to users in other tenants.
+
+To learn more about types of billing accounts, see [Billing accounts and scopes in the Azure portal](view-all-accounts.md).
+
## Access settings for associated billing tenants
Before assigning roles, make sure you [add a tenant as an associated billing ten
1. Select **Access control (IAM)** on the left side of the page.
1. On the Access control (IAM) page, select **Add** at the top of the page.

   :::image type="content" source="./media/manage-billing-across-tenants/access-management-add-role-assignment-button.png" alt-text="Screenshot showing access control page while assigning roles." lightbox="./media/manage-billing-across-tenants/access-management-add-role-assignment-button.png" :::
-1. In the Add role assignment pane, select a role, select the associated billing tenant from the tenant dropdown, then enter the email address of the users, groups or apps to whom you want to assign roles.
+1. In the Add role assignment pane, select a role, select the associated billing tenant from the tenant dropdown, then enter the email address of the users, groups, or apps to whom you want to assign roles.
1. Select **Add**.

   :::image type="content" source="./media/manage-billing-across-tenants/associated-tenants-add-role-assignment.png" alt-text="Screenshot showing saving a role assignment." lightbox="./media/manage-billing-across-tenants/associated-tenants-add-role-assignment.png" :::

1. The users receive an email with a link to review the role assignment request. After they accept the role, they have access to your billing account.
Choosing to assign roles to users from associated billing tenants might be the r
| Consideration |Associated billing tenants |Azure B2B |
||||
-|Security | The users that you invite to share your billing account will follow their tenant's security policies. | The users that you invite to share your billing account will follow your tenant's security policies. |
+|Security | The users that you invite to share your billing account follow their tenant's security policies. | The users that you invite to share your billing account follow your tenant's security policies. |
|Access | The users get access to your billing account in their own tenant and can manage billing and make purchases without switching tenants. | External guest identities are created for users in your tenant and these identities get access to your billing account. Users would have to switch tenants to manage billing and make purchases. |

## Move Microsoft 365 subscriptions to a billing tenant
cost-management-billing How To View Csp Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/how-to-view-csp-reservations.md
Previously updated : 03/21/2024 Last updated : 06/28/2024
Cloud Solution Providers can access reservations that are purchased for their customers. Use the following information to view reservations in the Azure portal.
-Roles assigned with Azure Lighthouse aren't supported by reservations. To view reservations, you need to be a global admin or an admin agent in the customer's tenant.
+Reservations don't support roles assigned with Azure Lighthouse. To view reservations, you need to be a global admin or an admin agent in the customer's tenant.
## View reservations
Roles assigned with Azure Lighthouse aren't supported by reservations. To view r
1. In the Azure portal, go to **Reservations**.

> [!NOTE]
-> Being a guest in the customer's tenant prevents you from viewing reservations. If you have guest access, you need to remove it from the tenant. Admin agent privilege doesn't override guest access.
+> Being a guest in the customer's tenant allows you to view reservations. However, guest access prevents you from refunding or exchanging reservations. To make changes to reservations, you must remove guest access from the tenant. Admin agent privilege doesn't override guest access.
- To remove your guest access in the Partner Center, navigate to **My Account** > **[Organizations](https://myaccount.microsoft.com/organizations)** and then select **Leave organization**. Alternatively, ask another user who can access the reservation to add your guest account to the reservation order.
-## Next steps
+## Related content
- [View Azure reservations](view-reservations.md)
cost-management-billing Review Enterprise Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
Previously updated : 03/08/2024 Last updated : 06/28/2024
You receive an Azure invoice when any of the following events occur during your
Your invoice shows Azure usage charges with costs associated to them first, followed by any marketplace charges. If you have a credit balance, it gets applied to Azure usage and your invoice shows Azure usage and marketplace usage without any cost, last in the list.
+If an invoice includes over 1,000 line items, it gets split into multiple invoices.
+
Compare your combined total amount shown in the Azure portal in **Usage & Charges** with your Azure invoice. The amounts in the **Total Charges** don't include tax.

1. Sign in to the [Azure portal](https://portal.azure.com).
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Previously updated : 06/26/2024 Last updated : 06/28/2024 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
Below is a list of URNs for some of the most commonly used images. If you just w
| Windows Desktop | Windows 10 20H2 Pro | 19042.928.2104091209 | MicrosoftWindowsDesktop:Windows-10:20h2-pro:19042.928.2104091209 |
| Ubuntu Server | Canonical Ubuntu Server 18.04 LTS | 18.04.202002180 | Canonical:UbuntuServer:18.04-LTS:18.04.202002180 |
| Ubuntu Server | Canonical Ubuntu Server 16.04 LTS | 16.04.202104160 | Canonical:UbuntuServer:16.04-LTS:16.04.202104160 |
-| CentOS | CentOS 8.1 | 8.1.2020062400 | OpenLogic:CentOS:8_1:8.1.2020062400 |
## Create a new managed disk from the Marketplace image
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Previously updated : 06/28/2022 Last updated : 06/28/2024 #Customer intent: As an IT admin, I need to understand how to install GPU extension on GPU virtual machines (VMs) on my Azure Stack Edge Pro device.
This article describes how to install GPU driver extension to install appropriat
Before you install GPU extension on the GPU VMs running on your device, make sure that:
-1. You have access to an Azure Stack Edge device on which you've deployed one or more GPU VMs. See how to [Deploy a GPU VM on your device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+1. You have access to an Azure Stack Edge device on which you deploy one or more GPU VMs. See how to [Deploy a GPU VM on your device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
- Make sure that the port enabled for the compute network on your device is connected to the internet and has access. The GPU drivers are downloaded over this internet connection. Here's an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes isn't deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.

  ![Screenshot of the Compute pane for an Azure Stack Edge device. Compute settings for Port 2 are highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension/enable-compute-network-1.png)
-1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory youΓÇÖll use as a working directory.
-1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You'll need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
+1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you use as a working directory.
+1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You must sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
## Edit parameters file
Here's a sample Ubuntu parameter file that was used in this article:
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscription image (RHEL BYOS), make sure that:

-- You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md).
+- You follow the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md).
- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM isn't properly registered, installation doesn't proceed as the VM isn't entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.
-- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).
+- You install the `vulkan-filesystem` package, which the installation script looks for during setup.
PS C:\WINDOWS\system32>
Extension execution output is logged to the following file. Refer to this file `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status` to track the status of installation.
-A successful install is indicated by a `message` as `Enable Extension` and `status` as `success`.
+A successful install displays a `message` with `Enable Extension` and `status` of `success`.
```powershell
"status": {
Follow these steps to verify the driver installation:
Administrator@VM1:~$
```
-2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you'll be able to run the utility and see the following output:
+2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you can run the utility and see the following output:
```powershell
Administrator@VM1:~$ nvidia-smi
defender-for-cloud Recommendations Reference Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md
DevOps recommendations don't affect your [secure score](secure-score-security-co
**Severity**: Medium
+### [(Preview) Azure DevOps repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/470742ea-324a-406c-b91f-fc1da6a27c0c)
+
+**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities.
+
+**Severity**: High
+
+### [(Preview) Azure DevOps repositories should not allow requestors to approve their own Pull Requests](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98b5895a-0ad8-4ed9-8c9d-d654f5bda816)
+
+**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities.
+
+**Severity**: High
### GitHub recommendations

### [GitHub repositories should have secret scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/b6ad173c-0cc6-4d44-b954-8217c8837a8e/showSecurityCenterCommandBar~/false)
DevOps recommendations don't affect your [secure score](secure-score-security-co
**Severity**: Medium
+### [(Preview) GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3)
+
+**Description**: For secrets used in GitHub Action workflows that are stored at the GitHub organization level, you can use access policies to control which repositories can use organization secrets. Organization-level secrets let you share secrets between multiple repositories, which reduces the need for creating duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on the repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only.
+
+**Severity**: High
### GitLab recommendations

### [GitLab projects should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/867001c3-2d01-4db7-b513-5cb97638f23d/showSecurityCenterCommandBar~/false)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update |
|--|--|
-| June 27 | [General Availability of Checkov IaC Scanning in Defender for Cloud](#general-availability-of-checkov-iac-scanning-in-defender-for-cloud) |
-| June 27 | [Four security incidents have been deprecated](#four-security-incidents-have-been-deprecated) |
+| June 28 | [New DevOps security recommendations](#new-devops-security-recommendations) |
+| June 27 | [General Availability of Checkov IaC Scanning in Defender for Cloud](#general-availability-of-checkov-iac-scanning-in-defender-for-cloud) |
+| June 27 | [Four security incidents have been deprecated](#four-security-incidents-have-been-deprecated) |
| June 24 | [Change in pricing for Defender for Containers in multicloud](#change-in-pricing-for-defender-for-containers-in-multicloud) |
| June 10 | [Copilot for Security in Defender for Cloud (Preview)](#copilot-for-security-in-defender-for-cloud-preview) |
+### New DevOps security recommendations
+
+June 28, 2024
+
+We're announcing DevOps security recommendations that improve the security posture of Azure DevOps and GitHub environments. If issues are found, these recommendations offer remediation steps.
+
+The following new recommendations are supported if you have connected Azure DevOps or GitHub to Microsoft Defender for Cloud. All recommendations are included in Foundational Cloud Security Posture Management.
+
+| Recommendation name | Description | Severity |
+|--|--|--|
+| [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/470742ea-324a-406c-b91f-fc1da6a27c0c) | To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. | High |
+| [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98b5895a-0ad8-4ed9-8c9d-d654f5bda816) | To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. | High |
+| [GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) | For secrets used in GitHub Action workflows that are stored at the GitHub organization level, you can use access policies to control which repositories can use organization secrets. Organization-level secrets let you share secrets between multiple repositories, which reduces the need for creating duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on the repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only. | High |
### General Availability of Checkov IaC Scanning in Defender for Cloud

June 27, 2024
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
To complete these steps, you need:
* To ensure that the credentials used to connect to the target SQL Managed Instance have the CONTROL DATABASE permission on the target SQL Managed Instance databases.

> [!IMPORTANT]
- > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md).
+ > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
## Create a resource group
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
This article provides a list of known issues and troubleshooting steps associate
- **Cause**: Before migrating data, you need to migrate the certificate of the source SQL Server instance from a database that is protected by Transparent Data Encryption (TDE) to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.

-- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](/azure/dms/tutorial-transparent-data-encryption-migration-ads).
+- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](tutorial-transparent-data-encryption-migration-ads.md).
- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3169 The database was backed up on a server running version %ls. That version is incompatible with this server, which is running version %ls. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.`
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
For information about specific migration scenarios and Azure SQL targets, see th
| Migration scenario | Migration mode |
|||
-SQL Server to Azure SQL Managed Instance| [Online](./tutorial-sql-server-managed-instance-online-ads.md) / [Offline](./tutorial-sql-server-managed-instance-offline-ads.md)
-SQL Server to SQL Server on an Azure virtual machine|[Online](./tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md)
-SQL Server to Azure SQL Database | [Offline](./tutorial-sql-server-azure-sql-database-offline.md)
+SQL Server to Azure SQL Managed Instance| [Online](/data-migration/sql-server/managed-instance/database-migration-service) / [Offline](/data-migration/sql-server/managed-instance/database-migration-service)
+SQL Server to SQL Server on an Azure virtual machine|[Online](/data-migration/sql-server/virtual-machines/database-migration-service) / [Offline](/data-migration/sql-server/virtual-machines/database-migration-service)
+SQL Server to Azure SQL Database | [Offline](/data-migration/sql-server/database/database-migration-service)
> [!IMPORTANT]
> If your target is Azure SQL Database, you can migrate both the database schema and data using Database Migration Service via the Azure portal. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio to deploy the database schema before you begin the data migration.
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
- Title: "Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio"-
-description: Learn how to use custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio.
-- Previously updated : 09/28/2022---
- - sql-migration-content
--
-# Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio
-
-This article explains how to set up a custom role in Azure for SQL Server database migrations. A custom role will have only the permissions that are required to create and run an instance of Azure Database Migration Service with Azure SQL Database as a target.
-
-Use the AssignableScopes section of the role definition JSON string to control where the permissions appear in the **Add role assignment** UI in the Azure portal. To avoid cluttering the UI with extra roles, you might want to define the role at the level of the resource group, or even the level of the resource. The resource that the custom role applies to doesn't perform the actual role assignment.
-
-```json
-{
- "properties": {
- "roleName": "DmsCustomRoleDemoForSqlDB",
- "description": "",
- "assignableScopes": [
- "/subscriptions/<SQLDatabaseSubscription>/resourceGroups/<SQLDatabaseResourceGroup>",
- "/subscriptions/<DatabaseMigrationServiceSubscription>/resourceGroups/<DatabaseMigrationServiceResourceGroup>"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Sql/servers/read",
- "Microsoft.Sql/servers/write",
- "Microsoft.Sql/servers/databases/read",
- "Microsoft.Sql/servers/databases/write",
- "Microsoft.Sql/servers/databases/delete",
- "Microsoft.DataMigration/locations/operationResults/read",
- "Microsoft.DataMigration/locations/operationStatuses/read",
- "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read",
- "Microsoft.DataMigration/databaseMigrations/write",
- "Microsoft.DataMigration/databaseMigrations/read",
- "Microsoft.DataMigration/databaseMigrations/delete",
- "Microsoft.DataMigration/databaseMigrations/cancel/action",
- "Microsoft.DataMigration/sqlMigrationServices/write",
- "Microsoft.DataMigration/sqlMigrationServices/delete",
- "Microsoft.DataMigration/sqlMigrationServices/read",
- "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read",
- "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
-}
-```
-
-You can use either the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
-
-For more information, see [Create custom roles by using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
-
-## Permissions required to migrate to Azure SQL Database
-
-| Permission action | Description |
-| - | --|
-| Microsoft.Sql/servers/read | Return the list of SQL database resources or get the properties for the specified SQL database. |
-| Microsoft.Sql/servers/write | Create a SQL database with the specified parameters or update the properties or tags for the specified SQL database. |
-| Microsoft.Sql/servers/databases/read | Get an existing SQL database. |
-| Microsoft.Sql/servers/databases/write | Create a new database or update an existing database. |
-| Microsoft.Sql/servers/databases/delete | Delete an existing SQL database. |
-| Microsoft.DataMigration/locations/operationResults/read | Get the results of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve service operation results. |
-| Microsoft.DataMigration/databaseMigrations/write | Create or update a database migration resource. |
-| Microsoft.DataMigration/databaseMigrations/read | Retrieve a database migration resource. |
-| Microsoft.DataMigration/databaseMigrations/delete | Delete a database migration resource. |
-| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
-| Microsoft.DataMigration/sqlMigrationServices/write | Create a new service or change the properties of an existing service. |
-| Microsoft.DataMigration/sqlMigrationServices/delete | Delete an existing service. |
-| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve the details of the migration service. |
-| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the list of authentication keys. |
-| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate authentication keys. |
-| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | Deregister the integration runtime node. |
-| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | List the monitoring data for all migrations. |
-| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
-| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the monitoring data. |
-
-## Assign a role
-
-To assign a role to a user or an app ID:
-
-1. In the Azure portal, go to the resource.
-
-1. In the left menu, select **Access control (IAM)**, and then scroll to find the custom roles you created.
-
-1. Select the roles to assign, select the user or app ID, and then save the changes.
-
- The user or app ID now appears on the **Role assignments** tab.
-
-## Next steps
--- Review the [migration guidance for your scenario](/data-migration/).
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
- Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS"-
-description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance migrations.
-- Previously updated : 05/02/2022---
- - sql-migration-content
--
-# Custom roles for SQL Server to Azure SQL Managed Instance migrations using ADS
-
-This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with SQL Managed Instance as a target.
-
-The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. This doesn't perform the actual role assignment.
-
-```json
-{
- "properties": {
- "roleName": "DmsCustomRoleDemoForMI",
- "description": "",
- "assignableScopes": [
- "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>",
- "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<managedInstanceRG>",
- "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listkeys/action",
- "Microsoft.Storage/storageAccounts/blobServices/read",
- "Microsoft.Storage/storageAccounts/blobServices/write",
- "Microsoft.Storage/storageAccounts/blobServices/containers/read",
- "Microsoft.Sql/managedInstances/read",
- "Microsoft.Sql/managedInstances/write",
- "Microsoft.Sql/managedInstances/databases/read",
- "Microsoft.Sql/managedInstances/databases/write",
- "Microsoft.Sql/managedInstances/databases/delete",
- "Microsoft.DataMigration/locations/operationResults/read",
- "Microsoft.DataMigration/locations/operationStatuses/read",
- "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read",
- "Microsoft.DataMigration/databaseMigrations/write",
- "Microsoft.DataMigration/databaseMigrations/read",
- "Microsoft.DataMigration/databaseMigrations/delete",
- "Microsoft.DataMigration/databaseMigrations/cancel/action",
- "Microsoft.DataMigration/databaseMigrations/cutover/action",
- "Microsoft.DataMigration/sqlMigrationServices/write",
- "Microsoft.DataMigration/sqlMigrationServices/delete",
- "Microsoft.DataMigration/sqlMigrationServices/read",
- "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read",
- "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
-}
-```
-You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure REST API to create the roles.
-
-For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
-
-## Description of permissions needed to migrate to Azure SQL Managed Instance
-
-| Permission Action | Description |
-| - | --|
-| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
-| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
-| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. |
-| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. |
-| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. |
-| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. |
-| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. |
-| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. |
-| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. |
-| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. |
-| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. |
-| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
-| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. |
-| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service |
-| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. |
-| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. |
-| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. |
-| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. |
-| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
-| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. |
-| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. |
-| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine. |
-
-## Role assignment
-
-To assign a role to users/APP ID, open the Azure portal, perform the following steps:
-
-1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
-
-2. Select the appropriate role, select the User or APP ID, and then save the changes.
-
- The user or APP ID(s) now appears listed on the **Role assignments** tab.
-
-## Next steps
-
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
We currently recommend creating a minimum of two custom roles for the APP ID, on
> [!NOTE]
> The last custom role requirement may eventually be removed, as new SQL Managed Instance code is deployed to Azure.
-**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md)).
+**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal)).
```json
{
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
- Title: "Custom roles: Online SQL Server to Azure Virtual Machines migrations with ADS"-
-description: Learn to use the custom roles for SQL Server to Azure VM's migrations.
-- Previously updated : 05/02/2022---
- - sql-migration-content
--
-# Custom roles for SQL Server to Azure Virtual Machines migrations using ADS
-
-This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with an Azure Virtual Machine as a target.
-
-The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. This doesn't perform the actual role assignment.
-
-```json
-{
- "properties": {
- "roleName": "DmsCustomRoleDemoForVM",
- "description": "",
- "assignableScopes": [
- "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>",
- "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<virtualMachineRG>",
- "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listkeys/action",
- "Microsoft.Storage/storageAccounts/blobServices/read",
- "Microsoft.Storage/storageAccounts/blobServices/write",
- "Microsoft.Storage/storageAccounts/blobServices/containers/read",
- "Microsoft.SqlVirtualMachine/sqlVirtualMachines/read",
- "Microsoft.SqlVirtualMachine/sqlVirtualMachines/write",
- "Microsoft.DataMigration/locations/operationResults/read",
- "Microsoft.DataMigration/locations/operationStatuses/read",
- "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read",
- "Microsoft.DataMigration/databaseMigrations/write",
- "Microsoft.DataMigration/databaseMigrations/read",
- "Microsoft.DataMigration/databaseMigrations/delete",
- "Microsoft.DataMigration/databaseMigrations/cancel/action",
- "Microsoft.DataMigration/databaseMigrations/cutover/action",
- "Microsoft.DataMigration/sqlMigrationServices/write",
- "Microsoft.DataMigration/sqlMigrationServices/delete",
- "Microsoft.DataMigration/sqlMigrationServices/read",
- "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action",
- "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action",
- "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read",
- "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
-}
-```
-You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
-
-For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
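For example, a minimal Azure CLI sketch that creates the role from the preceding JSON definition; the local file name here is illustrative:

```azurecli
# Create the custom role from the JSON definition above,
# saved locally as DmsCustomRoleDemoForVM.json (illustrative file name).
az role definition create --role-definition @DmsCustomRoleDemoForVM.json

# Confirm that the custom role now exists.
az role definition list --custom-role-only true --query "[?roleName=='DmsCustomRoleDemoForVM']"
```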
-
-## Description of permissions needed to migrate to a virtual machine
-
-| Permission Action | Description |
-| - | --|
-| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
-| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
-| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. |
-| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. |
-| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. |
-| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. |
-| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. |
-| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. |
-| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. |
-| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. |
-| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. |
-| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
-| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. |
-| Microsoft.DataMigration/sqlMigrationServices/write | Create a new service or change the properties of an existing service. |
-| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. |
-| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. |
-| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. |
-| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. |
-| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
-| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. |
-| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. |
-| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new SQL virtual machine or change the properties of an existing one. |
-
-## Role assignment
-
-To assign a role to users or an APP ID, open the Azure portal and perform the following steps:
-
-1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
-
-2. Select the appropriate role, select the User or APP ID, and then save the changes.
-
- The user or APP ID now appears on the **Role assignments** tab.
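If you script role assignments instead, a hedged Azure CLI equivalent of these portal steps might look like the following; the app ID and scope values are placeholders for your environment:

```azurecli
# Assign the custom role to the APP ID (service principal) at resource group scope.
# <appId>, <DMSSubscription>, and <dmsServiceRG> are placeholders.
az role assignment create \
    --assignee "<appId>" \
    --role "DmsCustomRoleDemoForVM" \
    --scope "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
```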
-
-## Next steps
-
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
Before you begin the tutorial:
| Migration scenario | Migration mode | | | |
- | SQL Server to Azure SQL Managed Instance | [Online](tutorial-sql-server-managed-instance-online-ads.md) / [Offline](tutorial-sql-server-managed-instance-offline-ads.md) |
- | SQL Server to SQL Server on an Azure virtual machine | [Online](tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md) |
+ | SQL Server to Azure SQL Managed Instance | [Online](/data-migration/sql-server/managed-instance/database-migration-service) / [Offline](/data-migration/sql-server/managed-instance/database-migration-service) |
+ | SQL Server to SQL Server on an Azure virtual machine | [Online](/data-migration/sql-server/virtual-machines/database-migration-service) / [Offline](/data-migration/sql-server/virtual-machines/database-migration-service) |
> [!IMPORTANT] > If you haven't completed the database migration and the login migration process is started, the migration of logins and server roles will still happen, but login/role mappings won't be performed correctly.
The following table describes the current status of the Login migration support
## Next steps - [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
-- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](./tutorial-sql-server-azure-sql-database-offline.md)
-- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](./tutorial-sql-server-managed-instance-online-ads.md)
-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)
+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/data-migration/sql-server/database/database-migration-service)
+- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/data-migration/sql-server/managed-instance/database-migration-service)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/data-migration/sql-server/virtual-machines/database-migration-service)
dms Tutorial Sql Server Azure Sql Database Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline.md
- Title: "Tutorial: Migrate SQL Server to Azure SQL Database (offline)"-
-description: Learn how to migrate on-premises SQL Server to Azure SQL Database offline by using Azure Database Migration Service.
--- Previously updated : 10/10/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to Azure SQL Database (offline)
-
-You can use Azure Database Migration Service via the Azure SQL Migration extension for Azure Data Studio, or the Azure portal, to migrate databases from an on-premises instance of SQL Server to Azure SQL Database (offline).
-
-In this tutorial, learn how to migrate the sample `AdventureWorks2019` database from an on-premises instance of SQL Server to an instance of Azure SQL Database by using Database Migration Service. This tutorial uses offline migration mode, which assumes acceptable downtime during the migration process.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
-> - Open the Migrate to Azure SQL wizard in Azure Data Studio
-> - Run an assessment of your source SQL Server databases
-> - Collect performance data from your source SQL Server instance
-> - Get a recommendation of the Azure SQL Database SKU that will work best for your workload
-> - Create an instance of Azure Database Migration Service
-> - Start your migration and monitor progress to completion
--
-> [!IMPORTANT]
-> Currently, *online* migrations for Azure SQL Database targets aren't available.
-
-## Migration options
-
-The following section describes how to use Azure Database Migration Service with the Azure SQL Migration extension, or in the Azure portal.
-
-## [Migrate using Azure SQL Migration extension](#tab/azure-data-studio)
-
-### Prerequisites
-
-Before you begin the tutorial:
-- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
-- Have an Azure account that's assigned to one of the following built-in roles:
- - Contributor for the target instance of Azure SQL Database
- - Reader role for the Azure resource group that contains the target instance of Azure SQL Database
- - Owner or Contributor role for the Azure subscription (required if you create a new instance of Azure Database Migration Service)
-
- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
-
- > [!IMPORTANT]
- > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
-- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart). (A CLI sketch for this and the resource provider prerequisite follows this list.)
-- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role, and that the login for the target SQL Server instance is a member of the db_owner role.
-- To migrate the database schema from source to target Azure SQL Database by using Database Migration Service, the minimum supported [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) is 5.37 or later.
-
-- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).-
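The target database and resource provider prerequisites can be scripted. A minimal Azure CLI sketch, assuming placeholder resource names and an existing logical server:

```azurecli
# Register the Microsoft.DataMigration resource provider (one time per subscription).
az provider register --namespace Microsoft.DataMigration

# Create the target Azure SQL Database on an existing logical server.
# The resource group, server, database, and service objective are placeholders.
az sql db create \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --name AdventureWorks2019 \
    --service-objective S3
```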
-> [!NOTE]
-> Now, you can migrate both the database schema and data by using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
->
-> If no tables exist on the Azure SQL Database target, or if no tables are selected before starting the migration, the **Next** button is unavailable and you can't initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
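The dacpac and SQL Database Projects extensions are one option for moving the schema ahead of the data. As an alternative sketch, the standalone SqlPackage command-line tool can extract and publish the schema; the server, database, and credential values below are placeholders:

```bash
# Extract the schema from the source database into a .dacpac file.
SqlPackage /Action:Extract \
    /SourceServerName:"onpremSqlServer" \
    /SourceDatabaseName:"AdventureWorks2019" \
    /TargetFile:"AdventureWorks2019.dacpac"

# Publish the .dacpac to the target Azure SQL Database.
SqlPackage /Action:Publish \
    /SourceFile:"AdventureWorks2019.dacpac" \
    /TargetServerName:"myserver.database.windows.net" \
    /TargetDatabaseName:"AdventureWorks2019" \
    /TargetUser:"sqladmin" /TargetPassword:"<password>"
```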
-
-### Open the Migrate to Azure SQL wizard in Azure Data Studio
-
-To open the Migrate to Azure SQL wizard:
-
-1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
-
-1. Right-click the server connection and select **Manage**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/azure-data-studio-manage-panel.png" alt-text="Screenshot that shows a server connection and the Manage option in Azure Data Studio." lightbox="media/tutorial-sql-server-azure-sql-database-offline/azure-data-studio-manage-panel.png":::
-
-1. In the server menu under **General**, select **Azure SQL Migration**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot that shows the Azure Data Studio server menu.":::
-
-1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/launch-migrate-to-azure-sql-wizard-2.png" alt-text="Screenshot that shows the Migrate to Azure SQL wizard.":::
-
-1. On the first page of the wizard, start a new session or resume a previously saved session.
-
-### Run database assessment, collect performance data, and get Azure recommendations
-
-1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-database-selection.png" alt-text="Screenshot that shows selecting a database for assessment.":::
-
-1. In **Step 2: Assessment results and recommendations**, complete the following steps:
-
- 1. In **Choose your Azure SQL target**, select **Azure SQL Database**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-target-selection.png" alt-text="Screenshot that shows selecting the Azure SQL Database target.":::
-
- 1. Select **View/Select** to view the assessment results.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment.png" alt-text="Screenshot that shows view/select assessment results.":::
-
- 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-issues-details.png" alt-text="Screenshot that shows the assessment report.":::
-
- 1. Select **Get Azure recommendation** to open the recommendations pane.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation.png" alt-text="Screenshot that shows Azure recommendations.":::
-
- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-zoom.png" alt-text="Screenshot that shows performance data collection.":::
-
- Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
-
- After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Database. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-collected.png" alt-text="Screenshot that shows performance data collected.":::
-
- 1. In the selected **Azure SQL Database** target, select **View details** to open the detailed SKU recommendation report:
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-view-details.png" alt-text="Screenshot that shows the View details link for the target database recommendations.":::
-
- 1. In **Review Azure SQL Database Recommendations**, review the recommendation. To save a copy of the recommendation, select **Save recommendation report**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/azure-sku-recommendation-zoom.png" alt-text="Screenshot that shows SKU recommendation details.":::
-
-1. Select **Close** to close the recommendations pane.
-
-1. Select **Next** to continue your database migration in the wizard.
-
-### Configure migration settings
-
-1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete these steps for your target Azure SQL Database instance:
-
- 1. Select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the Azure SQL Database deployment.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-account.png" alt-text="Screenshot that shows Azure account details.":::
-
- 1. For **Azure SQL Database Server**, select the target Azure SQL Database server (logical server). Enter a username and password for the target database deployment. Then, select **Connect**. Enter the credentials to verify connectivity to the target database.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-database.png" alt-text="Screenshot that shows Azure SQL Database details.":::
-
- 1. Next, map the source database and the target database for the migration. For **Target database**, select the Azure SQL Database target. Then, select **Next** to move to the next step in the migration wizard.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-map.png" alt-text="Screenshot that shows source and target mapping.":::
-
-1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-mode.png" alt-text="Screenshot that shows offline migrations selection.":::
-
-1. In **Step 5: Data source configuration**, complete the following steps:
-
- 1. Under **Source credentials**, enter the source SQL Server credentials.
-
- 1. Under **Select tables**, select the **Edit** pencil icon.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-source-credentials.png" alt-text="Screenshot that shows source SQL Server credentials.":::
-
- 1. In **Select tables for \<database-name\>**, select the tables to migrate to the target. The **Has rows** column indicates whether the target table has rows in the target database. You can select one or more tables. Then, select **Update**.
-
- You can update the list of selected tables anytime before you start the migration.
-
- In the following example, a text filter is applied to select tables that contain the word `Employee`. Select a list of tables based on your migration needs.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-source-tables.png" alt-text="Screenshot that shows the table selection.":::
-
-1. Review your table selections, and then select **Next** to move to the next step in the migration wizard.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-target-tables.png" alt-text="Screenshot that shows selected tables to migrate.":::
-
-> [!NOTE]
-> If no tables are selected or if a username and password aren't entered, the **Next** button is unavailable.
->
-> Now, you can migrate both the database schema and data by using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
-
-### Create a Database Migration Service instance
-
-In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Database Migration Service, or reuse an existing instance that you created earlier.
-
-> [!NOTE]
-> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
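For scripted setups, a Database Migration Service instance can also be created with the `az datamigration` CLI extension. A hedged sketch with placeholder names follows; note that, per the preceding note, the wizard reuses only instances it created itself:

```azurecli
# Create a Database Migration Service (SQL migration service) instance.
# The resource group, service name, and location are placeholders.
az datamigration sql-service create \
    --resource-group myResourceGroup \
    --sql-migration-service-name myMigrationService \
    --location eastus
```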
-
-#### Use an existing instance of Database Migration Service
-
-To use an existing instance of Database Migration Service:
-
-1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
-
-1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
-
-1. Select **Next**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms.png" alt-text="Screenshot that shows Database Migration Service selection.":::
-
-#### Create a new instance of Database Migration Service
-
-To create a new instance of Database Migration Service:
-
-1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
-
-1. Under **Azure Database Migration Service**, select **Create new**.
-
-1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
-
-1. Under **Set up integration runtime**, complete the following steps:
-
- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites for connecting to the source SQL Server instance.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-download.png" alt-text="Screenshot that shows the Download and install integration runtime link.":::
-
- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
-
- 1. In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. (You can also retrieve these keys from the command line; see the sketch after these steps.)
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-authentication-key.png" alt-text="Screenshot that highlights the authentication key table in the wizard.":::
-
- If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**.
-
- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
-
- > [!NOTE]
- > For more information about the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
-
-1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-connected.png" alt-text="Screenshot that shows IR connectivity test.":::
-
-1. Return to the migration wizard in Azure Data Studio.
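As a command-line alternative for retrieving the authentication keys, a sketch using the `az datamigration` extension; the names are placeholders:

```azurecli
# List the authentication keys used to register the self-hosted integration runtime.
az datamigration sql-service list-auth-key \
    --resource-group myResourceGroup \
    --sql-migration-service-name myMigrationService
```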
-
-### Start the database migration
-
-In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
--
-### Monitor the database migration
-
-1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL Database migrations.
-
- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard.png" alt-text="Screenshot that shows monitor migration dashboard." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard.png":::
-
-1. Select **Database migrations in progress** to view active migrations.
-
- To get more information about a specific migration, select the database name.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-details.png" alt-text="Screenshot that shows database migration details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-details.png":::
-
- Database Migration Service returns the latest known migration status each time migration status refreshes. The following table describes possible statuses:
-
- | Status | Description |
- | | |
- | Preparing for copy | The service is disabling autostats, triggers, and indexes in the target table. |
- | Copying | Data is being copied from the source database to the target database. |
- | Copy finished | Data copy is finished. The service is waiting on other tables to finish copying to begin the final steps to return tables to their original schema. |
- | Rebuilding indexes | The service is rebuilding indexes on target tables. |
- | Succeeded | All data is copied and the indexes are rebuilt. |
-
-1. Check the migration details page to view the current status for each database.
-
- Here's an example of the `AdventureWorks2019` database migration with the status **Creating**:
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-creating.png" alt-text="Screenshot that shows a creating migration status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-creating.png":::
-
-1. In the menu bar, select **Refresh** to update the migration status.
-
- After migration status is refreshed, the updated status for the example `AdventureWorks2019` database migration is **In progress**:
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-in-progress.png" alt-text="Screenshot that shows a migration in progress status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-in-progress.png":::
-
-1. Select a database name to open the table view. In this view, you see the current status of the migration, the number of tables that currently are in that status, and a detailed status of each table.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-in-progress.png" alt-text="Screenshot that shows monitoring table migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-in-progress.png":::
-
- When all table data is migrated to the Azure SQL Database target, Database Migration Service updates the migration status from **In progress** to **Succeeded**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-succeeded.png" alt-text="Screenshot that shows succeeded migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-succeeded.png":::
-
-> [!NOTE]
-> Database Migration Service optimizes migration by skipping tables with no data (0 rows). Tables that don't have data don't appear in the list, even if you select the tables when you create the migration.
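If you want to poll migration status outside the dashboard, a hedged sketch using the `az datamigration` extension; the resource names are placeholders, and the `--expand` value assumes the extension's MigrationStatusDetails option:

```azurecli
# Show the status of a database migration to Azure SQL Database,
# including per-table copy details.
az datamigration sql-db show \
    --resource-group myResourceGroup \
    --sqldb-instance-name mysqlserver \
    --target-db-name AdventureWorks2019 \
    --expand MigrationStatusDetails
```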
-
-You've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure that everything functions smoothly and efficiently.
-
-> [!IMPORTANT]
-> Be sure to take advantage of the advanced cloud-based features of Azure SQL Database. The features include [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview).
-
-## [Migrate using Azure portal](#tab/portal)
-
-### Prerequisites
-
-Before you begin the tutorial:
-- Ensure that you can access the [Azure portal](https://portal.azure.com).
-- Have an Azure account that's assigned to one of the following built-in roles:
- - Contributor for the target instance of Azure SQL Database
- - Reader role for the Azure resource group that contains the target instance of Azure SQL Database
- - Owner or Contributor role for the Azure subscription (required if you create a new instance of Azure Database Migration Service)
-
- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
-- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
-- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the **db_datareader** role, and that the login for the target SQL Server instance is a member of the **db_owner** role.
-- To migrate the database schema from source to target Azure SQL Database by using Database Migration Service, the minimum supported [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) is 5.37 or later.
-- If you're using Database Migration Service for the first time, make sure that the `Microsoft.DataMigration` [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
-> [!NOTE]
-> Now, you can migrate both the database schema and data by using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
->
-> If no tables exist on the Azure SQL Database target, or if no tables are selected before starting the migration, the **Next** button is unavailable and you can't initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
--
-### Start a new migration
-
-1. To start a new migration by using Database Migration Service from the Azure portal, under **Azure Database Migration Services**, select the existing instance of Database Migration Service that you want to use, and then select **New Migration** or **Start migrations**.
-
-1. Under **Select new migration scenario**, choose your source and target server types and the migration mode, and then choose **Select**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-select-migration.png" alt-text="Screenshot that shows new migration scenario details.":::
-
-1. Now under Azure SQL Database Offline Migration wizard:
-
- 1. Provide the following details to **connect to the source SQL Server** instance, and then select **Next**:
-
- - Source server name
- - Authentication type
- - User name and password
- - Connection properties
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect.png" alt-text="Screenshot that shows source SQL server details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect.png":::
-
- 1. On the next page, **select databases for migration**. This page might take some time to populate the list of databases from the source.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-database.png" alt-text="Screenshot that shows list of databases from source.":::
-
- 1. Assuming that you already provisioned the target based on the assessment results, provide the target details on the **Connect to target Azure SQL Database** page, and then select **Next**:
-
- - Azure subscription
- - Azure resource group
- - Target Azure SQL Database server
- - Authentication type
- - User name and password
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect-target.png" alt-text="Screenshot that shows details for target.":::
-
- 1. Under **Map source and target databases**, map the databases between source and target.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-map-target.png" alt-text="Screenshot that shows list of mapping between source and target.":::
-
- 1. Before moving to this step, ensure that you migrate the schema from source to target for all selected databases. Then, under **Select database tables to migrate**, select the tables for which you want to migrate data for each selected database.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-table.png" alt-text="Screenshot that shows list of tables select source database to migrate data to target." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-table.png":::
-
- 1. Review all the inputs on the **Database migration summary** page, and then select **Start migration** to start the database migration.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-summary.png" alt-text="Screenshot that shows summary of the migration configuration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-summary.png":::
-
- > [!NOTE]
- > In an offline migration, application downtime starts when the migration starts.
- >
- > Now, you can migrate both the database schema and data by using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
-
-### Monitor the database migration
-
-1. In the Database Migration Service instance overview, select **Monitor migrations** to view the details of your database migrations.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-overview.png" alt-text="Screenshot that shows monitor migration dashboard." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-overview.png":::
-
-1. Under the **Migrations** tab, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations. In the menu bar, select **Refresh** to update the migration status.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-in-progress.png" alt-text="Screenshot that shows database migration details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-in-progress.png":::
-
- Database Migration Service returns the latest known migration status each time migration status refreshes. The following table describes possible statuses:
-
- | Status | Description |
- | | |
- | Preparing for copy | The service is disabling autostats, triggers, and indexes in the target table. |
- | Copying | Data is being copied from the source database to the target database. |
- | Copy finished | Data copy is finished. The service is waiting on other tables to finish copying to begin the final steps to return tables to their original schema. |
- | Rebuilding indexes | The service is rebuilding indexes on target tables. |
- | Succeeded | All data is copied and the indexes are rebuilt. |
-
-1. Under **Source name**, select a database name to open the table view. In this view, you see the current status of the migration, the number of tables currently in that status, and a detailed status of each table.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-copy.png" alt-text="Screenshot that shows a migration status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-copy.png":::
-
-1. When all table data is migrated to the Azure SQL Database target, Database Migration Service updates the migration status from **In progress** to **Succeeded**.
-
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-succeeded.png" alt-text="Screenshot that shows succeeded migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-succeeded.png":::
-
-> [!NOTE]
-> Database Migration Service optimizes migration by skipping tables with no data (0 rows). Tables that don't have data don't appear in the list, even if you select the tables when you create the migration.
-
-You've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure that everything functions smoothly and efficiently.
---
-## Limitations
--
-## Next steps
-- [Create an Azure SQL database](/azure/azure-sql/database/single-database-create-quickstart)
-- [Azure SQL Database overview](/azure/azure-sql/database/sql-database-paas-overview)
-- [Connect apps to Azure SQL Database](/azure/azure-sql/database/connect-query-content-reference-guide)
-- [Known issues](known-issues-azure-sql-migration-azure-data-studio.md)
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
- Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio"-
-description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance offline by using Azure Data Studio and Azure Database Migration Service.
-- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
-
-You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to Azure SQL Managed Instance offline and with minimal downtime.
-
-For database migration methods that might require some manual configuration, see [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, learn how to migrate the AdventureWorks database from an on-premises instance of SQL Server to an instance of Azure SQL Managed Instance by using Azure Data Studio and Database Migration Service. This tutorial uses offline migration mode, which assumes acceptable downtime during the migration process.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> - Open the Migrate to Azure SQL wizard in Azure Data Studio
-> - Run an assessment of your source SQL Server databases
-> - Collect performance data from your source SQL Server instance
-> - Get a recommendation of the Azure SQL Managed Instance SKU that will work best for your workload
-> - Specify details of your source SQL Server instance, backup location, and target instance of Azure SQL Managed Instance
-> - Create an instance of Azure Database Migration Service
-> - Start your migration and monitor progress to completion
--
-This tutorial describes an offline migration from SQL Server to Azure SQL Managed Instance. For an online migration, see [Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
-
-## Prerequisites
-
-Before you begin the tutorial:
-- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
-- Have an Azure account that's assigned to one of the following built-in roles:
- - Contributor for the target instance of Azure SQL Managed Instance and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
- - Reader role for the Azure resource groups that contain the target instance of Azure SQL Managed Instance or your Azure storage account
- - Owner or Contributor role for the Azure subscription (required if you create a new Database Migration Service instance)
-
- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
-
- > [!IMPORTANT]
- > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
-- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-- Ensure that the logins that you use to connect to the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.
-- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.
- > [!IMPORTANT]
- >
- > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
- > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service. (A CLI sketch for this step follows this prerequisites list.)
- > - You can write each backup to a single backup file or across multiple backup files. Appending multiple backups, such as full and transaction log backups, to a single backup medium isn't supported.
- > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to your target managed instance before you restore the database. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](./tutorial-transparent-data-encryption-migration-ads.md).
- > [!TIP]
- > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target managed instance.
--- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.-
- In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
-
- | Domain names | Outbound port | Description |
- | -- | -- | |
- | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Microsoft Azure operated by 21Vianet: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. <br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. |
- | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share |
-
- > [!TIP]
- > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process.
-- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.
-- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).
-- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration resource provider is registered in your subscription. You can complete the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
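A minimal Azure CLI sketch of the storage account prerequisite mentioned earlier in this list, assuming placeholder names and the region where you plan to create your Database Migration Service instance:

```azurecli
# Create a storage account that Database Migration Service can use
# to upload backup files. Create it in the same region as the service.
az storage account create \
    --resource-group myResourceGroup \
    --name mymigrationstorage \
    --location eastus \
    --sku Standard_LRS
```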
-## Open the Migrate to Azure SQL wizard in Azure Data Studio
-
-To open the Migrate to Azure SQL wizard:
-
-1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
-
-1. Right-click the server connection and select **Manage**.
-
-1. In the server menu, under **General**, select **Azure SQL Migration**.
-
-1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-
-1. On the first page of the wizard, start a new session or resume a previously saved session.
-
-## Run a database assessment, collect performance data, and get Azure recommendations
-
-1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
-
-1. In **Step 2: Assessment results and recommendations**, complete the following steps:
-
- 1. In **Choose your Azure SQL target**, select **Azure SQL Managed Instance**.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
-
-1. Select **View/Select** to view the assessment results.
-
-1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
-
- 1. Select **Get Azure recommendation** to open the recommendations pane.
-
- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
-
- Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
-
- After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Managed Instance. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
-
- 1. In the selected **Azure SQL Managed Instance** target, select **View details** to open the detailed SKU recommendation report:
-
- 1. In **Review Azure SQL Managed Instance Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox.
-
-1. Select **Close** to close the recommendations pane.
-
-1. Select **Next** to continue your database migration in the wizard.
-
-## Configure migration settings
-
-1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target instance of Azure SQL Managed Instance. Then, select **Next**.
-
-1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
-
- > [!NOTE]
- > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backups are restored on a target instance of Azure SQL Managed Instance. Application downtime needs to be considered until the migration is finished.
-
-1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container.
--- For backups that are located on a network share, enter or select the following information:-
- |Name |Description |
- ||-|
- |**Source Credentials - Username** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. |
- |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
- |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |You can modify the target database name during the migration process. |
- |**Storage account details** |The resource group and storage account where backup files are uploaded. You don't need to create a container. Database Migration Service automatically creates a blob container in the specified storage account during the upload process. |
--- For backups that are stored in an Azure storage blob container, enter or select the following information:-
- |Name |Description |
- ||-|
- |**Target database name** |You can modify the target database name during the migration process. |
 |**Storage account details** |The resource group, storage account, and container where backup files are located. |
 |**Last Backup File** |The file name of the last backup of the database you're migrating. |
-
- > [!IMPORTANT]
- > If loopback check functionality is enabled and the source SQL Server instance and file share are on the same computer, the source can't access the file share by using an FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
--- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:-
- | Scenario | SMB network share | Azure Storage account container |
- | | | |
- | Enabled from all networks | No extra steps | No extra steps |
- | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)|
- | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) |
-
- ### 1a - Azure Blob storage network configuration
   If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If your SHIR is installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account, as follows:
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details":::
-
- To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps.
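   The same change can be scripted. A hedged Azure CLI sketch, with a placeholder public IP address for the SHIR host:

```azurecli
# Allow the SHIR host's public IP address through the storage account firewall.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mymigrationstorage \
    --ip-address 203.0.113.10
```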
-
- ### 1b - Azure Blob storage network configuration
   If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account, because the virtual machine has a nonpublic IP address that can't be added to the IP address range section.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration":::
-
   To apply this specific configuration, locate your Azure Storage account. On the **Data storage** panel, select **Networking**, and then mark the **Add existing virtual network** checkbox. A new panel opens; select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. You can find this information on the **Overview** page of the Azure virtual machine. If the subnet says **Service endpoint required**, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps.
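   Scripted, these steps might look like the following sketch; the virtual network and subnet names are placeholders:

```azurecli
# Enable the Microsoft.Storage service endpoint on the VM's subnet.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name mySubnet \
    --service-endpoints Microsoft.Storage

# Add that subnet to the storage account's virtual network rules.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mymigrationstorage \
    --vnet-name myVnet \
    --subnet mySubnet
```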
-
- ### 2a - Azure Blob storage network configuration (Private endpoint)
   If your backups are placed directly into an Azure Storage container, all the preceding steps are unnecessary because there's no integration runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance's virtual network when filling out the **Add existing virtual network** pane.
-
- ### 2b - Azure Blob storage network configuration (Private endpoint)
- If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section.
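
If you want to confirm connectivity before starting the migration, you can check from the target that the backup files in the container are readable over HTTPS. The following T-SQL is a minimal sketch, not part of the wizard flow; the storage account, container, and file names are hypothetical, and you need to generate the SAS token yourself:

```sql
-- Create a SAS credential whose name is the container URL (names are hypothetical).
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading question mark>';

-- If this returns the backup header, the target can reach the container.
RESTORE HEADERONLY
FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/AdventureWorks_full.bak';
```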
-
-## Create a Database Migration Service instance
-
-In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier.
-
-> [!NOTE]
-> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
-
-### Use an existing instance of Database Migration Service
-
-To use an existing instance of Database Migration Service:
-
-1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
-
-1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
-
-1. Select **Next**.
-
-### Create a new instance of Database Migration Service
-
-To create a new instance of Database Migration Service:
-
-1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
-
-1. Under **Azure Database Migration Service**, select **Create new**.
-
-1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
-
-1. Under **Set up integration runtime**, complete the following steps:
-
- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance.
-
- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
-
- 1. In the **Authentication key** table, copy one of the authentication keys provided in the wizard in Azure Data Studio and paste it in Microsoft Integration Runtime Configuration Manager. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**.
-
- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
-
- > [!NOTE]
- > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
-
-1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
-
-1. Return to the migration wizard in Azure Data Studio.
-
-## Start the database migration
-
-In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
-
-## Monitor the database migration
-
-1. In Azure Data Studio, in the server menu, under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations.
-
- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-
-1. Select **Database migrations in progress** to view active migrations.
-
- To get more information about a specific migration, select the database name.
-
- The migration details pane displays the backup files and their corresponding status:
-
- | Status | Description |
- |--|-|
- | Arrived | The backup file arrived in the source backup location and was validated. |
- | Uploading | The integration runtime is uploading the backup file to the Azure storage account. |
- | Uploaded | The backup file was uploaded to the Azure storage account. |
- | Restoring | The service is restoring the backup file to Azure SQL Managed Instance. |
- | Restored | The backup file is successfully restored in Azure SQL Managed Instance. |
- | Canceled | The migration process was canceled. |
- | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. |
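
If backup files unexpectedly show the **Ignored** status, one way to investigate is to inspect the backup history on the source instance and confirm that the files form an unbroken full-plus-log chain. A minimal sketch, assuming the **AdventureWorks** database and that backup history is still available in `msdb`:

```sql
-- List the backups recorded for the database, oldest first. A gap between one
-- backup's last_lsn and the next log backup's first_lsn breaks the chain.
SELECT bs.database_name,
       bs.type,                    -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'AdventureWorks'
ORDER BY bs.backup_start_date;
```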
-
-After all database backups are restored on the instance of Azure SQL Managed Instance, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
-
-> [!IMPORTANT]
-> After the migration, the availability of SQL Managed Instance with Business Critical service tier might take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
-
-## Limitations
-
-Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
-
-## Next steps
-
-- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
-- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
-- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).
-- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
- Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online by using Azure Data Studio"-
-description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance only by using Azure Data Studio and Azure Database Migration Service.
--- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio
-
-Use the Azure SQL migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For methods that might require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the online migration mode where application downtime is limited to a short cutover at the end of the migration.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio
-> * Run an assessment of your source SQL Server database(s)
-> * Collect performance data from your source SQL Server
-> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
-> * Specify details of your source SQL Server, backup location and your target Azure SQL Managed Instance
-> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups
-> * Start and monitor the progress for your migration
-> * Perform the migration cutover when you are ready
-
-> [!IMPORTANT]
-> Prepare for migration and reduce the duration of the online migration process as much as possible to minimize the risk of interruption caused by instance reconfiguration or planned maintenance. In case of such an event, the migration process starts from the beginning. In the case of planned maintenance, there is a grace period of 36 hours during which the target Azure SQL Managed Instance configuration or maintenance is held before the migration process is restarted.
-
-This article describes an online database migration from SQL Server to Azure SQL Managed Instance. For an offline database migration, see [Migrate SQL Server to a SQL Managed Instance offline using Azure Data Studio with DMS](tutorial-sql-server-managed-instance-offline-ads.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
-* Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Managed Instance (and the Storage Account where you upload your database backup files from an SMB network share).
- - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
- - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-managed-instance-ads.md).
- > [!IMPORTANT]
- > An Azure account is required only when you configure the migration steps; it isn't required for the assessment or Azure recommendation steps in the migration wizard.
-* Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Ensure that the logins used to connect to the source SQL Server instance are members of the *sysadmin* server role or have `CONTROL SERVER` permission (a quick T-SQL check is sketched after this list).
-* Use one of the following storage options for the full database and transaction log backup files:
- - SMB network share
- - Azure storage account file share or blob container
-
- > [!IMPORTANT]
- > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses your existing database backup files for the migration.
- > - If your database backup files are provided in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region where the Azure Database Migration Service instance is created.
- > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported.
- > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure virtual machine before you migrate data. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](./tutorial-transparent-data-encryption-migration-ads.md).
- > [!TIP]
- > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process that uses Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure virtual machine.
-
-* If your database backups are in a network file share, provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
-
- | Domain names | Outbound ports | Description |
- | -- | -- | |
- | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For a newly created data factory in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in format `{datafactory}.{region}.datafactory.azure.net`. For the old data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. |
- | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share |
-
- > [!TIP]
- > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime is not required during the migration process.
-
-* When you're using a self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).
-* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
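
To verify the login permission requirement mentioned earlier in this list, you can run a quick check on the source instance with the login you plan to use. A minimal sketch; both queries use built-in metadata functions:

```sql
-- Returns 1 if the current login is a member of the sysadmin server role.
SELECT IS_SRVROLEMEMBER(N'sysadmin') AS is_sysadmin;

-- Lists server-level permissions for the current login; look for CONTROL SERVER.
SELECT permission_name
FROM sys.fn_my_permissions(NULL, N'SERVER')
WHERE permission_name = N'CONTROL SERVER';
```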
-
-## Launch the Migrate to Azure SQL wizard in Azure Data Studio
-
-1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure virtual machine).
-1. On the server connection, right-click and select **Manage**.
-1. On the server's home page, select **Azure SQL Migration** extension.
-1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. The first page of the wizard allows you to start a new session or resume a previously saved one. Pick the first option to start a new session.
-## Run database assessment, collect performance data and get Azure recommendation
-
-1. Select the database(s) on which to run the assessment, and then select **Next**.
-1. Select Azure SQL Managed Instance as the target.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
-1. Select the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**. If any issues are displayed in the assessment results, they need to be remediated before proceeding with the next steps.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-issues-details.png" alt-text="Database assessment details":::
-1. Select the **Get Azure recommendation** button.
-2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and select the **Start** button.
-3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard, or close Azure Data Studio.
-4. After 10 minutes, you see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the extra data collected.
-5. In the **Azure SQL Managed Instance** box, select the **View details** button for more information about your recommendation.
-6. Close the view details box and press the **Next** button.
-
-## Configure migration settings
-
-1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
-1. Select **Online migration** as the migration mode.
- > [!NOTE]
- > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target Azure SQL Managed Instance. Application downtime is limited to the duration of the cutover at the end of the migration.
-1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
- > [!NOTE]
- > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. The self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload the backups to your Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-
-- For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and Azure storage account for the backup files to be uploaded to:
-
- |Field |Description |
- ||-|
- |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backups files in the network share that don't belong to the valid backup set will be automatically ignored during the migration process. |
- |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
- |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- |**Storage account details** |The resource group and storage account where backup files are uploaded to. You don't need to create a container, as DMS will automatically create a blob container in the specified storage account during the upload process. |
-
-- For backups stored in an Azure storage blob container, specify the target database name, resource group, Azure storage account, and blob container from the corresponding drop-down lists.
-
- |Field |Description |
- ||-|
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- |**Storage account details** |The resource group, storage account, and container where backup files are located. |
-
- > [!IMPORTANT]
- > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using its FQDN. To fix this issue, disable loopback check functionality by using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
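
Before you start the migration, you can optionally pre-validate the backup files on the network share from the source instance, which surfaces unreadable or incomplete files early. A minimal sketch; the share and file names are hypothetical:

```sql
-- Confirm the backup file is readable and complete. WITH CHECKSUM only works
-- if the backup was taken with checksums.
RESTORE VERIFYONLY
FROM DISK = N'\\fileserver\backups\AdventureWorks_full.bak'
WITH CHECKSUM;

-- Inspect the header to confirm the backup type and LSN range.
RESTORE HEADERONLY
FROM DISK = N'\\fileserver\backups\AdventureWorks_full.bak';
```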
-
-- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure that your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:
-
- | Scenario | SMB network share | Azure Storage account container |
- | | | |
- | Enabled from all networks | No extra steps | No extra steps |
- | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)|
- | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) |
-
- ### 1a - Azure Blob storage network configuration
- If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If your SHIR is installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account firewall, as shown here:
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details":::
-
- To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps.
-
- ### 1b - Azure Blob storage network configuration
- If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section.
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration.":::
-
- To apply this specific configuration, locate your Azure Storage account, and from the **Data storage** panel, select **Networking**. Mark the **Add existing virtual network** checkbox. In the new panel that opens, select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. You can find this information on the **Overview** page of the Azure virtual machine. If the subnet says **Service endpoint required**, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps.
-
- ### 2a - Azure Blob storage network configuration (Private endpoint)
- If your backups are placed directly into an Azure Storage container, all of the preceding steps are unnecessary, because there's no integration runtime communicating with the Azure Storage account. However, you still need to ensure that the target instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target instance's virtual network when filling out the **Add existing virtual network** pane.
-
- ### 2b - Azure Blob storage network configuration (Private endpoint)
- If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section.
-
-## Create Azure Database Migration Service
-
-1. Create a new Azure Database Migration Service or reuse an existing service that you previously created.
- > [!NOTE]
- > If you previously created a DMS instance by using the Azure portal, you can't reuse it in the migration wizard in Azure Data Studio. Only a DMS instance created by using Azure Data Studio can be reused.
-1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown lists any existing DMS in the selected resource group.
-1. To reuse an existing DMS, select it from the dropdown list and the status of the self-hosted integration runtime will be displayed at the bottom of the page.
-1. To create a new DMS, select **Create new**. On the **Create Azure Database Migration Service** screen, provide a name for your DMS instance and select **Create**.
-1. After successful creation of DMS, you'll be provided with details to set up **integration runtime**.
-1. Select **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the prerequisites of connecting to the source SQL Server and the location containing the source backup.
-1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process.
-1. Copy one of the authentication keys provided on the wizard screen in Azure Data Studio and paste it in Microsoft Integration Runtime Configuration Manager. If the authentication key is valid, a green check icon is displayed in Integration Runtime Configuration Manager, indicating that you can continue to **Register**.
-1. After successfully completing the registration of self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio.
-1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
-1. Review the migration summary and select **Done** to start the database migration.
-
-## Monitor your migration
-
-1. On the **Database Migration Status**, you can track the migrations in progress, migrations completed, and migrations failed (if any).
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name.
-1. The migration details page displays the backup files and the corresponding status:
-
- | Status | Description |
- |--|-|
- | Arrived | Backup file arrived in the source backup location and validated |
- | Uploading | Integration runtime is currently uploading the backup file to Azure storage|
- | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to Azure SQL Managed Instance|
- | Restored | Backup file is successfully restored on Azure SQL Managed Instance |
- | Canceled | Migration process was canceled |
- | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain |
-
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/online-to-mi-migration-details-all-backups-restored.png" alt-text="backup restore details":::
-
-## Complete migration cutover
-
-The final step of the tutorial is to complete the migration cutover to ensure that the migrated database in Azure SQL Managed Instance is ready for use. This is the only part of the process that requires downtime for applications that connect to the database, so the timing of the cutover needs to be carefully planned with business or application stakeholders.
-
-To complete the cutover:
-
-1. Stop all incoming transactions to the source database.
-2. Make application configuration changes to point to the target database in Azure SQL Managed Instance.
-3. Take a final log backup of the source database in the specified backup location (steps 3 and 4 are sketched in T-SQL after this list).
-4. Put the source database in read-only mode, so that users can read data from the database but not modify it.
-5. Ensure all database backups have the status *Restored* in the monitoring details page.
-6. Select *Complete cutover* in the monitoring details page.
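
Steps 3 and 4 are source-side T-SQL operations. A minimal sketch, assuming the **AdventureWorks** database and a hypothetical backup share:

```sql
USE master;

-- Step 3: take the final log backup into the backup location that the
-- migration is monitoring (the file name here is hypothetical).
BACKUP LOG [AdventureWorks]
TO DISK = N'\\fileserver\backups\AdventureWorks_final.trn'
WITH COMPRESSION, CHECKSUM;

-- Step 4: block further modifications while the final backup is restored.
ALTER DATABASE [AdventureWorks] SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```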
-
-During the cutover process, the migration status changes from *in progress* to *completing*. When the cutover process is completed, the migration status changes to *succeeded* to indicate that the database migration is successful and that the migrated database is ready for use.
-
-> [!IMPORTANT]
-> After the cutover, the availability of SQL Managed Instance with the Business Critical service tier can take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
-
-## Limitations
-
-Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
-
-## Next steps
-
-* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
-* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
-* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
-* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
[!INCLUDE [Azure Database Migration Service (classic) - SQL scenarios retirement announcement](./includes/deprecation-announcement-dms-classic-sql.md)] > [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
> > To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
To complete this tutorial, you need to:
* Provide an SMB network share that contains all of your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration.
* Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.
* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to the Azure Storage container for the restore operation.
-* Create a Microsoft Entra Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure SQL Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md).
+* Create a Microsoft Entra Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure SQL Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
> [!NOTE] > The Application ID used by the Azure Database Migration Service supports secret (password-based) authentication for service principals. It does not support certificate-based authentication.
After an instance of the service is created, locate it within the Azure portal,
1. On the **Select target** screen, specify the **Application ID** and **Key** that the DMS instance can use to connect to the target instance of SQL Managed Instance and the Azure Storage Account.
- For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md).
+ For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
2. Select the **Subscription** containing the target instance of SQL Managed Instance, and then choose the target SQL Managed Instance.
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
[!INCLUDE [Azure Database Migration Service (classic) - SQL scenarios retirement announcement](./includes/deprecation-announcement-dms-classic-sql.md)] > [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline.md).
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/database/database-migration-service).
> > To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
[!INCLUDE [Azure Database Migration Service (classic) - SQL scenarios retirement announcement](./includes/deprecation-announcement-dms-classic-sql.md)] > [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md).
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
> > To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
- Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio"-
-description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines offline by using Azure Data Studio and Azure Database Migration Service.
--- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
-
-You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to [SQL Server on Azure Virtual Machines (SQL Server 2016 and later)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) offline and with minimal downtime.
-
-For database migration methods that might require some manual configuration, see [SQL Server instance migration to SQL Server on Azure Virtual Machines](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview).
-
-In this tutorial, learn how to migrate the example AdventureWorks database from an on-premises instance of SQL Server to an instance of SQL Server on Azure Virtual Machines by using Azure Data Studio and Azure Database Migration Service. This tutorial uses offline migration mode, which assumes that downtime is acceptable during the migration process.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> - Open the Migrate to Azure SQL wizard in Azure Data Studio
-> - Run an assessment of your source SQL Server databases
-> - Collect performance data from your source SQL Server instance
-> - Get a recommendation of the SQL Server on Azure Virtual Machines SKU that will work best for your workload
-> - Set the details of your source SQL Server instance, backup location, and target instance of SQL Server on Azure Virtual Machines
-> - Create an instance of Azure Database Migration Service
-> - Start your migration and monitor progress to completion
-
-This tutorial describes an offline migration from SQL Server to SQL Server on Azure Virtual Machines. For an online migration, see [Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio](tutorial-sql-server-to-virtual-machine-online-ads.md).
-
-## Prerequisites
-
-Before you begin the tutorial:
-
-- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
-- Have an Azure account that's assigned to one of the following built-in roles:
-
- - Contributor for the target instance of SQL Server on Azure Virtual Machines and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
- - Reader role for the Azure resource group that contains the target instance of SQL Server on Azure Virtual Machines or for your Azure Storage account
- - Owner or Contributor role for the Azure subscription
-
- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
-
- > [!IMPORTANT]
- > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
-- Create a target instance of [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
-
- > [!IMPORTANT]
- > If you have an existing Azure virtual machine, it should be registered with the [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
-- Ensure that the logins that you use to connect to the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.
-
-- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.
-
- > [!IMPORTANT]
- >
- > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses your existing database backup files for the migration (a T-SQL backup sketch that meets these requirements follows this list).
- > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files and migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service.
- > - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported.
- > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-
-- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to SQL Server on Azure Virtual Machines before you migrate data. To learn more, see [Move a TDE-protected database to another SQL Server instance](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
-
- > [!TIP]
- > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target instance of SQL Server on Azure Virtual Machines.
-- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.
-
- In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
-
- | Domain names | Outbound port | Description |
- | -- | -- | |
- | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Microsoft Azure operated by 21Vianet: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. <br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. |
- | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share |
-
- > [!TIP]
- > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process.
-- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.
-
-- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).
-
-- If you're using Azure Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
-
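The backup requirements called out earlier in this list (one backup per file, no appended backup sets, optional compression) map to standard `BACKUP` statements. A minimal sketch, assuming the AdventureWorks sample database and a hypothetical SMB share:

```sql
-- Full backup to its own file; FORMAT and INIT prevent appending to existing media.
BACKUP DATABASE [AdventureWorks]
TO DISK = N'\\fileserver\backups\AdventureWorks_full.bak'
WITH FORMAT, INIT, COMPRESSION, CHECKSUM;

-- Each subsequent transaction log backup also goes to its own file.
BACKUP LOG [AdventureWorks]
TO DISK = N'\\fileserver\backups\AdventureWorks_log_001.trn'
WITH INIT, COMPRESSION, CHECKSUM;
```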
-## Open the Migrate to Azure SQL wizard in Azure Data Studio
-
-To open the Migrate to Azure SQL wizard:
-
-1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
-
-1. Right-click the server connection and select **Manage**.
-
-1. In the server menu under **General**, select **Azure SQL Migration**.
-
-1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Screenshot that shows how to open the Migrate to Azure SQL wizard.":::
-
-1. On the first page of the wizard, start a new session or resume a previously saved session.
-
-## Run a database assessment, collect performance data, and get Azure recommendations
-
-1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
-
-1. In **Step 2: Assessment results and recommendations**, complete the following steps:
-
- 1. In **Choose your Azure SQL target**, select **SQL Server on Azure Virtual Machine**.
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Screenshot that shows an assessment confirmation.":::
-
- 1. Select **View/Select** to view the assessment results.
-
- 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
-
- 1. Select **Get Azure recommendation** to open the recommendations pane.
-
- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
-
- Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
-
- After 10 minutes, Azure Data Studio indicates that a recommendation is available for SQL Server on Azure Virtual Machines. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
-
- 1. In the selected **SQL Server on Azure Virtual Machines** target, select **View details** to open the detailed SKU recommendation report:
-
- 1. In **Review SQL Server on Azure Virtual Machines Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox.
-
-1. Select **Close** to close the recommendations pane.
-
-1. Select **Next** to continue your database migration in the wizard.
-
-## Configure migration settings
-
-1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target SQL Server on Azure Virtual Machines instance. Then, select **Next**.
-
-1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
-
- > [!NOTE]
- > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backup files are restored on the target instance of SQL Server on Azure Virtual Machines. Application downtime persists from the start of the migration process until it's finished.
-
-1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container.
-
- > [!NOTE]
- > If your database backups are provided in an on-premises network share, you must set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload backups to Azure storage account.
- >
- > If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-
-- For backups that are located on a network share, enter or select the following information:
-
- |Name |Description |
- ||-|
- |**Source Credentials - Username** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. |
- |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
- |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |You can modify the target database name during the migration process. |
-
-- For backups that are stored in an Azure storage blob container, enter or select the following information:
-
- |Name |Description |
- ||-|
- |**Target database name** |You can modify the target database name during the migration process. |
- |**Storage account details** |The resource group, storage account, and container where backup files are located. |
- |**Last Backup File** |The file name of the last backup of the database you're migrating. |
-
- > [!IMPORTANT]
- > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using the FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
-
-- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure that your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:
-
- | Scenario | SMB network share | Azure Storage account container |
- | | | |
- | Enabled from all networks | No extra steps | No extra steps |
- | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)|
- | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) |
-
- ### 1a - Azure Blob storage network configuration
- If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If your SHIR is installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account firewall, as shown here:
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details":::
-
- To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps.
-
- ### 1b - Azure Blob storage network configuration
- If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section.
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration.":::
-
- To apply this specific configuration, locate your Azure Storage account, and from the **Data storage** panel, select **Networking**. Mark the **Add existing virtual network** checkbox. In the new panel that opens, select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. You can find this information on the **Overview** page of the Azure virtual machine. If the subnet says **Service endpoint required**, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps.
-
- ### 2a - Azure Blob storage network configuration (Private endpoint)
- If your backups are placed directly into an Azure Storage container, all the preceding steps are unnecessary because there's no integration runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance virtual network when filling out the **Add existing virtual network** pane.
-
- ### 2b - Azure Blob storage network configuration (Private endpoint)
- If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section.
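If you'd rather script the portal steps in sections 1a and 1b, the following Azure PowerShell sketch shows one possible equivalent. All resource names are placeholders, and it assumes the Az.Network and Az.Storage modules:

```powershell
# Hypothetical names throughout; substitute your own resources.
# Section 1b: enable the Microsoft.Storage service endpoint on the SHIR VM's subnet.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'migration-rg' -Name 'shir-vnet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'shir-subnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'shir-subnet' `
    -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint 'Microsoft.Storage' |
    Set-AzVirtualNetwork

# Allow the subnet on the storage account; for an on-premises SHIR (section 1a),
# add the hosting machine's client IP address instead.
Add-AzStorageAccountNetworkRule -ResourceGroupName 'migration-rg' `
    -Name 'migrationbackups' -VirtualNetworkResourceId $subnet.Id
Add-AzStorageAccountNetworkRule -ResourceGroupName 'migration-rg' `
    -Name 'migrationbackups' -IPAddressOrRange '203.0.113.10'
```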
-
-## Create a Database Migration Service instance
-
-In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier.
-
-> [!NOTE]
-> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
-
-### Use an existing instance of Database Migration Service
-
-To use an existing instance of Database Migration Service:
-
-1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
-
-1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
-
-1. Select **Next**.
-
-### Create a new instance of Database Migration Service
-
-To create a new instance of Database Migration Service:
-
-1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
-
-1. Under **Azure Database Migration Service**, select **Create new**.
-
-1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
-
-1. Under **Set up integration runtime**, complete the following steps:
-
- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance.
-
- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
-
- 1. In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**.
-
- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
-
- > [!NOTE]
- > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
-
-1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
-
-1. Return to the migration wizard in Azure Data Studio.
-
-## Start the database migration
-
-In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
-
-## Monitor the database migration
-
-1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations.
-
- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-
-1. Select **Database migrations in progress** to view active migrations.
-
- To get more information about a specific migration, select the database name.
-
- The migration details pane displays the backup files and their corresponding status:
-
- | Status | Description |
- |--|-|
- | Arrived | The backup file arrived in the source backup location and was validated. |
- | Uploading | The integration runtime is uploading the backup file to Azure storage. |
- | Uploaded | The backup file has been uploaded to Azure storage. |
- | Restoring | The service is restoring the backup file to SQL Server on Azure Virtual Machines. |
- | Restored | The backup file was successfully restored on SQL Server on Azure Virtual Machines. |
- | Canceled | The migration process was canceled. |
- | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. |
-
-After all database backups are restored on the instance of SQL Server on Azure Virtual Machines, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
-
-## Limitations
-
-Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
---
-## Next steps
- Complete a quickstart to [migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
- Learn more about [SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
- Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
- Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio"-
-description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines online by using Azure Data Studio and Azure Database Migration Service.
--- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio
-
-Use the Azure SQL migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview).
-
-In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> * Launch the Migrate to Azure SQL wizard in Azure Data Studio.
-> * Run an assessment of your source SQL Server database(s).
-> * Collect performance data from your source SQL Server.
-> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload.
-> * Specify details of your source SQL Server, backup location, and your target SQL Server on Azure Virtual Machine.
-> * Create a new Azure Database Migration Service instance and install the self-hosted integration runtime to access the source server and backups.
-> * Start and monitor the progress of your migration.
-> * Perform the migration cutover when you are ready.
-
-This article describes an online migration from SQL Server to a SQL Server on Azure Virtual Machine. For an offline migration, see [Migrate SQL Server to a SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS](tutorial-sql-server-to-virtual-machine-offline-ads.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned one of the following built-in roles:
  - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account.
  - Owner or Contributor role for the Azure subscription.
  - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-virtual-machine-ads.md).
- > [!IMPORTANT]
 > An Azure account is required only when configuring the migration steps, and isn't required for the assessment or Azure recommendation steps in the migration wizard.
-* Create a target [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
-
- > [!IMPORTANT]
- > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
-* Ensure that the logins used to connect to the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
-* Use one of the following storage options for the full database and transaction log backup files:
- - SMB network share
- - Azure storage account file share or blob container
-
- > [!IMPORTANT]
 > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration.
 > - If your database backup files are provided in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region where the Azure Database Migration Service instance is created.
 > - Azure Database Migration Service doesn't initiate any backups. Instead, the service uses existing backups, which you might already have as part of your disaster recovery plan, for the migration.
 > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and transaction log) into a single backup media isn't supported.
 > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
- > [!TIP]
 > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process that uses Azure Data Studio with DMS automatically migrates your Always Encrypted keys to your target SQL Server on Azure Virtual Machine.
-
-* If your database backups are in a network file share, provide a machine on which to install the [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate the database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
-
- | Domain names | Outbound ports | Description |
- | -- | -- | |
 | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For a newly created data factory in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in the format `{datafactory}.{region}.datafactory.azure.net`. For an older data factory, if you don't see the FQDN in your self-hosted integration runtime key, use `*.frontend.clouddatahub.net` instead. |
- | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share |
-
- > [!TIP]
- > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
-
-* The self-hosted integration runtime is installed on a machine that connects to the source SQL Server instance and the network file share where the backup files are located. Enable outbound port 445 to allow access to the network file share. Also see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations)
-* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider), or use the PowerShell sketch after this list.
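Before you launch the wizard, you can spot-check the outbound requirements from the preceding table and register the resource provider from PowerShell. A hedged sketch; the first host name is a placeholder for the FQDN from your own integration runtime key:

```powershell
# Verify outbound TCP 443 from the machine that will host the integration runtime.
'contoso-df.eastus.datafactory.azure.net', 'download.microsoft.com' |
    ForEach-Object { Test-NetConnection -ComputerName $_ -Port 443 }

# One-time registration of the Microsoft.DataMigration resource provider.
Register-AzResourceProvider -ProviderNamespace 'Microsoft.DataMigration'
Get-AzResourceProvider -ProviderNamespace 'Microsoft.DataMigration' |
    Select-Object ProviderNamespace, RegistrationState
```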
-
-## Launch the Migrate to Azure SQL wizard in Azure Data Studio
-
-1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine).
-1. On the server connection, right-click and select **Manage**.
-1. On the server's home page, select the **Azure SQL Migration** extension.
-1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio.
-
-## Run database assessment, collect performance data and get Azure recommendation
-
-1. Select the database(s) to assess, and then select **Next**.
-1. Select SQL Server on Azure Virtual Machine as the target.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Screenshot of assessment confirmation.":::
-1. Select the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**.
-1. Select the **Get Azure recommendation** button.
-2. Pick the **Collect performance data now** option, enter a path for the performance logs to be collected, and select the **Start** button.
-3. Azure Data Studio collects performance data until you either stop the collection, select the **Next** button in the wizard, or close Azure Data Studio.
-4. After 10 minutes, you see a recommended configuration for your Azure SQL VM. You can also select the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the extra data collected.
-5. In the **SQL Server on Azure Virtual Machine** box above, select the **View details** button for more information about your recommendation.
-6. Close the view details box and select the **Next** button.
-
-## Configure migration settings
-
-1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**.
-2. Select **Online migration** as the migration mode.
- > [!NOTE]
 > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to the duration of the cutover at the end of the migration.
-3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
- > [!NOTE]
 > If your database backups are provided in an on-premises network share, DMS requires you to set up a self-hosted integration runtime in the next step of the wizard. The self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload the backups to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
- For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and the Azure storage account that the backup files are uploaded to.
- |Field |Description |
- ||-|
- |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backups files in the network share that don't belong to the valid backup set will be automatically ignored during the migration process. |
- |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
- |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- For backups stored in an Azure storage blob container, specify the target database name, resource group, Azure storage account, and blob container from the corresponding drop-down lists.
-
- |Field |Description |
- ||-|
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
 |**Storage account details** |The resource group, storage account, and container where the backup files are located. |
-
-4. Select **Next** to continue.
- > [!IMPORTANT]
 > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using the FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:
- | Scenario | SMB network share | Azure Storage account container |
- | | | |
- | Enabled from all networks | No extra steps | No extra steps |
- | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)|
- | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) |
-
- ### 1a - Azure Blob storage network configuration
- If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If your SHIR is installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account, as follows:
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details.":::
-
- To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps.
-
- ### 1b - Azure Blob storage network configuration
- If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section.
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration.":::
-
- To apply this specific configuration, locate your Azure Storage account. From the **Data storage** panel, select **Networking**, and then select the **Add existing virtual network** checkbox. A new panel opens; select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. You can find this information on the **Overview** page of the Azure virtual machine. If the subnet shows **Service endpoint required**, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps.
-
- ### 2a - Azure Blob storage network configuration (Private endpoint)
- If your backups are placed directly into an Azure Storage container, all the preceding steps are unnecessary because there's no integration runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance virtual network when filling out the **Add existing virtual network** pane.
-
- ### 2b - Azure Blob storage network configuration (Private endpoint)
- If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section.
-
-## Create Azure Database Migration Service
-
-1. Create a new Azure Database Migration Service instance or reuse an existing instance that you previously created.
- > [!NOTE]
 > If you previously created a DMS instance by using the Azure portal, you can't reuse it in the migration wizard in Azure Data Studio. Only a DMS instance created by using Azure Data Studio can be reused.
-1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown lists any existing DMS in the selected resource group.
-1. To reuse an existing DMS, select it from the dropdown list and the status of the self-hosted integration runtime will be displayed at the bottom of the page.
-1. To create a new DMS, select **Create new**.
-1. On the **Create Azure Database Migration Service** screen, provide the name for your DMS and select **Create**.
-1. After successful creation of the DMS, you're provided with details to **Setup integration runtime**.
-1. Select **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the prerequisites for connecting to the source SQL Server and the location containing the source backup.
-1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process.
-1. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager indicating that you can continue to **Register**.
-1. After successfully completing the registration of self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio.
-1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime and select **Done**.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
-1. Review the summary and select **Done** to start the database migration.
-
-## Monitor your migration
-
-1. Under **Database migration status**, you can track the migrations in progress, migrations completed, and migrations failed (if any).
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name.
-1. The migration details page displays the backup files and the corresponding status:
-
- | Status | Description |
- |--|-|
- | Arrived | Backup file arrived in the source backup location and validated |
- | Uploading | Integration runtime is currently uploading the backup file to Azure storage|
- | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to SQL Server on Azure Virtual Machine|
- | Restored | Backup file is successfully restored on SQL Server on Azure Virtual Machine |
- | Canceled | Migration process was canceled |
- | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain |
-
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/online-to-vm-migration-status-detailed.png" alt-text="online vm backup restore details":::
-
-## Complete migration cutover
-
-The final step of the tutorial is to complete the migration cutover. The completion ensures the migrated database in SQL Server on Azure Virtual Machine is ready for use. Downtime is required for applications that connect to the database and the timing of the cutover needs to be carefully planned with business or application stakeholders.
-
-To complete the cutover:
-
-1. Stop all incoming transactions to the source database.
-2. Make application configuration changes to point to the target database in SQL Server on Azure Virtual Machines.
-3. Take a final log backup of the source database in the specified backup location.
-4. Put the source database in read-only mode so that users can read data from the database but not modify it.
-5. Ensure all database backups have the status *Restored* in the monitoring details page.
-6. Select *Complete cutover* in the monitoring details page.
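Steps 3 and 4 are typically run as T-SQL against the source instance. A sketch using the SqlServer PowerShell module; the instance, database, and share names are hypothetical:

```powershell
# Take the final log backup into the backup location the migration is watching,
# then put the source database in read-only mode before cutover.
$query = @"
USE master;
BACKUP LOG [AdventureWorks]
    TO DISK = N'\\fileshare\backups\AdventureWorks_final.trn';
ALTER DATABASE [AdventureWorks] SET READ_ONLY WITH ROLLBACK IMMEDIATE;
"@
Invoke-Sqlcmd -ServerInstance 'source-sql01' -Query $query
```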
-
-During the cutover process, the migration status changes from *in progress* to *completing*. When the cutover process is completed, the migration status changes to *succeeded*, indicating that the database migration is successful and that the migrated database is ready for use.
-
-## Limitations
-
-Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
---
-## Next steps
-
-* To learn how to migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server).
-* For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
-* For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
-* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
Before you begin the tutorial:
  - Contributor for the target managed instance (and Storage Account to upload your backups of the TDE certificate files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target managed instance or the Azure storage account.
  - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
- - As an alternative to using the above built-in roles, you can assign a custom role. For more information, see [Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS](resource-custom-roles-sql-db-managed-instance-ads.md).
+ - As an alternative to using the above built-in roles, you can assign a custom role. For more information, see [Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS](/data-migration/sql-server/managed-instance/custom-roles).
- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete the
Check the following step-by-step tutorials for more information about migrating databases online or offline to Azure SQL Managed Instance targets:
- - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance online](./tutorial-sql-server-managed-instance-offline-ads.md)
- - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline](./tutorial-sql-server-managed-instance-offline-ads.md)
+ - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance online](/data-migration/sql-server/managed-instance/database-migration-service)
+ - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline](/data-migration/sql-server/managed-instance/database-migration-service)
## Post-migration steps
The following table describes the current status of the TDE-enabled database mig
## Related content

- [Migrate databases with Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md)
-- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](tutorial-sql-server-azure-sql-database-offline.md)
-- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](tutorial-sql-server-managed-instance-online-ads.md)
-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](tutorial-sql-server-to-virtual-machine-online-ads.md)
+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/data-migration/sql-server/database/database-migration-service)
+- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/data-migration/sql-server/managed-instance/database-migration-service)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/data-migration/sql-server/virtual-machines/database-migration-service)
education-hub Custom Tenant Set Up Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/custom-tenant-set-up-classroom.md
- Title: Create a custom Azure Classroom tenant and billing profile
-description: This article shows you how to make a custom tenant and billing profile for educators in your organization.
- Previously updated : 2/22/2024
-# Create a custom tenant and billing profile for Azure Classroom
-
-This article is for IT admins who use Azure Classroom (subject to regional availability). When you sign up for this offer, you should already have a tenant and billing profile created. But this article shows you how to create a custom tenant and billing profile and then associate them with an educator.
-
-## Prerequisites
-
-You must be signed up for Azure Classroom.
-
-## Create a new tenant
-
-1. Go to the [Azure portal](https://ms.portal.azure.com/), search for **entra**, and select the **Microsoft Entra ID** result.
-1. On the **Manage tenants** tab, select **Create**.
-1. Complete the tenant information.
-1. On the **Tenant details** pane, copy the **Tenant ID** value for the newly created tenant. You'll use it in the next procedure.
-
- :::image type="content" source="media/custom-tenant-set-up-classroom/save-tenant-id.png" alt-text="Screenshot that shows tenant details and the button for copying the tenant ID." border="true":::
-
-## Associate the new tenant with a university tenant
-
-1. Go to **Cost Management** and select **Access control (IAM)**.
-1. Select **Associated billing tenants**.
-1. Select **Add** and paste the tenant ID of the newly created tenant.
-1. Select the box for billing management.
-1. Select **Add** to complete the association between the newly created tenant and university tenant.
-
-## Invite an educator to the newly created tenant
-
-1. Switch to the newly created tenant.
-1. Go to **Users**, and then select **New user**.
-1. On the **New user** pane, select **Invite user**, fill in the **Identity** information, and change the role to **Global Administrator**. Then select **Invite**.
-
- :::image type="content" source="media/custom-tenant-set-up-classroom/add-user.png" alt-text="Screenshot of selections for inviting an existing user to a tenant." border="true":::
-1. Tell the educator to accept the invitation to this tenant.
-1. After the educator joins the tenant, go to the tenant properties and select **Yes** under **Access management for Azure resources**.
-
-## Next step
-
-Now that you've created a custom tenant, you can go to the Azure Education Hub and begin distributing credit to educators to use in labs.
-
-> [!div class="nextstepaction"]
-> [Create an assignment and allocate credit](create-assignment-allocate-credit.md)
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Forced Tunnel mode can't be configured at run time. You can either redeploy the
## Outbound SNAT support
-All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. Azure Firewall doesn't SNAT when the destination IP is a private IP range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918).
+All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. When Azure Firewall has multiple public IPs configured to provide outbound connectivity, it uses them as needed based on available ports, and moves to the next available public IP only after no more connections can be made from the current one.
+
+In scenarios where you have high throughput or dynamic traffic patterns, we recommend using an [Azure NAT Gateway](/azure/nat-gateway/nat-overview). Azure NAT Gateway dynamically selects SNAT ports to provide outbound connectivity, so all the SNAT ports provided by its associated IP addresses are available on demand. To learn more about how to integrate NAT Gateway with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](/azure/firewall/integrate-with-nat-gateway).
+
+Azure NAT Gateway can be used with Azure Firewall by associating NAT Gateway to the Azure Firewall subnet. See the [Integrate NAT gateway with Azure Firewall](/azure/nat-gateway/tutorial-hub-spoke-nat-firewall) tutorial for guidance on this configuration.
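As a rough Azure PowerShell sketch of that association (resource names are placeholders; the linked tutorial is the authoritative walkthrough):

```powershell
# Attach an existing NAT gateway to the AzureFirewallSubnet so outbound
# connections use the NAT gateway's SNAT ports instead of the firewall's.
$natGateway = Get-AzNatGateway -ResourceGroupName 'hub-rg' -Name 'hub-natgw'
$vnet       = Get-AzVirtualNetwork -ResourceGroupName 'hub-rg' -Name 'hub-vnet'
$subnet     = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'AzureFirewallSubnet'
$subnet.NatGateway = $natGateway
$vnet | Set-AzVirtualNetwork
```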
+
+Azure Firewall doesn't SNAT when the destination IP is a private IP range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918).
If your organization uses a public IP address range for private networks, Azure Firewall will SNAT the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. You can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (Minimum of two instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 1,248,000 available SNAT ports with this configuration. For example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps.
-One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
+One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. When Azure Firewall is associated with multiple public IP addresses, you need to apply the filtering requirements across all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
A better option to scale and dynamically allocate outbound SNAT ports is to use an [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md). It provides 64,512 SNAT ports per public IP address and supports up to 16 public IP addresses. This effectively provides up to 1,032,192 outbound SNAT ports. Azure NAT Gateway also [dynamically allocates SNAT ports](/azure/nat-gateway/nat-gateway-resource#nat-gateway-dynamically-allocates-snat-ports) on a subnet level, so all the SNAT ports provided by its associated IP addresses are available on demand to provide outbound connectivity.
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md
Title: Troubleshoot common errors
description: Learn how to troubleshoot problems with creating policy definitions, the various SDKs, and the add-on for Kubernetes.
Previously updated : 10/26/2022
Last updated : 06/27/2024
+
# Troubleshoot errors with using Azure Policy
-When you create policy definitions, work with SDKs, or set up the
-[Azure Policy for Kubernetes](../concepts/policy-for-kubernetes.md) add-on, you might run into
-errors. This article describes various general errors that might occur, and it suggests ways to
-resolve them.
+When you create policy definitions, work with SDKs, or set up the [Azure Policy for Kubernetes](../concepts/policy-for-kubernetes.md) add-on, you might run into errors. This article describes various general errors that might occur, and it suggests ways to resolve them.
## Find error details The location of the error details depends on what aspect of Azure Policy you're working with. -- If you're working with a custom policy, go to the Azure portal to get linting feedback about the
- schema, or review resulting [compliance data](../how-to/get-compliance-data.md) to see how
- resources were evaluated.
-- If you're working with any of the various SDKs, the SDK provides details about why the function
- failed.
-- If you're working with the add-on for Kubernetes, start with the
- [logging](../concepts/policy-for-kubernetes.md#logging) in the cluster.
+- If you're working with a custom policy, go to the Azure portal to get linting feedback about the schema, or review resulting [compliance data](../how-to/get-compliance-data.md) to see how resources were evaluated.
+- If you're working with any of the various SDKs, the SDK provides details about why the function failed.
+- If you're working with the add-on for Kubernetes, start with the [logging](../concepts/policy-for-kubernetes.md#logging) in the cluster.
## General errors
The location of the error details depends on what aspect of Azure Policy you're
#### Issue
-An incorrect or nonexistent alias is used in a policy definition. Azure Policy uses
-[aliases](../concepts/definition-structure.md#aliases) to map to Azure Resource Manager properties.
+An incorrect or nonexistent alias is used in a policy definition. Azure Policy uses [aliases](../concepts/definition-structure-alias.md) to map to Azure Resource Manager properties.
#### Cause
An incorrect or nonexistent alias is used in a policy definition.
#### Resolution
-First, validate that the Resource Manager property has an alias. To look up the available aliases,
-go to [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) or the SDK.
-If the alias for a Resource Manager property doesn't exist, create a support ticket.
+First, validate that the Resource Manager property has an alias. To look up the available aliases, go to [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) or the SDK. If the alias for a Resource Manager property doesn't exist, create a support ticket.
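As one example, the Az.Resources PowerShell module can enumerate the aliases for a provider namespace; a minimal sketch:

```powershell
# List the alias names defined for the Microsoft.Storage resource provider.
Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Storage' |
    Select-Object -ExpandProperty Aliases |
    Select-Object -ExpandProperty Name -First 20
```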
### Scenario: Evaluation details aren't up to date
A resource is in the _Not Started_ state, or the compliance details aren't curre
#### Cause
-A new policy or initiative assignment takes about five minutes to be applied. New or updated
-resources within scope of an existing assignment become available in about 15 minutes. A
-standard compliance scan occurs every 24 hours. For more information, see
-[evaluation triggers](../how-to/get-compliance-data.md#evaluation-triggers).
+A new policy or initiative assignment takes about five minutes to be applied. New or updated resources within scope of an existing assignment become available in about 15 minutes. A standard compliance scan occurs every 24 hours. For more information, see [evaluation triggers](../how-to/get-compliance-data.md#evaluation-triggers).
#### Resolution
-First, wait an appropriate amount of time for an evaluation to finish and compliance results to
-become available in the Azure portal or the SDK. To start a new evaluation scan with Azure
-PowerShell or the REST API, see
-[On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
+First, wait an appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or the SDK. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
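For example, the Az.PolicyInsights module can trigger the scan; this sketch targets a hypothetical resource group (omit `-ResourceGroupName` to scan the whole subscription):

```powershell
# Trigger an on-demand compliance evaluation and wait for it to complete.
Start-AzPolicyComplianceScan -ResourceGroupName 'my-rg'
```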
### Scenario: Compliance isn't as expected #### Issue
-A resource isn't in either the _Compliant_ or _Not-Compliant_ evaluation state that's expected for
-the resource.
+A resource isn't in either the _Compliant_ or _Not-Compliant_ evaluation state expected for the resource.
#### Cause
-The resource isn't in the correct scope for the policy assignment, or the policy definition doesn't
-operate as intended.
+The resource isn't in the correct scope for the policy assignment, or the policy definition doesn't operate as intended.
#### Resolution
-To troubleshoot your policy definition, do the following:
-
-1. First, wait the appropriate amount of time for an evaluation to finish and compliance results
- to become available in the Azure portal or SDK.
+To troubleshoot your policy definition, follow these steps:
-1. To start a new evaluation scan with Azure PowerShell or the REST API, see
- [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
+1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or SDK.
+1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
1. Ensure that the assignment parameters and assignment scope are set correctly.
-1. Check the [policy definition mode](../concepts/definition-structure.md#mode):
+1. Check the [policy definition mode](../concepts/definition-structure-basics.md#mode):
- The mode should be `all` for all resource types. - The mode should be `indexed` if the policy definition checks for tags or location.
-1. Ensure that the scope of the resource isn't
- [excluded](../concepts/assignment-structure.md#excluded-scopes) or
- [exempt](../concepts/exemption-structure.md).
-1. If compliance for a policy assignment shows `0/0` resources, no resources were determined to be
- applicable within the assignment scope. Check both the policy definition and the assignment
- scope.
+1. Ensure that the scope of the resource isn't [excluded](../concepts/assignment-structure.md#excluded-scopes) or [exempt](../concepts/exemption-structure.md).
+1. If compliance for a policy assignment shows `0/0` resources, no resources were determined to be applicable within the assignment scope. Check both the policy definition and the assignment scope.
1. For a noncompliant resource that was expected to be compliant, see [determine the reasons for noncompliance](../how-to/determine-non-compliance.md). The comparison of the definition to the evaluated property value indicates why a resource was noncompliant. - If the **target value** is wrong, revise the policy definition. - If the **current value** is wrong, validate the resource payload through `resources.azure.com`.
-1. For a [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes)
- definition that supports a RegEx string parameter (such as `Microsoft.Kubernetes.Data` and the
- built-in definition "Container images should be deployed from trusted registries only"), validate
- that the [RegEx string](/dotnet/standard/base-types/regular-expression-language-quick-reference)
- parameter is correct.
-1. For other common issues and solutions, see
- [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected).
+1. For a [Resource Provider mode](../concepts/definition-structure-basics.md#resource-provider-modes) definition that supports a RegEx string parameter (such as `Microsoft.Kubernetes.Data` and the built-in definition "Container images should be deployed from trusted registries only"), validate that the [RegEx string](/dotnet/standard/base-types/regular-expression-language-quick-reference) parameter is correct.
+1. For other common issues and solutions, see [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected).
-If you still have an issue with your duplicated and customized built-in policy definition or custom
-definition, create a support ticket under **Authoring a policy** to route the issue correctly.
+If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly.
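When you dig into unexpected results, the raw evaluation records can also help. A sketch with the Az.PolicyInsights module; the resource group name is a placeholder:

```powershell
# Pull recent noncompliant policy records for one resource group.
Get-AzPolicyState -ResourceGroupName 'my-rg' `
    -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object Timestamp, PolicyDefinitionName, ResourceId -First 10
```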
### Scenario: Enforcement not as expected #### Issue
-A resource that you expect Azure Policy to act on isn't being acted on, and there's no entry in the
-[Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md).
+A resource that you expect Azure Policy to act on isn't being acted on, and there's no entry in the [Azure Activity log](../../../azure-monitor/data-sources.md#azure-resources).
#### Cause
-The policy assignment has been configured for an
-[**enforcementMode**](../concepts/assignment-structure.md#enforcement-mode) setting of _Disabled_.
-While **enforcementMode** is disabled, the policy effect isn't enforced, and there's no entry in the
-Activity log.
+The policy assignment was configured for an [enforcementMode](../concepts/assignment-structure.md#enforcement-mode) setting of _Disabled_. While `enforcementMode` is disabled, the policy effect isn't enforced, and there's no entry in the Activity log.
#### Resolution
-Troubleshoot your policy assignment's enforcement by doing the following:
-
-1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to
- become available in the Azure portal or the SDK.
+Troubleshoot your policy assignment's enforcement by following these steps:
-1. To start a new evaluation scan with Azure PowerShell or the REST API, see
- [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
-1. Ensure that the assignment parameters and assignment scope are set correctly and that
- **enforcementMode** is _Enabled_.
-1. Check the [policy definition mode](../concepts/definition-structure.md#mode):
+1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or the SDK.
+1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
+1. Ensure that the assignment parameters and assignment scope are set correctly and that `enforcementMode` is _Enabled_.
+1. Check the [policy definition mode](../concepts/definition-structure-basics.md#mode):
- The mode should be `all` for all resource types. - The mode should be `indexed` if the policy definition checks for tags or location.
-1. Ensure that the scope of the resource isn't
- [excluded](../concepts/assignment-structure.md#excluded-scopes) or
- [exempt](../concepts/exemption-structure.md).
-1. Verify that the resource payload matches the policy logic. This can be done by
- [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or
- reviewing the Azure Resource Manager template (ARM template) properties.
-1. For other common issues and solutions, see
- [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected).
-
-If you still have an issue with your duplicated and customized built-in policy definition or custom
-definition, create a support ticket under **Authoring a policy** to route the issue correctly.
+1. Ensure that the scope of the resource isn't [excluded](../concepts/assignment-structure.md#excluded-scopes) or [exempt](../concepts/exemption-structure.md).
+1. Verify that the resource payload matches the policy logic. This verification can be done by [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or reviewing the Azure Resource Manager template (ARM template) properties.
+1. For other common issues and solutions, see [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected).
+
+If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly.
### Scenario: Denied by Azure Policy
Creation or update of a resource is denied.
#### Cause
-A policy assignment to the scope of your new or updated resource meets the criteria of a policy
-definition with a [Deny](../concepts/effects.md#deny) effect. Resources that meet these definitions
-are prevented from being created or updated.
+A policy assignment to the scope of your new or updated resource meets the criteria of a policy definition with a [Deny](../concepts/effect-deny.md) effect. Resources that meet these definitions are prevented from being created or updated.
#### Resolution
-The error message from a deny policy assignment includes the policy definition and policy assignment
-IDs. If the error information in the message is missed, it's also available in the
-[Activity log](../../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log). Use this
-information to get more details to understand the resource restrictions and adjust the resource
-properties in your request to match allowed values.
+The error message from a deny policy assignment includes the policy definition and policy assignment IDs. If the error information in the message is missed, it's also available in the [Activity log](../../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log). Use this information to get more details to understand the resource restrictions and adjust the resource properties in your request to match allowed values.
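For orientation, a minimal custom definition with a deny effect might look like the following sketch; the name and location list are illustrative, not a recommended policy:

```powershell
# Deny creating resources outside an allowed set of locations (illustrative only).
$rule = @'
{
  "if": {
    "not": { "field": "location", "in": [ "eastus", "westus2" ] }
  },
  "then": { "effect": "deny" }
}
'@
New-AzPolicyDefinition -Name 'deny-disallowed-locations' -Mode 'All' -Policy $rule
```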
### Scenario: Definition targets multiple resource types #### Issue
-A policy definition that includes multiple resource types fails validation during creation or update
-with the following error:
+A policy definition that includes multiple resource types fails validation during creation or update with the following error:
```error The policy definition '{0}' targets multiple resource types, but the policy rule is authored in a way that makes the policy not applicable to the target resource types '{1}'.
The policy definition '{0}' targets multiple resource types, but the policy rule
#### Cause
-The policy definition rule has one or more conditions that don't get evaluated by the target
-resource types.
+The policy definition rule has one or more conditions that don't get evaluated by the target resource types.
#### Resolution
-If an alias is used, make sure that the alias gets evaluated against only the resource type it
-belongs to by adding a type condition before it. An alternative is to split the policy definition
-into multiple definitions to avoid targeting multiple resource types.
+If an alias is used, make sure that the alias gets evaluated against only the resource type it belongs to by adding a type condition before it. An alternative is to split the policy definition into multiple definitions to avoid targeting multiple resource types.
### Scenario: Subscription limit exceeded

#### Issue
-An error message on the compliance page in Azure portal is shown when retrieving compliance for
-policy assignments.
+An error message on the compliance page in Azure portal is shown when retrieving compliance for policy assignments.
#### Cause
-The number of subscriptions under the selected scopes in the request has exceeded the limit of 5000
-subscriptions. The compliance results may be partially displayed.
+The number of subscriptions under the selected scopes in the request exceeded the limit of 5,000 subscriptions. The compliance results might be partially displayed.
#### Resolution
-Select a more granular scope with fewer child subscriptions to see the complete results.
+To see the complete results, select a more granular scope with fewer child subscriptions.
## Template errors
Select a more granular scope with fewer child subscriptions to see the complete
#### Issue
-Azure Policy supports a number of ARM template functions and functions that are available only in a
-policy definition. Resource Manager processes these functions as part of a deployment instead of as
-part of a policy definition.
+Azure Policy supports many ARM template functions and functions that are available only in a policy definition. Resource Manager processes these functions as part of a deployment instead of as part of a policy definition.
#### Cause
-Using supported functions, such as `parameter()` or `resourceGroup()`, results in the processed
-outcome of the function at deployment time instead of allowing the function for the policy
-definition and Azure Policy engine to process.
+Using supported functions, such as `parameters()` or `resourceGroup()`, results in the function being processed at deployment time, instead of leaving it for the policy definition and the Azure Policy engine to process.
#### Resolution
-To pass a function through as part of a policy definition, escape the entire string with `[` such
-that the property looks like `[[resourceGroup().tags.myTag]`. The escape character causes Resource
-Manager to treat the value as a string when it processes the template. Azure Policy then places the
-function into the policy definition, which allows it to be dynamic as expected. For more
-information, see
-[Syntax and expressions in Azure Resource Manager templates](../../../azure-resource-manager/templates/template-expressions.md).
+To pass a function through as part of a policy definition, escape the entire string with `[` such that the property looks like `[[resourceGroup().tags.myTag]`. The escape character causes Resource Manager to treat the value as a string when it processes the template. Azure Policy then places the function into the policy definition, which allows it to be dynamic as expected. For more information, see [Syntax and expressions in Azure Resource Manager templates](../../../azure-resource-manager/templates/template-expressions.md).
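For illustration, a minimal sketch of the escape inside an ARM template that deploys a policy definition (the definition name and tag name are hypothetical):

```json
{
  "type": "Microsoft.Authorization/policyDefinitions",
  "apiVersion": "2021-06-01",
  "name": "append-resource-group-tag",
  "properties": {
    "policyRule": {
      "if": {
        "field": "tags['myTag']",
        "exists": "false"
      },
      "then": {
        "effect": "append",
        "details": [
          {
            "field": "tags['myTag']",
            "value": "[[resourceGroup().tags.myTag]"
          }
        ]
      }
    }
  }
}
```

At deployment time, Resource Manager reduces `[[` to `[`, so the stored policy rule contains `[resourceGroup().tags.myTag]` for the Azure Policy engine to evaluate against each resource's resource group.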
## Add-on for Kubernetes installation errors
The generated password includes a comma (`,`), which the Helm Chart is splitting
#### Resolution
-When you run `helm install azure-policy-addon`, escape the comma (`,`) in the password value with a
-backslash (`\`).
+When you run `helm install azure-policy-addon`, escape the comma (`,`) in the password value with a backslash (`\`).
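For example, a hedged sketch (the chart reference and value name are placeholders; only the escaped comma matters):

```bash
# Without the backslash, Helm parses "a,b" passed to --set as two values;
# the backslash keeps the generated password intact as a single string.
helm install azure-policy-addon <chart-reference> \
  --set <password-value-name>="p@ss\,w0rd"
```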
### Scenario: Installation by using a Helm Chart fails because the name already exists
The `helm install azure-policy-addon` command fails, and it returns the followin
#### Cause
-The Helm Chart with the name `azure-policy-addon` has already been installed or partially installed.
+The Helm Chart with the name `azure-policy-addon` was already installed or partially installed.
#### Resolution
-Follow the instructions to
-[remove the Azure Policy for Kubernetes add-on](../concepts/policy-for-kubernetes.md#remove-the-add-on),
-then rerun the `helm install azure-policy-addon` command.
+Follow the instructions to [remove the Azure Policy for Kubernetes add-on](../concepts/policy-for-kubernetes.md#remove-the-add-on), then rerun the `helm install azure-policy-addon` command.
### Scenario: Azure virtual machine user-assigned identities are replaced by system-assigned managed identities

#### Issue
-After you assign Guest Configuration policy initiatives to audit settings inside a machine, the
-user-assigned managed identities that were assigned to the machine are no longer assigned. Only a
-system-assigned managed identity is assigned.
+After you assign Guest Configuration policy initiatives to audit settings inside a machine, the user-assigned managed identities that were assigned to the machine are no longer assigned. Only a system-assigned managed identity is assigned.
#### Cause
-The policy definitions that were previously used in Guest Configuration DeployIfNotExists
-definitions ensured that a system-assigned identity is assigned to the machine, but they also
-removed the user-assigned identity assignments.
+The policy definitions that were previously used in Guest Configuration `deployIfNotExists` definitions ensured that a system-assigned identity is assigned to the machine. But they also removed the user-assigned identity assignments.
#### Resolution
-The definitions that previously caused this issue appear as _\[Deprecated\]_, and they're replaced
-by policy definitions that manage prerequisites without removing user-assigned managed identities. A
-manual step is required. Delete any existing policy assignments that are marked as
-_\[Deprecated\]_, and replace them with the updated prerequisite policy initiative and policy
-definitions that have the same name as the original.
+The definitions that previously caused this issue appear as `[Deprecated]`, and are replaced by policy definitions that manage prerequisites without removing user-assigned managed identities. A manual step is required. Delete any existing policy assignments that are marked as `[Deprecated]`, and replace them with the updated prerequisite policy initiative and policy definitions that have the same name as the original.
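As a hedged Azure PowerShell sketch, you can list the assignments that still reference deprecated definitions (property paths vary across `Az.Resources` versions):

```azurepowershell
# Sketch: find policy assignments whose display name is marked as deprecated.
# Older Az.Resources versions expose the display name at $_.Properties.DisplayName.
Get-AzPolicyAssignment | Where-Object {
    ($_.DisplayName + $_.Properties.DisplayName) -match '\[Deprecated\]'
}
```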
-For a detailed narrative, see the blog post
-[Important change released for Guest Configuration audit policies](https://techcommunity.microsoft.com/t5/azure-governance-and-management/important-change-released-for-guest-configuration-audit-policies/ba-p/1655316).
+For a detailed narrative, see the blog post [Important change released for Guest Configuration audit policies](https://techcommunity.microsoft.com/t5/azure-governance-and-management/important-change-released-for-guest-configuration-audit-policies/ba-p/1655316).
## Add-on for Kubernetes general errors
Ensure that the domains and ports mentioned in the following article are open:
#### Issue
-The add-on can't reach the Azure Policy service endpoint, and it returns one of the following
-errors:
+The add-on can't reach the Azure Policy service endpoint, and it returns one of the following errors:
- `azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://gov-prod-policy-data.trafficmanager.net/checkDataPolicyCompliance?api-version=2019-01-01-preview: StatusCode=404`
- `adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod kube-system/azure-policy-8c785548f-r882p in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>`

#### Cause
-This error occurs when _aad-pod-identity_ is installed on the cluster and the _kube-system_ pods
-aren't excluded in _aad-pod-identity_.
+This error occurs when `aad-pod-identity` is installed on the cluster and the _kube-system_ pods aren't excluded in `aad-pod-identity`.
-The _aad-pod-identity_ component Node Managed Identity (NMI) pods modify the nodes' iptables to
-intercept calls to the Azure instance metadata endpoint. This setup means that any request that's
-made to the metadata endpoint is intercepted by NMI, even if the pod doesn't use _aad-pod-identity_.
-The _AzurePodIdentityException_ CustomResourceDefinition (CRD) can be configured to inform
-_aad-pod-identity_ that any requests to a metadata endpoint that originate from a pod matching the
-labels defined in the CRD should be proxied without any processing in NMI.
+The `aad-pod-identity` component Node Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure instance metadata endpoint. This setup means that any request made to the metadata endpoint is intercepted by NMI, even if the pod doesn't use `aad-pod-identity`. The `AzurePodIdentityException` CustomResourceDefinition (CRD) can be configured to inform `aad-pod-identity` that any requests to a metadata endpoint that originate from a pod matching the labels defined in the CRD should be proxied without any processing in NMI.
#### Resolution
-Exclude the system pods that have the `kubernetes.azure.com/managedby: aks` label in _kube-system_
-namespace in _aad-pod-identity_ by configuring the _AzurePodIdentityException_ CRD.
+Exclude the system pods that have the `kubernetes.azure.com/managedby: aks` label in the _kube-system_ namespace in `aad-pod-identity` by configuring the `AzurePodIdentityException` CRD.
-For more information, see
-[Disable the Azure Active Directory (Azure AD) pod identity for a specific pod/application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception).
+For more information, see [Disable the Azure Active Directory (Azure AD) pod identity for a specific pod/application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception).
To configure an exception, follow this example:
spec:
#### Issue
-The add-on can reach the Azure Policy service endpoint, but the add-on logs display one of the
-following errors:
+The add-on can reach the Azure Policy service endpoint, but the add-on logs display one of the following errors:
- `The resource provider 'Microsoft.PolicyInsights' is not registered in subscription '{subId}'. See https://aka.ms/policy-register-subscription for how to register subscriptions.`
following errors:
#### Cause
-The 'Microsoft.PolicyInsights' resource provider isn't registered. It must be registered for the
-add-on to get policy definitions and return compliance data.
+The `Microsoft.PolicyInsights` resource provider isn't registered. It must be registered for the add-on to get policy definitions and return compliance data.
#### Resolution
-Register the 'Microsoft.PolicyInsights' resource provider in the cluster subscription. For
-instructions, see
-[Register a resource provider](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+Register the `Microsoft.PolicyInsights` resource provider in the cluster subscription. For instructions, see [Register a resource provider](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
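For example, a minimal Azure CLI sketch (assumes your CLI context is set to the cluster subscription):

```azurecli
az provider register --namespace 'Microsoft.PolicyInsights'

# Confirm the provider shows as Registered
az provider show --namespace 'Microsoft.PolicyInsights' --query registrationState
```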
### Scenario: The subscription is disabled
The add-on can reach the Azure Policy service endpoint, but the following error
#### Cause
-This error means that the subscription was determined to be problematic, and the feature flag
-`Microsoft.PolicyInsights/DataPlaneBlocked` was added to block the subscription.
+This error means that the subscription was determined to be problematic, and the feature flag `Microsoft.PolicyInsights/DataPlaneBlocked` was added to block the subscription.
#### Resolution
To investigate and resolve this issue, [contact the feature team](mailto:azuredg
#### Issue
-When attempting to create a custom policy definition from the Azure portal page for policy
-definitions, you select the "Duplicate definition" button. After assigning the policy, you
-find machines are _NonCompliant_ because no guest configuration assignment resource exists.
+When attempting to create a custom policy definition from the Azure portal page for policy definitions, you select the **Duplicate definition** button. After assigning the policy, you find machines are _NonCompliant_ because no guest configuration assignment resource exists.
#### Cause
-Guest configuration relies on custom metadata added to policy definitions when
-creating guest configuration assignment resources. The "Duplicate definition" activity in
-the Azure portal does not copy custom metadata.
+Guest configuration relies on custom metadata added to policy definitions when creating guest configuration assignment resources. The _Duplicate definition_ activity in the Azure portal doesn't copy custom metadata.
#### Resolution
New-AzPolicyDefinition -name (new-guid).guid -DisplayName "$($def.DisplayName) (
#### Issue
-In the event of a Kubernetes cluster connectivity failure, evaluation for newly created or updated resources may be bypassed due to Gatekeeper's fail-open behavior.
+If there's a Kubernetes cluster connectivity failure, evaluation for newly created or updated resources might be bypassed due to Gatekeeper's fail-open behavior.
#### Cause
The GK fail-open model is by design and based on community feedback. Gatekeeper
#### Resolution
-In the above event, the error case can be monitored from the [admission webhook metrics](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhook-metrics) provided by the kube-apiserver. And even if evaluation is bypassed at creation time and an object is created, it will still be reported on Azure Policy compliance as non-compliant as a flag to customers.
+In this failure case, you can monitor the error from the [admission webhook metrics](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhook-metrics) provided by the `kube-apiserver`. If evaluation is bypassed at creation time and an object is created, it's still reported as noncompliant in Azure Policy compliance results to flag it for customers.
-Regardless of the above, in such a scenario, Azure policy will still retain the last known policy on the cluster and keep the guardrails in place.
+Regardless of the scenario, Azure Policy retains the last known policy on the cluster and keeps the guardrails in place.
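To check whether a cluster's webhooks are fail-open, a sketch with `kubectl` (the output layout is an assumption; Gatekeeper's fail-open behavior corresponds to a failure policy of `Ignore`):

```bash
# List validating admission webhooks and their failure policies.
kubectl get validatingwebhookconfigurations \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'
```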
## Next steps
-If your problem isn't listed in this article or you can't resolve it, get support by visiting one of
-the following channels:
--- Get answers from experts through
- [Microsoft Q&A](/answers/topics/azure-policy.html).
-- Connect with [@AzureSupport](https://twitter.com/azuresupport). This official Microsoft Azure
- resource on Twitter helps improve the customer experience by connecting the Azure community to the
- right answers, support, and experts.
-- If you still need help, go to the
- [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support
- request**.
+If your problem isn't listed in this article or you can't resolve it, get support by visiting one of the following channels:
+
+- Get answers from experts through [Microsoft Q&A](/answers/topics/azure-policy.html).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport). This official Microsoft Azure resource on Twitter helps improve the customer experience by connecting the Azure community to the right answers, support, and experts.
+- If you still need help, go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support ticket**.
governance First Query Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-azurecli.md
Title: "Quickstart: Run Resource Graph query using Azure CLI" description: In this quickstart, you run a Resource Graph query using Azure CLI and the resource-graph extension. Previously updated : 06/26/2024 Last updated : 06/27/2024
This quickstart describes how to run an Azure Resource Graph query using the Azu
- [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension. - A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session.
-## Connect to Azure
-
-From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-
-```azurecli
-az login
-
-# Run these commands if you have multiple subscriptions
-az account list --output table
-az account set --subscription <subscriptionID>
-```
-
## Install the extension

To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph`, a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation.
To enable Azure CLI to query resources using Azure Resource Graph, the Resource
For more information about Azure CLI extensions, go to [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+## Connect to Azure
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
+
+```azurecli
+az login
+
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
+
## Run a query

After the Azure CLI extension is added to your environment, you can run a tenant-based query. The query in this example returns five Azure resources with the `name` and `type` of each resource. To query by [management group](../management-groups/overview.md) or subscription, use the `--management-groups` or `--subscriptions` arguments.
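As a sketch, the tenant-scoped query described here looks like the following:

```azurecli
az graph query -q "Resources | project name, type | limit 5"
```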
az logout
## Next steps
-In this quickstart, you ran Azure Resource Graph queries using the extension for Azure CLI. To learn more, go to the query language details article.
+In this quickstart, you ran Azure Resource Graph queries using the extension for Azure CLI. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"] > [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
Title: "Quickstart: Run Resource Graph query using Azure PowerShell" description: In this quickstart, you run an Azure Resource Graph query using the module for Azure PowerShell. Previously updated : 04/24/2024 Last updated : 06/27/2024 # Quickstart: Run Resource Graph query using Azure PowerShell
-This quickstart describes how to run an Azure Resource Graph query using the `Az.ResourceGraph` module for Azure PowerShell. The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions. When you're finished, you can remove the module.
+This quickstart describes how to run an Azure Resource Graph query using the `Az.ResourceGraph` module for Azure PowerShell. The module is included with the latest version of Azure PowerShell and adds [cmdlets](/powershell/module/az.resourcegraph) for Resource Graph.
+
+The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions.
## Prerequisites

- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- [PowerShell](/powershell/scripting/install/installing-powershell).
-- [Azure PowerShell](/powershell/azure/install-azure-powershell).
+- Latest versions of [PowerShell](/powershell/scripting/install/installing-powershell) and [Azure PowerShell](/powershell/azure/install-azure-powershell).
- [Visual Studio Code](https://code.visualstudio.com/).

## Install the module
-Install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher. If you installed the latest versions of PowerShell and Azure PowerShell, you already have the required version.
+If you installed the latest versions of PowerShell and Azure PowerShell, you already have the `Az.ResourceGraph` module and required version of PowerShellGet.
+
+### Optional module installation
+
+Use the following steps to install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher.
1. Verify your PowerShellGet version:
If a query doesn't return results from a subscription you already have access to
## Clean up resources
+To sign out of your Azure PowerShell session:
+
+```azurepowershell
+Disconnect-AzAccount
+```
+
+### Optional clean up steps
+
+If you installed the latest version of Azure PowerShell, the `Az.ResourceGraph` module is included and shouldn't be removed. Use the following steps only if you did a manual install of the `Az.ResourceGraph` module and want to remove it.
+ To remove the `Az.ResourceGraph` module from your PowerShell session, run the following command: ```azurepowershell
Uninstall-Module -Name Az.ResourceGraph
A message might be displayed that _module Az.ResourceGraph is currently in use_. If so, you need to shut down your PowerShell session and start a new session. Then run the command to uninstall the module from your computer.
-To sign out of your Azure PowerShell session:
-
-```azurepowershell
-Disconnect-AzAccount
-```
- ## Next steps In this quickstart, you added the Resource Graph module to your Azure PowerShell environment and ran a query. To learn more, go to the query language details page.
governance Shared Query Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-cli.md
Title: "Quickstart: Create Resource Graph shared query using Azure CLI" description: In this quickstart, you create an Azure Resource Graph shared query using Azure CLI and the resource-graph extension. Previously updated : 06/26/2024 Last updated : 06/27/2024
A shared query can be run from Azure CLI with the _experimental_ feature's comma
- [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension. - A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session.
-## Connect to Azure
-
-From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-
-```azurecli
-az login
-
-# Run these commands if you have multiple subscriptions
-az account list --output table
-az account set --subscription <subscriptionID>
-```
-
## Install the extension

To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph`, a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation.
To enable Azure CLI to query resources using Azure Resource Graph, the Resource
For more information about Azure CLI extensions, go to [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+## Connect to Azure
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
+
+```azurecli
+az login
+
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
+
## Create a shared query

Create a resource group and a shared query that summarizes the count of all resources grouped by location.
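A minimal Azure CLI sketch of this step (the names and description mirror the values used elsewhere in this quickstart):

```azurecli
az group create --name demoSharedQuery --location westus2

az graph shared-query create --name "Summarize resources by location" \
  --resource-group demoSharedQuery \
  --description "This shared query summarizes resources by location for a pinnable map graphic." \
  --graph-query "Resources | summarize count() by location"
```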
You can verify the shared query works using Azure Resource Graph Explorer. To ch
1. Change **Type** to _Shared queries_.
1. Select the query _Summarize resources by location_.
1. Select **Run query** and view the output in the **Results** tab.
+1. Select **Charts** and then select **Map** to view the location map.
You can also run the query from your resource group.

1. In Azure, go to the resource group, _demoSharedQuery_.
1. From the **Overview** tab, select the query _Summarize resources by location_.
1. Select the **Results** tab.
+1. Select **Charts** and then select **Map** to view the location map.
## Clean up resources
-To remove the resource group and shared query:
+To remove the shared query:
+
+```azurecli
+az graph shared-query delete --name "Summarize resources by location" --resource-group demoSharedQuery
+```
+
+When a resource group is deleted, the resource group and all its resources are deleted. To remove the resource group:
```azurecli
az group delete --name demoSharedQuery
az logout
## Next steps
-In this quickstart, you added the Resource Graph extension to your Azure CLI environment and
-created a shared query. To learn more about the Resource Graph language, continue to the query
-language details page.
+In this quickstart, you added the Resource Graph extension to your Azure CLI environment and created a shared query. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"] > [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
governance Shared Query Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-powershell.md
Title: 'Quickstart: Create a shared query with Azure PowerShell'
-description: In this quickstart, you follow the steps to create a Resource Graph shared query using Azure PowerShell.
Previously updated : 11/09/2022
+ Title: "Quickstart: Create a Resource Graph shared query using Azure PowerShell"
+description: In this quickstart, you create a Resource Graph shared query using Azure PowerShell.
Last updated : 06/27/2024 + # Quickstart: Create a Resource Graph shared query using Azure PowerShell
-This article describes how you can create an Azure Resource Graph shared query using the
-[Az.ResourceGraph](/powershell/module/az.resourcegraph) PowerShell module.
+In this quickstart, you create an Azure Resource Graph shared query using the `Az.ResourceGraph` Azure PowerShell module. The module is included with the latest version of Azure PowerShell and adds [cmdlets](/powershell/module/az.resourcegraph) for Resource Graph.
+
+A shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. You can also run a shared query from the Azure portal. When you finish, you can remove the `Az.ResourceGraph` module.
## Prerequisites

-- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Latest versions of [PowerShell](/powershell/scripting/install/installing-powershell) and [Azure PowerShell](/powershell/azure/install-azure-powershell).
+- [Visual Studio Code](https://code.visualstudio.com/).
+
+## Install the module
+If you installed the latest versions of PowerShell and Azure PowerShell, you already have the `Az.ResourceGraph` module and required version of PowerShellGet.
- > [!IMPORTANT]
- > While the **Az.ResourceGraph** PowerShell module is in preview, you must install it separately
- > using the `Install-Module` cmdlet.
+### Optional module installation
- ```azurepowershell-interactive
- Install-Module -Name Az.ResourceGraph -Scope CurrentUser -Repository PSGallery -Force
- ```
+Use the following steps to install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher.
-- If you have multiple Azure subscriptions, choose the appropriate subscription in which the
- resources should be billed. Select a specific subscription using the
- [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+1. Verify your PowerShellGet version:
- ```azurepowershell-interactive
- Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
- ```
+ ```azurepowershell
+ Get-Module -Name PowerShellGet
+ ```
-## Create a Resource Graph shared query
+ If you need to update, go to [PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
-With the **Az.ResourceGraph** PowerShell module added to your environment of choice, it's time to create
-a Resource Graph shared query. The shared query is an Azure Resource Manager object that you can
-grant permission to or run in Azure Resource Graph Explorer. The query summarizes the count of all
-resources grouped by _location_.
+1. Install the module:
+
+ ```azurepowershell
+ Install-Module -Name Az.ResourceGraph -Repository PSGallery -Scope CurrentUser
+ ```
-1. Create a resource group with
- [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) to store the Azure
- Resource Graph shared query. This resource group is named `resource-graph-queries` and the
- location is `westus2`.
+ The command installs the module in the `CurrentUser` scope. If you need to install in the `AllUsers` scope, run the installation from an administrative PowerShell session.
- ```azurepowershell-interactive
- # Login first with `Connect-AzAccount` if not using Cloud Shell
+1. Verify the module was installed:
- # Create the resource group
- New-AzResourceGroup -Name resource-graph-queries -Location westus2
+ ```azurepowershell
+ Get-Command -Module Az.ResourceGraph -CommandType Cmdlet
```
-1. Create the Azure Resource Graph shared query using the **Az.ResourceGraph** PowerShell module and
- [New-AzResourceGraphQuery](/powershell/module/az.resourcegraph/new-azresourcegraphquery)
- cmdlet:
+ The command displays the `Search-AzGraph` cmdlet version and loads the module into your PowerShell session.
+
+## Connect to Azure
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
+
+```azurepowershell
+Connect-AzAccount
+
+# Run these commands if you have multiple subscriptions
Get-AzSubscription
+Set-AzContext -Subscription <subscriptionID>
+```
- ```azurepowershell-interactive
- # Create the Azure Resource Graph shared query
- $Params = @{
+## Create a shared query
+
+The shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. The query summarizes the count of all resources grouped by location.
+
+1. Create a resource group to store the Azure Resource Graph shared query.
+
+ ```azurepowershell
+ New-AzResourceGroup -Name demoSharedQuery -Location westus2
+ ```
+
+1. Create the Azure Resource Graph shared query.
+
+ ```azurepowershell
+ $params = @{
Name = 'Summarize resources by location'
- ResourceGroupName = 'resource-graph-queries'
+ ResourceGroupName = 'demoSharedQuery'
Location = 'westus2'
Description = 'This shared query summarizes resources by location for a pinnable map graphic.'
Query = 'Resources | summarize count() by location'
}
- New-AzResourceGraphQuery @Params
+
+ New-AzResourceGraphQuery @params
```
-1. List the shared queries in the new resource group. The
- [Get-AzResourceGraphQuery](/powershell/module/az.resourcegraph/get-azresourcegraphquery)
- cmdlet returns an array of values.
+ The `$params` variable uses PowerShell [splatting](/powershell/module/microsoft.powershell.core/about/about_splatting) to improve readability for the parameter values used in the command to create the shared query.
- ```azurepowershell-interactive
- # List all the Azure Resource Graph shared queries in a resource group
- Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries
+1. List all shared queries in the resource group.
+
+ ```azurepowershell
+ Get-AzResourceGraphQuery -ResourceGroupName demoSharedQuery
```
-1. To get just a single shared query result, use `Get-AzResourceGraphQuery` with its `Name` parameter.
+1. Limit the results to a specific shared query.
- ```azurepowershell-interactive
- # Show a specific Azure Resource Graph shared query
- Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location'
+ ```azurepowershell
+ Get-AzResourceGraphQuery -ResourceGroupName demoSharedQuery -Name 'Summarize resources by location'
```
+## Run the shared query
+
+You can verify the shared query works using Azure Resource Graph Explorer. To change the scope, use the **Scope** menu on the left side of the page.
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+1. Enter _resource graph_ into the search field at the top of the page.
+1. Select **Resource Graph Explorer**.
+1. Select **Open query**.
+1. Change **Type** to _Shared queries_.
+1. Select the query _Summarize resources by location_.
+1. Select **Run query** and view the output in the **Results** tab.
+1. Select **Charts** and then select **Map** to view the location map.
+
+You can also run the query from your resource group.
+
+1. In Azure, go to the resource group, _demoSharedQuery_.
+1. From the **Overview** tab, select the query _Summarize resources by location_.
+1. Select the **Results** tab to view a list.
+1. Select **Charts** and then select **Map** to view the location map.
+ ## Clean up resources
-If you wish to remove the Resource Graph shared query and resource group from your Azure
-environment, you can do so by using the following commands:
+When you finish, you can remove the Resource Graph shared query and resource group from your Azure environment. When a resource group is deleted, the resource group and all its resources are deleted.
+
+Remove the shared query:
+
+```azurepowershell
+Remove-AzResourceGraphQuery -ResourceGroupName demoSharedQuery -Name 'Summarize resources by location'
+```
+
+Delete the resource group:
+
+```azurepowershell
+Remove-AzResourceGroup -Name demoSharedQuery
+```
+
+To sign out of your Azure PowerShell session:
+
+```azurepowershell
+Disconnect-AzAccount
+```
+
+### Optional clean up steps
-- [Remove-AzResourceGraphQuery](/powershell/module/az.resourcegraph/remove-azresourcegraphquery)
-- [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup)
+If you installed the latest version of Azure PowerShell, the `Az.ResourceGraph` module is included and shouldn't be removed. Use the following steps only if you did a manual install of the `Az.ResourceGraph` module and want to remove it.
-```azurepowershell-interactive
-# Delete the Azure Resource Graph shared query
-Remove-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location'
+To remove the `Az.ResourceGraph` module from your PowerShell session, run the following command:
-# Remove the resource group
-# WARNING: This command deletes ALL resources you've added to this resource group
-Remove-AzResourceGroup -Name resource-graph-queries
+```azurepowershell
+Remove-Module -Name Az.ResourceGraph
```
+To uninstall the `Az.ResourceGraph` module from your computer, run the following command:
+
+```azurepowershell
+Uninstall-Module -Name Az.ResourceGraph
+```
+
+A message might be displayed that _module Az.ResourceGraph is currently in use_. If so, you need to shut down your PowerShell session and start a new session. Then run the command to uninstall the module from your computer.
+ ## Next steps
-In this quickstart, you've created a Resource Graph shared query using Azure PowerShell. To learn
-more about the Resource Graph language, continue to the query language details page.
+In this quickstart, you created a Resource Graph shared query using Azure PowerShell. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
+> [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
governance Shared Query Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-bicep.md
Create a resource group and deploy the Bicep file with Azure CLI or Azure PowerS
# [Azure CLI](#tab/azure-cli)

```azurecli
-az group create --name exampleRG --location eastus
-az deployment group create --resource-group exampleRG --template-file main.bicep
+az group create --name demoSharedQuery --location eastus
+az deployment group create --resource-group demoSharedQuery --template-file main.bicep
```

# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-New-AzResourceGroup -Name exampleRG -Location eastus
-New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile main.bicep
+New-AzResourceGroup -Name demoSharedQuery -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName demoSharedQuery -TemplateFile main.bicep
```
Use Azure CLI or Azure PowerShell to list the deployed resources in the resource
# [Azure CLI](#tab/azure-cli)

```azurecli
-az resource list --resource-group exampleRG
+az resource list --resource-group demoSharedQuery
```

# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-Get-AzResource -ResourceGroupName exampleRG
+Get-AzResource -ResourceGroupName demoSharedQuery
```
You can verify the shared query works using Azure Resource Graph Explorer. To ch
You can also run the query from your resource group.
-1. In Azure, go to the resource group, _exampleRG_.
+1. In Azure, go to the resource group, _demoSharedQuery_.
1. From the **Overview** tab, select the query _Count VMs by OS_.
1. Select the **Results** tab.

## Clean up resources
-When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell. And if you signed into Azure portal to run the query, be sure to sign out.
+When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell. When a resource group is deleted, the resource group and all its resources are deleted. And if you signed into Azure portal to run the query, be sure to sign out.
# [Azure CLI](#tab/azure-cli)

```azurecli
-az group delete --name exampleRG
+az group delete --name demoSharedQuery
```

To sign out of your Azure CLI session:
az logout
# [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-Remove-AzResourceGroup -Name exampleRG
+Remove-AzResourceGroup -Name demoSharedQuery
```

To sign out of your Azure PowerShell session:
Disconnect-AzAccount
## Next steps
-In this quickstart, you created a Resource Graph shared query using Bicep.
-
-To learn more about shared queries, continue to the tutorial for:
+In this quickstart, you created a Resource Graph shared query using Bicep. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"]
-> [Tutorial: Create and share an Azure Resource Graph query in the Azure portal](./tutorials/create-share-query.md)
+> [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
governance Shared Query Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-template.md
Title: 'Quickstart: Create Resource Graph shared query using ARM template'
+ Title: "Quickstart: Create Resource Graph shared query using ARM template"
description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a Resource Graph shared query that counts virtual machines by OS. Last updated 06/26/2024
To remove the shared query created, follow these steps:
## Next steps
-In this quickstart, you created a Resource Graph shared query.
-
-To learn more about shared queries, continue to the tutorial for:
+In this quickstart, you created a Resource Graph shared query. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"]
-> [Manage queries in Azure portal](./tutorials/create-share-query.md)
+> [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
hdinsight Hdinsight Management Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-management-ip-addresses.md
description: Learn which IP addresses you must allow inbound traffic from, in or
Previously updated : 07/12/2023 Last updated : 06/28/2024 # HDInsight management IP addresses
Allow traffic from the following IP addresses for Azure HDInsight health and man
Allow traffic from the IP addresses listed for the Azure HDInsight health and management services in the specific Azure region where your resources are located. Refer to the following note:

> [!IMPORTANT]
-> We recommend to use [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region specific service tags, please refer the [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/confirmation.aspx?id=56519)
+> We recommend using the [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region-specific service tags, see [Azure IP Ranges and Service Tags – Public Cloud](https://download.microsoft.com/download/7/1/D/71D86715-5596-4529-9B13-DA13A5DE5B63/ServiceTags_Public_20240624.json)
For information on the IP addresses to use for Azure Government, see the [Azure Government Intelligence + Analytics](../azure-government/compare-azure-government-global-azure.md) document.
hdinsight Subscribe To Hdi Release Notes Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/subscribe-to-hdi-release-notes-repo.md
Title: Subscribe to GitHub release notes repo
description: Learn how to subscribe to GitHub release notes repo Previously updated : 06/15/2023 Last updated : 06/28/2024 # Subscribe to HDInsight release notes GitHub repo
iot-edge Configure Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-device.md
Title: Configure Azure IoT Edge device settings
description: This article shows you how to configure Azure IoT Edge device settings and options using the config.toml file. Previously updated : 05/06/2024 Last updated : 06/27/2024
auto_generated_edge_ca_expiry_days = 90
This setting manages autorenewal of the Edge CA certificate. Autorenewal applies when the Edge CA is configured as *quickstart* or when the Edge CA has an issuance `method` set. Edge CA certificates loaded from files generally can't be autorenewed as the Edge runtime doesn't have enough information to renew them.

> [!IMPORTANT]
-> Renewal of an Edge CA requires all server certificates issued by that CA to be regenerated. This regeneration is done by restarting all modules. The time of Edge CA renewal can't be guaranteed. If random module restarts are unacceptable for your use case, disable autorenewal.
+> Renewal of an Edge CA requires all server certificates issued by that CA to be regenerated. This regeneration is done by restarting all modules. The time of Edge CA renewal can't be guaranteed. If random module restarts are unacceptable for your use case, disable autorenewal by not including the `[edge_ca.auto_renew]` section.
```toml
[edge_ca.auto_renew]
iot-hub-device-update Device Update Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-region-mapping.md
-# Regional failover mapping for Device Update for IoT Hub
-
-In cases where an Azure region is unavailable due to an outage, Device Update for IoT Hub supports business continuity and disaster recovery (BCDR) efforts with regional failover pairings. During an outage, data contained in the update files submitted to the Device Update service may be sent to a secondary Azure region. This failover enables Device Update to continue scanning update files for malware and making the updates available on the service endpoints.
-
-## Failover region mapping
-
-| Region name | Fails over to
-| | |
-| North Europe | West Europe |
-| West Europe | North Europe |
-| UK South | North Europe |
-| Sweden Central | North Europe |
-| East US | West US 2 |
-| East US 2 | West US 2 |
-| West US 2 | East US |
-| West US 3 | East US |
-| South Central US | East US |
-| East US 2 (EUAP) | West US 2 |
-| Australia East | Southeast Asia |
-| Southeast Asia | Australia East |
+# Device Update for IoT Hub regional mapping for scan and failover
+
+When you import an update into the Device Update for IoT Hub service, that update content might be processed in different Azure regions. The region used for processing depends on the region that your Device Update Instance was created in.
+
+## Anti-malware scan
+
+When you're using the Azure portal to import your update, there's now an option to enable anti-malware scan. If you select this option, your update is sent to the Azure region shown in the "Default scan region" column of the table in the **Region mapping for default and failover cases** section. If you don't select this option, your update is processed in the same region as your Device Update Instance, but it isn't scanned for malware. **Optional anti-malware scan is in Public Preview**.
+
+If you're using the Azure CLI or directly calling Device Update APIs, your update isn't scanned for malware during the import process. It's processed in the same region as your Device Update Instance.
+
+## Failover and BCDR
+
+As an exception to the previous section, in cases where an Azure region is unavailable due to an outage, Device Update for IoT Hub supports business continuity and disaster recovery (BCDR) efforts with regional failover pairings. During an outage, data contained in the update files submitted to the Device Update service may be sent to a secondary Azure region for processing. This failover enables Device Update to continue scanning update files for malware if you select that option.
+
+## Region mapping for default and failover cases
++
+| Device Update Instance region | Default scan region | Failover scan region |
+| -- | -- | -- |
+| North Europe | North Europe | Sweden Central |
+| West Europe | North Europe | Sweden Central |
+| UK South | North Europe | Sweden Central |
+| Sweden Central | Sweden Central | North Europe |
+| East US | East US | East US 2 |
+| East US 2 | East US 2 | East US |
+| West US 2 | West US 2 | East US 2 |
+| West US 3 | West US 2 | East US 2 |
+| South Central US | West US 2 | East US 2 |
+| East US 2 (EUAP) | East US 2 | East US |
+| Australia East | North Europe | Sweden Central |
+| Southeast Asia | West US 2 | East US 2 |
## Next steps
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
Previously updated : 12/11/2023 Last updated : 06/28/2024
By default, messages are routed to the built-in service-facing endpoint (**messa
If you're using message routing and the [fallback route](iot-hub-devguide-messages-d2c.md#fallback-route) is enabled, a message that doesn't match a query on any route goes to the built-in endpoint. If you disable this fallback route, a message that doesn't match any query is dropped.
-This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**.
+This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hubs-compatible messaging endpoint **messages/events**.
| Property | Description |
| - | -- |
| **Partition count** | Set this property at creation to define the number of [partitions](../event-hubs/event-hubs-features.md#partitions) for device-to-cloud event ingestion. |
-| **Retention time** | This property specifies how long in days messages are retained by IoT Hub. The default is one day, but it can be increased to seven days. |
+| **Retention time** | This property specifies how long in days IoT Hub retains messages. The default is one day, but it can be increased to seven days. |
-IoT Hub allows data retention in the built-in endpoint for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in endpoint can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they may be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time has passed, messages expire and become inaccessible. You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the [Azure portal](https://portal.azure.com).
+IoT Hub allows data retention in the built-in endpoint for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in endpoint can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they might be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time, messages expire and become inaccessible. You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the Azure portal.
IoT Hub also enables you to manage consumer groups on the built-in endpoint. You can have up to 20 consumer groups for each IoT hub.
IoT Hub also enables you to manage consumer groups on the built-in endpoint. You
Some product integrations and Event Hubs SDKs are aware of IoT Hub and let you use your IoT hub service connection string to connect to the built-in endpoint.
-When you use Event Hubs SDKs or product integrations that are unaware of IoT Hub, you need an Event Hub-compatible endpoint and Event Hub-compatible name. You can retrieve these values from the portal as follows:
+When you use Event Hubs SDKs or product integrations that are unaware of IoT Hub, you need an Event Hubs-compatible endpoint and Event Hubs-compatible name. You can retrieve these values from the portal as follows:
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub.
You can then choose any shared access policy from the **Shared access policy** d
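If you prefer scripting, a sketch of retrieving the same values with Azure CLI (replace `{iot_hub_name}` with your hub's name):

```azurecli
# Event Hubs-compatible endpoint
az iot hub show --name {iot_hub_name} --query properties.eventHubEndpoints.events.endpoint

# Event Hubs-compatible name
az iot hub show --name {iot_hub_name} --query properties.eventHubEndpoints.events.path
```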
## SDK samples
-The SDKs you can use to connect to the built-in Event Hub-compatible endpoint that IoT Hub exposes include:
+The SDKs you can use to connect to the built-in Event Hubs-compatible endpoint that IoT Hub exposes include:
| Language | SDK | Example |
| -- | -- | - |
| .NET | https://www.nuget.org/packages/Azure.Messaging.EventHubs | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/getting%20started/ReadD2cMessages) |
-| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | |
+| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | [read-d2c-messages Java](https://github.com/Azure/azure-iot-service-sdk-java/tree/main/service/iot-service-samples/read-d2c-messages) |
| Node.js | https://www.npmjs.com/package/@azure/event-hubs | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
-| Python | https://pypi.org/project/azure-eventhub/ | [read-dec-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
+| Python | https://pypi.org/project/azure-eventhub/ | [read-d2c-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
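For instance, a minimal sketch with the Python `azure-eventhub` package; the connection string and entity name are placeholders you retrieve as described earlier:

```python
from azure.eventhub import EventHubConsumerClient

# Event Hubs-compatible connection string and name from your IoT hub's
# built-in endpoints page; both values here are placeholders.
client = EventHubConsumerClient.from_connection_string(
    conn_str="<Event Hubs-compatible connection string>",
    consumer_group="$Default",
    eventhub_name="<Event Hubs-compatible name>",
)

def on_event(partition_context, event):
    # Print each device-to-cloud message as it arrives
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" reads from the start
```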
-The product integrations you can use with the built-in Event Hub-compatible endpoint that IoT Hub exposes include:
+## Connect to other services and products
+
+The product integrations you can use with the built-in Event Hubs-compatible endpoint that IoT Hub exposes include:
* [Azure Functions](../azure-functions/index.yml)
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md
# Quickstart: Azure Key Vault certificate client library for .NET
-Get started with the Azure Key Vault certificate client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library
+Get started with the Azure Key Vault certificate client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library.
Key Vault client library resources:
For more information about Key Vault and certificates, see:
* [Azure CLI](/cli/azure/install-azure-cli)
* A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md).
-This quickstart is using `dotnet` and Azure CLI
+This quickstart uses `dotnet` and the Azure CLI.
## Setup
dotnet add package Azure.Identity
#### Set environment variables
-This application is using key vault name as an environment variable called `KEY_VAULT_NAME`.
+The application obtains the key vault name from an environment variable called `KEY_VAULT_NAME`.
Windows

```cmd
lab-services Account Setup Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/account-setup-guide.md
You might want to create your images in your physical environment and then impor
If you decide to use the Shared Image Gallery service, you'll need to create or attach a shared image gallery to your lab account. You can postpone this decision for now, because a shared image gallery can be attached to a lab account at any time. For more information, see:
+
- The "Shared image gallery" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#shared-image-gallery)
- The "Pricing" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#pricing)
When you set up a lab account, you also can peer your lab account with a virtual
After you've finished planning, you're ready to set up your lab account. You can apply the same steps to setting up [Azure Lab Services in Teams](./lab-services-within-teams-overview.md).
-1. **Create your lab account**. For instructions, see [Create a lab account](./tutorial-setup-lab-account.md#create-a-lab-account).
+1. **Create your lab account**. For instructions, see [Create a lab account](how-to-create-lab-accounts.md).
For information about naming conventions, see the "Naming" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#naming).
-1. **Add users to the Lab Creator role**. For instructions, see [Add users to the Lab Creator role](./tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role).
+1. **Add users to the Lab Creator role**. For instructions, see [Add a user to the Lab Creator role](how-to-add-lab-creator.md).
1. **Connect to a peer virtual network**. For instructions, see [Connect your lab network with a peer virtual network](./how-to-connect-peer-virtual-network.md).
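Classic lab accounts don't have dedicated Az cmdlets, but if you prefer to script step 1, a hedged sketch using the generic resource cmdlet (the resource type, API version, and names are assumptions to verify before use):

```powershell
# Create a lab account as a generic ARM resource (Az.Resources module).
New-AzResource -ResourceGroupName "myResourceGroup" `
    -ResourceType "Microsoft.LabServices/labaccounts" `
    -ResourceName "myLabAccount" `
    -Location "eastus" `
    -ApiVersion "2018-10-15" `
    -Properties @{} `
    -Force
```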
lab-services Administrator Guide 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide-1.md
Last updated 10/20/2020
[!INCLUDE [lab account focused article](./includes/lab-services-labaccount-focused-article.md)]
-Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab account for their school. After they've set up a lab account, administrators or educators create the labs that are contained within the account. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them.
+Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab account for their school. After they set up a lab account, administrators or educators create the labs that are contained within the account. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them.
![Diagram of a high-level view of Azure resources in a lab account.](./media/administrator-guide/high-level-view.png)

-- Labs are hosted within an Azure subscription that's owned by Azure Lab Services.
+- Labs are hosted within an Azure subscription managed by Azure Lab Services.
- Lab accounts, a shared image gallery, and image versions are hosted within your subscription.
- You can have your lab account and the shared image gallery in the same resource group. In this diagram, they are in different resource groups.
The relationship between a lab account and its subscription is important because
- Billing is reported through the subscription that contains the lab account.
- You can grant users in the subscription's Microsoft Entra tenant access to Azure Lab Services. You can add a user as a lab account Owner or Contributor, or as a Lab Creator or lab Owner.
-Labs and their virtual machines (VMs) are managed and hosted for you within a subscription that's owned by Azure Lab Services.
+Labs and their virtual machines (VMs) are managed and hosted for you within a subscription managed by Azure Lab Services.
## Resource group
-A subscription contains one or more resource groups. Resource groups are used to create logical groupings of Azure resources that are used together within the same solution.
+A subscription contains one or more resource groups. Resource groups are used to create logical groupings of Azure resources that are used together within the same solution.
When you create a lab account, you must configure the resource group that contains the lab account. A resource group is also required when you create a [shared image gallery](#shared-image-gallery). You can place your lab account and shared image gallery in the same resource group or in two separate resource groups. You might want to take this second approach if you plan to share the image gallery across various solutions.
-When you create a lab account, you can automatically create and attach a shared image gallery at the same time. This option results in the lab account and the shared image gallery being created in separate resource groups. You'll see this behavior when you follow the steps that are described in the [Configure shared image gallery at the time of lab account creation](how-to-attach-detach-shared-image-gallery-1.md#configure-at-the-time-of-lab-account-creation) tutorial. The image at the beginning of this article uses this configuration.
+When you create a lab account, you can automatically create and attach a shared image gallery at the same time. This option results in the lab account and the shared image gallery being created in separate resource groups. You see this behavior when you follow the steps that are described in the [Configure shared image gallery at the time of lab account creation](how-to-attach-detach-shared-image-gallery-1.md#configure-at-the-time-of-lab-account-creation) tutorial. The image at the beginning of this article uses this configuration.
-We recommend that you invest time up front to plan the structure of your resource groups, because it's *not* possible to change a lab account or shared image gallery resource group once it's created. If you need to change the resource group for these resources, you'll need to delete and re-create your lab account or shared image gallery.
+We recommend that you invest time up front to plan the structure of your resource groups. It's *not* possible to change a lab account or shared image gallery resource group after creation. If you need to change the resource group for these resources, you need to delete and re-create your lab account or shared image gallery.
## Lab account
The following list highlights scenarios where more than one lab account might be
- **Assign a separate budget to each lab account**
- Instead of reporting all lab costs through a single lab account, you might need a more clearly apportioned budget. For example, you can create separate lab accounts for your university's Math department, Computer Science department, and so forth, to distribute the budget across departments. You can then view the cost for each individual lab account by using [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md).
+ Instead of reporting all lab costs through a single lab account, you might need a more clearly apportioned budget. For example, you can create separate lab accounts for your university's Math department, Computer Science department, and so forth, to distribute the budget across departments. You can then view the cost for each individual lab account by using [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md).
- **Isolate pilot labs from active or production labs**
The following list highlights scenarios where more than one lab account might be
## Lab
-A lab contains VMs that are each assigned to a single student. In general, you can expect to:
+A lab contains VMs that are each assigned to a single student. In general, you can expect to:
- Have one lab for each class.
- Create a new set of labs for each semester, quarter, or other academic system you're using. For classes that need to use the same image, you should use a [shared image gallery](#shared-image-gallery). This way, you can reuse images across labs and academic periods.
When you're determining how to structure your labs, consider the following point
- **The usage quota is set at the lab level and applies to all users within the lab**
- To set different quotas for users, you must create separate labs. However, it's possible to add more hours to specific users after you've set the quota.
+ To set different quotas for users, you must create separate labs. However, it's possible to add more hours to specific users after you set the quota for the lab.
- **The startup or shutdown schedule is set at the lab level and applies to all VMs within the lab**

  Similar to quota setting, if you need to set different schedules for users, you need to create a separate lab for each schedule.
-By default, each lab has its own virtual network. If you have virtual network peering enabled, each lab will have its own subnet peered with the specified virtual network.
+By default, each lab has its own virtual network. If you have virtual network peering enabled, each lab has its own subnet peered with the specified virtual network.
## Shared image gallery
-A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new image definitions and\or versions are created in the gallery.
+A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new image definitions and/or versions are created in the gallery.
-Educators can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions).
+Educators can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions).
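To see which version educators will get, you can reproduce that ordering yourself; a hedged sketch with Az PowerShell (the gallery and image definition names are placeholders):

```powershell
# List an image definition's versions and pick the newest by MajorVersion.MinorVersion.Patch.
Get-AzGalleryImageVersion -ResourceGroupName "myResourceGroup" `
    -GalleryName "contoso-sig" -GalleryImageDefinitionName "MyImageDefinition" |
    Sort-Object { [version]$_.Name } -Descending |
    Select-Object -First 1 -ExpandProperty Name
```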
The shared image gallery service is an optional resource that you might not need immediately if you're starting with only a few labs. However, shared image gallery offers many benefits that are helpful as you scale up to more labs:

- **You can save and manage versions of a template VM image**
- It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs.
+ It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs.
- **You can share and reuse template VM images across labs**
The shared image gallery service is an optional resource that you might not need
- **You can upload your own custom images from other environments outside of labs**
- You can [upload custom images other environments outside of the context of labs](how-to-attach-detach-shared-image-gallery-1.md). For example, you can upload images from your own physical lab environment or from an Azure VM into shared image gallery. Once an image is imported into the gallery, you can then use the images to create labs.
+ You can [upload custom images from other environments outside of the context of labs](how-to-attach-detach-shared-image-gallery-1.md). For example, you can upload images from your own physical lab environment or from an Azure VM into shared image gallery. Once an image is imported into the gallery, you can then use the images to create labs.
To logically group shared images, you can do either of the following:

- Create multiple shared image galleries. Each lab account can connect to only one shared image gallery, so this option also requires you to create multiple lab accounts.
-- Use a single shared image gallery that's shared by multiple lab accounts. In this case, each lab account can enable only images that are applicable to the labs in that account.
+- Use a single shared image gallery shared by multiple lab accounts. In this case, each lab account can enable only images that are applicable to the labs in that account.
## Naming
-As you get started with Azure Lab Services, we recommend that you establish naming conventions for Azure and Azure Lab Services related resources. Although the naming conventions that you establish will be unique to the needs of your organization, the following table provides general guidelines:
+As you get started with Azure Lab Services, we recommend that you establish naming conventions for Azure and Azure Lab Services related resources. Although the naming conventions that you establish are unique to the needs of your organization, the following table provides general guidelines:
| Resource type | Role | Suggested pattern | Examples |
| - | - | -- | -- |
As you get started with Azure Lab Services, we recommend that you establish nami
| Lab | Contains one or more student VMs. | {class-name}-{time}-{educator} | CS101-Fall2021, CS101-Fall2021-JohnDoe |
| Shared image gallery | Contains one or more VM image versions | {org-name}-sig, {dept-name}-sig | contoso-sig, mathdept-sig |
-In the proceeding table, we used some terms and tokens in the suggested name patterns. Let's go over those terms in a little more detail.
+In the preceding table, we used some terms and tokens in the suggested name patterns. Let's go over those terms in a little more detail.
| Pattern term/token | Definition | Example |
| - | - | - |
The region specifies the datacenter where information about a resource group is
### Lab account
-A lab account's location indicates the region that a resource exists in.
+A lab account's location indicates the region where the resource exists.
### Lab
The location that a lab exists in varies, depending on the following factors:
- **The lab account is peered with a virtual network**
- You can [peer a lab account with a virtual network](./how-to-connect-peer-virtual-network.md) when they're in the same region. When a lab account is peered with a virtual network, labs are automatically created in the same region as both the lab account and the virtual network.
+ You can [peer a lab account with a virtual network](./how-to-connect-peer-virtual-network.md) when they're in the same region. When a lab account is peered with a virtual network, labs are automatically created in the same region as both the lab account and the virtual network.
> [!NOTE]
> When a lab account is peered with a virtual network, the **Allow lab creator to pick lab location** setting is disabled. For more information, see [Allow lab creator to pick location for the lab](./allow-lab-creator-pick-lab-location.md).

- **No virtual network is peered *and* Lab Creators aren't allowed to pick the lab location**
- When *no* virtual network is peered with the lab account and [Lab Creators are *not allowed* to pick the lab location](./allow-lab-creator-pick-lab-location.md), labs are automatically created in a region that has available VM capacity. Specifically, Azure Lab Services looks for availability in [regions that are within the same geography as the lab account](https://azure.microsoft.com/global-infrastructure/regions).
+ When *no* virtual network is peered with the lab account and [Lab Creators are *not allowed* to pick the lab location](./allow-lab-creator-pick-lab-location.md), labs are automatically created in a region that has available VM capacity. Specifically, Azure Lab Services looks for availability in [regions that are within the same geography as the lab account](https://azure.microsoft.com/global-infrastructure/regions).
- **No virtual network is peered *and* Lab Creators are allowed to pick the lab location**
A general rule is to set a resource's region to one that's closest to its users.
When administrators or Lab Creators create a lab, they can choose from various VM sizes, depending on the needs of their classroom. Remember that the size availability depends on the region that your lab account is located in.
-In the following table, notice that several of the VM sizes map to more than one VM series. Depending on capacity availability, Lab Services may use any of the VM series that are listed for a VM size. For example, the *Small* VM size maps to using either the [Standard_A2_v2](../virtual-machines/av2-series.md) or the [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) VM series. When you choose *Small* as the VM size for your lab, Lab Services will first attempt to use the *Standard_A2_v2* series. However, when there isn't sufficient capacity available, Lab Services will instead use the *Standard_A2* series. The pricing is determined by the VM size and is the same regardless of which VM series Lab Services uses for that specific size. For more information on pricing for each VM size, read the [Lab Services pricing guide](https://azure.microsoft.com/pricing/details/lab-services/).
+In the following table, notice that several of the VM sizes map to more than one VM series. Depending on capacity availability, Lab Services can use any of the VM series that are listed for a VM size. For example, the *Small* VM size maps to using either the [Standard_A2_v2](../virtual-machines/av2-series.md) or the [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) VM series. When you choose *Small* as the VM size for your lab, Lab Services first attempts to use the *Standard_A2_v2* series. However, when there isn't sufficient capacity available, Lab Services uses the *Standard_A2* series. The pricing is determined by the VM size and is the same regardless of which VM series Lab Services uses for that specific size. For more information on pricing for each VM size, read the [Lab Services pricing guide](https://azure.microsoft.com/pricing/details/lab-services/).
| Size | Minimum vCPUs | Minimum RAM | Series | Suggested use |
| - | -- | -- | - | - |
| Small | 2 vCPUs | 3.5 GB RAM | [Standard_A2_v2](../virtual-machines/av2-series.md), [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for command line, opening web browser, low-traffic web servers, small to medium databases. |
| Medium | 4 vCPUs | 7 GB RAM | [Standard_A4_v2](../virtual-machines/av2-series.md), [Standard_A3](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for relational databases, in-memory caching, and analytics. |
-| Medium (nested virtualization) | 4 vCPUs | 16 GBs RAM | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization.
+| Medium (nested virtualization) | 4 vCPUs | 16 GB RAM | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization. |
| Large | 8 vCPUs | 16 GB RAM | [Standard_A8_v2](../virtual-machines/av2-series.md), [Standard_A7](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. |
-| Large (nested virtualization) | 8 vCPUs | 32 GB RAM | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. |
+| Large (nested virtualization) | 8 vCPUs | 32 GB RAM | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. |
| Small GPU (visualization) | 6 vCPUs | 56 GB RAM | [Standard_NV6](../virtual-machines/nv-series.md) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
| Small GPU (Compute) | 6 vCPUs | 56 GB RAM | [Standard_NC6](../virtual-machines/nc-series.md), [Standard_NC6s_v3](../virtual-machines/ncv3-series.md) | Best suited for compute-intensive applications such as AI and deep learning. |
| Medium GPU (visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12](../virtual-machines/nv-series.md), [Standard_NV12s_v3](../virtual-machines/nvv3-series.md), [Standard_NV12s_v2](../virtual-machines/sizes-previous-gen.md#nvv2-series) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. |
By using [Azure role-based access control (RBAC)](../role-based-access-control/o
- **Lab Creator**
- To create labs within a lab account, an educator must be a member of the Lab Creator role. An educator who creates a lab is automatically added as a lab Owner. For more information, see [Add a user to the Lab Creator role](./tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role).
+ To create labs within a lab account, an educator must be a member of the Lab Creator role. An educator who creates a lab is automatically added as a lab Owner. For more information, see [Add a user to the Lab Creator role](how-to-add-lab-creator.md).
- Lab **Owner** or **Contributor**
When you're assigning roles, it helps to follow these tips:
- Ordinarily, only administrators should be members of a lab account Owner or Contributor role. The lab account might have more than one Owner or Contributor.
- To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role.
-- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. For more information, see [Add Owners to a lab](./how-to-add-user-lab-owner.md).
+- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. For more information, see [Add Owners to a lab](./how-to-add-user-lab-owner.md).
## Content filtering
-Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
+Your school might need to do content filtering to prevent students from accessing inappropriate websites, for example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
There are two approaches that schools typically consider for content filtering:

- Configure a firewall to filter content at the network level.
- Install 3rd party software directly on each computer that performs content filtering.
-The first approach isn't currently supported by Lab Services. Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. As a result, you don't have access to the underlying virtual network to do content filtering at the network level. For more information on Lab Services' architecture, read the article [Architecture Fundamentals](./classroom-labs-fundamentals.md).
+The first approach isn't currently supported by Lab Services. Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. As a result, you don't have access to the underlying virtual network to do content filtering at the network level. For more information on Lab Services' architecture, read the article [Architecture Fundamentals](./classroom-labs-fundamentals.md).
-Instead, we recommend the second approach which is to install 3rd party software on each lab's template VM. There are a few key points to highlight as part of this solution:
+Instead, we recommend the second approach, which is to install 3rd party software on each lab's template VM. There are a few key points to highlight as part of this solution:
-- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you will need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings will fail to enable for the lab.-- You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. By default, Lab Services creates an admin account that each student uses to sign into their VM. It is possible to add a non-admin account using a specialized image, but there are some known limitations.
+- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you'll need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings fail to enable for the lab.
+- You might also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. By default, Lab Services creates an admin account that each student uses to sign in to their VM. It's possible to add a non-admin account using a specialized image, but there are some known limitations.
If your school needs to do content filtering, contact us via the [Azure Lab Services' forums](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices) for more information.

## Endpoint management
-Many endpoint management tools, such as [Microsoft Configuration Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine will have a new, unique machine SID generated when the VM boots from the image.
+Many endpoint management tools, such as [Microsoft Configuration Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine has a new, unique machine SID generated when the VM boots from the image.
-With Lab Services, even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs.
+With Lab Services, even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs.
-For example, the Azure Marketplace images are generalized. If you create a lab from the Win 10 marketplace image and publish the template VM, all of the student VMs within a lab will have the same machine SID as the template VM. The machine SIDs can be verified by using a tool such as [PsGetSid](/sysinternals/downloads/psgetsid).
+For example, the Azure Marketplace images are generalized. If you create a lab from the Win 10 marketplace image and publish the template VM, all of the student VMs within a lab have the same machine SID as the template VM. The machine SIDs can be verified by using a tool such as [PsGetSid](/sysinternals/downloads/psgetsid).
-If you plan to use an endpoint management tool or similar software, we recommend that you test it with lab VMs to ensure that it works properly when machine SIDs are the same.
+If you plan to use an endpoint management tool or similar software, we recommend that you test it with lab VMs to ensure that it works properly when machine SIDs are the same.
## Pricing
To learn about pricing, see [Azure Lab Services pricing](https://azure.microsoft
You also need to consider the pricing for the Shared Image Gallery service if you plan to use shared image galleries for storing and managing image versions.
-Creating a shared image gallery and attaching it to your lab account is free. No cost is incurred until you save an image version to the gallery. The pricing for using a shared image gallery is ordinarily fairly negligible, but it's important to understand how it's calculated, because it isn't included in the pricing for Azure Lab Services.
+Creating a shared image gallery and attaching it to your lab account is free. No cost is incurred until you save an image version to the gallery. The pricing for using a shared image gallery is ordinarily fairly negligible, but it's important to understand how it's calculated, because it isn't included in the pricing for Azure Lab Services.
#### Storage charges
-To store image versions, a shared image gallery uses standard hard disk drive (HDD) managed disks by default. We recommend using HDD-managed disks when using shared image gallery with Lab Services. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. Lab Services supports image and disk sizes up to 128 GB. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
+To store image versions, a shared image gallery uses standard hard disk drive (HDD) managed disks by default. We recommend using HDD-managed disks when using shared image gallery with Lab Services. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. Lab Services supports image and disk sizes up to 128 GB. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
#### Replication and network egress charges
When you save an image version by using a lab template VM, Azure Lab Services fi
It's important to note that Azure Lab Services automatically replicates the source image version to all [target regions within the geography](https://azure.microsoft.com/global-infrastructure/regions/) where the lab is located. For example, if your lab is in the US geography, an image version is replicated to each of the eight regions that exist within the US.
-A network egress charge occurs when an image version is replicated from the source region to additional target regions. The amount charged is based on the size of the image version when the image's data is initially transferred outbound from the source region. For pricing details, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
+A network egress charge occurs when an image version is replicated from the source region to additional target regions. The amount charged is based on the size of the image version when the image's data is initially transferred outbound from the source region. For pricing details, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
Egress charges might be waived for [Education Solutions](https://www.microsoft.com/licensing/licensing-programs/licensing-for-industries?rtc=1&activetab=licensing-for-industries-pivot:primaryr3) customers. To learn more, contact your account manager.
Let's look at an example of the cost of saving a template VM image to a shared i
The total cost per month is estimated as:
-* *Number of images &times; number of versions &times; number of replicas &times; managed disk price = total cost per month*
+- *Number of images &times; number of versions &times; number of replicas &times; managed disk price = total cost per month*
In this example, the cost is:
-* 1 custom image (32 GB) &times; 2 versions &times; 8 US regions &times; $1.54 = $24.64 per month
+- 1 custom image (32 GB) &times; 2 versions &times; 8 US regions &times; $1.54 = $24.64 per month
> [!NOTE]
> The preceding calculation is for example purposes only. It covers storage costs associated with using Shared Image Gallery and does *not* include egress costs. For actual pricing for storage, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
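For a quick sanity check of such estimates, the formula can be evaluated directly; a minimal sketch using the example's numbers:

```powershell
# Storage estimate only: images x versions x replicas x monthly HDD-managed-disk price.
$images = 1; $versions = 2; $replicas = 8; $hddPricePerMonth = 1.54
$images * $versions * $replicas * $hddPricePerMonth   # 24.64
```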
lab-services How To Add Lab Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-add-lab-creator.md
+
+ Title: 'How to add a lab creator to a lab account with Azure Lab Services'
+
+description: Learn how to grant a user access to create labs.
+++++ Last updated : 06/27/2024+++
+# Add a user to the Lab Creator role
++
+To grant people the permission to create labs, add them to the Lab Creator role.
+
+Follow these steps to [assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+
+> [!NOTE]
+> Azure Lab Services automatically assigns the Lab Creator role to the Azure account you use to create the lab account.
+
+1. On the **Lab Account** page, select **Access control (IAM)**.
+
+1. From the **Access control (IAM)** page, select **Add** > **Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the Access control (I A M) page with Add role assignment menu option highlighted.":::
+
+1. On the **Role** tab, select the **Lab Creator** role.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot that shows the Add role assignment page with Role tab selected.":::
+
+1. On the **Members** tab, select the user you want to add to the Lab Creators role.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
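+If you'd rather script the assignment, a hedged Az PowerShell sketch (the sign-in name and scope are placeholders, and the lab account resource ID format is an assumption to verify):
+
+```powershell
+# Assign the Lab Creator role at the lab account scope (Az.Resources module).
+New-AzRoleAssignment -SignInName "educator@contoso.com" `
+    -RoleDefinitionName "Lab Creator" `
+    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.LabServices/labaccounts/<lab-account-name>"
+```
+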
+## Next steps
+
+In this article, you granted lab creation permissions to another user. To learn about how to create a lab, see [Manage labs in Azure Lab Services when using lab accounts](how-to-manage-classroom-labs.md).
lab-services How To Connect Peer Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-peer-virtual-network.md
Title: Connect to a peer network
-description: Learn how to connect your lab network with another network as a peer for lab accounts in Azure Lab Services. For example, connect your on-premises organization/university network with Lab's virtual network in Azure.
+description: Learn how to connect your lab network with another network as a peer for lab accounts in Azure Lab Services. For example, connect your on-premises organization/university network with Lab's virtual network in Azure.
This article provides information about peering your labs network with another n
Virtual network peering enables you to seamlessly connect Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through private IP addresses only. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
-You may need to connect your lab's network with a peer virtual network in some scenarios including the following ones:
+You might need to connect your lab's network with a peer virtual network in scenarios such as the following:
- The virtual machines in the lab have software that connects to on-premises license servers to acquire a license.
- The virtual machines in the lab need access to data sets (or any other files) on the university's network shares.
You may need to connect your lab's network with a peer virtual network in some s
Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, and the lab account must all be in the same region.

> [!NOTE]
-> When creating a Azure Virtual Network that will be peered with a lab account, it's important to understand how the virtual network's region impacts where labs are created. For more information, see the administrator guide's section on [regions/locations](./administrator-guide-1.md#regionslocations).
+> When creating an Azure Virtual Network that will be peered with a lab account, it's important to understand how the virtual network's region impacts where labs are created. For more information, see the administrator guide's section on [regions/locations](./administrator-guide-1.md#regionslocations).
> [!NOTE]
-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
## Configure at the time of lab account creation
-During the new [lab account creation](tutorial-setup-lab-account.md), you can pick an existing virtual network that shows in the **Peer virtual network** dropdown list on the **Advanced** tab. The list only shows virtual networks in the same region as the lab account. The selected virtual network is connected (peered) to labs created under the lab account. All the virtual machines in labs that are created after the making this change have access to the resources on the peered virtual network.
+When [creating a lab account](how-to-create-lab-accounts.md), you can pick an existing virtual network that shows in the **Peer virtual network** dropdown list on the **Advanced** tab. The list only shows virtual networks in the same region as the lab account. The selected virtual network is connected (peered) to labs created under the lab account. All the virtual machines in labs that are created after making this change have access to the resources on the peered virtual network.
![Screenshot that shows how to create a lab account in the Azure portal, highlighting the peer virtual network setting.](./media/how-to-connect-peer-virtual-network/select-vnet-to-peer.png)

### Address range
-There's also an option to provide an address range for virtual machines for the labs. The address range setting applies only if you enable a peer virtual network for the lab. If the address range is provided, all the virtual machines in the labs under the lab account are created in that address range. The address range should be in CIDR notation (for example, 10.20.0.0/20) and shouldn't overlap with any existing address ranges.
+There's also an option to provide an address range for virtual machines for the labs. The address range setting applies only if you enable a peer virtual network for the lab. If the address range is provided, all the virtual machines in the labs under the lab account are created in that address range. The address range should be in CIDR notation (for example, 10.20.0.0/20) and shouldn't overlap with any existing address ranges.
-When you provide an address range, it's important to think about the number of *labs* that you create. Azure Lab Services assumes a maximum of 512 virtual machines per lab. For example, an IP range with '/23' can create only one lab. A range with a '/21' allows for the creation of four labs.
+When you provide an address range, it's important to think about the number of *labs* that you create. Azure Lab Services assumes a maximum of 512 virtual machines per lab. For example, an IP range with '/23' can create only one lab. A range with a '/21' allows for the creation of four labs.
-If the address range isn't specified, Azure Lab Services uses the default address range given to it by Azure when creating the virtual network to be peered with your virtual network. The range is often something like 10.x.0.0/16. This large range might lead to IP range overlap, so make sure to either specify an address range in the lab settings or check the address range of your virtual network being peered.
+If the address range isn't specified, Azure Lab Services uses the default address range given to it by Azure when creating the virtual network to be peered with your virtual network. The range is often something like 10.x.0.0/16. Large IP ranges might lead to IP range overlap. Make sure to either specify an address range in the lab settings or check the address range of your virtual network being peered.
> [!NOTE]
-> Lab creation can fail if the lab account is peered to a virtual network but has too narrow of an IP address range. You can run out of space in the address range if there are too many labs in the lab account (each lab uses 512 addresses).
+> Lab creation can fail if the lab account is peered to a virtual network but the IP address range is too narrow. You can run out of space in the address range if there are too many labs in the lab account (each lab uses 512 addresses).
> > For example, if you have a block of /19, this address range can accommodate 8192 IP addresses and 16 labs (8192/512 = 16 labs). In this case, lab creation fails on the 17th lab creation.
->
-> If the lab creation fails, contact your lab account owner/admin and request for the address range to be increased. The admin can increase the address range using steps mentioned in the [Specify an address range for VMs in a lab account](#specify-an-address-range-for-vms-in-the-lab-account) section.
+>
+> If the lab creation fails, contact your lab account owner/admin and request that the address range be increased. The admin can increase the address range by using the steps in the [Specify an address range for VMs in a lab account](#specify-an-address-range-for-vms-in-the-lab-account) section.
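To sanity-check a planned range, the note's arithmetic is easy to script; a minimal sketch:

```powershell
# How many labs fit in a CIDR block, at 512 addresses per lab?
$prefix = 19
$addresses = [math]::Pow(2, 32 - $prefix)   # 8192 addresses for /19
[math]::Floor($addresses / 512)             # 16 labs
```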
## Configure after the lab account is created
When you select a virtual network for the **Peer virtual network** field, the **
> The peered virtual network setting applies only to labs that are created after the change is made, not to the existing labs.

## Specify an address range for VMs in the lab account
-The following procedure has steps to specify an address range for VMs in the lab. If you update the range that you previously specified, the modified address range applies only to VMs that are created after the change was made.
-Here are some restrictions when specifying the address range that you should keep in mind.
+The following procedure has steps to specify an address range for VMs in the lab. If you update the range that you previously specified, the modified address range applies only to VMs that are created after the change was made.
+
+Here are some restrictions to keep in mind when specifying the address range:
-- The prefix must be smaller than or equal to 23.
+- The prefix must be smaller than or equal to 23.
- If a virtual network is peered to the lab account, the provided address range can't overlap with the address range of the peered virtual network.

1. On the **Lab Account** page, select **Lab settings** on the left menu.
2. For the **Address range** field, specify the address range for VMs that are created in the lab. The address range should be in the classless inter-domain routing (CIDR) notation (example: 10.20.0.0/23). Virtual machines in the lab are created in this address range.
-3. Select **Save** on the toolbar.
+3. Select **Save** on the toolbar.
![Screenshot that shows the lab settings page for a lab account in the Azure portal, highlighting the option to configure an address range.](./media/how-to-manage-lab-accounts/labs-configuration-page-address-range.png)
See the following articles:
- [Attach a compute gallery to a lab](how-to-attach-detach-shared-image-gallery-1.md)
- [Add a user as a lab owner](how-to-add-user-lab-owner.md)
- [View firewall settings for a lab](how-to-configure-firewall-settings.md)
-- [Configure other settings for a lab](how-to-configure-lab-accounts.md)
+- [Configure other settings for a lab](how-to-configure-lab-accounts.md)
lab-services How To Manage Classroom Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-classroom-labs.md
This article describes how to create and delete a lab. It also shows you how to
## Prerequisites
-To set up a lab in a lab account, you must be a member of the **Lab Creator** role in the lab account. The account you used to create a lab account is automatically added to this role. A lab owner can add other users to the Lab Creator role by using steps in the following article: [Add a user to the Lab Creator role](tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role).
+To set up a lab in a lab account, you must be a member of the **Lab Creator** role in the lab account. The account you used to create a lab account is automatically added to this role. A lab owner can add other users to the Lab Creator role by using steps in the following article: [Add a user to the Lab Creator role](how-to-add-lab-creator.md).
## Create a lab
To set up a lab in a lab account, you must be a member of the **Lab Creator** ro
> Make a note of user name and password. They won't be shown again.

3. Disable the **Use same password for all virtual machines** option if you want students to set their own passwords. This step is **optional**.
- An educator can choose to use the same password for all the VMs in the lab, or allow students to set passwords for their VMs. By default, this setting is enabled for all Windows and Linux images except for Ubuntu. When you select **Ubuntu** VM, this setting is disabled and students are prompted to set a password when they sign in for the first time.
+ An educator can choose to use the same password for all the VMs in the lab, or allow students to set passwords for their VMs. By default, this setting is enabled for all Windows and Linux images except for Ubuntu. When you select **Ubuntu** VM, this setting is disabled and students are prompted to set a password when they sign in for the first time.
:::image type="content" source="./media/how-to-manage-classroom-labs/virtual-machine-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials page of the New lab wizard.":::
To set up a lab in a lab account, you must be a member of the **Lab Creator** ro
8. On the **Template** page, do the following steps. These steps are **optional** for the tutorial.
 1. Start the template VM.
- 1. Connect to the template VM by selecting **Connect**. If it's a Linux template VM, you choose whether you want to connect using an SSH terminal or a graphical remote desktop. Additional setup is required to use a graphical remote desktop. For more information, see [Enable graphical remote desktop for Linux virtual machines in Azure Lab Services](how-to-enable-remote-desktop-linux.md).
+ 1. Connect to the template VM by selecting **Connect**. If it's a Linux template VM, you choose whether you want to connect using an SSH terminal or a graphical remote desktop. Extra setup is required to use a graphical remote desktop. For more information, see [Enable graphical remote desktop for Linux virtual machines in Azure Lab Services](how-to-enable-remote-desktop-linux.md).
 1. Select **Reset password** to reset the password for the VM. The VM must be running before the reset password button is available.
 1. Install and configure software on your template VM.
- 1. **Stop** the VM.
+ 1. **Stop** the VM.
9. On the **Template** page, select **Publish** on the toolbar.
lab-services Tutorial Setup Lab Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-account.md
- Title: 'Tutorial: Set up a lab account with Azure Lab Services'-
-description: Learn how to set up a lab account with Azure Lab Services in the Azure portal. Then, grant a user access to create labs.
----- Previously updated : 03/03/2023---
-# Tutorial: Set up a lab account with Azure Lab Services
--
-In Azure Lab Services, a lab account serves as the central resource in which you manage your organization's labs. In your lab account, give permission to others to create labs, and set policies that apply to all labs under the lab account. In this tutorial, learn how to create a lab account by using the Azure portal.
-
-In this tutorial, you do the following actions:
-
-> [!div class="checklist"]
-> - Create a lab account
-> - Add a user to the Lab Creator role
--
-## Prerequisites
-
-* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Create a lab account
-
-The following steps illustrate how to use the Azure portal to create a lab account with Azure Lab Services.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-
- :::image type="content" source="./media/tutorial-setup-lab-account/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal home page, highlighting the Create a resource button.":::
-
-1. Search for **lab account**. (**Lab account** can also be found under the **DevOps** category.)
-
-1. On the **Lab account** tile, select **Create** > **Lab account**.
-
- :::image type="content" source="./media/tutorial-setup-lab-account/select-lab-accounts-service.png" alt-text="Screenshot of how to search for and create a lab account by using the Azure Marketplace.":::
-
-1. On the **Basics** tab of the **Create a lab account** page, provide the following information:
-
- | Field | Description |
- | | -- |
- | **Subscription** | Select the Azure subscription that you want to use to create the resource. |
- | **Resource group** | Select an existing resource group or select **Create new**, and enter a name for the new resource group. |
- | **Name** | Enter a unique lab account name. <br/>For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). |
- | **Region** | Select a geographic location to host your lab account. |
-
-1. After you're finished configuring the resource, select **Review + Create**.
-
- :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-basics-page.png" alt-text="Screenshot that shows the Basics tab to create a new lab account in the Azure portal.":::
-
-1. Review all the configuration settings and select **Create** to start the deployment of the lab account.
-
-1. To view the new resource, select **Go to resource**.
-
- :::image type="content" source="./media/tutorial-setup-lab-account/go-to-lab-account.png" alt-text="Screenshot that shows the resource deployment completion page in the Azure portal.":::
-
-1. Confirm that you see the lab account **Overview** page.
-
- :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-page.png" alt-text="Screenshot that shows the lab account overview page in the Azure portal.":::
-
-You've now successfully created a lab account by using the Azure portal. To let others create labs in the lab account, you assign them the Lab Creator role.
-
-## Add a user to the Lab Creator role
-
-To set up a lab in a lab account, you must be a member of the Lab Creator role in the lab account. To grant people the permission to create labs, add them to the Lab Creator role.
-
-Follow these steps to [assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
-
-> [!NOTE]
-> Azure Lab Services automatically assigns the Lab Creator role to the Azure account you use to create the lab account. If you plan to use the same user account to create a lab in this tutorial, skip this step.
-
-1. On the **Lab Account** page, select **Access control (IAM)**.
-
-1. From the **Access control (IAM)** page, select **Add** > **Add role assignment**.
-
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the Access control (I A M) page with Add role assignment menu option highlighted.":::
-
-1. On the **Role** tab, select the **Lab Creator** role.
-
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot that shows the Add roll assignment page with Role tab selected.":::
-
-1. On the **Members** tab, select the user you want to add to the Lab Creators role.
-
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-## Next steps
-
-In this tutorial, you created a lab account and granted lab creation permissions to another user. To learn about how to create a lab, advance to the next tutorial:
-
-> [!div class="nextstepaction"]
-> [Set up a lab](tutorial-setup-lab.md)
load-balancer Howto Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/howto-load-balancer-imds.md
Previously updated : 05/08/2023 Last updated : 06/28/2024
## Schema breakdown
-| Data | Description | Version introduced |
+| **Data** | **Description** | **Version introduced** |
||-|--|
| `publicIpAddresses` | The instance level Public or Private IP of the specific Virtual Machine instance | 2020-10-01 |
| `inboundRules` | List of load balancing rules or inbound NAT rules using which the Load Balancer directs traffic to the specific Virtual Machine instance. Frontend IP addresses and the Private IP addresses listed here belong to the Load Balancer. | 2020-10-01 |
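To see this schema populated, call the metadata endpoint from a VM behind the load balancer; a sketch using the documented IMDS conventions (run it on the VM itself, without a proxy):

```powershell
# Query load balancer metadata from Azure Instance Metadata Service.
Invoke-RestMethod -Headers @{Metadata = "true"} `
    -Uri "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01" |
    ConvertTo-Json -Depth 10
```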
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-cli.md
Previously updated : 05/30/2023 Last updated : 06/28/2024
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
Previously updated : 06/27/2023 Last updated : 06/27/2024
> * [CLI](load-balancer-multiple-ip-cli.md)
> * [PowerShell](load-balancer-multiple-ip-powershell.md)
-This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses.
+This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, and both backend pool IP addresses.
## Steps to load balance on multiple IP configurations
Follow the steps below to achieve the scenario outlined in this article:
$Subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "mySubnet" -VirtualNetwork $myVnet ```
- You do not need to associate the secondary IP configurations with public IPs for the purpose of this tutorial. Edit the command to remove the public IP association part.
+ You don't need to associate the secondary IP configurations with public IPs in this tutorial. Edit the command to remove the public IP association part.
-6. Complete steps 4 through 6 of this article again for VM2. Be sure to replace the VM name to VM2 when doing this. Note that you do not need to create a virtual network for the second VM. You may or may not create a new subnet based on your use case.
+6. Complete steps 4 through 6 of this article again for VM2. Be sure to replace the VM name with VM2 when doing this. You don't need to create a virtual network for the second VM. You can create a new subnet based on your use case.
7. Create two public IP addresses and store them in the appropriate variables as shown:
Follow the steps below to achieve the scenario outlined in this article:
$nic2 | Set-AzNetworkInterface ```
-13. Finally, you must configure DNS resource records to point to the respective frontend IP address of the Load Balancer. You may host your domains in Azure DNS. For more information about using Azure DNS with Load Balancer, see [Using Azure DNS with other Azure services](../dns/dns-for-azure-services.md).
+13. Finally, you must configure DNS resource records to point to the respective frontend IP address of the Load Balancer. You can host your domains in Azure DNS. For more information about using Azure DNS with Load Balancer, see [Using Azure DNS with other Azure services](../dns/dns-for-azure-services.md).
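For illustration, here's a minimal sketch of creating those records with the `azure-mgmt-dns` Python package, assuming the zones already exist in Azure DNS. The subscription, resource group, and frontend IP values are placeholders to substitute with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.dns.models import ARecord, RecordSet

subscription_id = "<SUBSCRIPTION_ID>"  # placeholder
resource_group = "<RESOURCE_GROUP>"    # placeholder
frontends = {
    "contoso.com": "<FRONTEND_IP_CONTOSO>",    # frontend IP for contoso.com
    "fabrikam.com": "<FRONTEND_IP_FABRIKAM>",  # frontend IP for fabrikam.com
}

dns_client = DnsManagementClient(DefaultAzureCredential(), subscription_id)

for zone_name, frontend_ip in frontends.items():
    # Create (or update) an A record at the zone apex that points at the
    # matching Load Balancer frontend IP address.
    dns_client.record_sets.create_or_update(
        resource_group,
        zone_name,
        "@",  # zone apex
        "A",
        RecordSet(ttl=3600, a_records=[ARecord(ipv4_address=frontend_ip)]),
    )
```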
## Next steps - Learn more about how to combine load balancing services in Azure in [Using load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md).
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md
Previously updated : 11/29/2023 Last updated : 06/28/2024
In this section, you create two virtual machines to host the IIS websites.
3. In **Create virtual machine**, enter or select the following information:
- | Setting | Value |
+ | Setting | Value |
|--|-| | **Project Details** | | | Subscription | Select your Azure subscription |
In this section, you create two virtual machines to host the IIS websites.
| Subnet | Select **backend-subnet(10.1.0.0/24)** | | Public IP | Select **None**. | | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In **Create network security group**, enter **myNSG** in **Name**. </br> In **Inbound rules**, select **+Add an inbound rule**. </br> In **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> In **Name**, enter **myNSGrule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Select **Create new**.</br> In **Create network security group**, enter **myNSG** in **Name**.</br> In **Inbound rules**, select **+Add an inbound rule**.</br> In **Service**, select **HTTP**.</br> In **Priority**, enter **100**.</br> In **Name**, enter **myNSGrule**.</br> Select **Add**.</br> Select **OK**. |
6. Select **Review + create**.
You connect to **myVM1** and **myVM2** with Azure Bastion and configure the seco
6. Select **Allow** for Bastion to use the clipboard.
-7. On the server desktop, navigate to Start > Windows Administrative Tools > Windows PowerShell > Windows PowerShell.
+7. On the server desktop, navigate to **Start > Windows Administrative Tools > Windows PowerShell > Windows PowerShell**.
8. In the PowerShell window, execute the `route print` command, which returns output similar to the following output for a virtual machine with two attached network interfaces:
During the creation of the load balancer, you configure:
| Name | Enter **Frontend-contoso**. | | IP version | Select **IPv4**. | | IP type | Select **IP address**. |
- | Public IP address | Select **Create new**. </br> Enter **myPublicIP-contoso** for **Name** </br> Select **Zone-redundant** in **Availability zone**. </br> Leave the default of **Microsoft Network** for **Routing preference**. </br> Select **OK**. |
+ | Public IP address | Select **Create new**.</br> Enter **myPublicIP-contoso** for **Name** </br> Select **Zone-redundant** in **Availability zone**.</br> Leave the default of **Microsoft Network** for **Routing preference**.</br> Select **OK**. |
> [!NOTE] > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier). > > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md). >
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear.</br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
7. Select **Add**.
During the creation of the load balancer, you configure:
| Name | Enter **Frontend-fabrikam**. | | IP version | Select **IPv4**. | | IP type | Select **IP address**. |
- | Public IP address | Select **Create new**. </br> Enter **myPublicIP-fabrikam** for **Name** </br> Select **Zone-redundant** in **Availability zone**. </br> Leave the default of **Microsoft Network** for **Routing preference**. </br> Select **OK**. |
+ | Public IP address | Select **Create new**.</br> Enter **myPublicIP-fabrikam** for **Name** </br> Select **Zone-redundant** in **Availability zone**.</br> Leave the default of **Microsoft Network** for **Routing preference**.</br> Select **OK**. |
10. Select **Add**.
During the creation of the load balancer, you configure:
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-contoso**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**.</br> In **Name**, enter **myHealthProbe-contoso**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. |
During the creation of the load balancer, you configure:
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-fabrikam**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**.</br> In **Name**, enter **myHealthProbe-fabrikam**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. |
If you're not going to continue to use this application, delete the virtual mach
Advance to the next article to learn how to create a cross-region load balancer: > [!div class="nextstepaction"]
-> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
+> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Previously updated : 10/19/2023 Last updated : 06/28/2024 #Customer intent: I want to create a internal load balancer so that I can load balance internal traffic to VMs.
During the creation of the load balancer, you configure:
| Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**.</br> In **Name**, enter **lb-health-probe**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **Save**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | Enable TCP reset | Select **checkbox**. |
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
Title: "Quickstart: Create a public load balancer - Azure portal"
-description: This quickstart shows how to create a load balancer using the Azure portal.
+description: Learn how to create a public load balancer using the Azure portal.
Previously updated : 06/06/2023 Last updated : 06/28/2024 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
During the creation of the load balancer, you configure:
1. Select **Zone-redundant** in **Availability zone**. > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear.</br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
1. Leave the default of **Microsoft Network** for **Routing preference**.
-1. Select **OK**.
+1. Select **Save**.
-1. Select **Add**.
+1. Select **Save**.
1. Select **Next: Backend pools** at the bottom of the page.
During the creation of the load balancer, you configure:
| Protocol | Select **TCP** | | Port | Enter **80** | | Backend port | Enter **80** |
- | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. |
+ | Health probe | Select **Create new**.</br> In **Name**, enter **lb-health-probe**.</br> Select **HTTP** in **Protocol**.</br> Leave the rest of the defaults, and select **Save**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15** | | Enable TCP reset | Select checkbox |
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users can find the latest image available for provisioning the Data
Visit the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## June 28, 2024
+
+Image Version: 24.06.10
+
+SDK Version: 1.56.0
+
+Issue fixed: Compute Instance 20.04 image build with SDK 1.56.0
+
+Major changes in image version 24.06.10:
+
+- SDK (azureml-core): 1.56.0
+- Python: 3.9
+- CUDA: 12.2
+- cuDNN: 9.1.1
+- Nvidia Driver: 535.171.04
+- PyTorch: 1.13.1
+- TensorFlow: 2.15.0
+- autokeras: 1.0.16
+- keras: 2.15.0
+- ray: 2.2.0
+- Docker: 24.0.9-1
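To sanity-check a few of these components on a provisioned instance, a minimal sketch like the following can print their versions from the image's default Python environment (adjust the module list to the components you actually use):

```python
# Print the versions of several components listed in this release.
import azureml.core
import keras
import ray
import tensorflow
import torch

print("azureml-core:", azureml.core.VERSION)
for name, module in [("PyTorch", torch), ("TensorFlow", tensorflow),
                     ("Keras", keras), ("Ray", ray)]:
    print(f"{name}: {module.__version__}")
```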
+ ## June 17, 2024 [Data Science Virtual Machine - Windows 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview)
machine-learning Deploy Jais Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/deploy-jais-models.md
Title: How to deploy JAIS models with Azure Machine Learning Studio-
-description: Learn how to deploy JAIS models with Azure Machine Learning Studio.
+ Title: How to deploy JAIS models with Azure Machine Learning studio
+
+description: Learn how to deploy JAIS models with Azure Machine Learning studio.
-# How to deploy JAIS with Azure Machine Learning Studio
+# How to deploy JAIS with Azure Machine Learning studio
-In this article, you learn how to use Azure Machine Learning Studio to deploy the JAIS model as a service with pay-as you go billing.
+In this article, you learn how to use Azure Machine Learning studio to deploy the JAIS model as a service with pay-as-you-go billing.
-The JAIS model is available in Azure Machine Learning Studio with pay-as-you-go token based billing with Models as a Service.
+The JAIS model is available in Azure Machine Learning studio with pay-as-you-go, token-based billing through Models as a Service.
You can find the JAIS model in the model catalog by filtering on the JAIS collection. ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for JAIS is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > For JAIS models, the pay-as-you-go model deployment offering is only available with workspaces created in East US 2 or Sweden Central region.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../ai-studio/concepts/rbac-ai-studio.md). ### JAIS 30b Chat
-JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is finetuned with both Arabic and English prompt-response pairs. The finetuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic.
+JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is fine-tuned with both Arabic and English prompt-response pairs. The fine-tuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic.
*Context length:* JAIS 30b Chat supports a context length of 8K.
Models deployed as a service with pay-as-you-go are protected by [Azure AI Conte
- [What is Azure AI Studio?](../ai-studio/what-is-ai-studio.md) - [Azure AI FAQ article](../ai-studio/faq.yml)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Connect Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connect-models-serverless.md
Follow these steps to create a connection:
# [Python SDK](#tab/python) ```python
- client.connections.create(ServerlessConnection(
+ client.connections.create_or_update(ServerlessConnection(
name="meta-llama3-8b-connection", endpoint="https://meta-llama3-8b-qwerty-serverless.inference.ai.azure.com", api_key="1234567890qwertyuiop"
Follow these steps to create a connection:
## Related content - [Model Catalog and Collections](concept-model-catalog.md)-- [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)
+- [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
--++ Previously updated : 05/03/2024 Last updated : 06/10/2024 # Create an Azure Machine Learning compute instance
Choose the tab for the environment you're using for other prerequisites.
* To use the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
- [!INCLUDE [connect ws v2](includes/machine-learning-connect-ws-v2.md)]
# [Azure CLI](#tab/azure-cli)
-* To use the CLI, install the [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+* If you're working on a compute instance, the CLI is already installed. If working on a different computer, install the [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install).
++ # [Studio](#tab/azure-studio)
Where the file *create-instance.yml* is:
* If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup. * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).
- * Allow root access. (preview)
1. Select **Applications** if you want to add custom applications to use on your compute instance, such as RStudio or Posit Workbench. See [Add custom applications such as RStudio or Posit Workbench](#add-custom-applications-such-as-rstudio-or-posit-workbench). 1. Select **Tags** if you want to add additional information to categorize the compute instance.
from azure.ai.ml.constants import TimeZone
from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential
-# authenticate
-credential = DefaultAzureCredential()
-
-# Get a handle to the workspace
-ml_client = MLClient(
- credential=credential,
- subscription_id="<SUBSCRIPTION_ID>",
- resource_group_name="<RESOURCE_GROUP>",
- workspace_name="<AML_WORKSPACE_NAME>",
-)
- ci_minimal_name = "ci-name" ci_start_time = "2023-06-21T11:47:00" #specify your start time in the format yyyy-mm-ddThh:mm:ss
from azure.ai.ml import MLClient
from azure.identity import ManagedIdentityCredential client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None) credential = ManagedIdentityCredential(client_id=client_id)
-ml_client = MLClient(credential, sub_id, rg_name, ws_name)
-data = ml_client.data.get(name=data_name, version="1")
+ml_client = MLClient(credential, subscription_id, resource_group, workspace)
``` You can also use SDK V1:
from azureml.core.authentication import MsiAuthentication
from azureml.core import Workspace client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None) auth = MsiAuthentication(identity_config={"client_id": client_id})
-workspace = Workspace.get("chrjia-eastus", auth=auth, subscription_id="381b38e9-9840-4719-a5a0-61d9585e1e91", resource_group="chrjia-rg", location="East US")
+workspace = Workspace.get("chrjia-eastus", auth=auth, subscription_id=subscription_id, resource_group=resource_group, location="East US")
``` # [Azure CLI](#tab/azure-cli)
machine-learning How To Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-command.md
The previously mentioned Cohere models can be deployed as a serverless API with
### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for Cohere Command is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS2 or Sweden Central region.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
Models deployed as a service with pay-as-you-go are protected by Azure AI conten
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-embed.md
The previously mentioned Cohere models can be deployed as a service with pay-as-
### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for Cohere Embed is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS2 or Sweden Central region.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
Models deployed as a service with pay-as-you-go are protected by Azure AI conten
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Deploy Models Jamba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-jamba.md
To get started with Jamba Instruct deployed as a serverless API, explore our int
### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Jamba Instruct is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > The pay-as-you-go model deployment offering for for Jamba Instruct is only available in workspaces created in the **East US 2** and **Sweden Central** regions.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
Models deployed as a serverless API are protected by Azure AI content safety. Wi
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](../ai-studio/how-to/costs-plan-manage.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-llama.md
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 3](#tab/llama-three) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.-
- > [!IMPORTANT]
- > Pay-as-you-go model deployment offering is only available in workspaces created in **East US 2** and **Sweden Central** regions for Meta Llama 3 models.
+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 3 is only available with workspaces created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
If you need to deploy a different model, [deploy it to managed compute](#deploy-
# [Meta Llama 2](#tab/llama-two) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 2 is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > Pay-as-you-go model deployment offering is only available in workspaces created in **East US 2** and **West US 3** regions for Meta Llama 2 models.
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
Models deployed as a serverless API are protected by Azure AI content safety. Wh
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](../ai-studio/how-to/costs-plan-manage.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-phi-3.md
Certain models in the model catalog can be deployed as a serverless API with pay
### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for Phi-3 is only available with workspaces created in these regions:
- > [!IMPORTANT]
- > For Phi-3 family models, the serverless API model deployment offering is only available with workspaces created in **East US 2** and **Sweden Central** regions.
+ * East US 2
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Deploy Models Timegen 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-timegen-1.md
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
Previously updated : 04/17/2024 Last updated : 06/28/2024 monikerRange: 'azureml-api-2'
machine-learning How To Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-instance.md
--++ Last updated 05/03/2024
machine-learning How To R Deploy R Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md
These steps assume you have an Azure Container Registry associated with your wor
1. If you see custom environments, nothing more is needed. 1. If you don't see any custom environments, create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment. (You *won't* use this environment for deployment, but you *will* use the container registry that is also created for you.)
-Once you have verified that you have at least one custom environment, use the following steps to build a container.
+Once you have verified that you have at least one custom environment, start a terminal and set up the CLI:
-1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-a-compute-instance), use:
- ```azurecli
- az login --identity
- ```
-
- If you're not on the compute instance, omit `--identity` and follow the prompt to open a browser window to authenticate.
-
-1. Make sure you have the most recent versions of the CLI and the `ml` extension:
-
- ```azurecli
- az upgrade
- ```
-
-1. If you have multiple Azure subscriptions, set the active subscription to the one you're using for your workspace. (You can skip this step if you only have access to a single subscription.) Replace `<SUBSCRIPTION-NAME>` with your subscription name. Also remove the brackets `<>`.
-
- ```azurecli
- az account set --subscription "<SUBSCRIPTION-NAME>"
- ```
-
-1. Set the default workspace. If you're doing this from a compute instance, you can use the following command as is. If you're on any other computer, substitute your resource group and workspace name instead. (You can find these values in [Azure Machine Learning studio](how-to-r-train-model.md#submit-the-job).)
-
- ```azurecli
- az configure --defaults group=$CI_RESOURCE_GROUP workspace=$CI_WORKSPACE
- ```
+After you've set up the CLI, use the following steps to build a container.
1. Make sure you are in your project directory.
machine-learning How To Use Pipelines Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipelines-prompt-flow.md
Previously updated : 06/30/2023 Last updated : 06/20/2024
# Use Azure Machine Learning pipelines with no code to construct RAG pipelines (preview)
-This tutorial walks you through how to create an RAG pipeline. For advanced scenarios, you can build your own custom Azure Machine Learning pipelines from code (typically notebooks) that allows you granular control of the RAG workflow. Azure Machine Learning provides several in-built pipeline components for data chunking, embeddings generation, test data creation, automatic prompt generation, prompt evaluation. These components can be used as per your needs using notebooks. You can even use the Vector Index created in Azure Machine Learning in LangChain.
+This article offers examples of how to create a RAG pipeline. For advanced scenarios, you can build your own custom Azure Machine Learning pipelines from code (typically notebooks) that allow granular control of the RAG workflow. Azure Machine Learning provides several built-in pipeline components for data chunking, embeddings generation, test data creation, automatic prompt generation, and prompt evaluation. You can use these components in notebooks as your needs dictate. You can even use the Vector Index created in Azure Machine Learning in LangChain.
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
machine-learning Concept Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-flows.md
Previously updated : 06/30/2023 Last updated : 06/28/2024 # Flows in prompt flow?
machine-learning Concept Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-tools.md
Previously updated : 06/30/2023 Last updated : 06/28/2024 # Tools in prompt flow?
machine-learning Concept Variants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-variants.md
Previously updated : 06/30/2023 Last updated : 06/28/2024 # Variants in prompt flow
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md
Previously updated : 11/04/2022 Last updated : 06/28/2024 monikerRange: 'azureml-api-1'
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md
Title: Dependency analysis in Azure Migrate Discovery and assessment description: Describes how to use dependency analysis for assessment using Azure Migrate Discovery and assessment. --
-ms.
++ Last updated 12/07/2023
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Title: Support matrix for web apps migration description: Support matrix for web apps migration--++ Last updated 08/31/2023
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Title: Create and manage projects description: Find, create, manage, and delete projects in Azure Migrate.--
-ms.
++ Last updated 05/22/2023
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md
Title: Discover SQL Server instances in an existing Azure Migrate project description: Learn how to discover SQL Server instances in an existing Azure Migrate project. --
-ms.
++ Last updated 09/27/2023
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
Title: Troubleshoot web apps migration issues description: Troubleshoot web apps migration issues--++ Last updated 02/28/2023
migrate Tutorial Modernize Asp Net Appservice Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-appservice-code.md
Title: Modernize ASP.NET web apps to Azure App Service code description: At-scale migration of ASP.NET web apps to Azure App Service using Azure Migrate--++ Last updated 02/28/2023
migrate Set Discovery Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/set-discovery-scope.md
Title: Set the scope for discovery of servers on VMware vSphere with Azure Migrate description: Describes how to set the discovery scope for servers hosted on VMware vSphere assessment and migration with Azure Migrate.--
-ms.
++ Last updated 12/12/2022
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Previously updated : 02/08/2024 Last updated : 06/28/2024 #
Example output:
```output Name Version -- -
-monitor-control-service 0.2.0
+monitor-control-service 0.4.1
connectedmachine 0.7.0
-connectedk8s 1.6.5
+connectedk8s 1.7.3
k8s-extension 1.4.3 networkcloud 1.1.0
-k8s-configuration 1.7.0
-managednetworkfabric 4.2.0
+k8s-configuration 2.0.0
+managednetworkfabric 6.2.0
customlocation 0.1.3
-ssh 2.0.2
+ssh 2.0.4
``` <!-- LINKS - External -->
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
Azure Database for PostgreSQL - Flexible Server encrypts data in two ways:
Although **it's strongly discouraged**, if needed due to legacy client incompatibility, you have the option to disable TLS/SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set the TLS version by setting the `ssl_max_protocol_version` server parameter. - **Data at rest**: For storage encryption, Azure Database for PostgreSQL - Flexible Server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
- The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
+ The service uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) with an AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
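To illustrate the cipher mode itself (not how the service implements it internally), here's a minimal AES-256-GCM round trip using the Python `cryptography` package:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM

ciphertext = aesgcm.encrypt(nonce, b"example row data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"example row data"
```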
## Network security
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL flexible server](overview.md) and the feature troubleshooting guides that are available to monitor the database bloat, autovacuum blockers and also information around how far the database is from emergency or wraparound situation.
+This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL flexible server](overview.md) and the troubleshooting guides that are available for monitoring database bloat and autovacuum blockers. It also provides information on how far the database is from an emergency or wraparound situation.
## What is autovacuum
-Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes.
-
-PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it isn't removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes which result in slow query performance.
-
-PostgreSQL uses a process called autovacuum to automatically clean-up dead tuples.
+Autovacuum is a PostgreSQL background process that automatically cleans up dead tuples and updates statistics. It helps maintain database performance by automatically running two key maintenance tasks:
+
+- VACUUM - Frees up disk space by removing dead tuples.
+- ANALYZE - Collects statistics to help the PostgreSQL Optimizer choose the best execution paths for queries.
+
+To ensure autovacuum works properly, the `autovacuum` server parameter should always be set to ON. When enabled, PostgreSQL automatically decides when to run VACUUM or ANALYZE on a table, ensuring the database remains efficient and optimized.
## Autovacuum internals
-Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost is based on:
+Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost is based on:
-- `vacuum_cost_page_hit`: Cost of reading a page that is already in shared buffers and doesn't need a disk read. The default value is set to 1.-- `vacuum_cost_page_miss`: Cost of fetching a page that isn't in shared buffers. The default value is set to 10.-- `vacuum_cost_page_dirty`: Cost of writing to a page when dead tuples are found in it. The default value is set to 20.
+| Parameter | Description
+| -- | -- |
| `vacuum_cost_page_hit` | Cost of reading a page that is already in shared buffers and doesn't need a disk read. The default value is set to 1.
| `vacuum_cost_page_miss` | Cost of fetching a page that isn't in shared buffers. The default value is set to 10.
| `vacuum_cost_page_dirty` | Cost of writing to a page when dead tuples are found in it. The default value is set to 20.
-The amount of work autovacuum does depends on two parameters:
+The amount of work autovacuum performs depends on two parameters:
-- `autovacuum_vacuum_cost_limit` is the amount of work autovacuum does in one go.-- `autovacuum_vacuum_cost_delay` number of milliseconds that autovacuum is asleep after it has reached the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter.
+| Parameter | Description
+| -- | -- |
| `autovacuum_vacuum_cost_limit` | The amount of work autovacuum does in one go.
| `autovacuum_vacuum_cost_delay` | Number of milliseconds that autovacuum is asleep after it reaches the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter.
-In all currently supported versions of Postgres the default for `autovacuum_vacuum_cost_limit` is 200 (actually, it is set to -1 which makes it equals to the value of the regular `vacuum_cost_limit` which, by default, is 200).
+In all currently supported versions of Postgres, the default value for `autovacuum_vacuum_cost_limit` is 200 (actually, it's set to -1, which makes it equal to the value of the regular `vacuum_cost_limit`, which defaults to 200).
As for `autovacuum_vacuum_cost_delay`, in Postgres version 11 it defaults to 20 milliseconds, while in Postgres versions 12 and above it defaults to 2 milliseconds.
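To get a feel for what these settings mean in practice, here's a back-of-the-envelope sketch of the best-case vacuum throughput, assuming every page is a buffer hit and ignoring the time the work itself takes:

```python
# Rough best-case autovacuum throughput under the default cost model.
vacuum_cost_page_hit = 1               # cost per page already in shared buffers
autovacuum_vacuum_cost_limit = 200     # work allowed per round before sleeping
autovacuum_vacuum_cost_delay_ms = 2    # default in Postgres 12 and above

pages_per_round = autovacuum_vacuum_cost_limit / vacuum_cost_page_hit
rounds_per_second = 1000 / autovacuum_vacuum_cost_delay_ms
pages_per_second = pages_per_round * rounds_per_second

# With 8-KB pages this is roughly 780 MB/s in the best case; page misses and
# dirty pages cost 10x and 20x as much, so real-world rates are much lower.
print(f"~{pages_per_second:,.0f} pages/s (~{pages_per_second * 8 / 1024:,.0f} MB/s)")
```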
select schemaname,relname,n_dead_tup,n_live_tup,round(n_dead_tup::float/n_live_t
The following columns help determine if autovacuum is catching up to table activity: -- **dead_pct**: percentage of dead tuples when compared to live tuples.-- **last_autovacuum**: The date of the last time the table was autovacuumed.-- **last_autoanalyze**: The date of the last time the table was automatically analyzed.
+| Column | Description
+| -- | -- |
| `dead_pct` | Percentage of dead tuples when compared to live tuples.
| `last_autovacuum` | The date of the last time the table was autovacuumed.
| `last_autoanalyze` | The date of the last time the table was automatically analyzed.
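Here's a small sketch that surfaces the same columns programmatically, using the `psycopg2` driver and a hypothetical connection string:

```python
import psycopg2

# Hypothetical connection string; substitute your own server and credentials.
conn = psycopg2.connect(
    "host=<SERVER>.postgres.database.azure.com dbname=<DB> "
    "user=<USER> password=<PASSWORD> sslmode=require"
)

with conn, conn.cursor() as cur:
    # Dead-tuple percentage plus the last autovacuum/autoanalyze timestamps.
    cur.execute("""
        SELECT schemaname, relname,
               round(n_dead_tup * 100.0 / nullif(n_live_tup, 0), 2) AS dead_pct,
               last_autovacuum, last_autoanalyze
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10;
    """)
    for row in cur.fetchall():
        print(row)
```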
## When does PostgreSQL trigger autovacuum
-An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows changes, while *VACUUM* triggers when 20% of the table plus 50 rows changes. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered earlier than *VACUUM*.
+An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that depends on two factors: the total count of rows in a table, plus a fixed threshold. By default, *ANALYZE* triggers when the changes exceed 10% of the table plus 50 rows, while *VACUUM* triggers when the changes exceed 20% of the table plus 50 rows. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered earlier than *VACUUM*.
+For PostgreSQL versions 13 and above, *VACUUM* also triggers, by default, when inserts exceed 20% of the table plus 1,000 rows (insert-driven vacuum).
The exact equations for each action are: -- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold
+- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold
+- **Autovacuum (inserts, PG versions >= 13)** = autovacuum_vacuum_insert_scale_factor * tuples + autovacuum_vacuum_insert_threshold
- **Autovacuum** = autovacuum_vacuum_scale_factor * tuples + autovacuum_vacuum_threshold
-For example, analyze triggers after 60 rows change on a table that contains 100 rows, and vacuum triggers when 70 rows change on the table, using the following equations:
+For example, consider a table with 100 rows. The following equations show when analyze and vacuum trigger:
+For updates and deletes:
`Autoanalyze = 0.1 * 100 + 50 = 60`
`Autovacuum = 0.2 * 100 + 50 = 70`
+Analyze triggers after 60 rows change in the table, and vacuum triggers after 70 rows change.
+
+For inserts:
+`Autovacuum (inserts) = 0.2 * 100 + 1000 = 1020`
+
+Vacuum triggers after 1,020 rows are inserted into the table.
+
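The same arithmetic, expressed as a small sketch that uses the default scale factors and thresholds quoted above:

```python
# Rows of activity needed to trigger each autovacuum action, using defaults.
def autovacuum_triggers(live_tuples: int) -> dict:
    return {
        "autoanalyze": 0.1 * live_tuples + 50,
        "autovacuum (updates/deletes)": 0.2 * live_tuples + 50,
        "autovacuum (inserts, PG 13+)": 0.2 * live_tuples + 1000,
    }

print(autovacuum_triggers(100))
# {'autoanalyze': 60.0, 'autovacuum (updates/deletes)': 70.0,
#  'autovacuum (inserts, PG 13+)': 1020.0}
```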
+Here's the description of the parameters used in the equations:
+
+| Parameter | Description
+| -- | -- |
+| `autovacuum_analyze_scale_factor` | Percentage of inserts/updates/deletes that triggers ANALYZE on the table.
+| `autovacuum_analyze_threshold` | Specifies the minimum number of inserted/updated/deleted tuples needed to trigger ANALYZE on a table.
+| `autovacuum_vacuum_insert_scale_factor` | Percentage of inserts that triggers VACUUM on the table (PG versions >= 13).
+| `autovacuum_vacuum_insert_threshold` | Specifies the minimum number of inserted tuples needed to trigger VACUUM on a table (PG versions >= 13).
+| `autovacuum_vacuum_scale_factor` | Percentage of updates/deletes that triggers VACUUM on the table.
+ Use the following query to list the tables in a database and identify the tables that qualify for the autovacuum process: ```sql
The autovacuum process estimates the cost of every I/O operation, accumulates a
By default, `autovacuum_vacuum_cost_limit` is set to –1, meaning autovacuum cost limit is the same value as the parameter `vacuum_cost_limit`, which defaults to 200. `vacuum_cost_limit` is the cost of a manual vacuum.
-If `autovacuum_vacuum_cost_limit` is set to `-1` then autovacuum uses the `vacuum_cost_limit` parameter, but if `autovacuum_vacuum_cost_limit` itself is set to greater than `-1` then `autovacuum_vacuum_cost_limit` parameter is considered.
+If `autovacuum_vacuum_cost_limit` is set to `-1`, autovacuum uses the `vacuum_cost_limit` parameter; if `autovacuum_vacuum_cost_limit` is set to a value greater than `-1`, that value is used instead.
In case the autovacuum isn't keeping up, the following parameters might be changed:
-| Parameter | Description |
+| Parameter | Description
| -- | -- |
-| `autovacuum_vacuum_scale_factor` | Default: `0.2`, range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. |
| `autovacuum_vacuum_cost_limit` | Default: `200`. Cost limit might be increased. CPU and I/O utilization on the database should be monitored before and after making changes. | | `autovacuum_vacuum_cost_delay` | **Postgres Version 11** - Default: `20 ms`. The parameter might be decreased to `2-10 ms`.<br />**Postgres Versions 12 and above** - Default: `2 ms`. | > [!NOTE]
-> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter
+> - The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter.
+> - `autovacuum_vacuum_scale_factor` is another parameter that can trigger vacuum on a table based on dead-tuple accumulation. Default: `0.2`, range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes.
### Autovacuum constantly running
-Continuously running autovacuum might affect CPU and IO utilization on the server. The following might be possible reasons:
+Continuously running autovacuum might affect CPU and I/O utilization on the server. Here are some of the possible reasons:
#### `maintenance_work_mem`
If `maintenance_work_mem` is low, it might be increased to up to 2 GB on Azure
Autovacuum tries to start a worker on each database every `autovacuum_naptime` seconds.
-For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs].
+For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of databases].
It's a good idea to increase `autovacuum_naptime` if a cluster has many databases. At the same time, the autovacuum process can be made more aggressive by increasing `autovacuum_cost_limit`, decreasing `autovacuum_cost_delay`, and increasing `autovacuum_max_workers` from the default of 3 to 4 or 5.
Overly aggressive `maintenance_work_mem` values could periodically cause out
### Autovacuum is too disruptive
-If autovacuum is consuming a lot of resources, the following can be done:
+If autovacuum is consuming a lot of resources, you can take the following actions:
#### Autovacuum parameters Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_limit`, `autovacuum_max_workers`. Improperly setting autovacuum parameters might lead to scenarios where autovacuum becomes too disruptive.
-If autovacuum is too disruptive, consider the following:
+If autovacuum is too disruptive, consider the following actions:
- Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if set higher than the default of 200.-- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.
+- Reduce the number of `autovacuum_max_workers` if set higher than the default of 3.
#### Too many autovacuum workers
-Increasing the number of autovacuum workers won't necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended.
+Increasing the number of autovacuum workers doesn't necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended.
-Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation.
+Increasing the number of autovacuum workers results in more memory consumption and, depending on the value of `maintenance_work_mem`, could cause performance degradation.
Each autovacuum worker process only gets (1/autovacuum_max_workers) of the total `autovacuum_cost_limit`, so having a high number of workers causes each one to go slower. If the number of workers is increased, `autovacuum_vacuum_cost_limit` should also be increased and/or `autovacuum_vacuum_cost_delay` should be decreased to make the vacuum process faster.
-However, if we have changed table level `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters then the workers running on those tables are exempted from being considered in the balancing algorithm [autovacuum_cost_limit/autovacuum_max_workers].
+However, if the `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters are set at the table level, the workers running on those tables are exempted from the balancing algorithm [autovacuum_cost_limit/autovacuum_max_workers].
### Autovacuum transaction ID (TXID) wraparound protection
-When a database runs into transaction ID wraparound protection, an error message like the following can be observed:
+When a database runs into transaction ID wraparound protection, an error message like the following one can be observed:
``` Database isn't accepting commands to avoid wraparound data loss in database 'xx'
Stop the postmaster and vacuum that database in single-user mode.
> [!NOTE] > This error message is a long-standing oversight. Usually, you do not need to switch to single-user mode. Instead, you can run the required VACUUM commands and perform tuning for VACUUM to run fast. While you cannot run any data manipulation language (DML), you can still run VACUUM.
-The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that couldn't be removed by autovacuum. The reasons for this might be:
+The wraparound problem occurs when the database either isn't vacuumed or has too many dead tuples that aren't removed by autovacuum. The reasons for this issue might be:
#### Heavy workload
The workload could cause too many dead tuples in a brief period that makes it di
#### Long-running transactions
-Any long-running transactions in the system won't allow dead tuples to be removed while autovacuum is running. They're a blocker to the vacuum process. Removing the long running transactions frees up dead tuples for deletion when autovacuum runs.
+Long-running transactions in the system don't allow dead tuples to be removed while autovacuum is running. They block the vacuum process. Removing the long-running transactions frees up dead tuples for deletion when autovacuum runs.
Long-running transactions can be detected using the following query:
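A minimal sketch of such a query, using the `pg_stat_activity` view (the five-minute cutoff is an arbitrary assumption; adjust it for your workload):

```sql
-- Transactions that have been open for more than five minutes
SELECT pid,
       age(clock_timestamp(), xact_start) AS transaction_age,
       state,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND age(clock_timestamp(), xact_start) > interval '5 minutes'
ORDER BY transaction_age DESC;
```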
Unused replication slots prevent autovacuum from claiming dead tuples. The follo
Use `pg_drop_replication_slot()` to delete unused replication slots.
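As a sketch, you can list inactive slots from `pg_replication_slots` and then drop one by name (`stale_slot` is a placeholder):

```sql
-- Find replication slots that no consumer is using
SELECT slot_name, slot_type, active
FROM pg_replication_slots
WHERE active = false;

-- Drop an unused slot by name
SELECT pg_drop_replication_slot('stale_slot');
```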
-When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove those manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value greater than 200. However, changes to these parameters won't be applied to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes.
+When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove the blockers manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value greater than 200. However, changes to these parameters do not apply to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes.
### Table-specific requirements
-Autovacuum parameters might be set for individual tables. It's especially important for small and big tables. For example, for a small table that contains only 100 rows, autovacuum triggers VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day. This prevents autovacuum from maintaining other tables on which the percentage of changes aren't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
+Autovacuum parameters might be set for individual tables. It's especially important for small and large tables. For example, for a small table that contains only 100 rows, autovacuum triggers a VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day, preventing autovacuum from maintaining other tables on which the percentage of changes isn't as significant. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
To set autovacuum settings per table, change the server parameters as shown in the following examples:
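For example, a sketch of per-table overrides (the table names and values are placeholder assumptions):

```sql
-- Vacuum a small, frequently updated table more often
ALTER TABLE small_hot_table SET (autovacuum_vacuum_scale_factor = 0.01, autovacuum_vacuum_threshold = 100);

-- Trigger autovacuum earlier on a very large table
ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.001);
```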
To set autovacuum setting per table, change the server parameters as the follo
### Insert-only workloads
-In versions of PostgreSQL prior to 13, autovacuum won't run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there's new data. The disadvantages of this are:
+In PostgreSQL versions earlier than 13, autovacuum doesn't run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space to reclaim. However, autoanalyze runs for insert-only workloads because there's new data. The disadvantages of this behavior are:
- The visibility map of the tables isn't updated, and thus query performance, especially where there are Index Only Scans, starts to suffer over time. - The database can run into transaction ID wraparound protection.-- Hint bits won't be set.
+- Hint bits are not set.
#### Solutions
-##### Postgres versions prior to 13
+##### Postgres versions earlier than 13
Using the **pg_cron** extension, a cron job can be set up to schedule a periodic vacuum analyze on the table. The frequency of the cron job depends on the workload.
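For example, a sketch that schedules a nightly `VACUUM ANALYZE` at 02:00, assuming a pg_cron version that supports named jobs (the job name, cron schedule, and table name are placeholder assumptions):

```sql
-- Run VACUUM ANALYZE on the orders table every day at 02:00
SELECT cron.schedule('nightly-vacuum-orders', '0 2 * * *', 'VACUUM ANALYZE orders');
```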
For step-by-step guidance using pg_cron, review [Extensions](./concepts-extensio
##### Postgres 13 and higher versions
-Autovacuum will run on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and  `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables.
+Autovacuum runs on tables with an insert-only workload. Two new server parameters, `autovacuum_vacuum_insert_threshold` and `autovacuum_vacuum_insert_scale_factor`, help control when autovacuum can be triggered on insert-only tables.
## Troubleshooting guides
-Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL flexible server portal it is possible to monitor bloat at database or individual schema level along with identifying potential blockers to autovacuum process. Two troubleshooting guides are available first one is autovacuum monitoring that can be used to monitor bloat at database or individual schema level. The second troubleshooting guide is autovacuum blockers and wraparound which helps to identify potential autovacuum blockers along with information on how far the databases on the server are from wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. How to set up the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+Using the troubleshooting guides feature available in the Azure Database for PostgreSQL flexible server portal, you can monitor bloat at the database or individual schema level and identify potential blockers to the autovacuum process. Two troubleshooting guides are available. The first, autovacuum monitoring, can be used to monitor bloat at the database or individual schema level. The second, autovacuum blockers and wraparound, helps identify potential autovacuum blockers. It also provides information on how far the databases on the server are from a wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
++
+## Azure Advisor Recommendations
+
+Azure Advisor recommendations are a proactive way of identifying whether a server has a high bloat ratio or is approaching a transaction ID wraparound scenario. You can also set alerts for the recommendations by following [Create Azure Advisor alerts on new recommendations using the Azure portal](../../advisor/advisor-alerts-portal.md).
+
+The recommendations are:
+
+- **High Bloat Ratio**: A high bloat ratio can affect server performance in several ways. One significant issue is that the PostgreSQL Engine Optimizer might struggle to select the best execution plan, leading to degraded query performance. Therefore, a recommendation is triggered when the bloat percentage on a server reaches a certain threshold to avoid such performance issues.
+
+- **Transaction Wraparound**: This scenario is one of the most serious issues a server can encounter. Once your server is in this state, it might stop accepting any more transactions, causing the server to become read-only. Hence, a recommendation is triggered when the server crosses the 1-billion-transaction threshold.
## Related content
role-based-access-control Role Assignments Eligible Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-eligible-activate.md
+
+ Title: Activate eligible Azure role assignments (Preview) - Azure RBAC
+description: Learn how to activate eligible Azure role assignments in Azure role-based access control (Azure RBAC) using the Azure portal.
++++ Last updated : 06/27/2024+++
+# Activate eligible Azure role assignments (Preview)
+
+> [!IMPORTANT]
+> Azure role assignment integration with Privileged Identity Management is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Eligible Azure role assignments provide just-in-time access to a role for a limited period of time. Microsoft Entra Privileged Identity Management (PIM) role activation has been integrated into the Access control (IAM) page in the Azure portal. If you have been made eligible for an Azure role, you can activate that role using the Azure portal. This capability is being deployed in stages, so it might not be available yet in your tenant or your interface might look different.
+
+## Prerequisites
+
+- Microsoft Entra ID P2 license or Microsoft Entra ID Governance license
+- [Eligible role assignment](./role-assignments-portal.yml#step-6-select-assignment-type-(preview))
+- `Microsoft.Authorization/roleAssignments/read` permission, such as [Reader](./built-in-roles/general.md#reader)
+
+## Activate group membership (if needed)
+
+If you have been made eligible for a group ([PIM for Groups](/entra/id-governance/privileged-identity-management/concept-pim-for-groups)) and this group has an eligible role assignment, you must first activate your group membership before you can see the eligible role assignment for the group. For this scenario, you must activate twice - first for the group and then for the role.
+
+For steps on how to activate your group membership, see [Activate your group membership or ownership in Privileged Identity Management](/entra/id-governance/privileged-identity-management/groups-activate-roles).
+
+## Activate role using the Azure portal
+
+These steps describe how to activate an eligible role assignment using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **All services** and then select the scope. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.
+
+1. Click the specific resource.
+
+1. Click **Access control (IAM)**.
+
+1. Click **Activate role**.
+
+ The **assignments** pane appears and lists your eligible role assignments.
+
+ :::image type="content" source="./media/role-assignments-eligible-activate/activate-role.png" alt-text="Screenshot of Access control page and Activate role assignments pane." lightbox="./media/role-assignments-eligible-activate/activate-role.png":::
+
+1. Add a check mark next to a role you want to activate and then click **Activate role**.
+
+ The **Activate** pane appears with activate settings.
+
+1. On the **Activate** tab, specify the start time, duration, and reason. If you want to customize the activation start time, check the **Custom activation start time** box.
+
+ :::image type="content" source="./media/role-assignments-eligible-activate/activate-role-settings.png" alt-text="Screenshot of Activate pane and Activate tab that shows start time, duration, and reason settings." lightbox="./media/role-assignments-eligible-activate/activate-role-settings.png":::
+
+1. (Optional) Click the **Scope** tab to specify the scope for the role assignment.
+
+ If your eligible role assignment was defined at a higher scope, you can select a lower scope to narrow your access. For example, if you have an eligible role assignment at subscription scope, you can choose resource groups in the subscription to narrow your scope.
+
+ :::image type="content" source="./media/role-assignments-eligible-activate/activate-role-scope.png" alt-text="Screenshot of Activate pane and Scope tab that shows scope settings." lightbox="./media/role-assignments-eligible-activate/activate-role-scope.png":::
+
+1. When finished, click the **Activate** button to activate the role with the selected settings.
+
+ Progress messages appear to indicate the status of the activation.
+
+ :::image type="content" source="./media/role-assignments-eligible-activate/activate-role-status.png" alt-text="Screenshot of Activate pane that shows activation status." lightbox="./media/role-assignments-eligible-activate/activate-role-status.png":::
+
+ When activation is complete, you see a message that the role was successfully activated.
+
+   Once an eligible role assignment has been activated, it's listed as an active time-bound role assignment on the **Role assignments** tab. For more information, see [List Azure role assignments using the Azure portal](./role-assignments-list-portal.yml#list-role-assignments-at-a-scope).
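+
+If you prefer automation, an eligible assignment can also be activated programmatically with the Azure RBAC REST API for role assignment schedule requests. The following is a hedged sketch; the GUIDs, scope, duration, and api-version are placeholder assumptions that you should verify against the current API reference:
+
+```http
+PUT https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/<new-guid>?api-version=2020-10-01
+Content-Type: application/json
+Authorization: Bearer <token>
+
+{
+  "properties": {
+    "principalId": "<your-object-id>",
+    "roleDefinitionId": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>",
+    "requestType": "SelfActivate",
+    "justification": "Time-bound access for privileged tasks",
+    "scheduleInfo": {
+      "expiration": { "type": "AfterDuration", "duration": "PT8H" }
+    }
+  }
+}
+```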
+
+## Next steps
+
+- [Integration with Privileged Identity Management (Preview)](./role-assignments.md#integration-with-privileged-identity-management-preview)
+- [Activate my Azure resource roles in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-activate-your-roles)
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
Previously updated : 12/01/2023 Last updated : 06/25/2024
If you are using a service principal to assign roles, you might get the error "I
Once you know the security principal, role, and scope, you can assign the role. You can assign roles using the Azure portal, Azure PowerShell, Azure CLI, Azure SDKs, or REST APIs.
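For example, a minimal Azure CLI sketch (the principal, role, and scope values are placeholders):

```azurecli
az role assignment create \
  --assignee "<object-id-or-email>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```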
-You can have up to **4000** role assignments in each subscription. This limit includes role assignments at the subscription, resource group, and resource scopes. You can have up to **500** role assignments in each management group. For more information, see [Troubleshoot Azure RBAC limits](troubleshoot-limits.md).
+You can have up to **4000** role assignments in each subscription. This limit includes role assignments at the subscription, resource group, and resource scopes. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type-(preview)) and role assignments scheduled in the future do not count towards this limit. You can have up to **500** role assignments in each management group. For more information, see [Troubleshoot Azure RBAC limits](troubleshoot-limits.md).
Check out the following articles for detailed steps for how to assign roles.
role-based-access-control Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments.md
description: Learn about Azure role assignments in Azure role-based access contr
Previously updated : 10/03/2022 Last updated : 06/27/2024 # Understand Azure role assignments
The preceding condition allows users to read blobs with a blob index tag key of
For more information about conditions, see [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md)
+## Integration with Privileged Identity Management (Preview)
+
+> [!IMPORTANT]
+> Azure role assignment integration with Privileged Identity Management is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+If you have a Microsoft Entra ID P2 or Microsoft Entra ID Governance license, [Microsoft Entra Privileged Identity Management (PIM)](/entra/id-governance/privileged-identity-management/pim-configure) is integrated into role assignment steps. For example, you can assign roles to users for a limited period of time. You can also make users eligible for role assignments so that they must activate the role before using it, for example by requesting approval. Eligible role assignments provide just-in-time access to a role for a limited period of time. You can't create eligible role assignments for applications, service principals, or managed identities because they can't perform the activation steps. This capability is being deployed in stages, so it might not be available yet in your tenant or your interface might look different.
+
+The assignment type options available to you might vary depending on your PIM policy. For example, PIM policy defines whether permanent assignments can be created, the maximum duration for time-bound assignments, role activation requirements (approval, multifactor authentication, or Conditional Access authentication context), and other settings. For more information, see [Configure Azure resource role settings in Privileged Identity Management](/entra/id-governance/privileged-identity-management/pim-resource-roles-configure-role-settings).
++
+To better understand PIM, you should review the following terms.
+
+| Term or concept | Role assignment category | Description |
+| | | |
+| eligible | Type | A role assignment that requires a user to perform one or more actions to use the role. If a user has been made eligible for a role, that means they can activate the role when they need to perform privileged tasks. There's no difference in the access given to someone with a permanent versus an eligible role assignment. The only difference is that some people don't need that access all the time. |
+| active | Type | A role assignment that doesn't require a user to perform any action to use the role. Users assigned as active have the privileges assigned to the role. |
+| activate | | The process of performing one or more actions to use a role that a user is eligible for. Actions might include performing a multifactor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. |
+| permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. |
+| permanent active | Duration | A role assignment where a user can always use the role without performing any actions. |
+| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. |
+| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. |
+| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. |
+| principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they're authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. |
+
+For more information, see [What is Microsoft Entra Privileged Identity Management?](/entra/id-governance/privileged-identity-management/pim-configure).
+ ## Next steps - [Delegate Azure access management to others](delegate-role-assignments-overview.md)
role-based-access-control Troubleshoot Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshoot-limits.md
Previously updated : 02/22/2024 Last updated : 06/27/2024
When you try to assign a role, you get the following error message:
### Cause
-Azure supports up to **4000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. You should try to reduce the number of role assignments in the subscription.
+Azure supports up to **4000** role assignments per subscription. This limit includes role assignments at the subscription, resource group, and resource scopes, but not at the management group scope. [Eligible role assignments](./role-assignments-portal.yml#step-6-select-assignment-type-(preview)) and role assignments scheduled in the future do not count towards this limit. You should try to reduce the number of role assignments in the subscription.
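+
+To gauge how close you are to the limit, you can count the role assignments visible in your current subscription context; a minimal Azure CLI sketch:
+
+```azurecli
+az role assignment list --all --query "length(@)" --output tsv
+```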
> [!NOTE] > The **4000** role assignments limit per subscription is fixed and cannot be increased.
search Keyless Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/keyless-connections.md
Deploy production workloads includes these steps:
### Roles for production workloads
-To create your production resources, you need to create a user-assigend [managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) then assign that identity to your resources with the correct roles.
+To create your production resources, you need to create a [user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) and then assign that identity to your resources with the correct roles.
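+
+As a sketch, the identity and role assignment can be created with the Azure CLI; the names are placeholder assumptions, and the role shown is only one option:
+
+```azurecli
+# Create the user-assigned managed identity
+az identity create --name my-app-identity --resource-group my-rg
+
+# Look up the identity's principal ID and the search service's resource ID
+principalId=$(az identity show --name my-app-identity --resource-group my-rg --query principalId --output tsv)
+searchId=$(az search service show --name my-search --resource-group my-rg --query id --output tsv)
+
+# Grant the identity read-write access to search index content
+az role assignment create --assignee "$principalId" --role "Search Index Data Contributor" --scope "$searchId"
+```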
The following role is suggested for a production application:
Create environment variables for your deployed and keyless Azure AI Search resou
## Related content * [Keyless connections developer guide](/azure/developer/intro/passwordless-overview)
-* [Azure built-in roles](/azure/role-based-access-control/built-in-roles)
+* [Azure built-in roles](/azure/role-based-access-control/built-in-roles)
+* [Set environment variables](/azure/ai-services/cognitive-services-environment-variables)
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
ms.devlang: rest-api Previously updated : 03/14/2024 Last updated : 06/27/2024 - mode-api - ignite-2023
-# Quickstart: Text search by using REST
+# Quickstart: Keyword search by using REST
The REST APIs in Azure AI Search provide programmatic access to all of its capabilities, including preview features, and they're an easy way to learn how features work. In this quickstart, learn how to call the [Search REST APIs](/rest/api/searchservice) to create, load, and query a search index in Azure AI Search.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites - [Visual Studio Code](https://code.visualstudio.com/download) with a [REST client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).+ - [Azure AI Search](search-what-is-azure-search.md). [Create](search-create-service-portal.md) or [find an existing Azure AI Search resource](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart. ## Download files
-[Download a REST sample](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart) from GitHub to send the requests in this quickstart. For more information, see [Downloading files from GitHub](https://docs.github.com/get-started/start-your-journey/downloading-files-from-github).
+[Download a REST sample](https://github.com/Azure-Samples/azure-search-rest-samples/tree/main/Quickstart) from GitHub to send the requests in this quickstart. Instructions can be found at [Downloading files from GitHub](https://docs.github.com/get-started/start-your-journey/downloading-files-from-github).
You can also start a new file on your local system and create requests manually by using the instructions in this article.
-## Copy a search service key and URL
+## Get a search service endpoint
+
+You can find the search service endpoint in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+
+1. On the **Overview** home page, find the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+
+ :::image type="content" source="media/search-get-started-rest/get-endpoint.png" lightbox="media/search-get-started-rest/get-endpoint.png" alt-text="Screenshot of the URL property on the overview page.":::
+
+You'll paste this endpoint into the `.rest` or `.http` file in a later step.
+
+## Configure access
+
+Requests to the search endpoint must be authenticated and authorized. You can use API keys or roles for this task. Keys are easier to start with, but roles are more secure.
+
+### Option 1: Use keys
+
+Select **Settings** > **Keys** and then copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one. For more information, see [Connect to Azure AI Search using key authentication](search-security-api-keys.md).
++
+You'll paste this key into the `.rest` or `.http` file in a later step.
+
+### Option 2: Use roles
+
+Make sure your search service is [configured for role-based access](search-security-enable-roles.md). You must have preconfigured [role assignments for developer access](search-security-rbac.md#assign-roles-for-development). Your role assignments must grant permission to create, load, and query a search index.
+
+In this section, obtain your personal identity token using the Azure CLI, Azure PowerShell, or the Azure portal.
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Sign in to Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. Get your personal identity.
-REST calls require the search service endpoint and an API key on every request. You can get these values from the Azure portal.
+ ```azurecli
+ az ad signed-in-user show \
+ --query id -o tsv
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Sign in with PowerShell.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Get your personal identity.
+
+ ```azurepowershell
+ (Get-AzContext).Account.ExtendedProperties.HomeAccountId.Split('.')[0]
+ ```
+
+#### [Azure portal](#tab/portal)
+
+Use the steps found here: [find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id) in the Azure portal.
-1. Sign in to the [Azure portal](https://portal.azure.com). Then go to the search service **Overview** page and copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+
-1. Select **Settings** > **Keys** and then copy an admin key. Admin keys are used to add, modify, and delete objects. There are two interchangeable admin keys. Copy either one.
+You'll paste your personal identity token into the `.rest` or `.http` file in a later step.
- :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot that shows the URL and API keys in the Azure portal.":::
+> [!NOTE]
+> This section assumes you're using a local client that connects to Azure AI Search on your behalf. An alternative approach is [getting a token for the client app](/entra/identity-platform/v2-oauth2-client-creds-grant-flow), assuming your application is [registered](/entra/identity-platform/quickstart-register-app) with Microsoft Entra ID.
## Set up Visual Studio Code
If you're not familiar with the REST client for Visual Studio Code, this section
1. Open or create a new file named with either a `.rest` or `.http` file extension.
-1. Paste in the following example. Replace the base URL and API key with the values you copied earlier.
+1. Paste in the following example if you're using API keys. Replace the `@baseUrl` and `@apiKey` placeholders with the values you copied earlier.
```http @baseUrl = PUT-YOUR-SEARCH-SERVICE-ENDPOINT-HERE
If you're not familiar with the REST client for Visual Studio Code, this section
api-key: {{apiKey}} ```
+1. Or, paste in this example if you're using roles. Replace the `@baseUrl` and `@token` placeholders with the values you copied earlier.
+
+ ```http
+ @baseUrl = PUT-YOUR-SEARCH-SERVICE-ENDPOINT-HERE
+ @token = PUT-YOUR-PERSONAL-IDENTITY-TOKEN-HERE
+
+ ### List existing indexes by name
+ GET {{baseUrl}}/indexes?api-version=2023-11-01&$select=name HTTP/1.1
+ Content-Type: application/json
+ Authorization: Bearer {{token}}
+ ```
+ 1. Select **Send request**. A response should appear in an adjacent pane. If you have existing indexes, they're listed. Otherwise, the list is empty. If the HTTP code is `200 OK`, you're ready for the next steps. :::image type="content" source="media/search-get-started-rest/rest-client-request-setup.png" lightbox="media/search-get-started-rest/rest-client-request-setup.png" alt-text="Screenshot that shows a REST client configured for a search service request.":::
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
- Title: Configure search apps for Microsoft Entra ID-
-description: Acquire a token from Microsoft Entra ID to authorize search requests to an app built on Azure AI Search.
---- Previously updated : 04/25/2024-
- - subject-rbac-steps
- - ignite-2023
--
-# Authorize access to a search app using Microsoft Entra ID
-
-Search applications that are built on Azure AI Search can now use the [Microsoft identity platform](../active-directory/develop/v2-overview.md) for authenticated and authorized access. On Azure, the identity provider is Microsoft Entra ID. A key [benefit of using Microsoft Entra ID](../active-directory/develop/how-to-integrate.md#benefits-of-integration) is that your credentials and API keys no longer need to be stored in your code. Microsoft Entra authenticates the security principal (a user, group, or service) running the application. If authentication succeeds, Microsoft Entra ID returns the access token to the application, and the application can then use the access token to authorize requests to Azure AI Search.
-
-This article shows you how to configure your client for Microsoft Entra ID:
-
-+ For authentication, create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your application. You can use a different type of security principal object, but this article uses managed identities because they eliminate the need to manage credentials.
-
-+ For authorization, assign an Azure role to the managed identity that grants permissions to run queries or manage indexing jobs.
-
-+ Update your client code to call [`TokenCredential()`](/dotnet/api/azure.core.tokencredential). For example, you can get started with new SearchClient(endpoint, new `DefaultAzureCredential()`) to authenticate via a Microsoft Entra ID using [Azure.Identity](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md).
-
-## Configure role-based access for data plane
-
-**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
-
-In this step, configure your search service to recognize an **authorization** header on data requests that provide an OAuth2 access token.
-
-### [**Azure portal**](#tab/config-svc-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
-
-1. Select **Keys** in the left navigation pane.
-
- :::image type="content" source="media/search-create-service-portal/set-authentication-options.png" lightbox="media/search-create-service-portal/set-authentication-options.png" alt-text="Screenshot of the keys page with authentication options." border="true":::
-
-1. Choose an **API access control** option. We recommend **Both** if you want flexibility or need to migrate apps.
-
- | Option | Description |
- |--||
- | API Key | (default) Requires an [admin or query API keys](search-security-api-keys.md) on the request header for authorization. No roles are used. |
- | Role-based access control | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. |
- | Both | Requests are valid using either an API key or role-based access control. |
-
-The change is effective immediately, but wait a few seconds before testing.
-
-All network calls for search service operations and content respect the option you select: API keys, bearer token, or either one if you select **Both**.
-
-When you enable role-based access control in the portal, the failure mode is "http401WithBearerChallenge" if authorization fails.
-
-### [**REST API**](#tab/config-svc-rest)
-
-Use the Management REST API [Create or Update Service](/rest/api/searchmanagement/services/create-or-update) to configure your service.
-
-All calls to the Management REST API are authenticated through Microsoft Entra ID, with Contributor or Owner permissions. For help with setting up authenticated requests in a REST client, see [Manage Azure AI Search using REST](search-manage-rest.md).
-
-1. Get service settings so that you can review the current configuration.
-
- ```http
- GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2023-11-01
- ```
-
-1. Use PATCH to update service configuration. The following modifications enable both keys and role-based access. If you want a roles-only configuration, see [Disable API keys](search-security-enable-roles.md#disable-api-key-authentication).
-
- Under "properties", set ["authOptions"](/rest/api/searchmanagement/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey". The "disableLocalAuth" property must be false to set "authOptions".
-
- Optionally, set ["aadAuthFailureMode"](/rest/api/searchmanagement/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. Valid values are "http401WithBearerChallenge" or "http403".
-
- ```http
- PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2023-11-01
- {
- "properties": {
- "disableLocalAuth": false,
- "authOptions": {
- "aadOrApiKey": {
- "aadAuthFailureMode": "http401WithBearerChallenge"
- }
- }
- }
- }
- ```
---
-## Create a managed identity
-
-In this step, create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your client application.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for **Managed Identities**.
-
-1. Select **Create**.
-
-1. Give your managed identity a name and select a region. Then, select **Create**.
-
- :::image type="content" source="media/search-howto-aad/create-managed-identity.png" alt-text="Screenshot of the Create Managed Identity wizard." border="true" :::
-
-## Assign a role to the managed identity
-
-Next, you need to grant your client's managed identity access to your search service. Azure AI Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
-
-It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if the client needs both read and write access on a search index, you should use the [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to your search service.
-
-1. Select **Access control (IAM)** in the left navigation pane.
-
-1. Select **+ Add** > **Add role assignment**.
-
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of Access control (IAM) page with Add role assignment menu open." border="true":::
-
-1. Select an applicable role:
-
- + Owner
- + Contributor
- + Reader
- + Search Service Contributor
- + Search Index Data Contributor
- + Search Index Data Reader
-
- > [!NOTE]
- > The Owner, Contributor, Reader, and Search Service Contributor are control plane roles and don't give you access to the data within a search index. For data access, choose either the Search Index Data Contributor or Search Index Data Reader role. For more information on the scope and purpose of each role, see [Built-in roles used in Search](search-security-rbac.md#built-in-roles-used-in-search).
-
-1. On the **Members** tab, select the managed identity that you want to give access to your search service.
-
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-You can assign multiple roles, such as Search Service Contributor and Search Index Data Contributor, if your application needs comprehensive access to the search services, objects, and content.
-
-You can also [assign roles using PowerShell](search-security-rbac.md).
-
-<a name='set-up-azure-ad-authentication-in-your-client'></a>
-
-## Set up Microsoft Entra authentication in your client
-
-Once you have a managed identity and a role assignment on the search service, you're ready to add code to your application to authenticate the security principal and acquire an OAuth 2.0 token.
-
-Use the following client libraries for role-based access control:
-
-+ [azure.search.documents (Azure SDK for .NET)](https://www.nuget.org/packages/Azure.Search.Documents/)
-+ [azure-search-documents (Azure SDK for Java)](https://central.sonatype.com/artifact/com.azure/azure-search-documents)
-+ [azure/search-documents (Azure SDK for JavaScript)](https://www.npmjs.com/package/@azure/search-documents/v/11.3.1)
-+ [azure.search.documents (Azure SDK for Python)](https://pypi.org/project/azure-search-documents/)
-
-> [!NOTE]
-> To learn more about the OAuth 2.0 code grant flow used by Microsoft Entra ID, see [Authorize access to Microsoft Entra web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
-
-### [**.NET SDK**](#tab/aad-dotnet)
-
-The following instructions reference an existing C# sample to demonstrate the code changes.
-
-1. As a starting point, clone the [source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart/v11) for the C# section of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md).
-
- The sample currently uses key-based authentication and the `AzureKeyCredential` to create the `SearchClient` and `SearchIndexClient` but you can make a small change to switch over to role-based authentication.
-
-1. Update the Azure.Search.Documents NuGet package to version 11.4 or later.
-
-1. Import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/) library to get access to other authentication techniques.
-
-1. Instead of using `AzureKeyCredential` in the beginning of `Main()` in [Program.cs](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart/v11/AzureSearchQuickstart-v11/Program.cs), use `DefaultAzureCredential` like in the code snippet below:
-
- ```csharp
- // Create a SearchIndexClient to send create/delete index commands
- SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, new DefaultAzureCredential());
- // Create a SearchClient to load and query documents
- SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, new DefaultAzureCredential());
- ```
-
-### Local testing
-
-User-assigned managed identities work only in Azure environments. If you run this code locally, `DefaultAzureCredential` falls back to authenticating with your credentials. Make sure you give yourself the required access to the search service if you plan to run the code locally.
-
-1. Verify your account has role assignments to run all of the operations in the quickstart sample. To both create and query an index, use "Search Index Data Reader" and "Search Index Data Contributor".
-
-1. Go to **Tools** > **Options** > **Azure Service Authentication** to choose your Azure sign-on account.
-
-You should now be able to run the project from Visual Studio on your local system, using role-based access control for authorization.
-
-> [!NOTE]
-> The Azure.Identity documentation has more details about `DefaultAzureCredential` and using [Microsoft Entra authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme). `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
-
-### [**REST API**](#tab/aad-rest)
-
-Using an Azure SDK simplifies the OAuth 2.0 flow but you can also program directly against the protocol in your application. Full details are available in [Microsoft identity platform and the OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-
-1. Start by [getting a token](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md#get-a-token) from the Microsoft identity platform:
-
- ```http
- POST /[tenant id]/oauth2/v2.0/token HTTP/1.1
- Host: login.microsoftonline.com
- Content-Type: application/x-www-form-urlencoded
-
- client_id=[client id]
- &scope=https%3A%2F%2Fsearch.azure.com%2F.default
- &client_secret=[client secret]
- &grant_type=client_credentials
- ```
-
- The required scope is "https://search.azure.com/.default".
-
-1. Now that you have a token, you're ready to issue a request to the search service.
-
- ```http
- GET https://[service name].search.windows.net/indexes/[index name]/docs?[query parameters]
- Content-Type: application/json
- Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q...
- ```
---
-## See also
-
-+ [Use Azure role-based access control in Azure AI Search](search-security-rbac.md)
-+ [Authorize access to Microsoft Entra web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md)
-+ [Integrating with Microsoft Entra ID](../active-directory/develop/how-to-integrate.md#benefits-of-integration)
-+ [Azure custom roles](../role-based-access-control/custom-roles.md)
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
You can use the [Azure CLI or Azure PowerShell to create an access token](/azure
az account get-access-token --query accessToken --output tsv ```
+You should have a tenant ID, subscription ID, and bearer token. You'll paste these values into the `.rest` or `.http` file that you create in the next step.
+ ## Set up Visual Studio Code If you're not familiar with the REST client for Visual Studio Code, this section includes setup so that you can complete the tasks in this quickstart.
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
- ignite-2023 Previously updated : 04/22/2024 Last updated : 06/28/2024 # Connect to Azure AI Search using key authentication Azure AI Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint is accepted if both the request and the API key are valid.
-Key-based authentication is the default. You can disable it if you opt in for role-based authentication.
-
-> [!NOTE]
-> A quick note about *key* terminology. An *API key* is a GUID used for authentication. A separate term, *document key* is a unique string in your indexed content that uniquely identifies documents in a search index.
+Key-based authentication is the default. You can disable it if you opt in to [role-based authentication](search-security-enable-roles.md).
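+
+For example, a request authenticated with an admin key passes the key in an `api-key` request header; the service name and key are placeholders:
+
+```http
+GET https://<service-name>.search.windows.net/indexes?api-version=2023-11-01
+Content-Type: application/json
+api-key: <admin-api-key>
+```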
## Types of API keys
search Search Security Enable Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-enable-roles.md
Last updated 06/18/2024
# Enable or disable role-based access control in Azure AI Search
-If you want to use Azure role-based access control for connections into Azure AI Search, this article explains how to enable it for your search service.
+If you want to use Azure role assignments for authorized access to Azure AI Search, this article explains how to enable role-based access for your search service.
-Role-based access for data plane operations is optional, but recommended. The alternative is [key-based authentication](search-security-api-keys.md), which is the default.
+Role-based access for data plane operations is optional, but recommended as the more secure option. The alternative is [key-based authentication](search-security-api-keys.md), which is the default.
Roles for service administration (control plane) are built in and can't be enabled or disabled.
Roles for service administration (control plane) are built in and can't be enabl
## Enable role-based access for data plane operations
-When you enable roles for the data plane, the change is effective immediately, but wait a few seconds before assigning roles.
+Configure your search service to recognize an **authorization** header on data requests that provide an OAuth2 access token.
-The default failure mode is `http401WithBearerChallenge`. Alternatively, you can set the failure mode to `http403`.
+When you enable roles for the data plane, the change is effective immediately, but wait a few seconds before assigning roles.
-Once role-based access is enabled, the search service recognizes an **authorization** header on data plane requests that provide an OAuth2 access token.
+The default failure mode for unauthorized requests is `http401WithBearerChallenge`. Alternatively, you can set the failure mode to `http403`.
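+
+For reference, here's a sketch of setting the failure mode through the Management REST API, mirroring the pattern used elsewhere in these docs; verify the api-version before use:
+
+```http
+PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2023-11-01
+
+{
+  "properties": {
+    "disableLocalAuth": false,
+    "authOptions": {
+      "aadOrApiKey": {
+        "aadAuthFailureMode": "http403"
+      }
+    }
+  }
+}
+```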
### [**Azure portal**](#tab/config-svc-portal)
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
- ignite-2023 Previously updated : 05/28/2024 Last updated : 06/28/2024 # Security overview for Azure AI Search
An Azure AI Search service is hosted on Azure and is typically accessed by clien
Azure AI Search has three basic network traffic patterns:
-+ Inbound requests made by a client to the search service (the predominant pattern)
++ Inbound requests made by a user or client to the search service (the predominant pattern) + Outbound requests issued by the search service to other services on Azure and elsewhere + Internal service-to-service requests over the secure Microsoft backbone network
Internal requests are secured and managed by Microsoft. You can't configure or c
Internal traffic consists of: + Service-to-service calls for tasks like authentication and authorization through Microsoft Entra ID, resource logging sent to Azure Monitor, and [private endpoint connections](service-create-private-endpoint.md) that utilize Azure Private Link.
-+ Requests made to Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md).
++ Requests made to Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md) + Requests made to the machine learning models that support [semantic ranking](semantic-search-overview.md#availability-and-pricing). ### Outbound traffic
-Outbound requests can be secured and managed by you. Outbound requests originate from a search service to other applications. These requests are typically made by indexers for text-based indexing, skills-based AI enrichment, and vectorizations at query time. Outbound requests include both read and write operations.
+Outbound requests can be secured and managed by you. Outbound requests originate from a search service to other applications. These requests are typically made by indexers for text-based indexing, custom skills-based AI enrichment, and vectorizations at query time. Outbound requests include both read and write operations.
The following list is a full enumeration of the outbound requests for which you can configure secure connections. A search service makes requests on its own behalf, and on the behalf of an indexer or custom skill.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
New-AzRoleAssignment -SignInName <email> `
### Assign roles for read-only queries
-Use the Search Index Data Reader role for apps and processes that only need read-access to an index. This is a very specific role. It grants [GET or POST access](/rest/api/searchservice/documents) to the *documents collection of a search index* for search, autocomplete, and suggestions.
+Use the Search Index Data Reader role for apps and processes that only need read access to an index.
-It doesn't support GET or LIST operations on an index or other top-level objects, or GET service statistics.
+This is a very specific role. It grants [GET or POST access](/rest/api/searchservice/documents) to the *documents collection of a search index* for search, autocomplete, and suggestions. It doesn't support GET or LIST operations on an index or other top-level objects, or GET service statistics.
+
+This section provides basic steps for setting up the role assignment and is here for completeness, but we recommend [Use Azure AI Search without keys](keyless-connections.md) for comprehensive instructions on configuring your app for role-based access.
#### [**Azure portal**](#tab/roles-portal-query)
When [using PowerShell to assign roles](../role-based-access-control/role-assign
Use a client to test role assignments. Remember that roles are cumulative and inherited roles that are scoped to the subscription or resource group level can't be deleted or denied at the resource (search service) level.
-Make sure that you [register your client application with Microsoft Entra ID](search-howto-aad.md) and have role assignments in place before testing access.
+[Configure your application for keyless connections](keyless-connections.md) and have role assignments in place before testing.
### [**Azure portal**](#tab/test-portal)
search Semantic How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-configure.md
- ignite-2023 Previously updated : 06/13/2024 Last updated : 06/27/2024 # Configure semantic ranking and return captions in search results
You can only specify one title field, but you can have as many content and keywo
Across all semantic configuration properties, the fields you assign must be: + Attributed as `searchable` and `retrievable`
-+ Strings of type `Edm.String`, `Collection(Edm.String)`, string subfields of `Collection(Edm.ComplexType)`
++ Strings of type `Edm.String`, `Collection(Edm.String)`, or string subfields of `Edm.ComplexType` ### [**Azure portal**](#tab/portal)
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
- ignite-2023 Previously updated : 06/18/2024 Last updated : 06/27/2024 # Configure network access and firewall rules for Azure AI Search
-As soon as you install Azure AI Search, you can set up network access to limit access to an approved set of devices and cloud services. There are two mechanisms:
+By default, Azure AI Search is configured to allow connections over a public endpoint. Access to a search service *through* the public endpoint is protected by authentication and authorization protocols, but the endpoint itself is open to the internet at the network layer for data plane requests.
+
+If you aren't hosting a public web site, you might want to configure network access to automatically refuse requests unless they originate from an approved set of devices and cloud services. There are two mechanisms:
+ Inbound rules listing the IP addresses, ranges, or subnets from which requests are admitted + Exceptions to network rules, where requests are admitted with no checks, as long as the request originates from a [trusted service](#grant-access-to-trusted-azure-services)
-Network rules aren't required, but it's a security best practice to add them.
+Network rules aren't required, but it's a security best practice to add them if you use Azure AI Search for surfacing private or internal corporate content.
Network rules are scoped to data plane operations against the search service's public endpoint. Data plane operations include creating or querying indexes, and all other actions described by the [Search REST APIs](/rest/api/searchservice/). Control plane operations target service administration. Those operations specify resource provider endpoints, which are subject to the [network protections supported by Azure Resource Manager](/security/benchmark/azure/baselines/azure-resource-manager-security-baseline). This article explains how to configure network access to a search service's public endpoint. To block *all* data plane access to the public endpoint, use [private endpoints](service-create-private-endpoint.md) and an Azure virtual network.
-This article assumes the Azure portal for network access configuration. You can also use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or the [Azure CLI](/cli/azure/search).
+This article assumes the Azure portal to explain network access options. You can also use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or the [Azure CLI](/cli/azure/search).
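For example, inbound IP rules can be set through the Management REST API from Azure PowerShell. The following is a sketch, assuming placeholder resource names and an `api-version` that you should confirm against the current Management REST API reference:

```powershell
# PATCH the service definition with a set of allowed IP addresses or CIDR ranges.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Search/searchServices/<search-service-name>" +
        "?api-version=2023-11-01"

$payload = @"
{
  "properties": {
    "networkRuleSet": {
      "ipRules": [
        { "value": "203.0.113.0/24" },
        { "value": "198.51.100.7" }
      ]
    }
  }
}
"@

Invoke-AzRestMethod -Method PATCH -Path $path -Payload $payload
```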
## Prerequisites
This article assumes the Azure portal for network access configuration. You can
+ Owner or Contributor permissions
+## Limitations
+
+There are a few drawbacks to locking down the public endpoint.
++ It takes time to fully identify IP ranges and set up firewalls. If you're in the early stages of proof-of-concept testing and investigation and are using sample data, you might want to defer network access controls until you actually need them.
+
++ Some workflows require access to a public endpoint. Specifically, the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal currently connects to embedding models over the public endpoint, and the response from the embedding model is returned over the public endpoint. You can switch to code or script to complete the same tasks, but if you want to try the wizard, the public endpoint must be available.
+
<a id="configure-ip-policy"></a> ## Configure network access in Azure portal
A banner informs you that IP rules affect the portal experience. This banner rem
## Grant access to trusted Azure services
-Did you select the trusted services exception? If yes, your Azure resource must have a managed identity (either system or user-assigned, but usually system), and you must use role-based access controls.
+Did you select the trusted services exception? If yes, your search service admits requests and responses from a trusted Azure resource without checking for an IP address. A trusted resource must have a managed identity (either system or user-assigned, but usually system). A trusted resource must have a role assignment on Azure AI Search that gives it permission to data and operations.
The trusted service list for Azure AI Search includes: + `Microsoft.CognitiveServices` for Azure OpenAI and Azure AI services + `Microsoft.MachineLearningServices` for Azure Machine Learning
-Workflows for this network exception are requests originating *from* Azure AI Studio, Azure OpenAI Studio, or other AML features *to* Azure AI Search, typically in [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) scenarios for retrieval augmented generation (RAG) and playground environments.
+Workflows for this network exception are requests originating *from* Azure AI Studio, Azure OpenAI Studio, or other AML features *to* Azure AI Search, typically in [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) scenarios for retrieval augmented generation (RAG) and playground environments.
-For managed identities on Azure OpenAI and Azure Machine Learning:
+### Trusted resources must have a managed identity
+
+To set up managed identities for Azure OpenAI and Azure Machine Learning:
+ [How to configure Azure OpenAI Service with managed identities](/azure/ai-services/openai/how-to/managed-identity) + [How to set up authentication between Azure Machine Learning and other services](/azure/machine-learning/how-to-identity-based-service-authentication).
-For managed identities on Azure AI
+To set up a managed identity for an Azure AI service:
1. [Find your multiservice account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.cognitiveServices%2Faccounts). 1. On the leftmost pane, under **Resource management**, select **Identity**. 1. Set **System-assigned** to **On**.
-Once your Azure resource has a managed identity, [assign roles on Azure AI Search](search-security-rbac.md) to grant permissions to data and operations. We recommend Search Index Data Reader.
+### Trusted resources must have a role assignment
+
+Once your Azure resource has a managed identity, [assign roles on Azure AI Search](keyless-connections.md) to grant permissions to data and operations.
+
+The trusted services are used for vectorization workloads: generating vectors from text and image content, and sending payloads back to the search service for query execution or indexing. Connections from a trusted service deliver these payloads to Azure AI Search.
++ To load a search index with vectors generated by an embedding model, assign **Search Index Data Contributor**.
+
++ To provide queries with a vector generated by an embedding model, assign **Search Index Data Reader**. The embedding used in a query isn't written to an index, so no write permissions are required.
> [!NOTE] > This article covers the trusted exception for admitting requests to your search service, but Azure AI Search is itself on the trusted services list of other Azure resources. Specifically, you can use the trusted service exception for [connections from Azure AI Search to Azure Storage](search-indexer-howto-access-trusted-service-exception.md).
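Both assignments from the list above can be scripted once you know the trusted resource's managed identity object ID. A minimal Azure PowerShell sketch follows; all identifiers are placeholders:

```powershell
# The search service that the trusted resource connects to (placeholders).
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
         "/providers/Microsoft.Search/searchServices/<search-service-name>"

# Indexing workloads: the identity writes generated vectors into an index.
New-AzRoleAssignment -ObjectId "<managed-identity-object-id>" `
    -RoleDefinitionName "Search Index Data Contributor" -Scope $scope

# Query workloads: the identity only supplies query vectors, so read access is enough.
New-AzRoleAssignment -ObjectId "<managed-identity-object-id>" `
    -RoleDefinitionName "Search Index Data Reader" -Scope $scope
```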
Once your Azure resource has a managed identity, [assign roles on Azure AI Searc
Once a request is allowed through the firewall, it must be authenticated and authorized. You have two options:
-+ [Key-based authentication](search-security-api-keys.md), where an admin or query API key is provided on the request. This is the default.
++ [Key-based authentication](search-security-api-keys.md), where an admin or query API key is provided on the request. This option is the default.
-+ [Role-based access control (RBAC)](search-security-rbac.md) using Microsoft Entra ID, where the caller is a member of a security role on a search service. This is the most secure option. It uses Microsoft Entra ID for authentication and role assignments on Azure AI Search for permissions to data and operations.
++ [Role-based access control](search-security-rbac.md) using Microsoft Entra ID, where the caller is a member of a security role on a search service. This is the most secure option. It uses Microsoft Entra ID for authentication and role assignments on Azure AI Search for permissions to data and operations. > [!div class="nextstepaction"] > [Enable RBAC on your search service](search-security-enable-roles.md)
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b
Previously updated : 04/27/2023 Last updated : 06/27/2024 # Microsoft Antimalware for Azure Cloud Services and Virtual Machines
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
Previously updated : 08/29/2023 Last updated : 06/27/2024 # Backup and restore plan to protect against ransomware
security Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/double-encryption.md
ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba
Previously updated : 07/01/2022 Last updated : 06/27/2024 # Double encryption
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Previously updated : 08/31/2023 Last updated : 06/27/2024 # Cloud feature availability for commercial and US Government customers
security Infrastructure Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-network.md
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
Previously updated : 09/08/2020 Last updated : 06/27/2024
security Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure.md
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
Previously updated : 01/31/2023 Last updated : 06/27/2024
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
ms.assetid: 5cf2827b-6cd3-434d-9100-d7411f7ed424
Previously updated : 01/20/2023 Last updated : 06/20/2024
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-overview.md
ms.assetid: bedf411a-0781-47b9-9742-d524cf3dbfc1
Previously updated : 03/31/2023 Last updated : 06/27/2024 #Customer intent: As an IT Pro or decision maker, I am looking for information on the network security controls available in Azure.
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
Previously updated : 04/18/2023 Last updated : 06/27/2024
security Operational Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md
Previously updated : 01/23/2023 Last updated : 06/27/2024
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
ms.assetid: 695d918c-a9ac-4eba-8692-af4526734ccc
Previously updated : 03/23/2023 Last updated : 06/27/2024
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8
Previously updated : 01/16/2023 Last updated : 04/27/2024
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
Previously updated : 01/20/2023 Last updated : 06/27/2024
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
ms.assetid: 467b2c83-0352-4e9d-9788-c77fb400fe54
Previously updated : 12/05/2022 Last updated : 06/27/2024
sentinel Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices.md
description: Learn about best practices to employ when managing your Microsoft S
Previously updated : 05/16/2024 Last updated : 06/28/2024 # Best practices for Microsoft Sentinel
The following table provides high-level descriptions for how to use Microsoft Se
|Entity behavior | Entity behavior in Microsoft Sentinel allows users to review and investigate actions and alerts for specific entities, such as investigating accounts and host names. For more information, see:<br><br>- [Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](enable-entity-behavior-analytics.md)<br>- [Investigate incidents with UEBA data](investigate-with-ueba.md)<br>- [Microsoft Sentinel UEBA enrichments reference](ueba-reference.md) | |Watchlists | Use a watchlist that combines data from ingested data and external sources, such as enrichment data. For example, create lists of IP address ranges used by your organization or recently terminated employees. Use watchlists with playbooks to gather enrichment data, such as adding malicious IP addresses to watchlists to use during detection, threat hunting, and investigations. <br><br>During an incident, use watchlists to contain investigation data, and then delete them when your investigation is done to ensure that sensitive data doesn't remain in view. <br><br> For more information, see [Watchlists in Microsoft Sentinel](watchlists.md). |
-## Regular SOC activities to perform
-
-Schedule the following Microsoft Sentinel activities regularly to ensure continued security best practices:
-
-### Daily tasks
-- **Triage and investigate incidents**. Review the Microsoft Sentinel **Incidents** page to check for new incidents generated by the currently configured analytics rules, and start investigating any new incidents. For more information, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).
-
-- **Explore hunting queries and bookmarks**. Explore results for all built-in queries, and update existing hunting queries and bookmarks. Manually generate new incidents or update old incidents if applicable. For more information, see:
- - [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md)
- - [Hunt for threats with Microsoft Sentinel](hunting.md)
- - [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md)
-- **Analytic rules**. Review and enable new analytics rules as applicable, including both newly released or newly available rules from recently connected data connectors.
-
-- **Data connectors**. Review the status, date, and time of the last log received from each data connector to ensure that data is flowing. Check for new connectors, and review ingestion to ensure set limits aren't exceeded. For more information, see [Data collection best practices](best-practices-data.md) and [Connect data sources](connect-data-sources.md).
-
-- **Log Analytics Agent**. Verify that servers and workstations are actively connected to the workspace, and troubleshoot and remediate any failed connections. For more information, see [Log Analytics Agent overview](../azure-monitor/agents/log-analytics-agent.md).
-
-- **Playbook failures**. Verify playbook run statuses and troubleshoot any failures. For more information, see [Tutorial: Respond to threats by using playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
-### Weekly tasks
-- **Content review of solutions or standalone content**. Get any content updates for your installed solutions or standalone content from the [Content hub](sentinel-solutions-deploy.md). Review new solutions or standalone content that might be of value for your environment, such as analytics rules, workbooks, hunting queries, or playbooks.
-
-- **Microsoft Sentinel auditing**. Review Microsoft Sentinel activity to see who updated or deleted resources, such as analytics rules, bookmarks, and so on. For more information, see [Audit Microsoft Sentinel queries and activities](audit-sentinel-data.md).
-### Monthly tasks
-- **Review user access**. Review permissions for your users and check for inactive users. For more information, see [Permissions in Microsoft Sentinel](roles.md).
-
-- **Log Analytics workspace review**. Review that the Log Analytics workspace data retention policy still aligns with your organization's policy. For more information, see [Data retention policy](/workplace-analytics/privacy/license-expiration) and [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
-
## Related content
+- [Microsoft Sentinel operational guide](ops-guide.md)
- [On-board Microsoft Sentinel](quickstart-onboard.md) - [Deployment guide for Microsoft Sentinel](deploy-overview.md) - [Protecting MSSP intellectual property in Microsoft Sentinel](mssp-protect-intellectual-property.md)
sentinel Cef Syslog Ama Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-syslog-ama-overview.md
Previously updated : 05/13/2024 Last updated : 06/27/2024 #Customer intent: As a security operator, I want to understand how Microsoft Sentinel collects Syslog and CEF messages with the Azure Monitor Agent so that I can determine if this solution fits my organization's needs.
As part of the setup process, create a data collection rule and install the Azur
After you create the DCR, and AMA is installed, run the "installation" script on the log forwarder. This script configures the Syslog daemon to listen for messages from other machines, and to open the necessary local ports. Then configure the security devices or appliances as needed.
-For more information, see [Ingest Syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md).
+For more information, see the following articles:
+
+- [Ingest Syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md)
+- [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+- [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
## Data ingestion duplication avoidance
sentinel Connect Cef Syslog Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog-ama.md
Title: Ingest Syslog CEF messages to Microsoft Sentinel - AMA
+ Title: Ingest syslog CEF messages to Microsoft Sentinel - AMA
description: Ingest syslog messages from linux machines, devices, and appliances to Microsoft Sentinel using data connectors based on the Azure Monitor Agent (AMA). Previously updated : 05/13/2024
-#Customer intent: As a security operator, I want to ingest and filter Syslog and CEF messages from Linux machines and from network and security devices and appliances to my Microsoft Sentinel workspace, so that security analysts can monitor activity on these systems and detect security threats.
Last updated : 06/27/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a security operator, I want to ingest and filter syslog and CEF messages from Linux machines and from network and security devices and appliances to my Microsoft Sentinel workspace, so that security analysts can monitor activity on these systems and detect security threats.
-# Ingest Syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent
+# Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent
-This article describes how to use the **Syslog via AMA** and **Common Event Format (CEF) via AMA** connectors to quickly filter and ingest Syslog messages, including messages in Common Event Format (CEF), from Linux machines and from network and security devices and appliances. To learn more about these data connectors, see [Syslog and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md).
+This article describes how to use the **Syslog via AMA** and **Common Event Format (CEF) via AMA** connectors to quickly filter and ingest syslog messages, including messages in Common Event Format (CEF), from Linux machines and from network and security devices and appliances. To learn more about these data connectors, see [Syslog and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md).
> [!NOTE] > Container Insights now supports the automatic collection of Syslog events from Linux nodes in your AKS clusters. To learn more, see [Syslog collection with Container Insights](../azure-monitor/containers/container-insights-syslog.md).
Before you begin, you must have the resources configured and the appropriate per
### Microsoft Sentinel prerequisites
-For Microsoft Sentinel, install the appropriate solution and make sure you have the permissions to complete the steps in this article.
+Install the appropriate Microsoft Sentinel solution and make sure you have the permissions to complete the steps in this article.
-- Install the appropriate solution&mdash;**Syslog** and/or **Common Event Format** from the **Content hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+- Install the appropriate solution from the **Content hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+- Identify which data connector the Microsoft Sentinel solution requires &mdash; **Syslog via AMA** or **Common Event Format (CEF) via AMA** &mdash; and whether you need to install the **Syslog** or **Common Event Format** solution. To fulfill this prerequisite:
+ - In the **Content hub**, select **Manage** on the installed solution and review the data connector listed.
+ - If either **Syslog via AMA** or **Common Event Format (CEF) via AMA** isn't installed with the solution, identify whether you need to install the **Syslog** or **Common Event Format** solution by finding your appliance or device from one of the following articles:
-- Your Azure account must have the following Azure role-based access control (Azure RBAC) roles:
+ - [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+ - [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
+
+ Then install either the **Syslog** or **Common Event Format** solution from the content hub to get the related AMA data connector.
+- Have an Azure account with the following Azure role-based access control (Azure RBAC) roles:
| Built-in role | Scope | Reason | | - | -- | |
If you're collecting messages from a log forwarder, the following prerequisites
- For space requirements for your log forwarder, refer to the [Azure Monitor Agent Performance Benchmark](../azure-monitor/agents/azure-monitor-agent-performance.md). You can also review [this blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/designs-for-accomplishing-microsoft-sentinel-scalable-ingestion/ba-p/3741516), which includes designs for scalable ingestion. -- Your log sources, security devices, and appliances, must be configured to send their log messages to the log forwarder's Syslog daemon instead of to their local Syslog daemon.
+- Your log sources, security devices, and appliances must be configured to send their log messages to the log forwarder's syslog daemon instead of to their local syslog daemon.
### Machine security prerequisites Configure the machine's security according to your organization's security policy. For example, configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. To improve your machine security configuration, [secure your VM in Azure](../virtual-machines/security-policy.md), or review these [best practices for network security](../security/fundamentals/network-best-practices.md).
-If your devices are sending Syslog and CEF logs over TLS because, for example, your log forwarder is in the cloud, you need to configure the Syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS. For more information, see:
+If your devices are sending syslog and CEF logs over TLS because, for example, your log forwarder is in the cloud, you need to configure the syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS. For more information, see:
- [Encrypt Syslog traffic with TLS – rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
- [Encrypt log messages with TLS – syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
The setup process for the Syslog via AMA or Common Event Format (CEF) via AMA d
1. Install the Azure Monitor Agent and create a Data Collection Rule (DCR) by using either of the following methods: - [Azure or Defender portal](?tabs=syslog%2Cportal#create-data-collection-rule) - [Azure Monitor Logs Ingestion API](?tabs=syslog%2Capi#install-the-azure-monitor-agent)
-1. If you're collecting logs from other machines using a log forwarder, [**run the "installation" script**](#run-the-installation-script) on the log forwarder to configure the Syslog daemon to listen for messages from other machines, and to open the necessary local ports.
+1. If you're collecting logs from other machines using a log forwarder, [**run the "installation" script**](#run-the-installation-script) on the log forwarder to configure the syslog daemon to listen for messages from other machines, and to open the necessary local ports.
Select the appropriate tab for instructions.
Select the appropriate tab for instructions.
### Create data collection rule
-To get started, open the data connector in Microsoft Sentinel and create a data connector rule.
+To get started, open either the **Syslog via AMA** or **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel and create a data connector rule.
1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**.
In the **Resources** tab, select the machines on which you want to install the A
### Select facilities and severities
-Be aware that using the same facility for both Syslog and CEF messages might result in data ingestion duplication. For more information, see [Data ingestion duplication avoidance](cef-syslog-ama-overview.md#data-ingestion-duplication-avoidance).
+Be aware that using the same facility for both syslog and CEF messages might result in data ingestion duplication. For more information, see [Data ingestion duplication avoidance](cef-syslog-ama-overview.md#data-ingestion-duplication-avoidance).
1. In the **Collect** tab, select the minimum log level for each facility. When you select a log level, Microsoft Sentinel collects logs for the selected level and other levels with higher severity. For example, if you select **LOG_ERR**, Microsoft Sentinel collects logs for the **LOG_ERR**, **LOG_CRIT**, **LOG_ALERT**, and **LOG_EMERG** levels.
Create a JSON file for the data collection rule, create an API request, and send
1. Prepare a DCR file in JSON format. The contents of this file is the request body in your API request.
- For an example, see [Syslog/CEF DCR creation request body](api-dcr-reference.md#syslogcef-dcr-creation-request-body). To collect Syslog and CEF messages in the same data collection rule, see the example [Syslog and CEF streams in the same DCR](#syslog-and-cef-streams-in-the-same-dcr).
+ For an example, see [Syslog/CEF DCR creation request body](api-dcr-reference.md#syslogcef-dcr-creation-request-body). To collect syslog and CEF messages in the same data collection rule, see the example [Syslog and CEF streams in the same DCR](#syslog-and-cef-streams-in-the-same-dcr).
- - Verify that the `streams` field is set to `Microsoft-Syslog` for Syslog messages, or to `Microsoft-CommonSecurityLog` for CEF messages.
+ - Verify that the `streams` field is set to `Microsoft-Syslog` for syslog messages, or to `Microsoft-CommonSecurityLog` for CEF messages.
- Add the filter and facility log levels in the `facilityNames` and `logLevels` parameters. See [Examples of facilities and log levels sections](#examples-of-facilities-and-log-levels-sections). 1. Create an API request in a REST API client of your choosing.
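If you script the request instead of using a REST client, a minimal sketch with Azure PowerShell's `Invoke-AzRestMethod` looks like the following; the resource IDs, rule name, and region are placeholders, and the `api-version` shown is one known DCR version that you should verify:

```powershell
$dcr = @"
{
  "location": "eastus",
  "properties": {
    "dataSources": {
      "syslog": [
        {
          "name": "sysLogsDataSource",
          "streams": [ "Microsoft-Syslog" ],
          "facilityNames": [ "auth", "authpriv" ],
          "logLevels": [ "Error", "Critical", "Alert", "Emergency" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "sentinelWorkspace",
          "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
        }
      ]
    },
    "dataFlows": [
      { "streams": [ "Microsoft-Syslog" ], "destinations": [ "sentinelWorkspace" ] }
    ]
  }
}
"@

# PUT creates (or updates) the data collection rule.
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>" +
        "/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>" +
        "?api-version=2022-06-01"

Invoke-AzRestMethod -Method PUT -Path $path -Payload $dcr
```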
This example collects events from the `cron`, `daemon`, `local0`, `local3` and `
### Syslog and CEF streams in the same DCR
-This example shows how you can collect Syslog and CEF messages in the same DCR.
+This example shows how you can collect syslog and CEF messages in the same DCR.
The DCR collects CEF event messages for: - The `authpriv` and `mark` facilities with the `Info`, `Notice`, `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels - The `daemon` facility with the `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels
-It collects Syslog event messages for:
+It collects syslog event messages for:
- The `kern`, `local0`, `local5`, and `news` facilities with the `Critical`, `Alert`, and `Emergency` log levels - The `mail` and `uucp` facilities with the `Emergency` log level
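Based on the facilities and log levels listed above, the `dataSources` section of such a DCR might look like the following sketch; the source names are illustrative:

```json
"dataSources": {
  "syslog": [
    {
      "name": "cefAuthprivMark",
      "streams": [ "Microsoft-CommonSecurityLog" ],
      "facilityNames": [ "authpriv", "mark" ],
      "logLevels": [ "Info", "Notice", "Warning", "Error", "Critical", "Alert", "Emergency" ]
    },
    {
      "name": "cefDaemon",
      "streams": [ "Microsoft-CommonSecurityLog" ],
      "facilityNames": [ "daemon" ],
      "logLevels": [ "Warning", "Error", "Critical", "Alert", "Emergency" ]
    },
    {
      "name": "syslogKernLocalNews",
      "streams": [ "Microsoft-Syslog" ],
      "facilityNames": [ "kern", "local0", "local5", "news" ],
      "logLevels": [ "Critical", "Alert", "Emergency" ]
    },
    {
      "name": "syslogMailUucp",
      "streams": [ "Microsoft-Syslog" ],
      "facilityNames": [ "mail", "uucp" ],
      "logLevels": [ "Emergency" ]
    }
  ]
}
```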
It collects Syslog event messages for:
## Run the "installation" script
-If you're using a log forwarder, configure the Syslog daemon to listen for messages from other machines, and open the necessary local ports.
+If you're using a log forwarder, configure the syslog daemon to listen for messages from other machines, and open the necessary local ports.
1. From the connector page, copy the command line that appears under **Run the following command to install and apply the CEF collector:**
If you're using a log forwarder, configure the Syslog daemon to listen for messa
1. Sign in to the log forwarder machine where you just installed the AMA. 1. Paste the command you copied in the last step to launch the installation script.
- The script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the Syslog daemon configuration file according to the daemon type running on the machine:
+ The script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the syslog daemon configuration file according to the daemon type running on the machine:
- Rsyslog: `/etc/rsyslog.conf` - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
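For orientation, the listener settings the script manages look roughly like the following rsyslog (v8 RainerScript) directives; treat this as a sketch of stock rsyslog syntax rather than the script's exact output:

```
# Load the UDP and TCP input modules and listen on port 514.
module(load="imudp")
input(type="imudp" port="514")

module(load="imtcp")
input(type="imtcp" port="514")
```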
If you're using a log forwarder, configure the Syslog daemon to listen for messa
> To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA. > For more information, see [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+## Configure the security device or appliance
+
+Get specific instructions to configure your security device or appliance by going to one of the following articles:
+
+- [CEF via AMA data connector - Configure specific appliances and devices for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+- [Syslog via AMA data connector - Configure specific appliances and devices for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
+
+For more information, or if instructions for your appliance or device are unavailable, contact the solution provider.
+
## Test the connector
Verify that log messages from your Linux machine or security devices and appliances are ingested into Microsoft Sentinel.
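One way to verify ingestion outside the portal is to query the workspace directly. Here's a sketch using the Az.OperationalInsights PowerShell module; the workspace GUID is a placeholder, and both table names are the standard destinations for these connectors:

```powershell
$workspaceId = "<workspace-guid>"

# Recent plain syslog records (Syslog via AMA).
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId `
    -Query "Syslog | take 10"

# Recent CEF records (Common Event Format via AMA).
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId `
    -Query "CommonSecurityLog | take 10"
```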
Verify that logs messages from your linux machine or security devices and applia
- [Syslog and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md) - [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md)
+- [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+- [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Last updated : 06/27/2024 Previously updated : 05/30/2024 appliesto: - Microsoft Sentinel in the Azure portal
This article lists all supported, out-of-the-box data connectors and links to ea
> [!IMPORTANT] > - Noted Microsoft Sentinel data connectors are currently in **Preview**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> - For connectors that use the Log Analytics agent, the agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
+> - For connectors that use the Log Analytics agent, the agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you migrate to the Azure Monitor Agent (AMA). For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)] Data connectors are available as part of the following offerings:
Data connectors are available as part of the following offerings:
## Syslog and Common Event Format (CEF) connectors
-Some Microsoft Sentinel solutions are supported by the data connectors Syslog via AMA or Common Event Format (CEF) via AMA in Microsoft Sentinel. To forward data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest Syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). These steps include installing either the **Common Event Format** or **Syslog** solution from the **Content hub** in Microsoft Sentinel. Then, configure the related AMA connector that's installed with the solution. Complete the setup by configuring the appropriate devices or appliances. For more information, see the solution provider's installation instructions or contact the solution provider.
+Log collection from many security appliances and devices is supported by the data connectors **Syslog via AMA** or **Common Event Format (CEF) via AMA** in Microsoft Sentinel. To forward data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). These steps include installing the Microsoft Sentinel solution for a security appliance or device from the **Content hub** in Microsoft Sentinel. Then, configure the **Syslog via AMA** or **Common Event Format (CEF) via AMA** data connector that's appropriate for the Microsoft Sentinel solution you installed. Complete the setup by configuring the security device or appliance. Find instructions to configure your security device or appliance in one of the following articles:
+
+- [CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-cef-device.md)
+- [Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion](unified-connector-syslog-device.md)
+
+For more information, or if instructions for your appliance or device are unavailable, contact the solution provider.
[comment]: <> (DataConnector includes start)
sentinel Deploy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/deploy-overview.md
description: Learn about the steps to deploy Microsoft Sentinel including the ph
Previously updated : 06/18/2024 Last updated : 06/28/2024
When you're finished with your deployment of Microsoft Sentinel, continue to exp
- [Respond to threats using automation](tutorial-respond-threats-playbook.md) - [Extract incident entities with non-native action](tutorial-extract-incident-entities.md) - [Investigate with UEBA](investigate-with-ueba.md)-- [Build and monitor Zero Trust](sentinel-solution.md)
+- [Build and monitor Zero Trust](sentinel-solution.md)
+
+Review the [Microsoft Sentinel operational guide](ops-guide.md) for the regular SOC activities we recommend that you perform daily, weekly, and monthly.
sentinel Ops Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ops-guide.md
+
+ Title: Operational guide - Microsoft Sentinel
+description: Learn about the operational recommendations to help security operations teams to plan and run security activities.
Last updated : 06/28/2024+++
+appliesto:
+ - Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
+#Customer intent: As a security operations (SOC) team member or security administrator, I want to know what operational activities I should plan to do daily, weekly, and monthly with Microsoft Sentinel to help keep my organization's environment secure.
++
+# Microsoft Sentinel operational guide
+
+This article lists the operational activities that we recommend security operations (SOC) teams and security administrators plan for and run as part of their regular security activities with Microsoft Sentinel.
+
+## Daily tasks
+
+Schedule the following activities daily.
+
+|Task|Description|
+|||
+|**Triage and investigate incidents**|Review the Microsoft Sentinel **Incidents** page to check for new incidents generated by the currently configured analytics rules, and start investigating any new incidents. For more information, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).|
+|**Explore hunting queries and bookmarks**|Explore results for all built-in queries, and update existing hunting queries and bookmarks. Manually generate new incidents or update old incidents if applicable. For more information, see:</br></br>- [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md)</br>- [Hunt for threats with Microsoft Sentinel](hunting.md)</br>- [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md)|
+|**Analytic rules**|Review and enable new analytics rules as applicable, including both newly released or newly available rules from recently connected data connectors.|
+|**Data connectors**| Review the status, date, and time of the last log received from each data connector to ensure that data is flowing. Check for new connectors, and review ingestion to ensure set limits aren't exceeded. For more information, see [Data collection best practices](best-practices-data.md) and [Connect data sources](connect-data-sources.md).|
+|**Log Analytics Agent**| Verify that servers and workstations are actively connected to the workspace, and troubleshoot and remediate any failed connections. For more information, see [Log Analytics Agent overview](../azure-monitor/agents/log-analytics-agent.md).|
+|**Playbook failures**| Verify playbook run statuses and troubleshoot any failures. For more information, see [Tutorial: Respond to threats by using playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md).|
+
+## Weekly tasks
+
+Schedule the following activities weekly.
+
+|Task|Description|
+|||
+|**Content review of solutions or standalone content**| Get any content updates for your installed solutions or standalone content from the [Content hub](sentinel-solutions-deploy.md). Review new solutions or standalone content that might be of value for your environment, such as analytics rules, workbooks, hunting queries, or playbooks.|
+|**Microsoft Sentinel auditing**| Review Microsoft Sentinel activity to see who updated or deleted resources, such as analytics rules, bookmarks, and so on. For more information, see [Audit Microsoft Sentinel queries and activities](audit-sentinel-data.md).|
+
+## Monthly tasks
+
+Schedule the following activities monthly.
+
+|Task|Description|
+|||
+|**Review user access**| Review permissions for your users and check for inactive users. For more information, see [Permissions in Microsoft Sentinel](roles.md).|
+|**Log Analytics workspace review**| Review that the Log Analytics workspace data retention policy still aligns with your organization's policy. For more information, see [Data retention policy](/workplace-analytics/privacy/license-expiration) and [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).|
++
+## Related content
+
+- [Deployment guide for Microsoft Sentinel](deploy-overview.md)
sentinel Unified Connector Cef Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-cef-device.md
+
+ Title: CEF via AMA connector - Configure appliances and devices
+description: Learn how to configure specific devices that use the Common Event Format (CEF) via AMA data connector for Microsoft Sentinel.
++++ Last updated : 06/27/2024++
+# CEF via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion
+
+Log collection from many security appliances and devices is supported by the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. This article lists provider-supplied installation instructions for specific security appliances and devices that use this data connector. Contact the provider for updates or more information, or if instructions for your security appliance or device are unavailable.
+
+To forward data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). As you complete those steps, install the **Common Event Format (CEF) via AMA** data connector in Microsoft Sentinel. Then, use the appropriate provider's instructions in this article to complete the setup.
+
+For more information about the related Microsoft Sentinel solution for each of these appliances or devices, search the [Azure Marketplace](https://azuremarketplace.microsoft.com/) for the **Product Type** > **Solution Templates** or review the solution from the **Content hub** in Microsoft Sentinel.
+
+## AI Analyst Darktrace
+
+Configure Darktrace to forward syslog messages in CEF format to your Azure workspace via the syslog agent.
+
+1. Within the Darktrace Threat Visualizer, navigate to the **System Config** page in the main menu under **Admin**.
+1. From the left-hand menu, select **Modules** and choose Microsoft Sentinel from the available **Workflow Integrations**.
+1. Locate Microsoft Sentinel Syslog CEF and select **New** to reveal the configuration settings, if they aren't already shown.
+1. In the **Server** configuration field, enter the location of the log forwarder and optionally modify the communication port. Ensure that the port selected is set to 514 and is allowed by any intermediary firewalls.
+1. Configure any alert thresholds, time offsets or other settings as required.
+1. Review any other configuration options you might wish to enable that alter the syslog syntax.
+1. Enable **Send Alerts** and save your changes.
+
+## Akamai Security Events
+
+[Follow these steps](https://developer.akamai.com/tools/integrations/siem) to configure the Akamai CEF connector to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+## AristaAwakeSecurity
+
+Complete the following steps to forward Awake Adversarial Model match results to a CEF collector listening on TCP port 514 at IP **192.168.0.1**:
+
+1. Navigate to the **Detection Management Skills** page in the Awake UI.
+1. Select **+ Add New Skill**.
+1. Set **Expression** to `integrations.cef.tcp { destination: "192.168.0.1", port: 514, secure: false, severity: Warning }`
+1. Set **Title** to a descriptive name like *Forward Awake Adversarial Model match result to Microsoft Sentinel*.
+1. Set **Reference Identifier** to something easily discoverable like *integrations.cef.sentinel-forwarder*.
+1. Select **Save**.
+
+Within a few minutes of saving the definition and other fields, the system begins to send new model match results to the CEF events collector as they're detected.
+
+For more information, see the **Adding a Security Information and Event Management Push Integration** page from the **Help Documentation** in the Awake UI.
+
+## Aruba ClearPass
+
+Configure Aruba ClearPass to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+1. [Follow these instructions](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm) to configure the Aruba ClearPass to forward syslog.
+2. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+## Barracuda WAF
+
+The Barracuda Web Application Firewall can integrate with and export logs directly to Microsoft Sentinel via the Azure Monitor Agent (AMA).
+
+1. Go to [Barracuda WAF configuration](https://aka.ms/asi-barracuda-connector), and follow the instructions, using the following parameters to set up the connection.
+
+1. Web Firewall logs facility: Go to the advanced settings for your workspace and select the **Data** > **Syslog** tabs. Make sure that the facility exists.
+
+Note that the data from all regions is stored in the selected workspace.
+
+## Broadcom SymantecDLP
+
+Configure Symantec DLP to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+1. [Follow these instructions](https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html) to configure Symantec DLP to forward syslog.
+1. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+## Cisco Firepower EStreamer
+
+Install and configure the Firepower eNcore eStreamer client. For more information, see the full install [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html).
+
+## CiscoSEG
+
+Complete the following steps to configure Cisco Secure Email Gateway to forward logs via syslog:
+
+1. Configure [Log Subscription](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1134718).
+1. Select **Consolidated Event Logs** in the **Log Type** field.
+
+## Citrix Web App Firewall
+
+Configure Citrix WAF to send syslog messages in CEF format to the proxy machine.
+
+- Find guides to configure WAF and CEF logs from [Citrix Support](https://support.citrix.com/).
+
+- Follow [this guide](https://docs.citrix.com/en-us/citrix-adc/13/system/audit-logging/configuring-audit-logging.html) to forward the logs to the proxy. Make sure to send the logs to port 514 TCP on the Linux machine's IP address.
+
+## Claroty
+
+Configure log forwarding using CEF.
+
+1. Navigate to the **Syslog** section of the Configuration menu.
+1. Select **+Add**.
+1. In the **Add New Syslog Dialog** specify **Remote Server IP**, **Port**, **Protocol**.
+1. Select **Message Format** - **CEF**.
+1. Choose **Save** to exit the **Add Syslog dialog**.
+
+## Contrast Protect
+
+Configure the Contrast Protect agent to forward events to syslog as described here: https://docs.contrastsecurity.com/en/output-to-syslog.html. Generate some attack events for your application.
+
+## CrowdStrike Falcon
+
+Deploy the CrowdStrike Falcon SIEM Collector to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+1. [Follow these instructions](https://www.crowdstrike.com/blog/tech-center/integrate-with-your-siem/) to deploy the **SIEM Collector** and forward syslog.
+1. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+
+## CyberArk Enterprise Password Vault (EPV) Events
+
+On the EPV, configure the dbparm.ini file to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+## Delinea Secret Server
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+## ExtraHop Reveal(x)
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+1. Follow the directions to install the [ExtraHop Detection SIEM Connector bundle](https://learn.extrahop.com/extrahop-detection-siem-connector-bundle) on your Reveal(x) system. The **SIEM Connector** is required for this integration.
+1. Enable the trigger for **ExtraHop Detection SIEM Connector - CEF**.
+1. Update the trigger with the ODS syslog targets you created. 
+
+The Reveal(x) system formats syslog messages in Common Event Format (CEF) and then sends data to Microsoft Sentinel.
+
+## F5 Networks
+
+Configure F5 to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+Go to [F5 Configuring Application Security Event Logging](https://aka.ms/asi-syslog-f5-forwarding), follow the instructions to set up remote logging, using the following guidelines:
+
+1. Set the **Remote storage type** to **CEF**.
+1. Set the **Protocol setting** to **UDP**.
+1. Set the **IP address** to the syslog server IP address.
+1. Set the **port number** to **514**, or the port your agent uses.
+1. Set the **facility** to the one that you configured in the syslog agent. By default, the agent sets this value to **local4**.
+1. You can set the **Maximum Query String Size** to the same value that you configured.
+
+## FireEye Network Security
+
+Complete the following steps to send data using CEF:
+
+1. Sign into the FireEye appliance with an administrator account.
+1. Select **Settings**.
+1. Select **Notifications**. Select **rsyslog**.
+1. Check the **Event type** check box.
+1. Make sure Rsyslog settings are:
+
+ - Default format: **CEF**
+ - Default delivery: **Per event**
+ - Default send as: **Alert**
+
+## Forcepoint CASB
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+## Forcepoint CSG
+
+The integration is available with two implementation options:
+
+1. Use Docker images, where the integration component is already installed with all necessary dependencies. Follow the instructions provided in the [Integration Guide](https://frcpnt.com/csg-sentinel).
+1. Manually deploy the integration component inside a clean Linux machine. Follow the instructions provided in the [Integration Guide](https://frcpnt.com/csg-sentinel).
+
+## Forcepoint NGFW
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+## ForgeRock Common Audit for CEF
+
+In ForgeRock, install and configure the Common Audit (CAUD) for Microsoft Sentinel per the documentation at https://github.com/javaservlets/SentinelAuditEventHandler. Next, in Azure, follow the steps to configure the CEF via AMA data connector.
+
+## iboss
+
+Set your Threat Console to send syslog messages in CEF format to your Azure workspace. Make note of your **Workspace ID** and **Primary Key** within your Log Analytics workspace. Select the workspace from the Log Analytics workspaces menu in the Azure portal. Then select **Agents management** in the **Settings** section.
+
+1. Navigate to **Reporting & Analytics** inside your iboss Console.
+1. Select **Log Forwarding** > **Forward From Reporter**.
+1. Select **Actions** > **Add Service**.
+1. Toggle to Microsoft Sentinel as a **Service Type** and input your **Workspace ID/Primary Key** along with other criteria. If a dedicated proxy Linux machine was configured, toggle to **Syslog** as a **Service Type** and configure the settings to point to your dedicated proxy Linux machine.
+1. Wait one to two minutes for the setup to complete.
+1. Select your Microsoft Sentinel service and verify the Microsoft Sentinel setup status is successful. If a dedicated proxy Linux machine is configured, you can validate your connection.
+
+## Illumio Core
+
+Configure event format.
+
+1. From the PCE web console menu, choose **Settings > Event Settings** to view your current settings.
+1. Select **Edit** to change the settings.
+1. Set **Event Format** to CEF.
+1. (Optional) Configure **Event Severity** and **Retention Period**.
+
+Configure event forwarding to an external syslog server.
+
+1. From the PCE web console menu, choose **Settings** > **Event Settings**.
+1. Select **Add**.
+1. Select **Add Repository**.
+1. Complete the **Add Repository** dialog.
+1. Select **OK** to save the event forwarding configuration.
+
+## Illusive Platform
+
+1. Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+1. Sign into the Illusive Console, and navigate to **Settings** > **Reporting**.
+1. Find **Syslog Servers**.
+1. Supply the following information:
+
+ - Host name: *Linux Syslog agent IP address or FQDN host name*
+ - Port: *514*
+ - Protocol: *TCP*
+ - Audit messages: *Send audit messages to server*
+
+1. To add the syslog server, select **Add**.
+
+For more information about how to add a new syslog server in the Illusive platform, find the Illusive Networks Admin Guide here: https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version
+
+## Imperva WAF Gateway
+
+This connector requires an **Action Interface** and **Action Set** to be created on the Imperva SecureSphere MX. [Follow the steps](https://community.imperva.com/blogs/craig-burlingame1/2020/11/13/steps-for-enabling-imperva-waf-gateway-alert) to create the requirements.
+
+1. Create a new **Action Interface** that contains the required parameters to send WAF alerts to Microsoft Sentinel.
+1. Create a new **Action Set** that uses the **Action Interface** configured.
+1. Apply the **Action Set** to any security policies whose alerts you want sent to Microsoft Sentinel.
+
+## Infoblox Cloud Data Connector
+
+Complete the following steps to configure the Infoblox CDC to send BloxOne data to Microsoft Sentinel via the Linux syslog agent.
+
+1. Navigate to **Manage** > **Data Connector**.
+1. Select the **Destination Configuration** tab at the top.
+1. Select **Create > Syslog**.
+ - **Name**: Give the new Destination a meaningful name, such as *Microsoft-Sentinel-Destination*.
+ - **Description**: Optionally give it a meaningful description.
+ - **State**: Set the state to **Enabled**.
+ - **Format**: Set the format to **CEF**.
+ - **FQDN/IP**: Enter the IP address of the Linux device on which the Linux agent is installed.
+ - **Port**: Leave the port number at **514**.
+ - **Protocol**: Select desired protocol and CA certificate if applicable.
+ - Select **Save & Close**.
+1. Select the **Traffic Flow Configuration** tab at the top.
+1. Select **Create**.
+ - **Name**: Give the new Traffic Flow a meaningful name, such as *Microsoft-Sentinel-Flow*.
+ - **Description**: Optionally give it a meaningful description.
+ - **State**: Set the state to **Enabled**.
+ - Expand the **Service Instance** section.
+ - **Service Instance**: Select your desired Service Instance for which the Data Connector service is enabled.
+ - Expand the **Source Configuration** section.
+ - **Source**: Select **BloxOne Cloud Source**.
+ - Select all desired **log types** you wish to collect. Currently supported log types are:
+ - Threat Defense Query/Response Log
+ - Threat Defense Threat Feeds Hits Log
+ - DDI Query/Response Log
+ - DDI DHCP Lease Log
+ - Expand the **Destination Configuration** section.
+ - Select the **Destination** you created.
+ - Select **Save & Close**.
+1. Allow the configuration some time to activate.
+
+## Infoblox SOC Insights
+
+Complete the following steps to configure the Infoblox CDC to send BloxOne data to Microsoft Sentinel via the Linux syslog agent.
+
+1. Navigate to **Manage > Data Connector**.
+1. Select the **Destination Configuration** tab at the top.
+1. Select **Create > Syslog**.
+ - **Name**: Give the new Destination a meaningful name, such as *Microsoft-Sentinel-Destination*.
+ - **Description**: Optionally give it a meaningful description.
+ - **State**: Set the state to **Enabled**.
+ - **Format**: Set the format to **CEF**.
+ - **FQDN/IP**: Enter the IP address of the Linux device on which the Linux agent is installed.
+ - **Port**: Leave the port number at **514**.
+ - **Protocol**: Select desired protocol and CA certificate if applicable.
+ - Select **Save & Close**.
+1. Select the **Traffic Flow Configuration** tab at the top.
+1. Select **Create**.
+ - **Name**: Give the new Traffic Flow a meaningful name, such as *Microsoft-Sentinel-Flow*.
+ - **Description**: Optionally give it a meaningful description.
+ - **State**: Set the state to **Enabled**.
+ - Expand the **Service Instance** section.
+ - **Service Instance**: Select your desired service instance for which the data connector service is enabled.
+ - Expand the **Source Configuration** section.
+ - **Source**: Select **BloxOne Cloud Source**.
+ - Select the **Internal Notifications** Log Type.
+ - Expand the **Destination Configuration** section.
+ - Select the **Destination** you created.
+ - Select **Save & Close**.
+1. Allow the configuration some time to activate.
+
+## KasperskySecurityCenter
+
+[Follow the instructions](https://support.kaspersky.com/KSC/13/en-US/89277.htm) to configure event export from Kaspersky Security Center.
+
+## Morphisec
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+## Netwrix Auditor
+
+[Follow the instructions](https://www.netwrix.com/download/QuickStart/Netwrix_Auditor_Add-on_for_HPE_ArcSight_Quick_Start_Guide.pdf) to configure event export from Netwrix Auditor.
+
+## NozomiNetworks
+
+Complete the following steps to configure your Nozomi Networks device to send alerts, audit logs, and health logs via syslog in CEF format:
+
+1. Sign in to the Guardian console.
+1. Navigate to **Administration** > **Data Integration**.
+1. Select **+Add**.
+1. Select **Common Event Format (CEF)** from the drop-down.
+1. Create **New Endpoint** using the appropriate host information.
+1. Enable **Alerts**, **Audit Logs**, and **Health Logs** for sending.
+
+## Onapsis Platform
+
+Refer to the Onapsis in-product help to set up log forwarding to the syslog agent.
+
+1. Go to **Setup** > **Third-party integrations** > **Defend Alarms** and follow the instructions for Microsoft Sentinel.
+
+2. Make sure your Onapsis Console can reach the proxy machine where the agent is installed. The logs should be sent to port 514 using TCP.
+
+## OSSEC
+
+[Follow these steps](https://www.ossec.net/docs/docs/manual/output/syslog-output.html) to configure OSSEC to send alerts via syslog.
+
+## Palo Alto - XDR (Cortex)
+
+Configure Palo Alto XDR (Cortex) to forward messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+1. Go to **Cortex Settings and Configurations**.
+1. Under **External Applications**, select to add a **New Server**.
+1. Specify a name, and enter the public IP address of your syslog server in **Destination**.
+1. Set the **Port number** to 514.
+1. In the **Facility** field, select **FAC_SYSLOG** from the dropdown.
+1. Set the **Protocol** to **UDP**.
+1. Select **Create**.
+
+## PaloAlto-PAN-OS
+
+Configure Palo Alto Networks to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+1. Go to [configure Palo Alto Networks NGFW for sending CEF events](https://aka.ms/sentinel-paloaltonetworks-readme).
+1. Go to [Palo Alto CEF Configuration](https://aka.ms/asi-syslog-paloalto-forwarding) and Palo Alto [Configure Syslog Monitoring](https://aka.ms/asi-syslog-paloalto-configure) steps 2 and 3. Choose your version, and follow the instructions using the following guidelines:
+
+ 1. Set the **Syslog server format** to **BSD**.
+ 1. Copy the text to an editor and remove any characters that might break the log format before pasting it. The copy/paste operations from the PDF might change the text and insert random characters.
+
+[Learn more](https://aka.ms/CEFPaloAlto)
+
+## PaloAltoCDL
+
+[Follow the instructions](https://docs.paloaltonetworks.com/cortex/cortex-data-lake/cortex-data-lake-getting-started/get-started-with-log-forwarding-app/forward-logs-from-logging-service-to-syslog-server.html) to configure log forwarding from Cortex Data Lake to a syslog server.
+
+## PingFederate
+
+[Follow these steps](https://docs.pingidentity.com/bundle/pingfederate-102/page/gsn1564002980953.html) to configure PingFederate to send audit logs via syslog in CEF format.
+
+## RidgeSecurity
+
+Configure the RidgeBot to forward events to the syslog server as described [here](https://portal.ridgesecurity.ai/downloadurl/89x72912). Then generate some attack events for your application.
+
+## SonicWall Firewall
+
+Set your SonicWall Firewall to send syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+Follow the instructions, making sure to select **local use 4** as the facility and **ArcSight** as the syslog format.
+
+## Trend Micro Apex One
+
+[Follow these steps](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/detections/logs_001/syslog-forwarding.aspx) to configure Apex Central to send alerts via syslog. In step 6 of the configuration, select the log format **CEF**.
+
+## Trend Micro Deep Security
+
+Set your security solution to send syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+1. Forward Trend Micro Deep Security events to the syslog agent.
+1. Define a new syslog configuration that uses the CEF format. For more information, see [this knowledge article](https://aka.ms/Sentinel-trendmicro-kblink).
+1. Configure the **Deep Security Manager** to use this new configuration to forward events to the syslog agent using [these instructions](https://aka.ms/Sentinel-trendMicro-connectorInstructions).
+1. Make sure to save the [TrendMicroDeepSecurity](https://aka.ms/TrendMicroDeepSecurityFunction) function so that it queries the Trend Micro Deep Security data properly.
+
+## Trend Micro TippingPoint
+
+Set your TippingPoint SMS to send syslog messages in ArcSight CEF Format v4.2 to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+## vArmour Application Controller
+
+Send syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+Download the user guide from https://support.varmour.com/hc/en-us/articles/360057444831-vArmour-Application-Controller-6-0-User-Guide. In the user guide, refer to "Configuring Syslog for Monitoring and Violations" and follow steps 1 to 3.
+
+## Vectra AI Detect
+
+Configure Vectra (X Series) Agent to forward syslog messages in CEF format to your Microsoft Sentinel workspace via the syslog agent.
+
+From the Vectra UI, navigate to **Settings** > **Notifications**, and edit the syslog configuration. Follow these instructions to set up the connection:
+
+1. Add a new Destination (which is the host where the Microsoft Sentinel syslog agent is running).
+1. Set the **Port** as *514*.
+1. Set the **Protocol** as *UDP*.
+1. Set the **format** to *CEF*.
+1. Set **Log types**. Select all log types available.
+1. Select **Save**.
+1. Select the **Test** button to send some test events.
+
+For more information, see the Cognito Detect Syslog Guide, which you can download from the resource page in the Detect UI.
+
+## Votiro
+
+Set Votiro Endpoints to send syslog messages in CEF format to the Forwarder machine. Make sure you send the logs to port 514 TCP on the Forwarder machine's IP address.
+
+## WireX Network Forensics Platform
+
+Contact WireX support (https://wirexsystems.com/contact-us/) to configure your NFP solution to send syslog messages in CEF format to the proxy machine. Make sure that the central manager can send the logs to port 514 TCP on the machine's IP address.
+
+## WithSecure Elements via Connector
+
+Connect your WithSecure Elements Connector appliance to Microsoft Sentinel. The WithSecure Elements Connector data connector allows you to easily connect your WithSecure Elements logs with Microsoft Sentinel, so that you can view dashboards, create custom alerts, and improve investigations.
+
+> [!NOTE]
+> Data is stored in the geographic location of the workspace on which you are running Microsoft Sentinel.
+
+Configure the WithSecure Elements Connector to forward syslog messages in CEF format to your Log Analytics workspace via the syslog agent.
+
+1. Select or create a Linux machine for Microsoft Sentinel to use as the proxy between your WithSecure solution and Microsoft Sentinel. The machine can run on-premises, in Microsoft Azure, or in another cloud environment. It needs to have `syslog-ng` and `python`/`python3` installed.
+1. Install the Azure Monitor Agent (AMA) on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. You must have elevated permissions (sudo) on your machine. A basic port check is sketched after these steps.
+1. Go to **EPP** in the WithSecure Elements Portal. Then navigate to **Downloads**. In the **Elements Connector** section, select **Create subscription key**. You can check your subscription key in **Subscriptions**.
+1. In **Downloads**, in the WithSecure Elements **Connector** section, select the correct installer and download it.
+1. In EPP, open the account settings in the upper-right corner. Then select **Get management API key**. If the key was created earlier, you can also read it there.
+1. To install Elements Connector, follow [Elements Connector Docs](https://help.f-secure.com/product.html#business/connector/latest/en/concept_BA55FDB13ABA44A8B16E9421713F4913-latest-en).
+1. If API access isn't configured during installation, follow [Configuring API access for Elements Connector](https://help.f-secure.com/product.html#business/connector/latest/en/task_F657F4D0F2144CD5913EE510E155E234-latest-en).
+1. In EPP, go to **Profiles**, and then use **For Connector** to see the connector profiles. Create a new profile (or edit an existing profile that isn't read-only). Enable **Event forwarding**. Set the SIEM system address to **127.0.0.1:514**, the format to **Common Event Format**, and the protocol to **TCP**. Save the profile and assign it to **Elements Connector** on the **Devices** tab.
+1. To use the relevant schema in Log Analytics for the WithSecure Elements Connector, search for **CommonSecurityLog**.
+1. Continue with [validating your CEF connectivity](/azure/sentinel/troubleshooting-cef-syslog?tabs=rsyslog#validate-cef-connectivity).
+
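+As a quick complement to the connectivity validation linked in the last step, the following minimal sketch checks that the collector accepts TCP connections on port 514, matching the **127.0.0.1:514** SIEM address configured in the profile. The address is an assumption based on the steps above; adjust it if your setup differs.
+
+```python
+import socket
+
+# Assumed from the profile configuration above; change if yours differs.
+COLLECTOR = ("127.0.0.1", 514)
+
+try:
+    # Open and immediately close a TCP connection to confirm the port is open.
+    with socket.create_connection(COLLECTOR, timeout=5):
+        print("Collector is accepting TCP connections on port 514.")
+except OSError as err:
+    print(f"Could not reach the collector: {err}")
+```
+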
+## Zscaler
+
+Set your Zscaler product to send syslog messages in CEF format to your syslog agent. Make sure you send the logs to port 514 TCP.
+
+For more information, see [Zscaler Microsoft Sentinel integration guide](https://aka.ms/ZscalerCEFInstructions).
+
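+After any of the preceding appliances is configured, you can confirm that its events are arriving by querying the **CommonSecurityLog** table. The following is a minimal sketch using the `azure-monitor-query` Python package; the workspace ID is a placeholder, and the query itself is only an illustrative starting point.
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient
+
+# Placeholder: replace with your Log Analytics workspace GUID.
+WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+# Count recent CEF events per reporting device to confirm ingestion.
+query = """
+CommonSecurityLog
+| summarize Events = count() by DeviceVendor, DeviceProduct
+| order by Events desc
+"""
+
+response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(hours=1))
+for table in response.tables:
+    for row in table.rows:
+        print(row)
+```
+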
+## Related content
+
+- [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md)
+- [Syslog via AMA and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md)
sentinel Unified Connector Syslog Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-syslog-device.md
+
+ Title: Syslog via AMA connector - configure appliances and devices
+description: Learn how to configure specific appliances and devices that use the Syslog via AMA data connector for Microsoft Sentinel.
++++ Last updated : 06/27/2024++
+# Syslog via AMA data connector - Configure specific appliance or device for Microsoft Sentinel data ingestion
+
+Log collection from many security appliances and devices is supported by the **Syslog via AMA** data connector in Microsoft Sentinel. This article lists provider-supplied installation instructions for specific security appliances and devices that use this data connector. Contact the provider for updates or more information, or if instructions for your security appliance or device aren't listed here.
+
+To forward data to your Log Analytics workspace for Microsoft Sentinel, complete the steps in [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md). As you complete those steps, install the **Syslog via AMA** data connector in Microsoft Sentinel. Then, use the appropriate provider's instructions in this article to complete the setup.
+
+For more information about the related Microsoft Sentinel solution for each of these appliances or devices, search the [Azure Marketplace](https://azuremarketplace.microsoft.com/) for the **Product Type** > **Solution Templates** or review the solution from the **Content hub** in Microsoft Sentinel.
+
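+Many of these devices send plain syslog rather than CEF. To verify that the forwarder is reachable before you configure a device, you can send a single BSD-style (RFC 3164) test message, as in the following minimal sketch. The forwarder address is a placeholder, UDP port 514 is the conventional syslog port, and your data collection rule determines which facilities are actually collected.
+
+```python
+import socket
+from datetime import datetime, timezone
+
+# Placeholder: replace with the machine running the Azure Monitor Agent.
+FORWARDER = ("192.0.2.20", 514)
+
+# Build a minimal RFC 3164 line: <PRI>TIMESTAMP HOSTNAME TAG: message.
+# <134> encodes facility local0 (16) with severity informational (6).
+timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
+line = f"<134>{timestamp} testhost testapp: syslog connectivity test"
+
+with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
+    sock.sendto(line.encode("utf-8"), FORWARDER)
+print("Test syslog datagram sent.")
+```
+
+If the facility is included in your collection settings, the record should appear in the **Syslog** table in Log Analytics.
+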
+## Barracuda CloudGen Firewall
+
+[Follow instructions](https://aka.ms/sentinel-barracudacloudfirewall-connector) to configure syslog streaming. Use the IP address or hostname of the Linux machine with the Microsoft Sentinel agent installed as the **Destination IP** address.
+
+## Blackberry CylancePROTECT
+
+[Follow these instructions](https://docs.blackberry.com/en/unified-endpoint-security/blackberry-ues/cylance-syslog-guide/Configure_Syslog_Settings) to configure the CylancePROTECT to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+## Cisco Application Centric Infrastructure (ACI)
+
+Configure the Cisco ACI system to send logs via syslog to the remote server where you install the agent.
+[Follow these steps](https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/basic-config/b_ACI_Config_Guide/b_ACI_Config_Guide_chapter_010.html#d2933e4611a1635) to configure **Syslog Destination**, **Destination Group**, and **Syslog Source**.
+
+This data connector was developed using Cisco ACI Release 1.x.
+
+## Cisco Identity Services Engine (ISE)
+
+[Follow these instructions](https://www.cisco.com/c/en/us/td/docs/security/ise/2-7/admin_guide/b_ise_27_admin_guide/b_ISE_admin_27_maintain_monitor.html#ID58) to configure remote syslog collection locations in your Cisco ISE deployment.
+
+## Cisco Stealthwatch
+
+Complete the following configuration steps to get Cisco Stealthwatch logs into Microsoft Sentinel.
+
+1. Sign in to the Stealthwatch Management Console (SMC) as an administrator.
+1. In the menu bar, select **Configuration** > **Response Management**.
+1. From the **Actions** section in the **Response Management** menu, select **Add > Syslog Message**.
+1. In the **Add Syslog Message Action** window, configure parameters.
+1. Enter the following custom format:
+
+ `|Lancope|Stealthwatch|7.3|{alarm_type_id}|0x7C|src={source_ip}|dst={target_ip}|dstPort={port}|proto={protocol}|msg={alarm_type_description}|fullmessage={details}|start={start_active_time}|end={end_active_time}|cat={alarm_category_name}|alarmID={alarm_id}|sourceHG={source_host_group_names}|targetHG={target_host_group_names}|sourceHostSnapshot={source_url}|targetHostSnapshot={target_url}|flowCollectorName={device_name}|flowCollectorIP={device_ip}|domain={domain_name}|exporterName={exporter_hostname}|exporterIPAddress={exporter_ip}|exporterInfo={exporter_label}|targetUser={target_username}|targetHostname={target_hostname}|sourceUser={source_username}|alarmStatus={alarm_status}|alarmSev={alarm_severity_name}`
+
+1. Select the custom format from the list, and then select **OK**.
+1. Select **Response Management > Rules**.
+1. Select **Add** and **Host Alarm**.
+1. Provide a rule name in the **Name** field.
+1. Create rules by selecting values from the **Type** and **Options** menus. To add more rules, select the ellipsis icon. For a **Host Alarm**, combine as many types in a statement as possible.
+
+This data connector was developed using Cisco Stealthwatch version 7.3.2.
+
+## Cisco Unified Computing Systems (UCS)
+
+[Follow these instructions](https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/110265-setup-syslog-for-ucs.html#configsremotesyslog) to configure the Cisco UCS to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **CiscoUCS**. Alternatively, directly load the [function code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Cisco%20UCS/Parsers/CiscoUCS.txt). It might take about 15 minutes post-installation to update.
+
+## Cisco Web Security Appliance (WSA)
+
+Configure Cisco to forward logs via syslog to the remote server where you install the agent.
+[Follow these steps](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1134718) to configure Cisco WSA to forward logs via syslog.
+
+Select **Syslog Push** as the Retrieval Method.
+
+This data connector was developed using AsyncOS 14.0 for Cisco Web Security Appliance.
+
+## Citrix Application Delivery Controller (ADC)
+
+Configure Citrix ADC (formerly NetScaler) to forward logs via syslog.
+
+1. Navigate to **Configuration tab > System > Auditing > Syslog > Servers tab**
+2. Specify **Syslog action name**.
+3. Set the IP address and port of the remote syslog server.
+4. Set **Transport type** as **TCP** or **UDP** depending on your remote syslog server configuration.
+5. For more information, see the [Citrix ADC (formerly NetScaler) documentation](https://docs.netscaler.com/).
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation. To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **CitrixADCEvent**. Alternatively, you can directly load the [function code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Citrix%20ADC/Parsers/CitrixADCEvent.txt). It might take about 15 minutes post-installation to update.
+>
+> This parser requires a watchlist named `Sources_by_SourceType`.
+>
+>i. If you don't have the watchlist already created, create a watchlist from Microsoft Sentinel in the [Azure portal](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FASIM%2Fdeploy%2FWatchlists%2FASimSourceType.json).
+>
+> ii. Open watchlist `Sources_by_SourceType` and add entries for this data source.
+>
+> iii. The SourceType value for CitrixADC is `CitrixADC`.
+> For more information, see [Manage Advanced Security Information Model (ASIM) parsers](/azure/sentinel/normalization-manage-parsers?WT.mc_id=Portal-fx#configure-the-sources-relevant-to-a-source-specific-parser).
+
+## Digital Guardian Data Loss Prevention
+
+Complete the following steps to configure Digital Guardian to forward logs via Syslog:
+
+1. Sign in to the Digital Guardian Management Console.
+1. Select **Workspace** > **Data Export** > **Create Export**.
+1. From the **Data Sources** list, select **Alerts** or **Events** as the data source.
+1. From the **Export type** list, select **Syslog**.
+1. From the **Type** list, select **UDP** or **TCP** as the transport protocol.
+1. In the **Server** field, type the IP address of your remote syslog server.
+1. In the **Port** field, type 514 (or another port if your syslog server is configured to use a nondefault port).
+1. From the **Severity Level** list, select a severity level.
+1. Select the **Is Active** check box.
+1. Select **Next**.
+1. From the list of available fields, add Alert or Event fields for your data export.
+1. Select criteria for the fields in your data export, and then select **Next**.
+1. Select a group for the criteria, and then select **Next**.
+1. Select **Test Query**.
+1. Select **Next**.
+1. Save the data export.
+
+## ESET Protect integration
+
+Configure ESET PROTECT to send all events through Syslog.
+
+1. Follow [these instructions](https://help.eset.com/protect_admin/latest/en-US/admin_server_settings_syslog.html) to configure syslog output. Make sure to select **BSD** as the format and **TCP** as the transport.
+1. Follow [these instructions](https://help.eset.com/protect_admin/latest/en-US/admin_server_settings_export_to_syslog.html) to export all logs to syslog. Select **JSON** as the output format.
+
+## Exabeam Advanced Analytics
+
+[Follow these instructions](https://docs.exabeam.com/en/advanced-analytics/i56/advanced-analytics-administration-guide/125351-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) to send Exabeam Advanced Analytics activity log data via syslog.
+
+This data connector was developed using Exabeam Advanced Analytics i54 (Syslog).
+
+## Forescout
+
+Complete the following steps to get Forescout logs into Microsoft Sentinel.
+
+1. [Select an Appliance to Configure.](https://docs.forescout.com/bundle/syslog-3-6-1-h/page/syslog-3-6-1-h.Select-an-Appliance-to-Configure.html)
+1. [Follow these instructions](https://docs.forescout.com/bundle/syslog-3-6-1-h/page/syslog-3-6-1-h.Send-Events-To-Tab.html#pID0E0CE0HA) to forward alerts from the Forescout platform to a syslog server.
+1. [Configure](https://docs.forescout.com/bundle/syslog-3-6-1-h/page/syslog-3-6-1-h.Syslog-Triggers.html) the settings in the **Syslog Triggers** tab.
+
+This data connector was developed using Forescout Syslog Plugin version v3.6.
+
+## Gitlab
+
+[Follow these instructions](https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-forwarding) to send GitLab audit log data via syslog.
+
+## ISC Bind
+
+1. Follow these instructions to configure the ISC Bind to forward syslog: [DNS Logs](https://kb.isc.org/docs/aa-01526).
+1. Configure syslog to send the syslog traffic to the agent. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+## Infoblox Network Identity Operating System (NIOS)
+
+[Follow these instructions](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-slog-and-snmp-configuration-for-nios.pdf) to enable syslog forwarding of Infoblox NIOS Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **Infoblox**. Alternatively, you can directly load the [function code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Infoblox%20NIOS/Parser/Infoblox.txt). It might take about 15 minutes post-installation to update.
+>
+> This parser requires a watchlist named **`Sources_by_SourceType`**.
+>
+>i. If you don't have the watchlist already created, create a watchlist from Microsoft Sentinel in the [Azure portal](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FASIM%2Fdeploy%2FWatchlists%2FASimSourceType.json).
+>
+>ii. Open watchlist **`Sources_by_SourceType`** and add entries for this data source.
+>
+>iii. The SourceType value for InfobloxNIOS is **`InfobloxNIOS`**.
+>
+> For more information, see [Manage Advanced Security Information Model (ASIM) parsers](/azure/sentinel/normalization-manage-parsers?WT.mc_id=Portal-fx#configure-the-sources-relevant-to-a-source-specific-parser).
+
+## Ivanti Unified Endpoint Management
+
+[Follow the instructions](https://help.ivanti.com/ld/help/en_US/LDMS/11.0/Windows/alert-t-define-action.htm) to set up Alert Actions to send logs to a syslog server.
+
+This data connector was developed using Ivanti Unified Endpoint Management Release 2021.1 Version 11.0.3.374.
+
+## Juniper SRX
+
+1. Complete the following instructions to configure the Juniper SRX to forward syslog:
+
+ - [Traffic Logs (Security Policy Logs)](https://kb.juniper.net/InfoCenter/index?page=content&id=KB16509&actp=METADATA)
+ - [System Logs](https://kb.juniper.net/InfoCenter/index?page=content&id=kb16502)
+
+2. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+## McAfee Network Security Platform
+
+Complete the following configuration steps to get McAfee® Network Security Platform logs into Microsoft Sentinel.
+
+1. Forward alerts from the manager to a syslog server.
+2. You must add a syslog notification profile. While creating the profile, enter the following text in the Message text box to make sure that events are formatted correctly:
+
+ ``<SyslogAlertForwarderNSP>:|SENSOR_ALERT_UUID|ALERT_TYPE|ATTACK_TIME|ATTACK_NAME|ATTACK_ID``
+``|ATTACK_SEVERITY|ATTACK_SIGNATURE|ATTACK_CONFIDENCE|ADMIN_DOMAIN|SENSOR_NAME|INTERFACE``
+``|SOURCE_IP|SOURCE_PORT|DESTINATION_IP|DESTINATION_PORT|CATEGORY|SUB_CATEGORY``
+``|DIRECTION|RESULT_STATUS|DETECTION_MECHANISM|APPLICATION_PROTOCOL|NETWORK_PROTOCOL|``
+
+This data connector was developed using McAfee® Network Security Platform version 10.1.x.
+
+## McAfee ePolicy Orchestrator
+
+Contact the provider for guidance on how to register a syslog server.
+
+## Microsoft Sysmon For Linux
+
+This data connector depends on ASIM parsers based on Kusto functions to work as expected. [Deploy the parsers](https://aka.ms/ASimSysmonForLinuxARM).
+
+The following functions are deployed:
+
+- vimFileEventLinuxSysmonFileCreated, vimFileEventLinuxSysmonFileDeleted
+- vimProcessCreateLinuxSysmon, vimProcessTerminateLinuxSysmon
+- vimNetworkSessionLinuxSysmon
+
+[Read more](https://aka.ms/AboutASIM)
+
+## Nasuni
+
+Follow the instructions in the [Nasuni Management Console Guide](https://view.highspot.com/viewer/629a633ae5b4caaf17018daa?iid=5e6fbfcbc7143309f69fcfcf) to configure Nasuni Edge Appliances to forward syslog events. Use the IP address or hostname of the Linux device running the Azure Monitor Agent in the Servers configuration field for the syslog settings.
+
+## OpenVPN
+
+Install the agent on the server to which the OpenVPN logs are forwarded.
+OpenVPN server logs are written to the common syslog file (depending on the Linux distribution used, for example, /var/log/messages).
+
+## Oracle Database Audit
+
+Complete the following steps.
+
+1. Create an Oracle database. [Follow these steps](/azure/virtual-machines/workloads/oracle/oracle-database-quick-create).
+1. Sign in to the Oracle database you created. [Follow these steps](https://docs.oracle.com/cd/F49540_01/DOC/server.815/a67772/create.htm).
+1. Enable unified logging over syslog by altering the system to enable unified logging. [Follow these steps](https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/UNIFIED_AUDIT_COMMON_SYSTEMLOG.html#GUID-9F26BC8E-1397-4B0E-8A08-3B12E4F9ED3A).
+1. Create and enable an audit policy for unified auditing. [Follow these steps](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-AUDIT-POLICY-Unified-Auditing.html#GUID-8D6961FB-2E50-46F5-81F7-9AEA314FC693).
+1. Enable syslog and Event Viewer captures for the unified audit trail. [Follow these steps](https://docs.oracle.com/en/database/oracle/oracle-database/18/dbseg/administering-the-audit-trail.html#GUID-3EFB75DB-AE1C-44E6-B46E-30E5702B0FC4).
+
+## Pulse Connect Secure
+
+[Follow the instructions](https://help.ivanti.com/ps/help/en_US/PPS/9.1R13/ag/configuring_an_external_syslog_server.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **PulseConnectSecure**. Alternatively, directly load the [function code](https://aka.ms/sentinel-PulseConnectSecure-parser). It might take about 15 minutes post-installation to update.
+
+## RSA SecurID
+
+Complete the following steps to get RSA® SecurID Authentication Manager logs into Microsoft Sentinel.
+[Follow these instructions](https://community.rsa.com/t5/rsa-authentication-manager/configure-the-remote-syslog-host-for-real-time-log-monitoring/ta-p/571374) to forward alerts from the Manager to a syslog server.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **RSASecurIDAMEvent**. Alternatively, you can directly load the [function code](https://aka.ms/sentinel-rsasecuridam-parser). It might take about 15 minutes post-installation to update.
+
+This data connector was developed using RSA SecurID Authentication Manager versions 8.4 and 8.5.
+
+## Sophos XG Firewall
+
+[Follow these instructions](https://doc.sophos.com/nsg/sophos-firewall/20.0/Help/en-us/webhelp/onlinehelp/AdministratorHelp/SystemServices/LogSettings/SyslogServerAdd/index.html) to enable syslog streaming. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **SophosXGFirewall**. Alternatively, directly load the [function code](https://aka.ms/sentinel-SophosXG-parser). It might take about 15 minutes post-installation to update.
+
+## Symantec Endpoint Protection
+
+[Follow these instructions](https://techdocs.broadcom.com/us/en/symantec-security-software/endpoint-security-and-management/endpoint-protection/all/Monitoring-Reporting-and-Enforcing-Compliance/viewing-logs-v7522439-d37e464/exporting-data-to-a-syslog-server-v8442743-d15e1107.html) to configure the Symantec Endpoint Protection to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **SymantecEndpointProtection**. Alternatively, you can directly load the [function code](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Symantec%20Endpoint%20Protection/Parsers/SymantecEndpointProtection.yaml). It might take about 15 minutes post-installation to update.
+
+## Symantec ProxySG
+
+1. Sign in to the Blue Coat Management Console.
+1. Select **Configuration** > **Access Logging** > **Formats**.
+1. Select **New**.
+1. Enter a unique name in the **Format Name** field.
+1. Select the radio button for **Custom format string** and paste the following string into the field.
+
+ `` 1 $(date) $(time) $(time-taken) $(c-ip) $(cs-userdn) $(cs-auth-groups) $(x-exception-id) $(sc-filter-result) $(cs-categories) $(quot)$(cs(Referer))$(quot) $(sc-status) $(s-action) $(cs-method) $(quot)$(rs(Content-Type))$(quot) $(cs-uri-scheme) $(cs-host) $(cs-uri-port) $(cs-uri-path) $(cs-uri-query) $(cs-uri-extension) $(quot)$(cs(User-Agent))$(quot) $(s-ip) $(sr-bytes) $(rs-bytes) $(x-virus-id) $(x-bluecoat-application-name) $(x-bluecoat-application-operation) $(cs-uri-port) $(x-cs-client-ip-country) $(cs-threat-risk)``
+
+1. Select **OK**.
+1. Select **Apply**.
+1. [Follow these instructions](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) to enable syslog streaming of **Access** logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **SymantecProxySG**. Alternatively, directly load the [function code](https://aka.ms/sentinel-SymantecProxySG-parser). It might take about 15 minutes post-installation to update.
+
+## Symantec VIP
+
+[Follow these instructions](https://aka.ms/sentinel-symantecvip-configurationsteps) to configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **SymantecVIP**. Alternatively, directly load the [function code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20VIP/Parsers/SymantecVIP.txt). It might take about 15 minutes post-installation to update.
+
+## VMware ESXi
+
+1. Follow these instructions to configure the VMware ESXi to forward syslog:
+
+ - [VMware ESXi 3.5 and 4.x](https://kb.vmware.com/s/article/1016621)
+ - [VMware ESXi 5.0+](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.monitoring.doc/GUID-9F67DB52-F469-451F-B6C8-DAE8D95976E7.html)
+
+1. Use the IP address or hostname for the Linux device with the Linux agent installed as the **Destination IP** address.
+
+> [!NOTE]
+> The functionality of this data connector is reliant on a Kusto Function-based parser, which is integral to its operation. This parser is deployed as part of the solution installation.
+>
+> Update the parser and specify the hostname of the source machines transmitting the logs in the parser's second line.
+>
+> To access the function code within Log Analytics, navigate to the Log Analytics/Microsoft Sentinel Logs section, select Functions, and search for the alias **VMwareESXi**. Alternatively, directly load the [function code](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VMWareESXi/Parsers/VMwareESXi.yaml). It might take about 15 minutes post-installation to update.
+
+## WatchGuard Firebox
+
+[Follow these instructions](https://www.watchguard.com/help/docs/help-center/en-US/Content/Integration-Guides/General/Microsoft%20Azure%20Sentinel.html?#SetUptheFirebox) to send WatchGuard Firebox log data via syslog.
+
+## Related content
+
+- [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md)
+- [Syslog via AMA and Common Event Format (CEF) via AMA connectors for Microsoft Sentinel](cef-syslog-ama-overview.md)
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Previously updated : 08/18/2023 Last updated : 06/28/2024 # Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md). ## Step 1: Create an Azure virtual network
The AZNFS Mount Helper package helps Linux NFS clients to reliably access Azure
> [!NOTE] > AZNFS is supported on following Linux distributions: > - Ubuntu (18.04 LTS, 20.04 LTS, 22.04 LTS)
- > - Centos7, Centos8
> - RedHat7, RedHat8, RedHat9 > - Rocky8, Rocky9 > - SUSE (SLES 15)
storage Elastic San Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-metrics.md
description: Learn about the available metrics that can let you understand how y
Previously updated : 05/31/2024 Last updated : 06/28/2024
The following metrics are currently available for your Elastic SAN resource. You
All metrics are shown at the elastic SAN level.
+## Diagnostic logging
+
+You can configure the diagnostic settings of your elastic SAN to send Azure platform logs and metrics to different destinations. Currently, there are two log configurations:
+
+- All - Every resource log offered by the resource.
+- Audit - All resource logs that record customer interactions with data or the settings of the service.
+
+Audit logs are an attempt by each resource provider to provide the most relevant audit data, but might not be considered sufficient from an auditing standards perspective.
+
+Available log categories:
+
+- Write Success Requests
+- Write Failed Requests
+- Read Success Requests
+- Read Failed Requests
+- Persistent Reservation Requests
+- SendTargets Requests
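+
+As an illustration, the following is a minimal sketch of creating a diagnostic setting programmatically with the `azure-identity` and `azure-mgmt-monitor` Python packages, sending the audit category group to a Log Analytics workspace. The resource IDs, the setting name, and the use of the `audit` category group are assumptions for illustration; adjust them to your environment.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+
+SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
+client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
+
+# Placeholder resource IDs for the elastic SAN and the destination workspace.
+san_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/<rg>"
+          "/providers/Microsoft.ElasticSan/elasticSans/<san-name>")
+workspace_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/<rg>"
+                "/providers/Microsoft.OperationalInsights/workspaces/<workspace>")
+
+# Send the audit logs (customer interactions with data or service settings)
+# to the Log Analytics workspace.
+client.diagnostic_settings.create_or_update(
+    resource_uri=san_id,
+    name="send-audit-to-workspace",
+    parameters={
+        "workspace_id": workspace_id,
+        "logs": [{"category_group": "audit", "enabled": True}],
+    },
+)
+```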
+
## Next steps
- [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md)
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Last updated 11/28/2022
-# Azure Synapse Runtime for Apache Spark 3.2 (deprecated)
+# Azure Synapse Runtime for Apache Spark 3.2 (End of Support announced)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
-> [!WARNING]
-> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
-> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023.
-> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
+> [!IMPORTANT]
+> * End of Support for Azure Synapse Runtime for Apache Spark 3.2 was announced on July 8, 2023.
+> * A runtime for which End of Support has been announced doesn't receive bug and feature fixes. Security fixes are backported based on risk assessment.
> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace. > * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
widgetsnbextension==3.5.2
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, see [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
update-manager Pre Post Scripts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-scripts-overview.md
Title: An overview of pre and post events (preview) in your Azure Update Manager description: This article provides an overview on pre and post events (preview) and its requirements. Previously updated : 11/22/2023 Last updated : 06/15/2024
-# About pre and post events
+# About pre and post events (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The pre and post events in Azure Update Manager allow you to perform certain tasks automatically before and after a scheduled maintenance configuration. For example, using pre-and-post events, you can:
+The pre and post events (preview) in Azure Update Manager allow you to perform certain tasks automatically before and after a scheduled maintenance configuration. For more information on how to create scheduled maintenance configurations, see [Schedule recurring updates for machines by using the Azure portal and Azure Policy](scheduled-patching.md). For example, using pre and post events, you can execute the following tasks on machines that are part of a schedule. The following list isn't exhaustive, and you can create pre and post events as needed.
-- Start VMs to apply patches and stop the VMs again.
-- Stop service on the machine, apply patches, and restart the service.
-
-The pre-events run before the patch installation begins and the post-events run after the patch installation ends. If the VM requires a reboot, it happens before the post-event begins.
-
-Update Manager uses [Event Grid](../event-grid/overview.md) to create and manage pre and post events on scheduled maintenance configurations. In the Event Grid, you can choose from Azure Webhooks, Azure Functions, Storage accounts, and Event hub to trigger your pre and post activity. **If you're using pre and post events in Azure Automation Update management and plan to move to Azure Update Manager, we recommend that you use Azure Webhooks linked to Automation Runbooks.**
-
-## User scenarios
+
+## Sample tasks
The following are the scenarios where you can define pre and post events:
-#### [Pre Event user scenarios](#tab/preevent)
+#### [Pre-events](#tab/preevent)
| **Scenario**| **Description**|
|-|-|
|Turn on machines | Turn on the machine to apply updates.|
|Create snapshot | Disk snapshots used to recover data.|
-|Start/configure Windows Update (WU) | Ensures that the WU is up and running before patching is attempted. |
-|Notification email | Send a notification alert before patching is triggered. |
-|Add network security group| Add the network security group.|
-|Stop services | To stop services like Gateway services, NPExServices, SQL services etc.|
+|Notification email | Send a notification alert before triggering a patch. |
+|Stop services | Stop services such as Gateway services, NPExServices, and SQL services.|
-#### [Post Event user scenarios](#tab/postevent)
+#### [Post-events](#tab/postevent)
| **Scenario**| **Description**|
|-|-|
|Turn off | Turn off the machines after applying updates. |
-|Disable maintenance | Disable the maintenance mode on machines. |
-|Stop/Configure WU| Ensures that the WU is stopped after the patching is complete.|
|Notifications | Send patch summary or an alert that patching is complete.|
-|Delete network security group| Delete the network security group.|
-|Hybrid Worker| Configuration of Hybrid runbook worker. |
-|Mute VM alerts | Enable VM alerts post patching. |
-|Start services | Start services like SQL, health services etc.|
+|Start services | Start services such as SQL and health services. |
|Reports| Post patching report.|
|Tag change | Change tags and, occasionally, turn off machines with a tag change.|
+## Schedule execution order with pre and post events
+
+For a given schedule, you can include a pre-event, post-event, or both. Additionally, you can have multiple pre and/or post-events. The sequence of execution for a schedule with pre and post events is as follows:
+
+1. **Pre-event** - Tasks that run before the scheduled maintenance window begins. For example, turn on the machines before patching.
+1. **Cancellation** - In this step, you can initiate the cancellation of the schedule run. Some scenarios where you might choose to cancel a schedule run include a pre-event failure or a pre-event that didn't complete execution.
+
+ > [!NOTE]
+ > You must initiate cancellation as part of the pre-event; neither Azure Update Manager nor the maintenance configuration automatically cancels the schedule. If you don't cancel, the schedule run proceeds with installing updates during the user-defined maintenance window.
+
+1. **Updates installation** - Updates are installed as part of the user-defined schedule maintenance window.
+1. **Post-event** - The post-event runs immediately after updates are installed. It occurs either within the maintenance window, if update installation is complete and there's time left in the window, or outside the window, if the maintenance window has ended. For example: turn off the VMs after patching is complete.
+
+ > [!NOTE]
+ > In Azure Update Manager, pre-events run outside of the maintenance window, and post-events might also run outside of it. You must plan for this additional time required to complete the schedule execution on your machines.
+
+1. **Schedule status** - The success or failure status of a schedule run refers only to the update installation on the machines that are part of the schedule. The schedule run status doesn't include the pre and post event status. If the pre-event failed and you called the cancellation API, the schedule run status is displayed as **Canceled**.
+
+
+ Azure Update Manager uses [Event Grid](../event-grid/overview.md) to create and manage pre and post events on scheduled maintenance configurations. In Event Grid, you can choose from event handlers such as Azure Webhooks, Azure Functions etc., to trigger your pre and post activity.
+
+ :::image type="content" source="./media/pre-post-scripts-overview/overview.png" alt-text="Screenshot that shows the sequence of execution for a schedule with pre and post." lightbox="./media/pre-post-scripts-overview/overview.png":::
+
+ > [!NOTE]
+ > If you're using Runbooks in pre and post events in Azure Automation Update management and plan to reuse them in Azure Update Manager, we recommend that you use Azure Webhooks linked to Automation Runbooks. [Learn more](tutorial-webhooks-using-runbooks.md).
+
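+As a concrete illustration, the following is a minimal sketch of a Python Azure Function that could act as the Event Grid handler for a pre-event. The payload field name and the start-machine logic are placeholders rather than the actual event schema; inspect a delivered event or see the linked tutorial for supported patterns.
+
+```python
+import logging
+
+import azure.functions as func
+
+
+def main(event: func.EventGridEvent):
+    # Log the maintenance pre-event delivered by Event Grid.
+    logging.info("Received %s for subject %s", event.event_type, event.subject)
+    payload = event.get_json()
+
+    # Hypothetical field name; check a real event for the actual schema.
+    machine_ids = payload.get("ResourceIds", [])
+    for machine_id in machine_ids:
+        # Placeholder action: wire up your own start-VM logic here, for
+        # example with azure-mgmt-compute's begin_start().
+        logging.info("Would start machine before patching: %s", machine_id)
+```
+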
+## Timeline of schedules for pre and post events
+
+Review the following table to understand the timeline of a schedule with pre and post events.
+
+For example, suppose a maintenance schedule is set to start at **3:00 PM**, with a maintenance window of 3 hours and 55 minutes for the **Guest** maintenance scope, and the schedule has one pre-event and one post-event. The details are as follows:
+
+| **Time**| **Details** |
+|-|-|
+| 2:19 PM | Since the schedule run starts at 3:00 PM, you can modify the machines or scopes until 40 minutes before the start time, that is, until 2:19 PM. </br> **Note**: This applies if you're creating a new schedule or editing an existing schedule with a pre-event. |
+| 2:20 PM - 2:30 PM | Because the pre-event is triggered at least 30 minutes before the start time, it can be triggered anytime between 2:20 PM and 2:30 PM. |
+| 2:30 PM - 2:50 PM | The pre-event runs from 2:30 PM to 2:50 PM. The pre-event must complete its tasks by 2:50 PM. </br> **Note**: If you have more than one pre-event configured, all the events must run within 20 minutes. In the case of multiple pre-events, all of them execute independently of each other. You can customize this behavior as needed by defining the logic in the pre-events. For example, if you want two pre-events to run sequentially, you can include a delayed start time in your second pre-event's logic. </br> If the pre-event continues to run beyond 20 minutes or fails, you can choose to cancel the schedule run; otherwise, the patch installation proceeds regardless of the pre-event run status.|
+| 2:50 PM | The latest time at which you can invoke the cancellation API is 2:50 PM. </br> **Note**: If the cancellation API fails to get invoked or hasn't been set up, the patch installation proceeds to run.|
+| 3:00 PM | The schedule run is triggered at 3:00 PM. |
+| 6:55 PM | At 6:55 PM, the schedule completes installing the updates during the 3 hour 55 minute maintenance window. </br> The post-event triggers at 6:55 PM, after the updates are installed. </br> **Note**: If you define a shorter maintenance window of 2 hours, the post-event triggers after 2 hours. If the update installation completes before the stipulated 2 hours (for example, in 1 hour 50 minutes), the post-event starts immediately. |
+
+We recommend that you keep the following points in mind:
+- If you're creating a new schedule or editing an existing schedule with a pre-event, you need at least 40 minutes prior to the start of the maintenance window (3:00 PM in the above example) for the pre-event to run; otherwise, the current scheduled run is automatically canceled.
+- Invoking a cancellation API from your script or code cancels the schedule run and not the entire schedule.
+- The status of the pre and post event run can be checked in the event handler you chose.
+
## Next steps
-- Manage the [pre and post maintenance configuration events](manage-pre-post-events.md)
-- Troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md).
-- Learn on the [common scenarios of pre and post events](pre-post-events-common-scenarios.md)
+- To learn how to configure pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md).
+
virtual-desktop Disaster Recovery Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery-concepts.md
Title: Azure Virtual Desktop disaster recovery concepts
-description: Understand what a disaster recovery plan for Azure Virtual Desktop is and how each plan works.
----
+description: Learn how to design and implement a disaster recovery plan for Azure Virtual Desktop to keep your organization up and running.
Previously updated : 05/24/2022-++ Last updated : 06/28/2024
-# Azure Virtual Desktop disaster recovery concepts
-
-Azure Virtual Desktop has grown tremendously as a remote and hybrid work solution in recent years. Because so many users now work remotely, organizations require solutions with high deployment speed and reduced costs. Users also need to have a remote work environment with guaranteed availability and resiliency that lets them access their virtual machines even during disasters. This document describes disaster recovery plans that we recommend for keeping your organization up and running.
+# Azure Virtual Desktop business continuity and disaster recovery concepts
-To prevent system outages or downtime, every system and component in your Azure Virtual Desktop deployment must be fault-tolerant. Fault tolerance is when you have a duplicate configuration or system in another Azure region that takes over for the main configuration during an outage. This secondary configuration or system reduces the impact of a localized outage. There are many ways you can set up fault tolerance, but this article will focus on the methods currently available in Azure.
+Many users now work remotely, so organizations require solutions with high availability, rapid deployment speed, and reduced costs. Users also need to have a remote work environment with guaranteed availability and resiliency that lets them access their resources even during disasters.
-## Azure Virtual Desktop infrastructure
+To prevent system outages or downtime, every system and component in your Azure Virtual Desktop deployment must be fault-tolerant. Fault tolerance is when you have a duplicate configuration or system in another Azure region that takes over for the main configuration during an outage. This secondary configuration or system reduces the impact of a localized outage. There are many ways you can set up fault tolerance, but this article focuses on the methods currently available in Azure for dealing with business continuity and disaster recovery (BCDR).
-In order to figure out which areas to make fault-tolerant, we first need to know who's responsible for maintaining each area. You can divide responsibility in the Azure Virtual Desktop service into two areas: Microsoft-managed and customer-managed. Metadata like the host pools, application groups, and workspaces is controlled by Microsoft. The metadata is always available and doesn't require extra setup by the customer to replicate host pool data or configurations. We've designed the gateway infrastructure that connects people to their session hosts to be a global, highly resilient service managed by Microsoft. Meanwhile, customer-managed areas involve the virtual machines (VMs) used in Azure Virtual Desktop and the settings and configurations unique to the customer's deployment. The following table gives a clearer idea of which areas are managed by which party.
+Responsibility for the components that make up Azure Virtual Desktop is divided between the components that are Microsoft-managed and the components that are customer-managed or partner-managed.
-| Managed by Microsoft | Managed by customer |
-|-|-|
-| Load balancer | Network |
-| Session broker | Session hosts |
-| Gateway | Storage |
-| Diagnostics | User profile data |
-| Cloud identity platform | Identity |
+The following components are customer-managed or partner-managed:
-In this article, we're going to focus on customer-managed components, as these are settings you can configure yourself.
+- Session host virtual machines
+- Profile management, usually with FSLogix
+- Applications
+- User data
+- User identities
-## Disaster recovery basics
+To learn about the Microsoft-managed components and how they're designed to be resilient, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
-In this section, we'll discuss actions and design principles that can protect your data and prevent having huge data recovery efforts after small outages or full-blown disasters. For smaller outages, following certain smaller steps can help prevent them from becoming bigger disasters. Let's go over some basic terms that will help you when you start setting up your disaster recovery plan.
+## Business continuity and disaster recovery basics
When you design a disaster recovery plan, you should keep the following three things in mind:

-- High availability: distributing infrastructure so smaller, more localized outages don't interrupt your entire deployment. Designing with HA in mind can minimize outage impact and avoid the need for a full disaster recovery.
+- High availability: distributed infrastructure so smaller, more localized outages don't interrupt your entire deployment. Designing with high availability in mind can minimize outage impact and avoid the need for a full disaster recovery.
- Business continuity: how an organization can keep operating during outages of any size.
- Disaster recovery: the process of getting back to operation after a full outage.
-Azure has many built-in, free-of-charge features that can deliver high availability at many levels. The first feature is [availability sets](../virtual-machines/availability-set-overview.md), which distribute VMs across different fault and update domains within Azure. Next are [availability zones](../availability-zones/az-region.md), which are physically isolated and geographically distributed groups of data centers that can reduce the impact of an outage. Finally, distributing session hosts across multiple [Azure regions](../best-practices-availability-paired-regions.md) provides even more geographical distribution, which further reduces outage impact. All three features provide a certain level of protection within Azure Virtual Desktop, and you should carefully consider them along with any cost implications.
-
-Basically, the disaster recovery strategy we recommend for Azure Virtual Desktop is to deploy resources across multiple availability zones within a region. If you need more protection, you can also deploy resources across multiple paired Azure regions.
-
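To make the zone-distribution recommendation concrete, here's a minimal PowerShell sketch. The resource names are hypothetical, and it assumes the Az.Compute module and the `Win2022AzureEditionCore` image alias; registering the VMs as session hosts in a host pool is a separate step not shown here:

```powershell
# A minimal sketch with hypothetical names: create three VMs, one per
# availability zone, in a single region. These VMs still need to be
# registered as session hosts in a host pool afterwards.
1..3 | ForEach-Object {
    New-AzVM -ResourceGroupName "rg-avd-primary" `
             -Name "vm-avd-0$_" `
             -Location "westus2" `
             -Zone "$_" `
             -Image "Win2022AzureEditionCore" `
             -Credential (Get-Credential)
}
```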
-## Active-passive and active-active deployments
-
-Something else you should keep in mind is the difference between active-passive and active-active plans. Active-passive plans are when you have a region with one set of resources that's active and one that's turned off until it's needed (passive). If the active region is taken offline by an emergency, the organization can switch to the passive region by turning it on and moving all their users there.
-
-Another option is an active-active deployment, where you use both sets of infrastructure at the same time. While some users may be affected by outages, the impact is limited to the users in the region that went down. Users in the other region that's still online won't be affected, and the recovery is limited to the users in the affected region reconnecting to the functioning active region. Active-active deployments can take many forms, including:
-- Overprovisioning infrastructure in each region to accommodate affected users in the event one of the regions goes down. A potential drawback to this method is that maintaining the additional resources costs more.
-- Have extra session hosts in both active regions, but deallocate them when they aren't needed, which reduces costs.
-- Only provision new infrastructure during disaster recovery and allow affected users to connect to the newly provisioned session hosts. This method requires regular testing with infrastructure-as-code tools so you can deploy the new infrastructure as quickly as possible during a disaster.
-
-## Recommended disaster recovery methods
-
-The disaster recovery methods we recommend are:
-- Configure and deploy Azure resources across multiple availability zones.
-
-- Configure and deploy Azure resources across multiple regions in either active-active or active-passive configurations. These configurations are typically found in [shared host pools](create-host-pools-azure-marketplace.md).
-
-- For personal host pools with dedicated VMs, [replicate VMs using Azure Site Recovery](../site-recovery/azure-to-azure-how-to-enable-replication.md) to another region.
-
-- Configure a separate "disaster recovery" host pool in the secondary region. During a disaster, you can switch users over to the secondary region.
-
-We'll go into more detail about the two main methods you can use to achieve this for shared and personal host pools in the following sections.
-
-## Disaster recovery for shared host pools
-
-In this section, we'll discuss shared (or "pooled") host pools using an active-passive approach. The active-passive approach is when you divide up existing resources into a primary and secondary region. Normally, your organization would do all its work in the primary (or "active") region, but during a disaster, all it takes to switch over to the secondary (or "passive") region is to turn off the resources in the primary region (if you can do so, depending on the outage's extent) and turn on the ones in the secondary one.
-
-The following diagram shows an example of a deployment with redundant infrastructure in a secondary region. "Redundant" means that a copy of the original infrastructure exists in this other region, and is standard in deployments to provide resiliency for all components. Beneath a single Microsoft Entra ID, there are two regions: West US and East US. Each region has two session hosts running a multi-session operating system (OS), a server running Microsoft Entra Connect, an Active Directory Domain Controller, an Azure Files premium file share for FSLogix profiles, a storage account, and a virtual network (VNET). In the primary region, West US, all resources are turned on. In the secondary region, East US, the session hosts in the host pool are either turned off or in drain mode, and the Microsoft Entra Connect server is in staging mode. The two VNETs in both regions are connected by peering.
--
-In most cases, if a component fails or the primary region isn't available, then the only action the customer needs to perform is to turn on the hosts or remove drain mode in the secondary region to enable end-user connections. This scenario focuses on reducing downtime. However, a redundancy-based disaster recovery plan may cost more due to having to maintain those extra components in the secondary region.
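Because failover in this scenario amounts to powering on hosts and removing drain mode, it can be scripted. Here's a hedged sketch, assuming hypothetical names, the Az.DesktopVirtualization and Az.Compute modules, and that VM names match the session hosts' computer names:

```powershell
# A minimal sketch with hypothetical names: start the secondary region's
# session hosts, then allow new sessions (remove drain mode) on each one.
$resourceGroup = "rg-avd-secondary"
$hostPool = "hp-pooled-eastus"

Get-AzWvdSessionHost -ResourceGroupName $resourceGroup -HostPoolName $hostPool | ForEach-Object {
    # Session host names are formatted as "<hostpool>/<sessionhost FQDN>"
    $sessionHostName = $_.Name.Split("/")[1]
    # Assumes the VM name is the computer-name portion of the FQDN
    $vmName = $sessionHostName.Split(".")[0]

    Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
    Update-AzWvdSessionHost -ResourceGroupName $resourceGroup -HostPoolName $hostPool `
        -Name $sessionHostName -AllowNewSession:$true
}
```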
-
-The potential benefits of this plan are as follows:
-- Less time spent recovering from disasters. For example, you'll spend less time on provisioning, configuring, integrating, and validating newly deployed resources.
-- There's no need to use complicated procedures.
-- It's easy to test failover outside of disasters.
-
-The potential drawbacks are as follows:
-- May cost more due to having more infrastructure to maintain, such as storage accounts, hosts, and so on.
-- You'll need to spend more time configuring your deployment to accommodate this plan.
-- You need to maintain the extra infrastructure you set up even when you don't need it.
-
-## Important information for shared host pool recovery
-
-When using this disaster recovery strategy, it's important to keep the following things in mind:
-- Having multiple session hosts online across many regions can impact user experience. The managed network load balancer doesn't account for geographic proximity, instead treating all hosts in a host pool equally.
-
-- During a disaster, users will be creating new profiles in the secondary region. You should store any business- or mission-critical data in OneDrive ([using known folder redirection](/sharepoint/redirect-known-folders)) or SharePoint. Storing data here will give users quick access to their applications with minor disruption to the user experience.
-
-- Make sure that you configure your virtual machines (VMs) exactly the same way within your host pool. Also, make sure all VMs within your host pool are the same size. If your VMs aren't the same, the managed network load balancer will distribute user connections evenly across all available VMs. The smaller VMs may become resource-constrained earlier than expected compared to larger VMs, resulting in a negative user experience.
-
-- Region availability affects data or workspace monitoring. If a region isn't available, the service may lose all historical monitoring data during a disaster. We recommend using a custom export or dump of historical monitoring data.
-
-- We recommend you update your session hosts at least once every month. This recommendation applies to session hosts you keep turned off for extended periods of time.
-
-- Test your deployment by running a controlled failover at least once every six months. Part of the controlled failover could mean your secondary location becomes primary until the next controlled failover. Changing your secondary location to primary allows users to have nearly identical profiles during a real disaster.
-
-The following table lists deployment recommendations for host pool disaster recovery strategies:
-
-| Technology | Recommendations |
-|-|--|
-| Network | Create and deploy a secondary virtual network in another region and configure [Azure Peering](../virtual-network/virtual-network-manage-peering.md) with your primary virtual network. |
-| Session hosts | [Create and deploy an Azure Virtual Desktop shared host pool](create-host-pools-azure-marketplace.md) with multi-session OS SKU and include VMs from other availability zones and another region. |
-| Storage | Create storage accounts in multiple regions using premium-tier accounts. |
-| User profile data | Create SMB storage locations in multiple regions. |
-| Identity | Active Directory Domain Controllers from the same directory. |
-
-## Disaster recovery for personal host pools
-
-For personal host pools, your disaster recovery strategy should involve replicating your resources to a secondary region using Azure Site Recovery Services Vault. If your primary region goes down during a disaster, Azure Site Recovery can fail over and turn on the resources in your secondary region.
-
-For example, let's say we have a deployment with a primary region in the West US and a secondary region in the East US. The primary region has a personal host pool with two session hosts. Each session host has its own local disk containing the user profile data, and the region has its own VNET that isn't peered with anything. If there's a disaster, you can use Azure Site Recovery to fail over to the secondary region in East US (or to a different availability zone in the same region). Unlike the primary region, the secondary region doesn't have local machines or disks. During the failover, Azure Site Recovery takes the replicated data from the Azure Site Recovery Vault and uses it to create two new VMs that are copies of the original session hosts, including the local disk and user profile data. The secondary region has its own independent VNET, so the VNET going offline in the primary region won't affect functionality.
-
-The following diagram shows the example deployment we just described.
--
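Driving the failover from PowerShell might look like the following hedged sketch. The vault and recovery plan names are hypothetical, and it assumes replication is already configured in an existing Recovery Services vault with the Az.RecoveryServices module loaded:

```powershell
# A minimal sketch with hypothetical names: set the vault context, then start
# an unplanned failover of a recovery plan containing the session hosts.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-avd-dr" -Name "rsv-avd-dr"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

$plan = Get-AzRecoveryServicesAsrRecoveryPlan -Name "rp-avd-personal"
$job = Start-AzRecoveryServicesAsrUnplannedFailoverJob -RecoveryPlan $plan -Direction PrimaryToRecovery

# Poll until the failover job leaves the InProgress state
while (($job = Get-AzRecoveryServicesAsrJob -Job $job).State -eq "InProgress") {
    Start-Sleep -Seconds 30
}
$job.State
```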
-The benefits of this plan include a lower overall cost and no ongoing patch or update maintenance, because resources are only provisioned when you need them. However, a potential drawback is that you'll spend more time provisioning, integrating, and validating failover infrastructure than you would with a shared host pool disaster recovery setup.
-
-## Important information about personal host pool recovery
-
-When using this disaster recovery strategy, it's important to keep the following things in mind:
-- The host pool VMs may have requirements to function in the secondary site, such as virtual networks, subnets, network security, or VPNs to access a directory such as on-premises Active Directory.
-
- >[!NOTE]
 > Using a [Microsoft Entra joined VM](deploy-azure-ad-joined-vm.md) fulfills some of these requirements automatically.
-- You may experience integration, performance, or contention issues for resources if a large-scale disaster affects multiple customers or tenants.
-
-- Personal host pools use VMs that are dedicated to one user, which means affinity load-balancing rules direct all user sessions back to a specific VM. This one-to-one mapping between user and VM means that if a VM is down, the user won't be able to sign in until the VM comes back online or the VM is recovered after disaster recovery is finished.
-
-- VMs in a personal host pool store user profiles on drive C, which means FSLogix isn't required.
-
-- Region availability affects data or workspace monitoring. If a region isn't available, the service may lose all historical monitoring data during a disaster. We recommend using a custom export or dump of historical monitoring data.
-
-- We recommend you avoid using FSLogix when using a personal host pool configuration.
-
-- Virtual machine provisioning isn't guaranteed in the failover region.
+Azure Virtual Desktop doesn't have any native features for managing disaster recovery scenarios, but you can use many other Azure services for each scenario depending on your requirements, such as [availability sets](../virtual-machines/availability-set-overview.md), [availability zones](../availability-zones/az-region.md), Azure Site Recovery, and [Azure Files data redundancy](../storage/files/files-redundancy.md) options for user profiles and data.
-- Run [controlled failover](../site-recovery/azure-to-azure-tutorial-dr-drill.md) and [failback](../site-recovery/azure-to-azure-tutorial-failback.md) tests at least once every six months.
+You can also distribute session hosts across multiple [Azure regions](../best-practices-availability-paired-regions.md) for even more geographical distribution, which further reduces outage impact. All these and other Azure features provide a certain level of protection within Azure Virtual Desktop, and you should carefully consider them along with any cost implications.
-The following table lists deployment recommendations for host pool disaster recovery strategies:
+The following table lists the technology areas you need to consider as part of your business continuity and disaster recovery strategy, with links to other Microsoft documentation that provides detailed guidance on how to plan for and mitigate disruption to your organization based on your requirements:
-| Technology | Recommendations |
-|-||
-| Network | Create and deploy a secondary virtual network in another region to follow custom naming conventions or security requirements outside of the Azure Site Recovery default naming scheme. |
-| Session hosts | [Enable and configure Azure Site Recovery for VMs](../site-recovery/azure-to-azure-tutorial-enable-replication.md). Optionally, you can pre-stage an image manually or use the Azure Image Builder service for ongoing provisioning. |
-| Storage | Creating an Azure Storage account is optional to store profiles. |
-| User profile data | User profile data is locally stored on drive C. |
-| Identity | Active Directory Domain Controllers from the same directory across multiple regions.|
+| Technology area | Documentation link |
+|--|--|
+| Active-passive vs active-active plans | [Active-Active vs. Active-Passive](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#active-active-vs-active-passive) |
+| Session host resiliency | [Multiregion Business Continuity and Disaster Recovery](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr) |
+| Disaster recovery plans | [Multiregion Business Continuity and Disaster Recovery](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#architecture-diagrams) |
+| Azure Site Recovery | [Failover and failback](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#failover-and-failback) |
+| Network connectivity | [Multiregion Business Continuity and Disaster Recovery](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#prerequisites) |
+| User profiles | [Design recommendations](/azure/cloud-adoption-framework/scenarios/azure-virtual-desktop/eslz-business-continuity-and-disaster-recovery#design-recommendations) |
+| File share storage | [Storage](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#storage) |
+| Identity provider | [Identity](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#identity) |
+| Backup | [Backup](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr#backup) |
-## Next steps
+## Related content
For more in-depth information about disaster recovery in Azure, check out these articles: -- [Cloud Adoption Framework Azure Virtual Desktop business continuity and disaster recovery documentation](/azure/cloud-adoption-framework/scenarios/wvd/eslz-business-continuity-and-disaster-recovery)
+- [Cloud Adoption Framework: Azure Virtual Desktop business continuity and disaster recovery documentation](/azure/cloud-adoption-framework/scenarios/wvd/eslz-business-continuity-and-disaster-recovery)
-- [Azure Virtual Desktop Handbook: Disaster Recovery](https://azure.microsoft.com/resources/azure-virtual-desktop-handbook-disaster-recovery/)
+- [Azure Architecture Center: Multiregion Business Continuity and Disaster Recovery (BCDR) for Azure Virtual Desktop](/azure/architecture/example-scenario/azure-virtual-desktop/azure-virtual-desktop-multi-region-bcdr)
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
- Title: Azure Virtual Desktop disaster recovery plan
-description: Make a disaster recovery plan for your Azure Virtual Desktop deployment to protect your data.
----- Previously updated : 05/24/2022---
-# Azure Virtual Desktop disaster recovery
-
-To keep your organization's data safe, you should adopt and manage a business continuity and disaster recovery (BCDR) strategy. A sound BCDR strategy keeps your apps and workloads up and running during planned and unplanned service or Azure outages. These plans should cover the session host virtual machines (VMs) managed by customers, as opposed to the Azure Virtual Desktop service that's managed by Microsoft. For more information about management areas, see [Azure Virtual Desktop disaster recovery concepts](disaster-recovery-concepts.md).
-
-The Azure Virtual Desktop service is designed with high availability in mind. Azure Virtual Desktop is a global service managed by Microsoft, with multiple instances of its independent components distributed across multiple Azure regions. If there's an unexpected outage in any of the components, your traffic will be diverted to one of the remaining instances or Microsoft will initiate a full failover to redundant infrastructure in another Azure region.
-
-To make sure users can still connect during a region outage in session host VMs, you need to design your infrastructure with high availability and disaster recovery in mind. A typical disaster recovery plan includes replicating virtual machines (VMs) to a different location. During outages, the primary site fails over to the replicated VMs in the secondary location. Users can continue to access apps from the secondary location without interruption. On top of VM replication, you'll need to keep user identities accessible at the secondary location. If you're using profile containers, you'll also need to replicate them. Finally, make sure your business apps that rely on data in the primary location can fail over with the rest of the data.
-
-To summarize, to keep your users connected during an outage, you'll need to do the following things:
-- Replicate the VMs to a secondary location.
-- If you're using profile containers, set up data replication in the secondary location.
-- Make sure user identities you set up in the primary location are available in the secondary location. To ensure availability, make sure your Active Directory Domain Controllers are available in or from the secondary location.
-- Make sure any line-of-business applications and data in your primary location are also failed over to the secondary location.
-
-## Active-passive and active-active disaster recovery plans
-
-There are two different types of disaster recovery infrastructure: active-passive and active-active. Each type of infrastructure works a different way, so let's look at what those differences are.
-
-Active-passive plans are when you have a region with one set of resources that's active and one that's turned off until it's needed (passive). If the active region is taken offline by an outage or disaster, the organization can switch to the passive region by turning it on and directing all the users there.
-
-Another option is an active-active deployment, where you use both sets of infrastructure at the same time. While some users may be affected by outages, the impact is limited to the users in the region that went down. Users in the other region that's still online won't be affected, and the recovery is limited to the users in the affected region reconnecting to the functioning active region. Active-active deployments can take many forms, including:
-- Overprovisioning infrastructure in each region to accommodate affected users in the event one of the regions goes down. A potential drawback to this method is that maintaining the additional resources costs more.
-- Have extra session hosts in both active regions, but deallocate them when they aren't needed, which reduces costs.
-- Only provision new infrastructure during disaster recovery and allow affected users to connect to the newly provisioned session hosts. This method requires regular testing with infrastructure-as-code tools so you can deploy the new infrastructure as quickly as possible during a disaster.
-
-For more information about types of disaster recovery plans you can use, see [Azure Virtual Desktop disaster recovery concepts](disaster-recovery-concepts.md).
-
-Identifying which method works best for your organization is the first thing you should do before you get started. Once you have your plan in place, you can start building your recovery plan.
-
-## VM replication
-
-First, you'll need to replicate your VMs to the secondary location. Your options for doing so depend on how your VMs are configured:
-- You can configure replication for all your VMs in both pooled and personal host pools with Azure Site Recovery. For more information about how this process works, see [Replicate Azure VMs to another Azure region](../site-recovery/azure-to-azure-how-to-enable-replication.md). However, if you have pooled host pools that you built from the same image and don't have any personal user data stored locally, you can choose not to replicate them. Instead, you have the option to build the VMs ahead of time and keep them powered off. You can also choose to only provision new VMs in the secondary region while a disaster is happening. If you choose these methods, you'll only need to set up one host pool and its related application groups and workspaces.
-- You can create a new host pool in the failover region while keeping all resources in your failover location turned off. For this method, you'd need to set up new application groups and workspaces in the failover region. You can then use an Azure Site Recovery plan to turn on host pools.
-- You can create a host pool that's populated by VMs built in both the primary and failover regions while keeping the VMs in the failover region turned off. In this case, you only need to set up one host pool and its related application groups and workspaces. You can use an Azure Site Recovery plan to power on host pools with this method.
-
-We recommend you use [Azure Site Recovery](../site-recovery/site-recovery-overview.md) to manage replicating VMs to other Azure locations, as described in [Azure-to-Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). We especially recommend using Azure Site Recovery for personal host pools because, true to their name, personal host pools tend to have something personal about them for their users. Azure Site Recovery supports both [server-based and client-based SKUs](../site-recovery/azure-to-azure-support-matrix.md#replicated-machine-operating-systems).
-
-If you use Azure Site Recovery, you won't need to register these VMs manually. The Azure Virtual Desktop agent in the secondary VM will automatically use the latest security token to connect to the service instance closest to it. The VM (session host) in the secondary location will automatically become part of the host pool. The end-user will have to reconnect during the process, but apart from that, there are no other manual operations.
-
-If there are existing user connections during the outage, you need to end those connections in the current region before you can start failing over to the secondary region.
-
-To disconnect users in Azure Virtual Desktop (classic), run this cmdlet:
-
-```powershell
-Invoke-RdsUserSessionLogoff -TenantName "<TenantName>" -HostPoolName "<HostPoolName>" -SessionHostName "<SessionHostName>" -SessionId <SessionId>
-```
-
-To disconnect users in Azure Virtual Desktop, run this cmdlet:
-
-```powershell
-Remove-AzWvdUserSession -ResourceGroupName "<ResourceGroupName>" -HostPoolName "<HostPoolName>" -SessionHostName "<SessionHostName>" -Id <SessionId>
-```
-
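To sign out every user in a host pool before failover, you can enumerate the sessions and remove each one. A minimal sketch, assuming hypothetical resource names and the Az.DesktopVirtualization module:

```powershell
# A minimal sketch with hypothetical names: remove every user session in a
# host pool. Session names are formatted as "<hostpool>/<sessionhost>/<id>".
$resourceGroup = "rg-avd-primary"
$hostPool = "hp-pooled-westus"

Get-AzWvdUserSession -ResourceGroupName $resourceGroup -HostPoolName $hostPool | ForEach-Object {
    $parts = $_.Name.Split("/")
    Remove-AzWvdUserSession -ResourceGroupName $resourceGroup -HostPoolName $hostPool `
        -SessionHostName $parts[1] -Id $parts[2]
}
```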
-Once you've signed out all users in the primary region, you can fail over the VMs in the primary region and let users connect to the VMs in the secondary region.
-
-## Virtual network
-
-Next, consider your network connectivity during the outage. You'll need to make sure you've set up a virtual network (VNET) in your secondary region. If your users need to access on-premises resources, you'll need to configure this VNET to access them. You can establish on-premises connections with a VPN, ExpressRoute, or virtual WAN.
-
-We recommend you use Azure Site Recovery to set up the VNET in the failover region because it preserves your primary network's settings and doesn't need peering.
-
-## User identities
-
-Next, ensure that the domain controller is available at the secondary location.
-
-There are three ways to keep the domain controller available:
-
- - Have one or more Active Directory Domain Controllers in the secondary location
- - Use an on-premises Active Directory Domain Controller
- - Replicate Active Directory Domain Controller using [Azure Site Recovery](../site-recovery/site-recovery-active-directory.md)
-
-## User profiles
-
-We recommend that you use FSLogix for managing user profiles. For information, see [Business continuity and disaster recovery options for FSLogix](/fslogix/concepts-container-recovery-business-continuity).
-
-## Back up your data
-
-You also have the option to back up your data. You can choose one of the following methods to back up your Azure Virtual Desktop data:
-
-- For Compute data, we recommend only backing up personal host pools with [Azure Backup](../backup/backup-azure-vms-introduction.md); a sketch follows this list.
-- For Storage data, the backup solution we recommend varies based on the back-end storage you used to store user profiles:
- - If you used Azure Files Share, we recommend using [Azure Backup for File Share](../backup/azure-file-share-backup-overview.md).
- - If you used Azure NetApp Files, we recommend using either [snapshots/policies](../azure-netapp-files/snapshots-manage-policy.md) or [Azure NetApp Files backup](../azure-netapp-files/backup-introduction.md).
-
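As an illustration of the Compute recommendation above, here's a hedged sketch that enables Azure Backup for a single personal session host. The vault, policy, and VM names are hypothetical, and it assumes an existing Recovery Services vault and the Az.RecoveryServices module:

```powershell
# A minimal sketch with hypothetical names: protect one personal session host
# VM with an existing backup policy in a Recovery Services vault.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-avd-primary" -Name "rsv-avd-backup"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "rg-avd-primary" `
    -Name "vm-personal-01" -Policy $policy -VaultId $vault.ID
```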
-## App dependencies
-
-Finally, make sure that any business apps that rely on data located in the primary region can fail over to the secondary location. Also, be sure to configure the settings the apps need to work in the new location. For example, if one of the apps is dependent on the SQL backend, make sure to replicate SQL in the secondary location. You should configure the app to use the secondary location as either part of the failover process or as its default configuration. You can model app dependencies on Azure Site Recovery plans. To learn more, see [About recovery plans](../site-recovery/recovery-plan-overview.md).
-
-## Disaster recovery testing
-
-After you're done setting up disaster recovery, you'll want to test your plan to make sure it works.
-
-Here are some suggestions for how to test your plan:
-- If the test VMs have internet access, they'll take over any existing session host for new connections, but all existing connections to the original session host will remain active. Make sure the admin running the test signs out all active users before testing the plan.
-- You should only do full disaster recovery tests during a maintenance window so you don't disrupt your users.
-- Make sure your test covers all business-critical applications and data.
-- We recommend you only fail over up to 100 VMs at a time. If you have more VMs than that, we recommend you fail them over in batches 10 minutes apart; see the sketch after this list.
-
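Here's a hedged sketch of that batching pattern. It assumes `$protectedItems` already holds the Azure Site Recovery replication protected items for your session hosts and that the Az.RecoveryServices module is loaded:

```powershell
# A minimal sketch: fail over at most 100 VMs at a time, waiting 10 minutes
# between batches. $protectedItems is assumed to be populated already.
$batchSize = 100
for ($i = 0; $i -lt $protectedItems.Count; $i += $batchSize) {
    $last = [Math]::Min($i + $batchSize, $protectedItems.Count) - 1
    foreach ($item in $protectedItems[$i..$last]) {
        Start-AzRecoveryServicesAsrUnplannedFailoverJob -ReplicationProtectedItem $item `
            -Direction PrimaryToRecovery | Out-Null
    }
    # Wait 10 minutes before the next batch, unless this was the last one
    if ($last -lt $protectedItems.Count - 1) { Start-Sleep -Seconds 600 }
}
```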
-## Next steps
-
-If you have questions about how to keep your data secure in addition to planning for outages, check out our [security guide](security-guide.md).
virtual-desktop Insights Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-use-cases.md
description: Learn about how using Azure Virtual Desktop Insights can help you u
Previously updated : 08/24/2023 Last updated : 06/21/2024 # Use cases for Azure Virtual Desktop Insights
-Using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop. It can help with checks such as which client versions are connecting, opportunities for cost saving, or knowing if you have resource limitations or connectivity issues. If you make changes, you can continually validate that the changes have had the intended effect, and iterate if needed. This article provides some use cases for Azure Virtual Desktop Insights and example scenarios using the Azure portal.
+Using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop. It can help with checks such as which client versions are connecting, opportunities for cost saving, or knowing if you have resource limitations or connectivity issues. If you make changes, you can continually validate that the changes have the intended effect, and iterate if needed. This article provides some use cases for Azure Virtual Desktop Insights and example scenarios using the Azure portal.
## Prerequisites

- An existing host pool with session hosts, and a workspace [configured to use Azure Virtual Desktop Insights](insights.md).

-- You need to have had active sessions for a period of time before you can make informed decisions.
+- You need to have active sessions for a period of time before you can make informed decisions.
## Connectivity
To view round-trip time:
:::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-3.png" alt-text="A screenshot of a table showing the round-trip time per user." lightbox="media/insights-use-cases/insights-connection-performance-latency-3.png":::
-There are several possibilities for why latency may be higher than anticipated for some users, such as a poor Wi-Fi connection, or issues with their Internet Service Provider (ISP). However, with a list of impacted users, you have the ability to proactively contact and attempt to resolve end-user experience problems by understanding their network connectivity.
+There are several possibilities for why latency might be higher than anticipated for some users, such as a poor Wi-Fi connection, or issues with their Internet Service Provider (ISP). However, with a list of impacted users, you have the ability to proactively contact and attempt to resolve end-user experience problems by understanding their network connectivity.
You should periodically review the round-trip time in your environment and the overall trend to identify potential performance concerns.
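If you'd rather query the underlying data than use the portal, the following hedged sketch runs a query against the Log Analytics workspace behind Azure Virtual Desktop Insights. The workspace ID is a placeholder, and the exact aggregation is an assumption about what you want to see:

```powershell
# A hedged sketch: average estimated round-trip time per user over the last
# seven days, from the WVDConnectionNetworkData and WVDConnections tables.
$query = @"
WVDConnectionNetworkData
| where TimeGenerated > ago(7d)
| join kind=inner (WVDConnections | where State == 'Connected') on CorrelationId
| summarize AvgRoundTripTimeMs = avg(EstRoundTripTimeInMs) by UserName
| order by AvgRoundTripTimeMs desc
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace GUID>" -Query $query
```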
+### Connection reliability
+
+The reliability of a connection can have a significant impact on the end-user experience. Azure Virtual Desktop Insights can help you understand disconnection events and correlations between errors that affect end users.
+
+Connection reliability provides two main views to help you understand the reliability of your connections:
+
+- A graph showing the number of disconnections over the concurrent connections in a given time range. This graph enables you to easily detect clusters of disconnects that are impacting connection reliability.
+
+- A table of the top 20 disconnection events, listing the top 20 specific time intervals where the most disconnections occurred. You can select a row in the table to highlight specific segments of the connection graph to view the disconnections that occurred at those specific time segments.
+
+You can also analyze connection errors by different pivots to determine the root cause of disconnects and improve connection reliability. Here are the available pivots:
+
+ | Pivot | Description |
+ |--|--|
+ | Subscription | Groups events by the subscription that contains related resources. When more than one subscription has Azure Virtual Desktop resources, it helps to determine whether issues are scoped to one or more subscriptions. |
+ | Resource group | Groups events by the resource group that contains related resources. |
+ | Host pool | Groups events by host pool. |
+ | Transport | Groups events by the network transport layer used for connections, either UDP or TCP.<br /><br />For UDP, valid values are `Relay`, `ShortpathPublic`, and `ShortpathPrivate`.<br /><br />For TCP, valid values are `NotUsed` and `<>` |
+ | Session host | Groups events by session host. |
+ | Session host IP/16 | Groups events by the IPv4 address of each session host, collated by the first two octets, for example (**1.2**.3.4). |
+ | Client type | Groups events by the client used to connect to a remote session, including platform and processor architecture of the connecting device. |
+ | Client version | Groups events by the version number of Windows App or the Remote Desktop app used to connect to a remote session. |
+ | Client IP/16 | Groups events by the IPv4 address of each client device connecting to a remote session, collated by the first two octets, for example (**1.2**.3.4). |
+ | Gateway region | Groups events by the Azure Virtual Desktop gateway region a client device connected through. For a list of gateway regions, see [Gateway region codes](insights-glossary.md#gateway-region-codes). |
+
+To view connection reliability information:
+
+1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi).
+
+1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Connection Reliability** tab. The table and graph populate with the top 20 disconnection events and a graph of concurrent connections and disconnections over time.
+
+1. In the graph, review the number of disconnections (shown in red) over the count of concurrent connections (shown in green).
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-reliability-top-20-table-graph-disconnects.png" alt-text="A screenshot showing the connection reliability tab of Azure Virtual Desktop Insights with the top 20 disconnection events table and concurrent connection graph with disconnects. " lightbox="media/insights-use-cases/insights-connection-reliability-top-20-table-graph-disconnects.png":::
+
+1. In the table, review the top 20 disconnection events. Select a row to highlight the specific time segment and neighboring time segments in the graph when the disconnections occurred.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-reliability-top-20-table-graph-disconnects-selection.png" alt-text="A screenshot showing the connection reliability tab of Azure Virtual Desktop Insights with the top 20 disconnection events table and concurrent connection graph with disconnects with an entry selected. " lightbox="media/insights-use-cases/insights-connection-reliability-top-20-table-graph-disconnects-selection.png":::
+
+1. When you select a row in the table, you can select one of the pivots to analyze the connection errors in further detail. You might need to scroll down to see all the relevant data available. By reviewing the connection errors across different pivots, you can look for commonalities of disconnections.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-reliability-pivots-events.png" alt-text="A screenshot showing the connection reliability tab of Azure Virtual Desktop Insights with list of pivoted events. " lightbox="media/insights-use-cases/insights-connection-reliability-pivots-events.png":::
+
+1. Select a specific time slice to view its details with the full list of connections in the time slice, their start and end dates, their duration, an indication of their success or failure, and the impacted user and session host.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-reliability-time-slice.png" alt-text="A screenshot showing the connection reliability tab of Azure Virtual Desktop Insights with the list of events for the time slice. " lightbox="media/insights-use-cases/insights-connection-reliability-time-slice.png":::
+
+1. To see the detailed history of a specific connection, select an entry in the **Details** section of a time slice. Selecting an entry generates a list of steps in the connection and any errors.
+
+ :::image type="content" source="media/insights-use-cases/insights-connection-reliability-connection-details.png" alt-text="A screenshot showing the connection reliability tab of Azure Virtual Desktop Insights with the details of a connection. " lightbox="media/insights-use-cases/insights-connection-reliability-connection-details.png":::
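Beyond the portal views in the steps above, you can also pull raw error counts to look for recurring disconnect causes. A minimal sketch, assuming the Az.OperationalInsights module and a placeholder workspace ID:

```powershell
# A hedged sketch: count errors from the WVDErrors table over the last seven
# days, grouped by symbolic error code, to spot recurring disconnect causes.
$query = @"
WVDErrors
| where TimeGenerated > ago(7d)
| summarize Count = count() by CodeSymbolic, ServiceError
| order by Count desc
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace GUID>" -Query $query
```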
+
## Session host performance

Issues with session hosts, such as where session hosts have too many sessions to cope with the workload end-users are running, can be a major cause of poor end-user experience. Azure Virtual Desktop Insights can provide detailed information about resource utilization and [user input delay](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) to allow you to more easily and quickly find out if users are impacted by limitations for resources like CPU or memory.
To view session host performance:
1. If you find higher than expected user input delay (>100 ms), it can be useful to then look at the aggregated statistics for CPU, memory, and disk activity for the session hosts to see if there are periods of higher-than-expected utilization. The graphs for **Host CPU and memory metrics**, **Host disk timing metrics**, and **Host disk queue length** show either the aggregate across session hosts, or a selected session host's resource metrics.
- In this example there are some periods of higher disk read times that correlate with the higher user input delay above.
+ In this example, there are some periods of higher disk read times that correlate with the higher user input delay.
:::image type="content" source="media/insights-use-cases/insights-session-host-performance-2.png" alt-text="A screenshot of graphs showing session host metrics." lightbox="media/insights-use-cases/insights-session-host-performance-2.png":::

1. For more information about a specific session host, select the **Host Diagnostics** tab.
-1. Review the section for **Performance counters** to see a quick summary of any devices that have crossed the specified thresholds for:
+1. Review the section for **Performance counters** to see a quick summary of any devices that crossed the specified thresholds for:
+ - Available MBytes (available memory)
 - Page Faults/sec
 - CPU Utilization
In cases where a session host has extended periods of high resource utilization,
## Client version usage
-A common source of issues for end-users of Azure Virtual Desktop is using older clients that may either be missing new or updated features, or have known issues that have been resolved with more recent versions. Azure Virtual Desktop Insights contains a list of the different clients in use, as well as identifying clients that may be out of date.
+A common source of issues for end-users of Azure Virtual Desktop is using older clients that might either be missing new or updated features, or contain known issues that are resolved with more recent versions. Azure Virtual Desktop Insights contains a list of the different clients in use, and identifies clients that might be out of date.
To view a list of users with outdated clients:
To view a list of users with outdated clients:
1. Review the section for **Users with potentially outdated clients (all activity types)**. A summary table shows the highest version level of each client found connecting to your environment (marked as **Newest**) in the selected time range, and the count of users using outdated versions (in parentheses).
- In the below example, the newest version of the Microsoft Remote Desktop Client for Windows (MSRDC) is 1.2.4487.0, and 993 users are currently using a version older than that. It also shows a count of connections and the number of days behind the latest version the older clients are.
+   In the below example, the newest version of the Microsoft Remote Desktop Client for Windows (MSRDC) is 1.2.4487.0, and 993 users are currently using an older version. It also shows a count of connections and the number of days behind the latest version the older clients are.
:::image type="content" source="media/insights-use-cases/insights-client-version-usage-1.png" alt-text="A screenshot showing a table of outdated clients." lightbox="media/insights-use-cases/insights-client-version-usage-1.png":::
To view session host utilization:
1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Utilization** tab.
-1. Review the **Session history** chart, which displays the number of active and idle (disconnected) sessions over time. Identify any periods of high activity, and periods of low activity from the peak user session count and the time period in which the peaks occur. If you find a regular, repeated pattern of activity, this usually implies there's a good opportunity to implement a scaling plan.
+1. Review the **Session history** chart, which displays the number of active and idle (disconnected) sessions over time. Identify any periods of high activity, and periods of low activity from the peak user session count and the time period in which the peaks occur. If you find a regular, repeated pattern of activity, it usually implies there's a good opportunity to implement a scaling plan.
- In this example, the graph shows the number of users sessions over the course of a week. Peaks occur at around midday on weekdays, and there's a noticeable lack of activity over the weekend. This suggests that there's an opportunity to scale session hosts to meet demand during the week, and reduce the number of session hosts over the weekend.
+   In this example, the graph shows the number of user sessions over the course of a week. Peaks occur at around midday on weekdays, and there's a noticeable lack of activity over the weekend. This pattern suggests that there's an opportunity to scale session hosts to meet demand during the week, and reduce the number of session hosts over the weekend.
:::image type="content" source="media/insights-use-cases/insights-session-count-over-time.png" alt-text="A screenshot of a graph showing the number of users sessions over the course of a week." lightbox="media/insights-use-cases/insights-session-count-over-time.png":::
To view session host utilization:
:::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time.png" alt-text="A screenshot of a graph showing the number of active and idle session hosts over the course of a week." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time.png":::
-1. Use the drop-down lists to reduce the scope to a single host pool and repeat the analysis for **session history** and **session host count**. At this scope you can identify patterns that are specific to the session hosts in a particular host pool to help develop a scaling plan for that host pool.
+1. Use the drop-down lists to reduce the scope to a single host pool and repeat the analysis for **session history** and **session host count**. At this scope, you can identify patterns that are specific to the session hosts in a particular host pool to help develop a scaling plan for that host pool.
In this example, the first graph shows the pattern of user activity throughout a week between 6AM and 10PM. On the weekend, there's minimal activity. The second graph shows the number of active and idle session hosts throughout the same week. There are long periods of time where idle session hosts are powered on. Use this information to help determine optimal ramp-up and ramp-down times for a scaling plan.
To view session host utilization:
:::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png" alt-text="A graph showing the number of active and idle session hosts over the course of a week for a single host pool." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png":::
-1. [Create a scaling plan](autoscale-scaling-plan.md) based on the usage patterns you've identified, then [assign the scaling plan to your host pool](autoscale-new-existing-host-pool.md).
+1. [Create a scaling plan](autoscale-scaling-plan.md) based on the usage patterns you identify, then [assign the scaling plan to your host pool](autoscale-new-existing-host-pool.md).
After a period of time, you should repeat this process to validate that your session hosts are being utilized effectively. You can make changes to the scaling plan if needed, and continue to iterate until you find the optimal scaling plan for your usage patterns.
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
Title: Screen capture protection in Azure Virtual Desktop-
-description: Learn how to enable screen capture protection in Azure Virtual Desktop (preview) to help prevent sensitive information from being captured on client endpoints.
-
+description: Learn how to enable screen capture protection in Azure Virtual Desktop (preview) to help prevent sensitive information from being captured on client devices.
Previously updated : 07/21/2023+ Last updated : 06/28/2024 # Enable screen capture protection in Azure Virtual Desktop
-Screen capture protection, alongside [watermarking](watermarking.md), helps prevent sensitive information from being captured on client endpoints through a specific set of operating system (OS) features and Application Programming Interfaces (APIs). When you enable screen capture protection, remote content is automatically blocked in screenshots and screen sharing.
+Screen capture protection, alongside [watermarking](watermarking.md), helps prevent sensitive information from being captured on client endpoints through a specific set of operating system (OS) features and Application Programming Interfaces (APIs). When you enable screen capture protection, remote content is automatically blocked in screenshots and screen sharing. You can configure screen capture protection using Microsoft Intune or Group Policy on your session hosts.
There are two supported scenarios for screen capture protection, depending on the version of Windows you're using: -- **Block screen capture on client**: the session host instructs a supported Remote Desktop client to enable screen capture protection for a remote session. This prevents screen capture from the client of applications running in the remote session.
+- **Block screen capture on client**: the session host instructs a supported Remote Desktop client to enable screen capture protection for a remote session. This option prevents screen capture from the client of applications running in the remote session.
-- **Block screen capture on client and server**: the session host instructs a supported Remote Desktop client to enable screen capture protection for a remote session. This prevents screen capture from the client of applications running in the remote session, but also prevents tools and services within the session host from capturing the screen.
+- **Block screen capture on client and server**: the session host instructs a supported Remote Desktop client to enable screen capture protection for a remote session. This option prevents screen capture from the client of applications running in the remote session, but also prevents tools and services within the session host from capturing the screen.
When screen capture protection is enabled, users can't share their Remote Desktop window using local collaboration software, such as Microsoft Teams. With Teams, neither the local Teams app or using [Teams with media optimization](teams-on-avd.md) can share protected content.
When screen capture protection is enabled, users can't share their Remote Deskto
- **Block screen capture on client** is available with a [supported version of Windows 10 or Windows 11](prerequisites.md#operating-systems-and-licenses).
- **Block screen capture on client and server** is available starting with Windows 11, version 22H2.

-- Users must connect to Azure Virtual Desktop with one of the following Remote Desktop clients to use screen capture protection. If a user tries to connect with a different client or version, the connection is denied and shows an error message with the code `0x1151`.
+- Users must connect to Azure Virtual Desktop with Windows App or the Remote Desktop app to use screen capture protection. The following table shows supported scenarios. If a user tries to connect with a different app or version, the connection is denied and shows an error message with the code `0x1151`.
- | Client | Client version | Desktop session | RemoteApp session |
+ | App | Version | Desktop session | RemoteApp session |
|--|--|--|--|
- | Remote Desktop client for Windows | 1.2.1672 or later | Yes | Yes. Client device OS must be Windows 11, version 22H2 or later. |
+ | Windows App on Windows | Any | Yes | Yes. Client device OS must be Windows 11, version 22H2 or later. |
+ | Remote Desktop client on Windows | 1.2.1672 or later | Yes | Yes. Client device OS must be Windows 11, version 22H2 or later. |
| Azure Virtual Desktop Store app | Any | Yes | Yes. Client device OS must be Windows 11, version 22H2 or later. |
- | Remote Desktop client for macOS | 10.7.0 or later | Yes | Yes |
+ | Windows App on macOS | Any | Yes | Yes |
+ | Remote Desktop client on macOS | 10.7.0 or later | Yes | Yes |
+
+- To configure Microsoft Intune, you need:
+
+   - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that is a member of the **Domain Admins** security group.
+
+ - A security group or organizational unit (OU) containing the devices you want to configure.
## Enable screen capture protection
-Screen capture protection is configured on session hosts and enforced by the client. You configure the settings by using Intune or Group Policy.
+Screen capture protection is configured on session hosts and enforced by the client. Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To configure screen capture protection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**.
+
+ :::image type="content" source="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png" alt-text="A screenshot showing the Azure Virtual Desktop options in the Microsoft Intune portal." lightbox="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png":::
+
+1. Check the box for **Enable screen capture protection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Enable screen capture protection** to **Enabled**.
+
+ :::image type="content" source="media/screen-capture-protection/screen-capture-protection-intune.png" alt-text="A screenshot showing the screen capture protection settings in Microsoft Intune." lightbox="media/screen-capture-protection/screen-capture-protection-intune.png":::
+
+1. Toggle the switch for **Screen Capture Protection Options (Device)** to **off** for **Block screen capture on client**, or **on** for **Block screen capture on client and server** based on your requirements, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To configure screen capture protection using Group Policy:
+
+1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md) available to Group Policy.
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**.
+
+ :::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="A screenshot showing the Azure Virtual Desktop options in Group Policy." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png":::
+
+1. Double-click the policy setting **Enable screen capture protection** to open it, then select **Enabled**.
+
+ :::image type="content" source="media/screen-capture-protection/screen-capture-protection-group-policy.png" alt-text="A screenshot showing the screen capture protection settings in Group Policy." lightbox="media/screen-capture-protection/screen-capture-protection-group-policy.png":::
+
+1. From the drop-down menu, select the screen capture protection scenario you want to use from **Block screen capture on client** or **Block screen capture on client and server** based on your requirements, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
++
-To configure screen capture protection:
+## Verify screen capture protection
-1. Follow the steps to make the [Administrative template for Azure Virtual Desktop](administrative-template.md) available.
+To verify screen capture protection is working:
-1. Once you've verified that the administrative template is available, open the policy setting **Enable screen capture protection** and set it to **Enabled**.
+1. Connect to a remote session with a supported client.
-1. From the drop-down menu, select the screen capture protection scenario you want to use from **Block screen capture on client** or **Block screen capture on client and server**.
+1. Take a screenshot or share your screen in a Teams call or meeting. The content should be blocked or hidden. Users with existing sessions need to sign out and back in again for the change to take effect.
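You can also spot-check that the policy reached a session host. The following is a hedged sketch: the registry value name `fEnableScreenCaptureProtect` is an assumption based on the administrative template and might differ in your template version:

```powershell
# A hedged sketch, run on the session host: read the policy value the
# administrative template is assumed to set (1 = protection enabled).
# The value name is an assumption; verify against your template version.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
(Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).fEnableScreenCaptureProtect
```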
-1. Apply the policy settings to your session hosts by running a Group Policy update or Intune device sync.
-1. Connect to a remote session with a supported client and test screen capture protection is working by taking a screenshot or sharing your screen. The content should be blocked or hidden. Any existing sessions will need to sign out and back in again for the change to take effect.
+## Related content
-## Next steps
+- Enable [watermarking](watermarking.md), where admins can use a QR code to trace the session.
-Learn about how to secure your Azure Virtual Desktop deployment at [Security best practices](security-guide.md).
+- Learn about how to secure your Azure Virtual Desktop deployment at [Security best practices](security-guide.md).
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Use Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 12/06/2023 Last updated : 06/27/2024 # Use Microsoft Teams on Azure Virtual Desktop
-Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using a supported Remote Desktop client. You can still use Microsoft Teams on Azure Virtual Desktop with other clients without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms.
+Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using Windows App or the Remote Desktop client on a supported platform. You can still use Microsoft Teams on Azure Virtual Desktop with other clients without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms.
+
+There are two versions of Teams, *Classic Teams* and *[New Teams](/microsoftteams/new-teams-desktop-admin)*, and you can use either with Azure Virtual Desktop. New Teams has feature parity with Classic Teams and improves performance, reliability, and security.
+
+To redirect calling and meeting functionality to the local device, Azure Virtual Desktop uses an extra component: either *SlimCore* or the *WebRTC Redirector Service*. Which component is used depends on the version of Teams:
+
+- New Teams can use either SlimCore or the WebRTC Redirector Service. SlimCore is available in preview and you need [to opt in to the preview](/microsoftteams/public-preview-doc-updates?tabs=new-teams-client) to use it. If you use SlimCore, you should also install the WebRTC Redirector Service. This allows a user to fall back to WebRTC, such as if they roam between different devices that don't support the new optimization architecture. For more information about SlimCore and how to opt into the preview, see [New VDI solution for Teams](/microsoftteams/vdi-2).
+
+- Classic Teams uses the WebRTC Redirector Service.
> [!TIP]
-> The new Microsoft Teams app is now generally available to use with Azure Virtual Desktop, with feature parity with the classic Teams app and improved performance, reliability, and security.
->
> If you're using the [classic Teams app with Virtual Desktop Infrastructure (VDI) environments](/microsoftteams/teams-for-vdi), such as Azure Virtual Desktop, end of support is **October 1, 2024** and end of availability is **July 1, 2025**, after which you'll need to use the new Microsoft Teams app. For more information, see [End of availability for classic Teams app](/microsoftteams/teams-classic-client-end-of-availability).

## Prerequisites
-Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do these things:
+Before you can use Microsoft Teams on Azure Virtual Desktop, you need:
- [Prepare your network](/microsoftteams/prepare-network/) for Microsoft Teams.
Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do t
- For Windows, you also need to install the latest version of the [Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) on your client device and session hosts. The C++ Redistributable is required to use media optimization for Teams on Azure Virtual Desktop.

-- Install the latest [Remote Desktop client](./users/connect-windows.md) on a client device running Windows 10, Windows 10 IoT Enterprise, Windows 11, or macOS 10.14 or later that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
+- Install the latest version of [Windows App](/windows-app/get-started-connect-devices-desktops-apps) or the [Remote Desktop client](./users/connect-windows.md) on Windows or macOS that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
+
+ SlimCore is available on Windows with the following apps and versions:
+
+ - Windows App for Windows, version 1.3.252 or later
+ - Remote Desktop client for Windows, version 1.2.5405.0 or later
- If you use FSLogix for profile management and want to use the new Microsoft Teams app, you need to install FSLogix 2210 hotfix 3 (2.9.8716.30241) or later.
For more information about which features Teams on Azure Virtual Desktop support
## Prepare to install the Teams desktop app
-This section will show you how to install the Teams desktop app on your Windows 10 or 11 Enterprise multi-session or Windows 10 or 11 Enterprise VM image.
+This section shows you how to install the Teams desktop app on your Windows 10 or 11 Enterprise multi-session or Windows 10 or 11 Enterprise VM image.
### Enable media optimization for Teams
-To enable media optimization for Teams, set the following registry key on the host VM:
+To enable media optimization for Teams, set the following registry key on each session host:
1. From the start menu, run **Registry Editor** as an administrator. Go to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams`. Create the Teams key if it doesn't already exist.
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Name IsWVDEnvironment -
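If you'd rather script this step than use Registry Editor, a minimal PowerShell sketch follows, assuming the standard DWORD value of 1 for `IsWVDEnvironment`:

```powershell
# Create the Teams key if needed, then mark the session host as an Azure Virtual Desktop environment.
New-Item -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Name "IsWVDEnvironment" -PropertyType DWORD -Value 1 -Force
```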
### Install the Remote Desktop WebRTC Redirector Service
-The Remote Desktop WebRTC Redirector Service is required to run Teams on Azure Virtual Desktop. To install the service:
+You need to install the WebRTC Redirector Service on each session host. You can install the [MSI file](https://aka.ms/msrdcwebrtcsvc/msi) by using a management tool such as [Configuration Manager](/mem/configmgr/apps/get-started/create-and-deploy-an-application), or manually.
+
+To install the WebRTC Redirector Service manually:
1. Sign in to a session host as a local administrator.
The Remote Desktop WebRTC Redirector Service is required to run Teams on Azure V
You can find more information about the latest version of the WebRTC Redirector Service at [What's new in the Remote Desktop WebRTC Redirector Service](whats-new-webrtc.md).
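For unattended installs at scale, a hedged sketch using `msiexec` follows. The local file name is illustrative rather than the published name of the MSI:

```powershell
# Download and silently install the WebRTC Redirector Service on a session host.
New-Item -ItemType Directory -Path "C:\Temp" -Force | Out-Null
Invoke-WebRequest -Uri "https://aka.ms/msrdcwebrtcsvc/msi" -OutFile "C:\Temp\webrtc-redirector.msi"
Start-Process -FilePath "msiexec.exe" -ArgumentList "/i C:\Temp\webrtc-redirector.msi /qn /norestart" -Wait
```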
+> [!TIP]
+> If you want to use SlimCore, all of its required components come bundled with new Teams and Windows App or the Remote Desktop client.
+ ## Install Teams on session hosts
-You can deploy the Teams desktop app per-machine or per-user. For session hosts in a pooled host pool, you'll need to install Teams per-machine. To install Teams on your session hosts follow the steps in the relevant article:
+You can deploy the Teams desktop app per-machine or per-user. For session hosts in a pooled host pool, you need to install Teams per-machine. To install Teams on your session hosts follow the steps in the relevant article:
- [Install the classic Teams app](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm). - [Install the new Teams app](/microsoftteams/new-teams-vdi-requirements-deploy).
You can deploy the Teams desktop app per-machine or per-user. For session hosts
After installing the WebRTC Redirector Service and the Teams desktop app, follow these steps to verify that Teams media optimizations loaded:
+1. Connect to a remote session.
+
1. Quit and restart the Teams application.
1. Select your user profile image, then select **About**.
1. Select **Version**.
- If media optimizations loaded, the banner will show you **Azure Virtual Desktop Media optimized**. If the banner shows you **Azure Virtual Desktop Media not connected**, quit the Teams app and try again.
+ If media optimizations loaded, the banner shows you **AVD SlimCore Media Optimized** or **AVD Media Optimized**. If the banner shows you **AVD Media not connected**, quit the Teams app and try again.
1. Select your user profile image, then select **Settings**.
- If media optimizations loaded, the audio devices and cameras available locally will be enumerated in the device menu. If the menu shows **Remote audio**, quit the Teams app and try again. If the devices still don't appear in the menu, check the Privacy settings on your local PC. Ensure the under **Settings** > **Privacy** > **App permissions - Microphone** the setting **"Allow apps to access your microphone"** is toggled **On**. Disconnect from the remote session, then reconnect and check the audio and video devices again. To join calls and meetings with video, you must also grant permission for apps to access your camera.
+ If media optimizations loaded, the audio devices and cameras available locally are enumerated in the device menu. If the menu shows **Remote audio**, quit the Teams app and try again. If the devices still don't appear in the menu, check the privacy settings on your local PC. Ensure that under **Settings** > **Privacy** > **App permissions - Microphone**, the setting **Allow apps to access your microphone** is toggled **On**. Disconnect from the remote session, then reconnect and check the audio and video devices again. To join calls and meetings with video, you must also grant permission for apps to access your camera.
- If optimizations don't load, uninstall then reinstall Teams and check again.
+ If media optimizations don't load, uninstall then reinstall Teams and check again.
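You can also confirm from PowerShell on the session host that the redirector service is installed and running. A wildcard match is used here because the exact service name registered by the MSI isn't stated in this article:

```powershell
# List any WebRTC-related services and their state.
Get-Service | Where-Object { $_.DisplayName -like "*WebRTC*" } |
    Select-Object Name, DisplayName, Status
```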
## Enable registry keys for optional features
-If you want to use certain optional features for Teams on Azure Virtual Desktop, you'll need to enable certain registry keys. The following instructions only apply to Windows client devices and session host VMs.
+If you want to use certain optional features for Teams on Azure Virtual Desktop, you need to enable certain registry keys. The following instructions only apply to Windows client devices and session host VMs.
### Enable hardware encode for Teams on Azure Virtual Desktop
-Hardware encode lets you increase video quality for the outgoing camera during Teams calls. In order to enable this feature, your client will need to be running version 1.2.3213 or later of the [Windows Desktop client](whats-new-client-windows.md). You'll need to repeat the following instructions for every client device.
+Hardware encode lets you increase video quality for the outgoing camera during Teams calls. In order to enable this feature, your client needs to be running version 1.2.3213 or later of the [Windows Desktop client](whats-new-client-windows.md). You need to repeat the following instructions for every client device.
To enable hardware encode:
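The remaining steps set a per-client registry value. The following is a sketch only, with the key path and value name assumed rather than confirmed here; verify both against the client documentation before you deploy:

```powershell
# Run on each client device, not on the session host.
$key = "HKCU:\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "UseHardwareEncoding" -PropertyType DWORD -Value 1 -Force
```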
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
You can modify a scale set to expand the set of zones over which to spread VM instances.
This feature can be used with API version 2023-03-01 or greater.
-### Enable your subscription to use zonal expansion feature
-
-You must register for four feature flags on your subscription:
-
-### [Azure CLI](#tab/cli-1)
--
-```azurecli
-az feature register --namespace Microsoft.Compute --name VmssAllowRegionalToZonalMigration
-az feature register --namespace Microsoft.Compute --name VmssAllowExpansionOfAvailabilityZones
-az feature register --namespace Microsoft.Compute --name VmssFlexAllowExpansionOfAvailabilityZones
-az feature register --namespace Microsoft.Compute --name VmssFlexAllowRegionalToZonalMigration
-```
-
-You can check the registration status of each feature by using:
-
-```azurecli
-az feature show --namespace Microsoft.Compute --name <feature-name>
-```
-
-### [Azure PowerShell](#tab/powershell-1)
--
-```powershell
-Register-AzProviderPreviewFeature -Name VmssAllowRegionalToZonalMigration -ProviderNamespace Microsoft.Compute
-Register-AzProviderPreviewFeature -Name VmssAllowExpansionOfAvailabilityZones -ProviderNamespace Microsoft.Compute
-Register-AzProviderPreviewFeature -Name VmssFlexAllowExpansionOfAvailabilityZones -ProviderNamespace Microsoft.Compute
-Register-AzProviderPreviewFeature -Name VmssFlexAllowRegionalToZonalMigration -ProviderNamespace Microsoft.Compute
-```
-
-You can check the registration status of each feature by using:
-
-```powershell
-Get-AzProviderPreviewFeature -Name <feature-name> -ProviderNamespace Microsoft.Compute
-```
### Expand scale set to use availability zones

You can update the scale set to scale out instances to one or more additional availability zones, up to the number of availability zones supported by the region. For regions that support zones, the minimum number of zones is 3.
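As an illustration with Azure PowerShell, the following sketch patches the `zones` property on an existing scale set. The resource names are placeholders, and the new zone list must be a superset of the current one:

```azurepowershell-interactive
# Sketch: expand an existing scale set from one zone to three.
$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
$vmss.Zones = @("1", "2", "3")
Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -VirtualMachineScaleSet $vmss
```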
virtual-machines Boot Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-integrity-monitoring-overview.md
Title: Boot integrity monitoring overview
-description: How to use the guest attestation extension to secure boot your VM. How to handle traffic blocking.
+description: Learn how to use the Guest Attestation extension to secure boot your virtual machine and how to handle traffic blocking.
# Boot integrity monitoring overview
-To help Trusted Launch better prevent malicious rootkit attacks on virtual machines, guest attestation through Microsoft Azure Attestation (MAA) endpoint is used to monitor the boot sequence integrity. This attestation is critical to provide validity of a platform's states. If your [Azure Trusted Virtual Machines](trusted-launch.md) has Secure Boot and vTPM enabled and attestation extensions installed, Microsoft Defender for Cloud verifies that the status and boot integrity of your VM is set up correctly. To learn more about MDC integration, see the [trusted launch integration with Microsoft Defender for Cloud](trusted-launch.md#microsoft-defender-for-cloud-integration).
+To help Azure Trusted Launch better prevent malicious rootkit attacks on virtual machines (VMs), guest attestation through an Azure Attestation endpoint is used to monitor the boot sequence integrity. This attestation is critical for validating a platform's state.
+
+Your [Trusted Launch VM](trusted-launch.md) needs Secure Boot and virtual Trusted Platform Module (vTPM) enabled so that the attestation extensions can be installed. Microsoft Defender for Cloud then offers reports, based on guest attestation, that verify that the status and boot integrity of your VM are set up correctly. To learn more about Microsoft Defender for Cloud integration, see [Trusted Launch integration with Microsoft Defender for Cloud](trusted-launch.md#microsoft-defender-for-cloud-integration).
> [!IMPORTANT]
-> Automatic Extension Upgrade is now available for Boot Integrity Monitoring - Guest Attestation extension. Learn more about [Automatic extension upgrade](automatic-extension-upgrade.md).
+> Automatic Extension Upgrade is now available for the Boot Integrity Monitoring - Guest Attestation extension. For more information, see [Automatic Extension Upgrade](automatic-extension-upgrade.md).
## Prerequisites
-An Active Azure Subscription + Trusted Launch Virtual Machine
+You need an active Azure subscription and a Trusted Launch VM.
## Enable integrity monitoring
+To enable integrity monitoring, follow the steps in this section.
+
### [Azure portal](#tab/portal)

1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the resource (**Virtual Machines**).
-1. Under **Settings**, select **configuration**. In the security type panel, select **integrity monitoring**.
+1. Under **Settings**, select **Configuration**. On the **Security type** pane, select **Integrity monitoring**.
- :::image type="content" source="media/trusted-launch/verify-integrity-boot-on.png" alt-text="Screenshot showing integrity booting selected.":::
+ :::image type="content" source="media/trusted-launch/verify-integrity-boot-on.png" alt-text="Screenshot that shows Integrity monitoring selected.":::
1. Save the changes.
-Now, under the virtual machines overview page, security type for integrity monitoring should state enabled.
+On the VM **Overview** page, the security type for integrity monitoring should appear as **Enabled**.
-This installs the guest attestation extension, which can be referred through settings within the extensions + applications tab.
+This action installs the Guest Attestation extension, which you can refer to via the settings on the **Extensions + Applications** tab.
### [Template](#tab/template)
-You can deploy the guest attestation extension for trusted launch VMs using a quickstart template:
+You can deploy the Guest Attestation extension for Trusted Launch VMs by using a quickstart template.
#### Windows
You can deploy the guest attestation extension for trusted launch VMs using a qu
] } ```
+
#### Linux

```json
You can deploy the guest attestation extension for trusted launch VMs using a qu
### [CLI](#tab/cli) -
-1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through initial deployment of trusted launch virtual machine. To deploy guest attestation extension use (`--enable-integrity-monitoring`). Configuration of virtual machines are customizable by virtual machine owner (`az vm create`).
-1. For existing VMs, you can enable boot integrity monitoring settings by updating to make sure enable integrity monitoring is turned on (`--enable-integrity-monitoring`).
+1. Create a VM with Trusted Launch that has Secure Boot and vTPM capabilities through initial deployment of a Trusted Launch VM. To deploy the Guest Attestation extension, use `--enable-integrity-monitoring`. As the VM owner, you can customize VM configuration by using `az vm create`.
+1. For existing VMs, you can enable boot integrity monitoring settings by updating to make sure that integrity monitoring is turned on. You can use `--enable-integrity-monitoring`.
> [!NOTE]
-> The Guest Attestation Extension needs to be configured explicitly.
+> The Guest Attestation extension must be configured explicitly.
### [PowerShell](#tab/powershell)
-If Secure Boot and vTPM are ON, boot integrity will be ON.
+If Secure Boot and vTPM are set to **ON**, then boot integrity is also set to **ON**.
-1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through initial deployment of the trusted launch virtual machine. Configuration of virtual machines is customizable by virtual machine owner.
-1. For existing VMs, you can enable boot integrity monitoring settings by updating to make sure both SecureBoot and vTPM are on.
+1. Create a VM with Trusted Launch that has Secure Boot and vTPM capabilities through initial deployment of a Trusted Launch VM. As the VM owner, you can customize VM configuration.
+1. For existing VMs, you can enable boot integrity monitoring settings by updating. Make sure that both Secure Boot and vTPM are set to **ON**.
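A minimal Azure PowerShell sketch of the update path follows. The resource names are placeholders, and it assumes the VM already uses the Trusted Launch security type:

```azurepowershell-interactive
# Sketch: turn on Secure Boot and vTPM for an existing Trusted Launch VM.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVm"
Stop-AzVM -ResourceGroupName "myResourceGroup" -Name "myVm" -Force
$vm = Set-AzVMUefi -VM $vm -EnableVtpm $true -EnableSecureBoot $true
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm
Start-AzVM -ResourceGroupName "myResourceGroup" -Name "myVm"
```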
-For more information on creation or updating a virtual machine to include the boot integrity monitoring through the guest attestation extension, see [Deploy a VM with trusted launch enabled (PowerShell)](trusted-launch-portal.md#deploy-a-trusted-launch-vm).
+For more information on creating or updating a VM to include boot integrity monitoring through the Guest Attestation extension, see [Deploy a VM with Trusted Launch enabled (PowerShell)](trusted-launch-portal.md#deploy-a-trusted-launch-vm).
-## Troubleshooting guide for guest attestation extension installation
+## Troubleshooting guide for Guest Attestation extension installation
+
+This section addresses attestation errors and solutions.
### Symptoms
-The Microsoft Azure Attestation extensions won't properly work when customers set up a network security group or proxy. An error that looks similar to (Microsoft.Azure.Security.WindowsAttestation.GuestAttestation provisioning failed.)
+The Azure Attestation extension won't work properly when you set up a network security group (NSG) or a proxy. An error appears that looks similar to "`Microsoft.Azure.Security.WindowsAttestation.GuestAttestation` provisioning failed."
### Solutions
-In Azure, Network Security Groups (NSG) are used to help filter network traffic between Azure resources. NSGs contains security rules that either allow or deny inbound network traffic, or outbound network traffic from several types of Azure resources. For the Microsoft Azure Attestation endpoint, it should be able to communicate with the guest attestation extension. Without this endpoint, Trusted Launch can't access guest attestation, which allows Microsoft Defender for Cloud to monitor the integrity of the boot sequence of your virtual machines.
+In Azure, NSGs are used to help filter network traffic between Azure resources. NSGs contain security rules that either allow or deny inbound or outbound network traffic from several types of Azure resources. The Azure Attestation endpoint should be able to communicate with the Guest Attestation extension. If the endpoint is blocked, Trusted Launch can't perform guest attestation, and Microsoft Defender for Cloud can't monitor the integrity of the boot sequence of your VMs.
-Unblocking Microsoft Azure Attestation traffic in **Network Security Groups** using service tags.
+To unblock Azure Attestation traffic in NSGs by using service tags:
-1. Navigate to the **virtual machine** that you want to allow outbound traffic.
-1. Under "Networking" in the left-hand sidebar, select the **networking settings** tab.
-1. Then select **create port rule**, and **Add outbound port rule**.
- :::image type="content" source="./media/trusted-launch/tvm-portrule.png" lightbox="./media/trusted-launch/tvm-portrule.png" alt-text="Screenshot of the add outbound port rule selection.":::
-1. To allow Microsoft Azure Attestation, make the destination a **service tag**. This allows for the range of IP addresses to update and automatically set allow rules for Microsoft Azure Attestation. The destination service tag is **AzureAttestation** and action is set to **Allow**.
- :::image type="content" source="media/trusted-launch/unblocking-NSG.png" alt-text="Screenshot showing how to make the destination a service tag.":::
+1. Go to the VM that you want to allow outbound traffic.
+1. On the leftmost pane, under **Networking**, select **Networking settings**.
+1. Then select **Create port rule** > **Outbound port rule**.
-Firewalls protect a virtual network, which contains multiple Trusted Launch virtual machines. To unblock Microsoft Azure Attestation traffic in **Firewall** using application rule collection.
+ :::image type="content" source="./media/trusted-launch/tvm-portrule.png" lightbox="./media/trusted-launch/tvm-portrule.png" alt-text="Screenshot that shows adding the Outbound port rule.":::
-1. Navigate to the Azure Firewall, that has traffic blocked from the Trusted Launch virtual machine resource.
-2. Under settings, select Rules (classic) to begin unblocking guest attestation behind the Firewall.
-3. Select a **network rule collection** and add network rule.
- :::image type="content" source="./media/trusted-launch/firewall-network-rule-collection.png" lightbox="./media/trusted-launch/firewall-network-rule-collection.png" alt-text="Screenshot of the adding application rule":::
-5. The user can configure their name, priority, source type, destination ports based on their needs. The name of the service tag is as follows: **AzureAttestation**, and action needs to be set as **allow**.
+1. To allow Azure Attestation, make the destination a service tag. A service tag tracks the service's IP address ranges as they update, so the rule continues to allow Azure Attestation automatically. Set **Destination service tag** to **AzureAttestation** and set **Action** to **Allow**.
-To unblock Microsoft Azure Attestation traffic in **Firewall** using application rule collection.
+ :::image type="content" source="media/trusted-launch/unblocking-NSG.png" alt-text="Screenshot that shows how to make the destination a service tag.":::
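If you script this instead of using the portal, a hedged Azure PowerShell sketch follows. The rule name, priority, and destination port 443 are assumptions to adjust for your environment:

```azurepowershell-interactive
# Sketch: add an outbound allow rule for the AzureAttestation service tag.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg"
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "AllowAzureAttestation" `
    -Access Allow -Direction Outbound -Priority 200 -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "AzureAttestation" -DestinationPortRange "443" |
    Set-AzNetworkSecurityGroup
```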
-1. Navigate to the Azure Firewall, that has traffic blocked from the Trusted Launch virtual machine resource.
-2. Select Application Rule collection and add an application rule.
-3. Select a name, a numeric priority for your application rules. The action for rule collection is set to ALLOW. To learn more about the application processing and values, read here.
-4. Name, source, protocol, are all configurable by the user. Source type for single IP address, select IP group to allow multiple IP address through the firewall.
+Firewalls protect a virtual network, which contains multiple Trusted Launch VMs. To unblock Azure Attestation traffic in a firewall by using an application rule collection:
-### Regional Shared Providers
+1. Go to the Azure Firewall instance that has traffic blocked from the Trusted Launch VM resource.
+1. Under **Settings**, select **Rules (classic)** to begin unblocking guest attestation behind the firewall.
+1. Under **Network rule collection**, select **Add network rule collection**.
-Azure Attestation provides a [regional shared provider](https://maainfo.azurewebsites.net/) in each available region. Customers can choose to use the regional shared provider for attestation or create their own providers with custom policies. Shared providers can be accessed by any Azure AD user, and the policy associated with it cannot be changed.
+ :::image type="content" source="./media/trusted-launch/firewall-network-rule-collection.png" lightbox="./media/trusted-launch/firewall-network-rule-collection.png" alt-text="Screenshot that shows adding a network rule collection.":::
-> [!NOTE]
-> Users can configure their source type, service, destination port ranges, protocol, priority, and name.
+1. Configure the name, priority, source type, and destination ports based on your needs. Set **Service tag name** to **AzureAttestation** and set **Action** to **Allow**.
+
+To unblock Azure Attestation traffic in a firewall by using an application rule collection:
+
+1. Go to the Azure Firewall instance that has traffic blocked from the Trusted Launch VM resource.
+ :::image type="content" source="./media/trusted-launch/firewall-rule.png" lightbox="./media/trusted-launch/firewall-rule.png" alt-text="Screenshot that shows adding traffic for the application rule route.":::
+
+ The rule collection must contain at least one rule that targets fully qualified domain names (FQDNs).
+
+1. Select the application rule collection and add an application rule.
+1. Select a name and a numeric priority for your application rules. Set **Action** for the rule collection to **Allow**.
+
+ :::image type="content" source="./media/trusted-launch/firewall-application-rule.png" lightbox="./media/trusted-launch/firewall-application-rule.png" alt-text="Screenshot that shows adding the application rule route.":::
+
+1. Configure the name, source, and protocol. For **Source type**, use a single IP address, or select an IP group to allow multiple IP addresses through the firewall.
+
+### Regional shared providers
+
+Azure Attestation provides a [regional shared provider](https://maainfo.azurewebsites.net/) in each available region. You can choose to use the regional shared provider for attestation or create your own providers with custom policies. Any Microsoft Entra user can access shared providers. The policy associated with it can't be changed.
+
+> [!NOTE]
+> You can configure the source type, service, destination port ranges, protocol, priority, and name.
-## Next steps
+## Related content
-Learn more about [trusted launch](trusted-launch.md) and [deploying a trusted virtual machine](trusted-launch-portal.md).
+Learn more about [Trusted Launch](trusted-launch.md) and [deploying a Trusted Launch VM](trusted-launch-portal.md).
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Last updated 03/28/2023
# Azure Linux VM Agent overview
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- The Microsoft Azure Linux VM Agent (waagent) manages Linux and FreeBSD provisioning, along with virtual machine (VM) interaction with the Azure fabric controller. In addition to the Linux agent providing provisioning functionality, Azure provides the option of using cloud-init for some Linux operating systems. The Linux agent provides the following functionality for Linux and FreeBSD Azure Virtual Machines deployments. For more information, see the [Azure Linux VM Agent readme on GitHub](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
Testing has confirmed that the following systems work with the Azure Linux VM Ag
| Distribution | x64 | ARM64 |
|:--|:--:|:--:|
| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
| Debian | 10+ | 11.x+ |
| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
| Azure Linux | 2.x | 2.x |
Testing has confirmed that the following systems work with the Azure Linux VM Ag
> [!IMPORTANT]
> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with Extended Lifecycle Support available. [The extended maintenance ends on June 30, 2024](https://access.redhat.com/support/policy/updates/errata).
+
Other supported systems:

- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [GitHub repository](https://github.com/Azure/WALinuxAgent) and we may be able to help.
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Last updated 03/31/2023
# Use the Azure Custom Script Extension Version 2 with Linux virtual machines
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). Use this extension for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime. The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, Azure PowerShell, or the Azure Virtual Machines REST API.
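As a quick illustration of the Azure PowerShell path, a hedged sketch follows. The script URI and command are placeholders, while the publisher and type shown are the values commonly used for version 2:

```azurepowershell-interactive
# Sketch: run a script from a reachable URI on a Linux VM.
$settings = @{
    fileUris         = @("https://example.blob.core.windows.net/scripts/setup.sh")  # placeholder
    commandToExecute = "bash setup.sh"
}
Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVm" `
    -Name "CustomScript" -Publisher "Microsoft.Azure.Extensions" `
    -ExtensionType "CustomScript" -TypeHandlerVersion "2.1" -Settings $settings
```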
Use Version 2 for new and existing deployments. The new version is a drop-in rep
| Distribution | x64 | ARM64 |
|:--|:|:|
| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
| Debian | 10+ | 11.x+ |
| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
| Azure Linux | 2.x | 2.x |
virtual-machines Tenable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/tenable.md
Last updated 07/18/2023
# Tenable One-Click Nessus Agent
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- Tenable now supports a One-Click deployment of Nessus Agents via Microsoft's Azure portal. This solution provides an easy way to install the latest version of Nessus Agent on Azure virtual machines (VM) (whether Linux or Windows) by either clicking on an icon within the Azure portal or by writing a few lines of PowerShell script. ## Prerequisites
Tenable now supports a One-Click deployment of Nessus Agents via Microsoft's Azu
Azure VM running any of the following:
-* CentOS 7 (x86_64)
-
* Debian 11 (x86_64)
* Oracle Linux 7 and 8 (x86_64)
virtual-machines Vmaccess Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess-linux.md
# VMAccess Extension for Linux
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- The VMAccess Extension is used to manage administrative users, configure SSH, and check or repair disks on Azure Linux virtual machines. The extension integrates with Azure Resource Manager templates. It can also be invoked using Azure CLI, Azure PowerShell, the Azure portal, and the Azure Virtual Machines REST API. This article describes how to run the VMAccess Extension from the Azure CLI and through an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
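For example, a hedged Azure PowerShell sketch of resetting a user's SSH key follows. The user name and key path are placeholders, and the publisher and type shown are the values commonly used for the Linux extension:

```azurepowershell-interactive
# Sketch: reset an admin user's SSH public key via VMAccess for Linux.
$protected = @{
    username = "azureuser"                              # placeholder
    ssh_key  = (Get-Content "~/.ssh/id_rsa.pub" -Raw)   # placeholder path
}
Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVm" `
    -Name "VMAccessForLinux" -Publisher "Microsoft.OSTCExtensions" `
    -ExtensionType "VMAccessForLinux" -TypeHandlerVersion "1.5" `
    -ProtectedSettings $protected
```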
This article describes how to run the VMAccess Extension from the Azure CLI and
| **Linux Distro** | **x64** | **ARM64** |
|:--|:--:|:--:|
| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
| Debian | 10+ | 11.x+ |
| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
| Azure Linux | 2.x | 2.x |
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Previously updated : 04/06/2023 Last updated : 06/27/2024
Once the above prerequisites are met, you're ready to connect to your VM. Open y
If you've never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, blindly accepting exposes you to a possible person-in-the-middle attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:

```azurepowershell-interactive
- Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString
+ Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunShellScript' -ScriptString
   "ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print `$2}'"
   ```
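Before accepting the prompt on first connection, you can compare that output against what your client sees over the network. A hedged sketch follows; the IP address is a placeholder, and `ssh-keyscan` and `ssh-keygen` must be available on your client:

```powershell
# Fetch the host's ECDSA key over the network and hash it locally,
# then compare the result with the fingerprint returned by Run Command.
ssh-keyscan -t ecdsa 203.0.113.10 2>$null | ssh-keygen -lf -
```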
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for Linux virtual machines
+ Title: Explore Azure Hybrid Benefit for Linux virtual machines
description: Learn how Azure Hybrid Benefit can save you money on Linux virtual machines.
Previously updated : 05/02/2023 Last updated : 06/27/2024
virtual-machines Debian Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/debian-create-upload-vhd.md
Previously updated : 05/01/2024 Last updated : 06/27/2024
This section assumes that you've already installed a Debian Linux operating syst
## Prepare a Debian image for Azure
-You can create the base Azure Debian cloud image with the [fully automatic installation (FAI) cloud image builder](https://salsa.debian.org/cloud-team/debian-cloud-images).
+You can create the base Azure Debian cloud image with the [fully automatic installation (FAI) cloud image builder](https://salsa.debian.org/cloud-team/debian-cloud-images). To prepare an image without FAI, check out the [generic steps article](./create-upload-generic.md).
The following git clone and apt installation commands were pulled from the Debian cloud images repo. Start by cloning the repo and installing dependencies:
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
For [Step 3: Select the appropriate role](../../role-based-access-control/role-a
For [Step 4: Select who needs access](../../role-based-access-control/role-assignments-portal.yml#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
-Then proceed to [Step 6: Assign role](../../role-based-access-control/role-assignments-portal.yml#step-6-assign-role) to assign the role.
+Then proceed to [Step 7: Assign role](../../role-based-access-control/role-assignments-portal.yml#step-7-assign-role) to assign the role.
## Troubleshoot build failures
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section assumes that you've already obtained an ISO file from the Red Hat w
1. Move (or remove) the udev rules to avoid generating static rules for the Ethernet interface. These rules cause problems when you clone a VM in Azure or Hyper-V:
+ > [!WARNING]
+ > Many 'v5' and newer VM sizes require Accelerated Networking. If this udev rule isn't in place, NetworkManager can assign the same IP address to all virtual function interfaces. To prevent duplicate IP addresses, make sure to include this udev rule when you migrate to a newer size.
   ```bash
   sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
   sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
   ```
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
# Run scripts in your Linux VM by using managed Run Commands
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets

> [!IMPORTANT]
The *updated* managed Run Command uses the same VM agent channel to execute scri
| **Linux Distro** | **x64** | **ARM64** |
|:--|:--:|:--:|
| Alma Linux | 9.x+ | Not Supported |
-| CentOS | 7.x+, 8.x+ | Not Supported |
| Debian | 10+ | Not Supported |
| Flatcar Linux | 3374.2.x+ | Not Supported |
| Azure Linux | 2.x | Not Supported |
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
ms.devlang: azurecli
# Run scripts in your Linux VM by using action Run Commands
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets

The Run Command feature uses the virtual machine (VM) agent to run shell scripts within an Azure Linux VM. You can use these scripts for general machine or application management. They can help you to quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
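As a minimal Azure PowerShell illustration (resource names are placeholders), an ad hoc script can be pushed through the action Run Command like this:

```azurepowershell-interactive
# Sketch: run a short shell script on a Linux VM and print its output.
$result = Invoke-AzVMRunCommand -ResourceGroupName "myResourceGroup" -VMName "myVm" `
    -CommandId "RunShellScript" -ScriptString "uname -a; df -h /"
$result.Value[0].Message
```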
This capability is useful in all scenarios where you want to run a script within
| **Linux Distro** | **x64** | **ARM64** |
|:--|:--:|:--:|
| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
| Debian | 10+ | 11.x+ |
| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
| Azure Linux | 2.x | 2.x |
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Title: Enable Trusted launch on existing VMs
-description: Enable Trusted launch on existing Azure VMs.
+ Title: Enable Trusted Launch on existing VMs
+description: Learn how to enable Trusted Launch on existing Azure virtual machines (VMs).
Last updated 08/13/2023
-# Enable Trusted launch on existing Azure VMs
+# Enable Trusted Launch on existing Azure VMs
**Applies to:** :heavy_check_mark: Linux VM :heavy_check_mark: Windows VM :heavy_check_mark: Generation 2 VM
-Azure Virtual Machines (VM) supports enabling Trusted launch on existing [Azure Generation 2](generation-2.md) VMs by upgrading to [Trusted launch](trusted-launch.md) security type.
+Azure Virtual Machines supports enabling Azure Trusted Launch on existing [Azure Generation 2](generation-2.md) virtual machines (VMs) by upgrading to the [Trusted Launch](trusted-launch.md) security type.
-[Trusted launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md) virtual machines and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, vTPM, and Boot Integrity Monitoring on your VM.
+[Trusted Launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md) VMs and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, virtual Trusted Platform Module (vTPM), and boot integrity monitoring on your VM.
> [!IMPORTANT]
->
-> - Support for **enabling Trusted launch on existing Azure Generation 1 VMs** is currently in private preview. You can gain access to preview using registration link **https://aka.ms/Gen1ToTLUpgrade**.
+> Support for *enabling Trusted Launch on existing Azure Generation 1 VMs* is currently in private preview. You can gain access to preview by using the [registration form](https://aka.ms/Gen1ToTLUpgrade).
## Prerequisites - Azure Generation 2 VM is configured with:
- - [Trusted launch supported size family](trusted-launch.md#virtual-machines-sizes)
- - [Trusted launch supported OS Image](trusted-launch.md#operating-systems-supported). For custom OS image or disks, the base image should be **Trusted launch capable**.
-- Azure Generation 2 VM is not using [features currently not supported with Trusted launch](trusted-launch.md#unsupported-features).
-- Azure Generation 2 VM should be **stopped and deallocated** before enabling Trusted launch security type.
-- Azure Backup if enabled for VM should be configured with [Enhanced Backup Policy](../backup/backup-azure-vms-enhanced-policy.md). Trusted launch security type cannot be enabled for Generation 2 VM configured with *Standard Policy* backup protection.
- - Existing Azure VM backup can be migrated from *Standard* to *Enhanced* policy using [Migrate Azure VM backups from standard to enhanced policy (preview)](../backup/backup-azure-vm-migrate-enhanced-policy.md).
+ - [Trusted Launch supported size family](trusted-launch.md#virtual-machines-sizes).
+ - [Trusted Launch supported operating system (OS) image](trusted-launch.md#operating-systems-supported). For custom OS images or disks, the base image should be *Trusted Launch capable*.
+- Azure Generation 2 VM isn't using [features currently not supported with Trusted Launch](trusted-launch.md#unsupported-features).
+- Azure Generation 2 VMs should be *stopped and deallocated* before you enable the Trusted Launch security type.
+- Azure Backup, if enabled, for VMs should be configured with the [Enhanced Backup policy](../backup/backup-azure-vms-enhanced-policy.md). The Trusted Launch security type can't be enabled for Generation 2 VMs configured with *Standard policy* backup protection.
+ - Existing Azure VM backup can be migrated from the *Standard* to the *Enhanced* policy. Follow the steps in [Migrate Azure VM backups from Standard to Enhanced policy (preview)](../backup/backup-azure-vm-migrate-enhanced-policy.md).
## Best practices

-- Enable Trusted launch on a test Generation 2 VM and ensure if any changes are required to meet the prerequisites before enabling Trusted launch on Generation 2 VMs associated with production workloads.
-- [Create restore point](create-restore-points.md) for Azure Generation 2 VM associated with production workloads before enabling Trusted launch security type. You can use the Restore Point to re-create the disks and Generation 2 VM with the previous well-known state.
+- Enable Trusted Launch on a test Generation 2 VM and determine if any changes are required to meet the prerequisites before you enable Trusted Launch on Generation 2 VMs associated with production workloads.
+- [Create restore points](create-restore-points.md) for Azure Generation 2 VMs associated with production workloads before you enable the Trusted Launch security type. You can use the restore points to re-create the disks and Generation 2 VM with the previous well-known state.
-## Enable Trusted launch on existing VM
+## Enable Trusted Launch on an existing VM
> [!NOTE] >
-> - After enabling Trusted launch, currently virtual machines cannot be rolled back to security type **Standard** (Non-Trusted launch configuration).
-> - **vTPM** is enabled by default.
-> - **Secure Boot** is recommended to be enabled (not enabled by default) if you are not using custom unsigned kernel or drivers. Secure Boot preserves boot integrity and enables foundational security for VM.
+> - After you enable Trusted Launch, currently VMs can't be rolled back to the Standard security type (non-Trusted Launch configuration).
+> - vTPM is enabled by default.
+> - We recommend that you enable Secure Boot, if you aren't using custom unsigned kernel or drivers. It's not enabled by default. Secure Boot preserves boot integrity and enables foundational security for VMs.
### [Portal](#tab/portal)
-Enable Trusted launch on existing Azure Generation 2 VM using the Azure portal.
+Enable Trusted Launch on an existing Azure Generation 2 VM by using the Azure portal.
-1. Sign in to [Azure portal](https://portal.azure.com)
-2. Validate virtual machine generation is **V2** and **Stop** VM.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Confirm that the VM generation is **V2** and select **Stop** for the VM.
- :::image type="content" source="./media/trusted-launch/02-generation-2-to-trusted-launch-stop-vm.png" alt-text="Screenshot of the Gen2 VM to be deallocated.":::
+ :::image type="content" source="./media/trusted-launch/02-generation-2-to-trusted-launch-stop-vm.png" alt-text="Screenshot that shows the Gen2 VM to be deallocated.":::
-3. On **Overview** page in VM **Properties**, Select **Standard** under **Security type**. This navigates to **Configuration** page for VM.
+1. On the **Overview** page in the VM properties, under **Security type**, select **Standard**. The **Configuration** page for the VM opens.
- :::image type="content" source="./media/trusted-launch/03-generation-2-to-trusted-launch-click-standard.png" alt-text="Screenshot of the Security type Standard.":::
+ :::image type="content" source="./media/trusted-launch/03-generation-2-to-trusted-launch-click-standard.png" alt-text="Screenshot that shows the Security type as Standard.":::
-4. Select drop-down **Security type** under **Security type** section of **Configuration** page.
+1. On the **Configuration** page, under the **Security type** section, select the **Security type** dropdown list.
- :::image type="content" source="./media/trusted-launch/04-generation-2-to-trusted-launch-select-dropdown.png" alt-text="Screenshot of the Security type drop-down.":::
+ :::image type="content" source="./media/trusted-launch/04-generation-2-to-trusted-launch-select-dropdown.png" alt-text="Screenshot that shows the Security type dropdown list.":::
-5. Select **Trusted launch** under drop-down and select check-boxes to enable **Secure Boot** and **vTPM**. Click **Save** after making required changes.
+1. Under the dropdown list, select **Trusted launch**. Select checkboxes to enable **Secure Boot** and **vTPM**. After you make the changes, select **Save**.
> [!NOTE] >
- > - Generation 2 VMs created using [Azure Compute Gallery (ACG)](azure-compute-gallery.md), [Managed Image](capture-image-resource.yml), [OS Disk](./scripts/create-vm-from-managed-os-disks.md) cannot be upgraded to Trusted launch using Portal. Please ensure [OS Version is supported for Trusted launch](trusted-launch.md#operating-systems-supported) and use PowerShell, CLI or ARM template to execute upgrade.
+ > - Generation 2 VMs created by using [Azure Compute Gallery (ACG)](azure-compute-gallery.md), [Managed image](capture-image-resource.yml), or an [OS disk](./scripts/create-vm-from-managed-os-disks.md) can't be upgraded to Trusted Launch by using the portal. Ensure that the [OS version is supported for Trusted Launch](trusted-launch.md#operating-systems-supported). Use PowerShell, the Azure CLI, or an Azure Resource Manager template (ARM template) to run the upgrade.
- :::image type="content" source="./media/trusted-launch/05-generation-2-to-trusted-launch-select-uefi-settings.png" alt-text="Screenshot of the Secure boot and vTPM settings.":::
+ :::image type="content" source="./media/trusted-launch/05-generation-2-to-trusted-launch-select-uefi-settings.png" alt-text="Screenshot that shows the Secure Boot and vTPM settings.":::
-6. Close the **Configuration** page once the update is successfully complete and validate **Security type** under VM properties on **Overview** page.
+1. After the update successfully finishes, close the **Configuration** page. On the **Overview** page in the VM properties, confirm the **Security type** settings.
- :::image type="content" source="./media/trusted-launch/06-generation-2-to-trusted-launch-validate-uefi.png" alt-text="Screenshot of the Trusted launch upgraded VM.":::
+ :::image type="content" source="./media/trusted-launch/06-generation-2-to-trusted-launch-validate-uefi.png" alt-text="Screenshot that shows the Trusted Launch upgraded VM.":::
-7. Start the upgraded Trusted launch VM and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either the Remote Desktop Protocol (RDP) for Windows VMs or the Secure Shell Protocol (SSH) for Linux VMs.
### [CLI](#tab/cli)
-This section steps through using the Azure CLI to enable Trusted launch on existing Azure Generation 2 VM.
+Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using the Azure CLI.
-Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli2) and are logged in to an Azure account with [az login](/cli/azure/reference-index).
+Make sure that you install the latest [Azure CLI](/cli/azure/install-az-cli2) and are signed in to an Azure account with [az login](/cli/azure/reference-index).
-1. Sign in to Azure Subscription
+1. Sign in to the VM Azure subscription.
```azurecli-interactive az login
Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli
az account set --subscription 00000000-0000-0000-0000-000000000000 ```
-2. **Deallocate** VM
+1. Deallocate the VM.

   ```azurecli-interactive
   az vm deallocate \
       --resource-group myResourceGroup --name myVm
   ```

-3. Enable Trusted launch by setting `--security-type` to `TrustedLaunch`.
+1. Enable Trusted Launch by setting `--security-type` to `TrustedLaunch`.
   ```azurecli-interactive
   az vm update \
       --resource-group myResourceGroup --name myVm \
       --security-type TrustedLaunch \
       --enable-secure-boot true --enable-vtpm true
   ```
-4. **Validate** output of previous command. `securityProfile` configuration is returned with command output.
+1. Validate the output of the previous command. Ensure that the `securityProfile` configuration is returned with the command output.
```json {
Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli
} ```
-5. **Start** the VM.
+1. Start the VM.
```azurecli-interactive az vm start \ --resource-group myResourceGroup --name myVm ```
-6. Start the upgraded Trusted launch VM and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
### [PowerShell](#tab/powershell)
-This section steps through using the Azure PowerShell to enable Trusted launch on existing Azure Generation 2 VM.
+Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using Azure PowerShell.
-Make sure that you've installed the latest [Azure PowerShell](/powershell/azure/install-azps-windows) and are logged in to an Azure account with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
+Make sure that you install the latest [Azure PowerShell](/powershell/azure/install-azps-windows) and are signed in to an Azure account with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
-1. Sign in to Azure Subscription
+1. Sign in to the VM Azure subscription.
```azurepowershell-interactive Connect-AzAccount -SubscriptionId 00000000-0000-0000-0000-000000000000 ```
-2. **Deallocate** VM
+1. Deallocate the VM.
```azurepowershell-interactive Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm ```
-3. Enable Trusted launch by setting `-SecurityType` to `TrustedLaunch`.
+1. Enable Trusted Launch by setting `-SecurityType` to `TrustedLaunch`.
   ```azurepowershell-interactive
   Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm `
       | Update-AzVM -SecurityType TrustedLaunch `
       -EnableSecureBoot $true -EnableVtpm $true
   ```
-4. **Validate** `securityProfile` in updated VM configuration.
+1. Validate `securityProfile` in the updated VM configuration.
```azurepowershell-interactive # Following command output should be `TrustedLaunch`
Make sure that you've installed the latest [Azure PowerShell](/powershell/azure/
```
-5. **Start** the VM.
+1. Start the VM.
```azurepowershell-interactive Start-AzVM -ResourceGroupName myResourceGroup -Name myVm ```
-6. Start the upgraded Trusted launch VM and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
### [Template](#tab/template)
-This section steps through using an ARM template to enable Trusted launch on existing Azure Generation 2 VM.
+Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using an ARM template.
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
This section steps through using an ARM template to enable Trusted launch on exi
} ```
-2. Edit the **parameters** json file with virtual machines to be updated with `TrustedLaunch` security type.
+1. Edit the `parameters` JSON file with VMs to be updated with the `TrustedLaunch` security type.
```json {
This section steps through using an ARM template to enable Trusted launch on exi
**Parameter file definition**
- Property | Description of Property | Example template value
+ Property | Description of property | Example template value
-|-|-
- vmName | Name of Azure Generation 2 VM | "myVm"
- location | Location of Azure Generation 2 VM | "westus3"
- secureBootEnabled | Enable secure boot with Trusted launch security type | true
+ vmName | Name of Azure Generation 2 VM. | `myVm`
+ location | Location of Azure Generation 2 VM. | `westus3`
+ secureBootEnabled | Enable Secure Boot with the Trusted Launch security type. | `true`
-3. **Deallocate** all Azure Generation 2 VM to be updated.
+1. Deallocate all Azure Generation 2 VMs to be updated.
```azurepowershell-interactive Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm01 ```
-4. Execute the ARM template deployment.
+1. Run the ARM template deployment.
```azurepowershell-interactive $resourceGroupName = "myResourceGroup"
This section steps through using an ARM template to enable Trusted launch on exi
-TemplateFile $templateFile -TemplateParameterFile $parameterFile ```
-5. Verify that the deployment is successful. Check for the security type and UEFI settings of the VM using Azure portal. Check the Security type section in the Overview page.
- :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot of the Trusted launch properties of the VM.":::
+ :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot that shows the Trusted Launch properties of the VM.":::
-6. Start the upgraded Trusted launch VM and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
-## Next steps
-
-**(Recommended)** Post-Upgrades enable [Boot integrity monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM using Microsoft Defender for Cloud.
+## Related content
-Learn more about [Trusted launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md)
+- After the upgrade, we recommend that you enable [boot integrity monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM by using Microsoft Defender for Cloud.
+- Learn more about [Trusted Launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md).
virtual-machines Trusted Launch Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-faq.md
# Trusted Launch FAQ

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that's nearing end-of-life (EOL) status. Consider your use and plan accordingly. For more information, see the [CentOS EOL guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
-Frequently asked questions about trusted launch. Feature use cases, support for other Azure features, and fixes for common errors.
+Frequently asked questions (FAQs) about Azure Trusted Launch feature use cases, support for other Azure features, and fixes for common errors.
## Use cases
-### Why should I use trusted launch? What does trusted launch guard against?
+This section answers questions about use cases for Trusted Launch.
-Trusted launch guards against boot kits, rootkits, and kernel-level malware. These sophisticated types of malware run in kernel mode and remain hidden from users. For example:
+### Why should I use Trusted Launch? What does Trusted Launch guard against?
-- Firmware rootkits: these kits overwrite the firmware of the virtual machine's BIOS, so the rootkit can start before the OS.
-- Boot kits: these kits replace the OS's bootloader so that the virtual machine loads the boot kit before the OS.
-- Kernel rootkits: these kits replace a portion of the OS kernel so the rootkit can start automatically when the OS loads.
-- Driver rootkits: these kits pretend to be one of the trusted drivers that OS uses to communicate with the virtual machine's components.
+Trusted Launch guards against boot kits, rootkits, and kernel-level malware. These sophisticated types of malware run in kernel mode and remain hidden from users. For example:
-### How does trusted launch compare to Hyper-V Shielded VM?
+- **Firmware rootkits**: These kits overwrite the firmware of the virtual machine (VM) BIOS, so the rootkit can start before the operating system (OS).
+- **Boot kits**: These kits replace the OS's bootloader so that the VM loads the boot kit before the OS.
+- **Kernel rootkits**: These kits replace a portion of the OS kernel, so the rootkit can start automatically when the OS loads.
+- **Driver rootkits**: These kits pretend to be one of the trusted drivers that the OS uses to communicate with the VM's components.
-Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed in with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are used in fabrics where the data and state of the virtual machine must be protected from various actors. These actors are both fabric administrators and untrusted software that might be running on the Hyper-V hosts. Trusted launch on the other hand can be deployed as a standalone virtual machine or Virtual Machine Scale Sets on Azure without other deployment and management of HGS. All of the trusted launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
+### How does Trusted Launch compare to Hyper-V Shielded VM?
-### Can I disable Trusted Launch for new VM deployment?
+Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are used in fabrics where the data and state of the VM must be protected from various actors. These actors are both fabric administrators and untrusted software that might be running on the Hyper-V hosts.
-Trusted Launch VMs provide you with foundational compute security and our recommendation isn't to disable same for new VM/VMSS deployments except if your deployments have dependency on:
+Trusted Launch, on the other hand, can be deployed as a standalone VM or as virtual machine scale sets on Azure without other deployment and management of HGS. All of the Trusted Launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
-- [VM Size families currently not supported with Trusted Launch](trusted-launch.md#virtual-machines-sizes)
-- [Feature currently not supported with Trusted Launch](trusted-launch.md#unsupported-features)
-- [OS version not supported with Trusted Launch](trusted-launch.md#operating-systems-supported)
+### Can I disable Trusted Launch for a new VM deployment?
-You can use parameter **securityType** with value `Standard` to disable Trusted Launch in new VM/VMSS deployments using Azure PowerShell (v10.3.0+) and CLI (v2.53.0+)
+Trusted Launch VMs provide you with foundational compute security. We recommend that you don't disable them for new VM or virtual machine scale set deployments except if your deployments depend on:
+
+- [A VM size currently not supported](trusted-launch.md#virtual-machines-sizes)
+- [Unsupported features with Trusted Launch](trusted-launch.md#unsupported-features)
+- [An OS that doesn't support Trusted Launch](trusted-launch.md#operating-systems-supported)
+
+You can use the `securityType` parameter with the `Standard` value to disable Trusted Launch in new VM or virtual machine scale set deployments by using Azure PowerShell (v10.3.0+) and the Azure CLI (v2.53.0+).
+
+> [!NOTE]
+> We don't recommend disabling Secure Boot unless you're using a custom unsigned kernel or unsigned drivers.
+
+If you need to disable Secure Boot, under the VM's configuration, clear the **Enable Secure Boot** option.
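Alternatively, a rough PowerShell sketch of the same change on an existing VM (resource names hypothetical):

```azurepowershell-interactive
# A sketch only: deallocate, clear Secure Boot, keep the vTPM, and restart.
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm -Force
Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm |
    Update-AzVM -EnableSecureBoot $false -EnableVtpm $true
Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
```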
#### [CLI](#tab/cli)
$vmCred = New-Object System.Management.Automation.PSCredential($adminUsername, $
New-AzVM -Name MyVm -Credential $vmCred -SecurityType Standard
```
-### Can I disable Secure Boot option for Trusted Launch VMs?
-
-Secure Boot is NOT enabled by default but it is recommended to enable it if you are not using custom unsigned kernel or drivers. Once a VM is created with Trusted Launch and Secure Boot option enabled, you can go to VM, under Settings tab, go to Configurations and unselect 'Enable secure boot' option.
-
## Supported features and deployments
-### Is Azure Compute Gallery supported by trusted launch?
+This section discusses Trusted Launch supported features and deployments.
+
+### Is Azure Compute Gallery supported by Trusted Launch?
-Trusted launch now allows images to be created and shared through the [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images) (formerly Shared Image Gallery). The image source can be:
+Trusted Launch now allows images to be created and shared through the [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images) (formerly Shared Image Gallery). The image source can be:
-- an existing Azure VM that is either generalized or specialized OR,
-- an existing managed disk or a snapshot OR,
-- a VHD or an image version from another gallery.
+- An existing Azure VM that is either generalized or specialized.
+- An existing managed disk or a snapshot.
+- A VHD or an image version from another gallery.
-For more information about deploying Trusted Launch VM using Azure Compute Gallery, see [deploy Trusted Launch VMs](trusted-launch-portal.md#deploy-a-trusted-launch-vm-from-an-azure-compute-gallery-image).
+For more information about deploying a Trusted Launch VM by using the Azure Compute Gallery, see [Deploy Trusted Launch VMs](trusted-launch-portal.md#deploy-a-trusted-launch-vm-from-an-azure-compute-gallery-image).
-### Is Azure Backup supported by trusted launch?
+### Is Azure Backup supported by Trusted Launch?
-Trusted launch now supports Azure Backup. For more information, see [Support matrix for Azure VM backup](../backup/backup-support-matrix-iaas.md#vm-compute-support).
+Trusted Launch now supports Azure Backup. For more information, see [Support matrix for Azure VM backup](../backup/backup-support-matrix-iaas.md#vm-compute-support).
-### Will Azure Backup continue working after enabling trusted launch?
+### Will Azure Backup continue working after I enable Trusted Launch?
-Backups configured with [enhanced policy](../backup/backup-azure-vms-enhanced-policy.md) will continue to take backup of VM after enabling Trusted Launch.
+Backups configured with the [Enhanced policy](../backup/backup-azure-vms-enhanced-policy.md) continue to take backups of VMs after you enable Trusted Launch.
-### Are Ephemeral OS disks supported by trusted launch?
+### Are ephemeral OS disks supported by Trusted Launch?
+
+Trusted Launch supports ephemeral OS disks. For more information, see [Trusted Launch for ephemeral OS disks](ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks).
-Trusted launch supports ephemeral OS disks. For more information, see [Trusted Launch for Ephemeral OS disks](ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks).
> [!NOTE]
-> While using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after the creation of the VM may not be persisted across operations like reimaging and platform events like service healing.
+> When you use ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the virtual Trusted Platform Module (vTPM) after the creation of the VM might not be persisted across operations like reimaging and platform events like service healing.
-### Can virtual machine be restored using backup taken before enabling trusted launch?
+### Can a VM be restored by using backups taken before Trusted Launch was enabled?
-Backups taken before [upgrading existing Generation 2 VM to Trusted Launch](trusted-launch-existing-vm.md) can be used to restore entire virtual machine or individual data disks. They can't be used to restore or replace OS disk only.
+Backups taken before you [upgrade an existing Generation 2 VM to Trusted Launch](trusted-launch-existing-vm.md) can be used to restore the entire VM or individual data disks. They can't be used to restore or replace the OS disk only.
-### How can I find VM sizes that support trusted launch?
+### How can I find VM sizes that support Trusted Launch?
-See the list of [Generation 2 VM sizes supporting Trusted launch](trusted-launch.md#virtual-machines-sizes).
+See the list of [Generation 2 VM sizes that support Trusted Launch](trusted-launch.md#virtual-machines-sizes).
-The following commands can be used to check if a [Generation 2 VM Size](../virtual-machines/generation-2.md#generation-2-vm-sizes) doesn't support Trusted launch.
+Use the following commands to check if a [Generation 2 VM size](../virtual-machines/generation-2.md#generation-2-vm-sizes) doesn't support Trusted Launch.
#### [CLI](#tab/cli)
$vmSize = "Standard_M64"
(Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) })[0].Capabilities
```
-The response is similar to the following form. `TrustedLaunchDisabled True` in the output indicates that the Generation 2 VM size doesn't support Trusted launch. If it's a Generation 2 VM size and `TrustedLaunchDisabled` isn't part of the output, it implies that Trusted launch is supported for that VM size.
+The response is similar to the following form. Output that includes `TrustedLaunchDisabled True` indicates that the Generation 2 VM size doesn't support Trusted Launch. If it's a Generation 2 VM size and `TrustedLaunchDisabled` isn't part of the output, Trusted Launch is supported for that VM size.
```
Name Value
MaxNetworkInterfaces 8
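Building on that output, a hypothetical helper can list candidate sizes in a region by filtering on the same capability names:

```azurepowershell-interactive
# A sketch only: list VM sizes that report Generation 2 support and
# don't carry the TrustedLaunchDisabled capability.
$region = "westus3"
Get-AzComputeResourceSku -Location $region |
    Where-Object { $_.ResourceType -eq "virtualMachines" } |
    Where-Object {
        ($_.Capabilities | Where-Object { $_.Name -eq "HyperVGenerations" -and $_.Value -match "V2" }) -and
        -not ($_.Capabilities | Where-Object { $_.Name -eq "TrustedLaunchDisabled" -and $_.Value -eq "True" })
    } |
    Select-Object -ExpandProperty Name
```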
-### How can I validate that my OS image supports trusted launch?
+### How can I validate that my OS image supports Trusted Launch?
-See the list of [OS versions supported with Trusted Launch](trusted-launch.md#operating-systems-supported),
+See the list of [OS versions supported with Trusted Launch](trusted-launch.md#operating-systems-supported).
-#### Marketplace OS Images
+#### Marketplace OS images
-The following commands can be used to check if a Marketplace OS image supports Trusted Launch.
+Use the following commands to check if an Azure Marketplace OS image supports Trusted Launch.
##### [CLI](#tab/cli)
The following commands can be used to check if a Marketplace OS image supports T
```azurecli-interactive
az vm image show --urn "MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest"
```
-The response is similar to the following form. **hyperVGeneration** `v2` and **SecurityType** contains `TrustedLaunch` in the output indicates that the Generation 2 OS Image supports Trusted Launch.
+The response is similar to the following form. If `hyperVGeneration` is `v2` and `SecurityType` contains `TrustedLaunch` in the output, the Generation 2 OS image supports Trusted Launch.
```json
{
The response is similar to the following form. **hyperVGeneration** `v2` and **S
Get-AzVMImage -Skus 22_04-lts-gen2 -PublisherName Canonical -Offer 0001-com-ubuntu-server-jammy -Location westus3 -Version latest
```
-The output of the command can be used with [Virtual Machines - Get API](/rest/api/compute/virtual-machine-images/get). The response is similar to the following form. **hyperVGeneration** `v2` and **SecurityType** contains `TrustedLaunch` in the output indicates that the Generation 2 OS Image supports Trusted Launch.
+You can use the output of the command with [Virtual machines - Get API](/rest/api/compute/virtual-machine-images/get). The response is similar to the following form. If `hyperVGeneration` is `v2` and `SecurityType` contains `TrustedLaunch` in the output, the Generation 2 OS image supports Trusted Launch.
```json
{
The output of the command can be used with [Virtual Machines - Get API](/rest/ap
-#### Azure Compute Gallery OS Image
+#### Azure Compute Gallery OS image
-The following commands can be used to check if a [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images) OS image supports Trusted Launch.
+Use the following commands to check if an [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images) OS image supports Trusted Launch.
##### [CLI](#tab/cli)
az sig image-definition show `
--resource-group myImageGalleryRg
```
-The response is similar to the following form. **hyperVGeneration** `v2` and **SecurityType** contains `TrustedLaunch` in the output indicates that the Generation 2 OS Image supports Trusted Launch.
+The response is similar to the following form. If `hyperVGeneration` is `v2` and `SecurityType` contains `TrustedLaunch` in the output, the Generation 2 OS image supports Trusted Launch.
```json
{
Get-AzGalleryImageDefinition -ResourceGroupName myImageGalleryRg `
-GalleryName myImageGallery -GalleryImageDefinitionName myImageDefinition
```
-The response is similar to the following form. **hyperVGeneration** `v2` and **SecurityType** contains `TrustedLaunch` in the output indicates that the Generation 2 OS Image supports Trusted Launch.
+The response is similar to the following form. If `hyperVGeneration` is `v2` and `SecurityType` contains `TrustedLaunch` in the output, the Generation 2 OS image supports Trusted Launch.
```
ResourceGroupName : myImageGalleryRg
Architecture : x64
-### How external communication drivers work with Trusted Launch VMs ?
+### How do external communication drivers work with Trusted Launch VMs?
-Adding COM ports requires disabling Secure Boot. Hence, COM ports are disabled by default in Trusted Launch VMs.
+Adding COM ports requires that you disable Secure Boot. COM ports are disabled by default in Trusted Launch VMs.
## Troubleshooting boot issues
-Feature specific states, boot types, and common boot issues.
+This section answers questions about specific states, boot types, and common boot issues.
-### What is VM Guest State (VMGS)?
+### What is VM Guest State (VMGS)?
-VM Guest State (VMGS) is specific to Trusted Launch VM. It's a blob managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS Disk.
+VM Guest State (VMGS) is specific to Trusted Launch VMs. It's a blob managed by Azure and contains the unified extensible firmware interface (UEFI) Secure Boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS disk.
-### What are the differences between secure boot and measured boot?
+### What are the differences between Secure Boot and measured boot?
-In secure boot chain, each step in the boot process checks a cryptographic signature of the subsequent steps. For example, the BIOS checks a signature on the loader, and the loader checks signatures on all the kernel objects that it loads, and so on. If any of the objects are compromised, the signature doesn't match, and the VM doesn't boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). Measured boot doesn't halt the boot process, it measures or computes the hash of the next objects in the chain and stores the hashes in the Platform Configuration Registers (PCRs) on the vTPM. Measured boot records are used for boot integrity monitoring.
+In a Secure Boot chain, each step in the boot process checks a cryptographic signature of the subsequent steps. For example, the BIOS checks a signature on the loader, and the loader checks signatures on all the kernel objects that it loads, and so on. If any of the objects are compromised, the signature doesn't match and the VM doesn't boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). Measured boot doesn't halt the boot process. Instead, it measures or computes the hashes of the next objects in the chain and stores the hashes in the Platform Configuration Registers (PCRs) on the vTPM. Measured boot records are used for boot integrity monitoring.
-### Why is Trusted Launch Virtual Machine not booting correctly?
+### Why is the Trusted Launch VM not booting correctly?
-If unsigned components are detected from the UEFI (guest firmware), bootloader, operating system, or boot drivers, a Trusted Launch Virtual Machine won't boot. The [secure boot](/windows-server/virtualization/hyper-v/learn-more/generation-2-virtual-machine-security-settings-for-hyper-v#secure-boot-setting-in-hyper-v-manager) setting in the Trusted Launch virtual machine fails to boot if unsigned or untrusted boot components are encountered during the boot process and will report as a secure boot failure.
+If unsigned components are detected from the UEFI (guest firmware), bootloader, OS, or boot drivers, a Trusted Launch VM won't boot. With the [Secure Boot](/windows-server/virtualization/hyper-v/learn-more/generation-2-virtual-machine-security-settings-for-hyper-v#secure-boot-setting-in-hyper-v-manager) setting enabled, a Trusted Launch VM fails to boot if unsigned or untrusted boot components are encountered during the boot process, and it reports the problem as a Secure Boot failure.
-![The trusted launch pipeline from secure boot to third party drivers](./media/trusted-launch/trusted-launch-pipeline.png)
+![Screenshot that shows the Trusted Launch pipeline from Secure Boot to third-party drivers.](./media/trusted-launch/trusted-launch-pipeline.png)
> [!NOTE]
-> Trusted Launch Virtual machines that are created directly from an Azure Marketplace image should not encounter Secure Boot failures. Azure Compute Gallery images with an original image source of Marketplace, and snapshots created from Trusted Launch VMs should also not encounter these errors.
+> Trusted Launch VMs that are created directly from an Azure Marketplace image should not encounter Secure Boot failures. Azure Compute Gallery images with an original image source of Azure Marketplace and snapshots created from Trusted Launch VMs should also not encounter these errors.
### How would I verify a no-boot scenario in the Azure portal?
-When a virtual machine becomes unavailable from a Secure Boot failure, 'no-boot' means that virtual machine has an operating system component that is signed by a trusted authority, which blocks booting a Trusted Launch VM. On VM deployment, customers may see information from resource health within the Azure portal stating that there's a validation error in secure boot.
+When a VM becomes unavailable from a Secure Boot failure, "no-boot" means that the VM has an OS component that isn't signed by a trusted authority, which blocks booting a Trusted Launch VM. On VM deployment, you might see information from resource health within the Azure portal stating that there's a validation error in Secure Boot.
-To access resource health from the virtual machine configuration page, navigate to Resource Health under the 'Help' panel.
+To access resource health from the VM configuration page, go to **Resource Health** under the **Help** pane.
-Follow the 'Recommended Steps' outlined in the resource health screen. Instructions include a screenshot and downloadable serial log from the boot diagnostics of the virtual machine.
+If you verified that the no-boot was caused by a Secure Boot failure:
-If you verified the no-boot was caused by a secure boot failure:
+1. The image you're using is an older version that might have one or more untrusted boot components and is on a deprecation path. To remedy an outdated image, update to a supported newer image version.
+1. The image you're using might have been built outside of a marketplace source or the boot components have been modified and contain unsigned or untrusted boot components. To verify whether your image has unsigned or untrusted boot components, see the following section, "Verify Secure Boot failures."
+1. If the preceding two scenarios don't apply, the VM is potentially infected with malware (bootkit/rootkit). Consider deleting the VM and re-creating a new VM from the same source image while you evaluate all the software being installed.
-1. The image you're using is an older version that may have one or more untrusted boot components and is on a deprecation path. To remedy an outdated image, update to a supported newer image version.
-2. The image you're using may have been built outside of a marketplace source or the boot components have been modified and contain unsigned or untrusted boot components. To verify if your image has unsigned or untrusted boot components, refer to 'Verifying secure boot failures'.
-3. If the above two scenarios don't apply, the virtual machine is potentially infected with malware (bootkit/rootkit). Consider deleting the virtual machine and re-creating a new VM from the same source image while evaluating all software being installed.
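To support that evaluation, you can download the serial log and console screenshot from boot diagnostics; a minimal sketch (hypothetical resource names and local path):

```azurepowershell-interactive
# A sketch only: pull the boot diagnostics serial log and screenshot for a Windows VM.
Get-AzVMBootDiagnosticsData -ResourceGroupName "myResourceGroup" -Name "myVm" -Windows -LocalPath "C:\diagnostics"
```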
+## Verify Secure Boot failures
-## Verifying secure boot failures
+This section helps you verify Secure Boot failures.
-### Linux Virtual Machines
+### Linux virtual machines
-To verify which boot components are responsible for Secure Boot failures within an Azure Linux Virtual Machine, end-users can use the SBInfo tool from the Linux Security Package.
+To verify which boot components are responsible for Secure Boot failures within an Azure Linux VM, you can use the SBInfo tool from the Linux Security Package.
-1. Turn off secure boot
-2. Connect to your Azure Linux Trusted Launch virtual machine.
-3. Install the SBInfo tool for the distro your virtual machine is running. It resides within the Linux Security Package
+1. Turn off Secure Boot.
+1. Connect to your Azure Linux Trusted Launch VM.
+1. Install the SBInfo tool for the distribution your VM is running. It resides within the Linux Security Package.
-#### [Debian-based distros](#tab/debianbased)
+#### [Debian-based distributions](#tab/debianbased)
-These commands apply to Ubuntu, Debian, and other debian-based distros.
+These commands apply to Ubuntu, Debian, and other Debian-based distributions.
```bash
echo "deb [arch=amd64] http://packages.microsoft.com/repos/azurecore/ trusty main" | sudo tee -a /etc/apt/sources.list.d/azure.list
sudo apt update && sudo apt install azure-security
```
-#### [Red Hat-based distros](#tab/rhelbased)
+#### [Red Hat-based distributions](#tab/rhelbased)
-These commands apply to RHEL, CentOS, and other Red Hat-based distros.
+These commands apply to RHEL, CentOS, and other Red Hat-based distributions.
```bash
echo "[packages-microsoft-com-azurecore]" | sudo tee -a /etc/yum.repos.d/azurecore.repo
echo "gpgcheck=0" | sudo tee -a /etc/yum.repos.d/azurecore.repo
sudo yum install azure-security
```
-#### [SUSE-based distros](#tab/susebased)
+#### [SUSE-based distributions](#tab/susebased)
-These commands apply to SLES, openSUSE, and other SUSE-based distros.
+These commands apply to SLES, openSUSE, and other SUSE-based distributions.
```bash
sudo zypper ar -t rpm-md -n "packages-microsoft-com-azurecore" --no-gpgcheck https://packages.microsoft.com/yumrepos/azurecore/ azurecore
sudo zypper install azure-security
-After installing the Linux Security Package for your distro, run the 'sbinfo' command to verify which boot components are responsible for Secure Boot failures by displaying all unsigned modules, kernels, and bootloaders.
+After you install the Linux Security Package for your distribution, run the `sbinfo` command to verify which boot components are responsible for Secure Boot failures by displaying all unsigned modules, kernels, and bootloaders.
```bash
sudo sbinfo -u -m -k -b
```
-To learn more about the SBInfo diagnostic tool, you can run 'sudo sbinfo -help'.
+To learn more about the SBInfo diagnostic tool, you can run `sudo sbinfo -help`.
### Why am I getting a boot integrity monitoring fault?
-Trusted launch for Azure virtual machines is monitored for advanced threats. If such threats are detected, an alert is triggered. Alerts are only available if [Defender for Cloud's enhanced security features](../security-center/enable-enhanced-security.md) are enabled.
+Trusted Launch for Azure VMs is monitored for advanced threats. If such threats are detected, an alert is triggered. Alerts are only available if [enhanced security features in Microsoft Defender for Cloud](../security-center/enable-enhanced-security.md) are enabled.
-Microsoft Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert is triggered. Trusted launch attestation can fail for the following reasons:
+Microsoft Defender for Cloud periodically performs attestation. If the attestation fails, a medium-severity alert is triggered. Trusted Launch attestation can fail for the following reasons:
-- The attested information, which includes a log of the Trusted Computing Base (TCB), deviates from a trusted baseline (like when Secure Boot is enabled). Any deviation indicates an untrusted module(s) were loaded and the OS may be compromised.
-- The attestation quote couldn't be verified to originate from the vTPM of the attested VM. The verification failure indicates a malware is present and may be intercepting traffic to the TPM.
+- The attested information, which includes a log of the Trusted Computing Base (TCB), deviates from a trusted baseline (like when Secure Boot is enabled). Any deviation indicates that untrusted modules were loaded and the OS might be compromised.
+- The attestation quote couldn't be verified to originate from the vTPM of the attested VM. The verification failure indicates malware is present and might be intercepting traffic to the TPM.
- The attestation extension on the VM isn't responding. An unresponsive extension indicates a denial-of-service attack by malware or an OS admin.

## Certificates
-### How can users establish root of trust with Trusted Launch VMs?
+This section provides information on certificates.
+
+### How can I establish root of trust with Trusted Launch VMs?
-The virtual TPM AK public certificate provides users with visibility for information on the full certificate chain (Root and Intermediate Certificates), helping them validate trust in certificate and root chain. To ensure Trusted Launch consumers continually have the highest security posture, it provides information on instance properties, so users can trace back to the full chain.
+The virtual TPM attestation key (AK) public certificate gives you visibility into the full certificate chain (root and intermediate certificates) so that you can validate trust in the certificate and root chain. To ensure that Trusted Launch consumers continually have the highest security posture, it provides information on instance properties so that you can trace back to the full chain.
#### Download instructions
-Package certificates, compromised of. p7b (Full Certificate Authority) and .cer (Intermediate CA), reveal the signing and certificate authority. Copy the relevant content and use certificate tooling to inspect and assess details of certificates.
+Package certificates, composed of a .p7b file (full certificate authority) and a .cer file (intermediate CA), reveal the signing authority and the certificate chain. Copy the relevant content and use certificate tooling to inspect and assess the details of the certificates.
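For example, a minimal PowerShell sketch (with a hypothetical file name) that loads a downloaded certificate and lists its key fields:

```azurepowershell-interactive
# A sketch only: inspect a downloaded intermediate certificate.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("$PWD\vtpm-intermediate.cer")
$cert | Format-List Subject, Issuer, Thumbprint, NotBefore, NotAfter
```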
[!INCLUDE [json](../virtual-machines/includes/trusted-launch-tpm-certs/tpm-root-certificate-authority.md)]
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Title: Deploy a trusted launch VM
-description: Deploy a VM that uses trusted launch.
+ Title: Deploy a Trusted Launch VM
+description: Deploy a VM that uses Trusted Launch.
Last updated 05/21/2024
-# Deploy a Virtual Machine with Trusted Launch Enabled
+# Deploy a virtual machine with Trusted Launch enabled
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets.
-[Trusted launch](trusted-launch.md) is a way to improve the security of [generation 2](generation-2.md) VMs. Trusted launch protects against advanced and persistent attack techniques by combining infrastructure technologies like vTPM and secure boot.
+[Trusted Launch](trusted-launch.md) is a way to improve the security of [Generation 2](generation-2.md) virtual machines (VMs). Trusted Launch protects against advanced and persistent attack techniques by combining infrastructure technologies like virtual Trusted Platform Module (vTPM) and secure boot.
## Prerequisites

-- It's recommended to [onboard your subscription to Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/?&ef_id=CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE:G:s&OCID=AID2200277_SEM_CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE:G:s&gclid=CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE#overview) if it isn't already. Microsoft Defender for Cloud has a free tier, which offers useful insights for various Azure and Hybrid resources. With the absence of MDC, Trusted Launch virtual machine users can't monitor [boot integrity](boot-integrity-monitoring-overview.md) of VM.
+- We recommend that you [onboard your subscription to Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/?&ef_id=CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE:G:s&OCID=AID2200277_SEM_CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE:G:s&gclid=CjwKCAjwwsmLBhACEiwANq-tXHeKhV--teH6kIijnBTmP-PgktfvGr5zW9TAx00SR7xsGUc3sTj5sBoCkEoQAvD_BwE#overview) if it isn't already onboarded. Defender for Cloud has a free tier, which offers useful insights for various Azure and hybrid resources. Without Defender for Cloud, Trusted Launch VM users can't monitor the [boot integrity](boot-integrity-monitoring-overview.md) of VMs.
+- Assign Azure policy initiatives to your subscription. These policy initiatives need to be assigned only once per subscription. The policies help deploy and audit Trusted Launch VMs while automatically installing all required extensions on all supported VMs.
+ - Configure the Trusted Launch VMs' [built-in policy initiative](trusted-launch-portal.md#trusted-launch-built-in-policies).
+ - Configure prerequisites to enable Guest Attestation on Trusted Launch-enabled VMs.
+ - Configure machines to automatically install the Azure Monitor and Azure Security agents on VMs.
-- Assign Azure policies initiatives to your subscription. These policy initiatives need to be assigned only once per subscription. Policy will help deploy, audit for Trusted Launch Virtual Machines while automatically installing all required extensions on all supported VMs.
- - Configure Trusted Launch Virtual Machines [Built In Policy Initiative](trusted-launch-portal.md#trusted-launch-built-in-policies)
- - Configure prerequisites to enable Guest Attestation on Trusted Launch enabled VMs.
- - Configure machines to automatically install the Azure Monitor and Azure Security agents on virtual machines.
-
-- Allow service tag **AzureAttestation** in Network Security Group outbound rules to allow traffic for Microsoft Azure Attestation. Refer to [Virtual network service tags](../virtual-network/service-tags-overview.md).
-
-- Make sure that the firewall policies are allowing access to `*.attest.azure.net`.
+- Allow the service tag `AzureAttestation` in network security group outbound rules to allow traffic for Azure Attestation. For more information, see [Virtual network service tags](../virtual-network/service-tags-overview.md).
+- Make sure that the firewall policies allow access to `*.attest.azure.net`.
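On a Windows VM, a quick connectivity sketch (the endpoint host name here is hypothetical; substitute your region's `*.attest.azure.net` endpoint):

```azurepowershell-interactive
# A sketch only: confirm outbound HTTPS access to an attestation endpoint.
Test-NetConnection -ComputerName "sharedweu.weu.attest.azure.net" -Port 443
```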
> [!NOTE]
-> If you are using a Linux image and anticipate the VM may have kernel drivers either unsigned or not signed by the Linux distro vendor, then you may want to consider turning off secure boot. In the Azure portal, in the 'Create a virtual machine' page for 'Security type' parameter with 'Trusted Launch Virtual Machines' selected, click on 'Configure security features' and uncheck the 'Enable secure boot' checkbox. In CLI, PowerShell, or SDK, set secure boot parameter to false.
+> If you're using a Linux image and anticipate that the VM might have kernel drivers that are either unsigned or not signed by the Linux distro vendor, you might want to turn off secure boot. In the Azure portal, on the **Create a virtual machine** page, with **Trusted launch virtual machines** selected for **Security type**, select **Configure security features** and clear the **Enable secure boot** checkbox. In the Azure CLI, PowerShell, or SDK, set the secure boot parameter to `false`.
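In PowerShell, for example, a minimal sketch sets the flag on the VM configuration object (assuming `$vm` was started with `New-AzVMConfig`):

```azurepowershell-interactive
# A sketch only: request Trusted Launch but leave secure boot off for unsigned kernel drivers.
$vm = Set-AzVmSecurityProfile -VM $vm -SecurityType "TrustedLaunch"
$vm = Set-AzVmUefi -VM $vm -EnableVtpm $true -EnableSecureBoot $false
```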
## Deploy a Trusted Launch VM
-Create a virtual machine with trusted launch enabled. Choose an option below:
+Create a VM with Trusted Launch enabled. Choose one of the following options.
### [Portal](#tab/portal)

1. Sign in to the Azure [portal](https://portal.azure.com).
1. Search for **Virtual Machines**.
1. Under **Services**, select **Virtual machines**.
-1. In the **Virtual machines** page, select **Add**, and then select **Virtual machine**.
+1. On the **Virtual machines** page, select **Add**, and then select **Virtual machine**.
1. Under **Project details**, make sure the correct subscription is selected.
-1. Under **Resource group**, select **Create new** and type a name for your resource group or select an existing resource group from the dropdown.
-1. Under **Instance details**, type a name for the virtual machine name and choose a region that supports [trusted launch](trusted-launch.md#additional-information).
-1. For **Security type** select **Trusted launch virtual machines**. This makes three more options appear - **Secure boot**, **vTPM**, and **Integrity Monitoring** . Select the appropriate options for your deployment. To learn more about [Trusted Launch Enabled Security Features](trusted-launch.md#microsoft-defender-for-cloud-integration).
- :::image type="content" source="./media/trusted-launch/tvm-popup.png" alt-text="Screenshot showing the options for Trusted Launch.":::
-1. Under **Image**, select an image from the **Recommended Gen 2 images compatible with Trusted launch**. For a list, see [trusted launch](trusted-launch.md#virtual-machines-sizes).
+1. Under **Resource group**, select **Create new**. Enter a name for your resource group or select an existing resource group from the dropdown list.
+1. Under **Instance details**, enter a name for the VM and choose a region that supports [Trusted Launch](trusted-launch.md#more-information).
+1. For **Security type**, select **Trusted launch virtual machines**. When the options **Secure boot**, **vTPM**, and **Integrity Monitoring** appear, select the appropriate options for your deployment. For more information, see [Trusted Launch-enabled security features](trusted-launch.md#microsoft-defender-for-cloud-integration).
+
+ :::image type="content" source="./media/trusted-launch/tvm-popup.png" alt-text="Screenshot that shows the options for Trusted Launch.":::
+
+1. Under **Image**, select an image from **Recommended Gen 2 images compatible with Trusted launch**. For a list, see [Trusted Launch](trusted-launch.md#virtual-machines-sizes).
> [!TIP]
- > If you don't see the Gen 2 version of the image you want in the drop-down, select **See all images** and then change the **Security type** filter to **Trusted Launch**.
-13. Select a VM size that supports trusted launch. See the list of [supported sizes](trusted-launch.md#virtual-machines-sizes).
-14. Fill in the **Administrator account** information and then **Inbound port rules**.
-15. At the bottom of the page, select **Review + Create**
-16. On the **Create a virtual machine** page, you can see the details about the VM you are about to deploy. Once validation shows as passed, select **Create**.
+ > If you don't see the Gen2 version of the image that you want in the dropdown list, select **See all images**. Then change the **Security type** filter to **Trusted Launch**.
+1. Select a VM size that supports Trusted Launch. For more information, see the list of [supported sizes](trusted-launch.md#virtual-machines-sizes).
+1. Fill in the **Administrator account** information and then **Inbound port rules**.
+1. At the bottom of the page, select **Review + Create**.
+1. On the **Create a virtual machine** page, you can see the information about the VM you're about to deploy. After validation shows as passed, select **Create**.
- :::image type="content" source="./media/trusted-launch/tvm-complete.png" alt-text="Sceenshot of the validation page, showing the trusted launch options are included.":::
+ :::image type="content" source="./media/trusted-launch/tvm-complete.png" alt-text="Screenshot that shows the validation page with the Trusted Launch options.":::
It takes a few minutes for your VM to be deployed.

### [CLI](#tab/cli)
-Make sure you're running the latest version of Azure CLI.
+Make sure that you're running the latest version of the Azure CLI.
-Sign in to Azure using `az login`.
+1. Sign in to Azure by using `az login`.
-```azurecli-interactive
-az login
-```
+ ```azurecli-interactive
+ az login
+ ```
-Create a virtual machine with Trusted Launch.
+1. Create a VM with Trusted Launch.
-```azurecli-interactive
-az group create -n myresourceGroup -l eastus
+ ```azurecli-interactive
+ az group create -n myresourceGroup -l eastus
+
+ az vm create \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --image Canonical:UbuntuServer:18_04-lts-gen2:latest \
+ --admin-username azureuser \
+ --generate-ssh-keys \
+ --security-type TrustedLaunch \
+ --enable-secure-boot true \
+ --enable-vtpm true
+ ```
-az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --image Canonical:UbuntuServer:18_04-lts-gen2:latest \
- --admin-username azureuser \
- --generate-ssh-keys \
- --security-type TrustedLaunch \
- --enable-secure-boot true \
- --enable-vtpm true
-```
-
-For existing VMs, you can enable or disable secure boot and vTPM settings. Updating the virtual machine with secure boot and vTPM settings trigger auto-reboot.
+1. For existing VMs, you can enable or disable secure boot and vTPM settings. Updating the VM with secure boot and vTPM settings triggers auto-reboot.
-```azurecli-interactive
-az vm update \
- --resource-group myResourceGroup \
- --name myVM \
- --enable-secure-boot true \
- --enable-vtpm true
-```
+ ```azurecli-interactive
+ az vm update \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --enable-secure-boot true \
+ --enable-vtpm true
+ ```
For more information about installing boot integrity monitoring through the Guest Attestation extension, see [Boot integrity](./boot-integrity-monitoring-overview.md).

### [PowerShell](#tab/powershell)
-In order to provision a VM with Trusted Launch, it first needs to be enabled with the `TrustedLaunch` using the `Set-AzVmSecurityProfile` cmdlet. Then you can use the Set-AzVmUefi cmdlet to set the vTPM and SecureBoot configuration. Use the below snippet as a quick start, remember to replace the values in this example with your own.
+To provision a VM with Trusted Launch, first set the security type to `TrustedLaunch` by using the `Set-AzVmSecurityProfile` cmdlet. Then use the `Set-AzVmUefi` cmdlet to set the vTPM and Secure Boot configuration. Use the following snippet as a quick start. Remember to replace the values in this example with your own.
```azurepowershell-interactive
$rgName = "myResourceGroup"
New-AzVM -ResourceGroupName $rgName -Location $location -VM $vm
### [Template](#tab/template)
-You can deploy trusted launch VMs using a quickstart template:
+You can deploy Trusted Launch VMs by using a quickstart template.
-**Linux**
+#### Linux
-[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-linux%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-linux%2FcreateUiDefinition.json)
+[![Deploy to Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-linux%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-linux%2FcreateUiDefinition.json)
-**Windows**
+#### Windows
-[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-windows%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-windows%2FcreateUiDefinition.json)
+[![Deploy to Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-windows%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.compute%2Fvm-trustedlaunch-windows%2FcreateUiDefinition.json)
-## Deploy a Trusted launch VM from an Azure Compute Gallery image
+## Deploy a Trusted Launch VM from an Azure Compute Gallery image
-[Azure trusted launch virtual machines](trusted-launch.md) supports the creation and sharing of custom images using Azure Compute Gallery. There are two types of images that you can create, based on the security types of the image:
+[Azure Trusted Launch VMs](trusted-launch.md) support the creation and sharing of custom images by using Azure Compute Gallery. There are two types of images that you can create, based on the security types of the image:
-- **Recommended** [Trusted launch VM Supported (`TrustedLaunchSupported`) images](#trusted-launch-vm-supported-images) are images where the source does not have VM Guest state information and can be used to create either [Generation 2 VMs](generation-2.md) or [Trusted Launch VMs](trusted-launch.md).
-- [Trusted launch VM (`TrustedLaunch`) images](#trusted-launch-vm-images) are images where the source usually has [VM Guest state information](trusted-launch-faq.md#what-is-vm-guest-state-vmgs) and can be used to create only [Trusted Launch VMs](trusted-launch.md).
+- **Recommended**: [Trusted Launch VM supported (`TrustedLaunchSupported`) images](#trusted-launch-vm-supported-images) are images where the source doesn't have VM Guest state information and can be used to create either [Generation 2 VMs](generation-2.md) or [Trusted Launch VMs](trusted-launch.md).
+- [Trusted Launch VM (`TrustedLaunch`) images](#trusted-launch-vm-images) are images where the source usually has [VM Guest State information](trusted-launch-faq.md#what-is-vm-guest-state-vmgs) and can be used to create only [Trusted Launch VMs](trusted-launch.md).
-### Trusted launch VM supported images
+### Trusted Launch VM supported images
For the following image sources, the security type on the image definition should be set to `TrustedLaunchSupported`:

-- Gen2 OS Disk VHD
+- Gen2 operating system (OS) disk VHD
- Gen2 Managed Image
-- Gen2 Gallery Image Version
+- Gen2 Gallery Image version
-No VM Guest State information shall be included in the image source.
+The image source must not include any VM Guest State information.
-The resulting image version can be used to create either Azure Gen2 VMs or Trusted launch VMs.
+You can use the resulting image version to create either Azure Gen2 VMs or Trusted Launch VMs.
-These images can be shared using [Azure Compute Gallery - Direct Shared Gallery](../virtual-machines/azure-compute-gallery.md#shared-directly-to-a-tenant-or-subscription) and [Azure Compute Gallery - Community Gallery](../virtual-machines/azure-compute-gallery.md#community-gallery).
+These images can be shared by using [Azure Compute Gallery - Direct Shared Gallery](../virtual-machines/azure-compute-gallery.md#shared-directly-to-a-tenant-or-subscription) and [Azure Compute Gallery - Community Gallery](../virtual-machines/azure-compute-gallery.md#community-gallery).
> [!NOTE]
-> The OS disk VHD, Managed Image or Gallery Image Version should be created from a [Gen2 image that is compatible with Trusted launch VMs](trusted-launch.md#virtual-machines-sizes).
+> The OS disk VHD, Managed Image, or Gallery Image version should be created from a [Gen2 image that's compatible with Trusted Launch VMs](trusted-launch.md#virtual-machines-sizes).
#### [Portal](#tab/portal3)

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for and select **VM image versions** in the search bar
+1. Search for and select **VM image versions** in the search bar.
1. On the **VM image versions** page, select **Create**.
1. On the **Create VM image version** page, on the **Basics** tab:
    1. Select the Azure subscription.
    1. Select an existing resource group or create a new resource group.
    1. Select the Azure region.
    1. Enter an image version number.
- 1. For **Source**, select either **Storage Blobs (VHD)** or **Managed Image** or another **VM Image Version**
- 1. If you selected **Storage Blobs (VHD)**, enter an OS disk VHD (without the VM Guest state). Make sure to use a Gen 2 VHD.
- 1. If you selected **Managed Image**, select an existing managed image of a Gen 2 VM.
- 1. If you selected **VM Image Version**, select an existing Gallery Image Version of a Gen2 VM.
+ 1. For **Source**, select either **Storage Blobs (VHD)** or **Managed Image** or another **VM Image Version**.
+ 1. If you selected **Storage Blobs (VHD)**, enter an OS disk VHD (without the VM Guest state). Make sure to use a Gen2 VHD.
+ 1. If you selected **Managed Image**, select an existing managed image of a Gen2 VM.
+ 1. If you selected **VM Image Version**, select an existing Gallery Image version of a Gen2 VM.
1. For **Target Azure compute gallery**, select or create a gallery to share the image.
- 1. For **Operating system state**, select either **Generalized** or **Specialized** depending on your use case. If you're using a managed image as the source, always select **Generalized**. If you're using a storage blob (VHD) and want to select **Generalized**, follow the steps to [generalize a Linux VHD](../virtual-machines/linux/create-upload-generic.md) or [generalize a Windows VHD](../virtual-machines/windows/upload-generalized-managed.md) before you continue. If you're using an existing VM Image Version, select either **Generalized** or **Specialized** based on what is used in the source VM image definition.
+ 1. For **Operating system state**, select either **Generalized** or **Specialized** depending on your use case. If you're using a managed image as the source, always select **Generalized**. If you're using a storage blob (VHD) and want to select **Generalized**, follow the steps to [generalize a Linux VHD](../virtual-machines/linux/create-upload-generic.md) or [generalize a Windows VHD](../virtual-machines/windows/upload-generalized-managed.md) before you continue. If you're using an existing VM image version, select either **Generalized** or **Specialized** based on what's used in the source VM image definition.
1. For **Target VM Image Definition**, select **Create new**.
- 1. In the **Create a VM image definition** pane, enter a name for the definition. Make sure the security type is set to **Trustedlaunch Supported**. Enter publisher, offer, and SKU information. Then, select **Ok**.
+ 1. On the **Create a VM image definition** pane, enter a name for the definition. Make sure the security type is set to **Trustedlaunch Supported**. Enter the publisher, offer, and SKU information. Then select **OK**.
1. On the **Replication** tab, enter the replica count and target regions for image replication, if necessary.
1. On the **Encryption** tab, enter SSE encryption-related information, if necessary.
1. Select **Review + Create**.
1. After the configuration is successfully validated, select **Create** to finish creating the image.
1. After the image version is created, select **Create VM**.
-12. In the Create a virtual machine page, under **Resource group**, select **Create new** and type a name for your resource group or select an existing resource group from the dropdown.
-13. Under **Instance details**, type a name for the virtual machine name and choose a region that supports [trusted launch](trusted-launch.md#additional-information).
-14. Select **Trusted launch virtual machines** as the security type. The **Secure Boot** and **vTPM** checkboxes are enabled by default.
-15. Fill in the **Administrator account** information and then **Inbound port rules**.
+1. On the **Create a virtual machine** page, under **Resource group**, select **Create new**. Enter a name for your resource group or select an existing resource group from the dropdown list.
+1. Under **Instance details**, enter a name for the VM and choose a region that supports [Trusted Launch](trusted-launch.md#more-information).
+1. For **Security type**, select **Trusted launch virtual machines**. The **Secure Boot** and **vTPM** checkboxes are enabled by default.
+1. Fill in the **Administrator account** information and then **Inbound port rules**.
1. On the validation page, review the details of the VM.
1. After the validation succeeds, select **Create** to finish creating the VM.
-
#### [CLI](#tab/cli3)
-Make sure you're running the latest version of Azure CLI.
-
-Sign in to Azure using `az login`.
-
-```azurecli-interactive
-az login
-```
-
-Create an image definition with `TrustedLaunchSupported` security type.
-
-```azurecli-interactive
-az sig image-definition create --resource-group MyResourceGroup --location eastus \
gallery-name MyGallery --gallery-image-definition MyImageDef \ publisher TrustedLaunchPublisher --offer TrustedLaunchOffer --sku TrustedLaunchSku \ os-type Linux --os-state Generalized \ hyper-v-generation V2 \ features SecurityType=TrustedLaunchSupported
-```
-
-Use an OS disk VHD to create an image version. Ensure that the Linux VHD was generalized before uploading to an Azure storage account blob using steps outlined [here](../virtual-machines/linux/create-upload-generic.md).
-
-```azurecli-interactive
-az sig image-version create --resource-group MyResourceGroup \
gallery-name MyGallery --gallery-image-definition MyImageDef \gallery-image-version 1.0.0 \os-vhd-storage-account /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/imageGroups/providers/Microsoft.Storage/storageAccounts/mystorageaccount \os-vhd-uri https://mystorageaccount.blob.core.windows.net/container/path_to_vhd_file
-```
-
-Create a Trusted launch VM from the above image version.
-
-```azurecli-interactive
-adminUsername=linuxvm
-az vm create --resource-group MyResourceGroup \
- --name myTrustedLaunchVM \
- --image "/subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/MyGallery/images/MyImageDef" \
- --size Standard_D2s_v5 \
- --security-type TrustedLaunch \
- --enable-secure-boot true \
- --enable-vtpm true \
- --admin-username $adminUsername \
- --generate-ssh-keys
-```
+Make sure that you're running the latest version of the Azure CLI.
+
+1. Sign in to Azure by using `az login`.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Create an image definition with the `TrustedLaunchSupported` security type.
+
+ ```azurecli-interactive
+ az sig image-definition create --resource-group MyResourceGroup --location eastus \
+ --gallery-name MyGallery --gallery-image-definition MyImageDef \
+ --publisher TrustedLaunchPublisher --offer TrustedLaunchOffer --sku TrustedLaunchSku \
+ --os-type Linux --os-state Generalized \
+ --hyper-v-generation V2 \
+ --features SecurityType=TrustedLaunchSupported
+ ```
+
+1. Use an OS disk VHD to create an image version. Ensure that the Linux VHD was generalized before you upload it to an Azure Storage account blob by using the steps in [Prepare Linux for imaging in Azure](../virtual-machines/linux/create-upload-generic.md).
+
+ ```azurecli-interactive
+ az sig image-version create --resource-group MyResourceGroup \
+ --gallery-name MyGallery --gallery-image-definition MyImageDef \
+ --gallery-image-version 1.0.0 \
+ --os-vhd-storage-account /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/imageGroups/providers/Microsoft.Storage/storageAccounts/mystorageaccount \
+ --os-vhd-uri https://mystorageaccount.blob.core.windows.net/container/path_to_vhd_file
+ ```
+
+1. Create a Trusted Launch VM from the preceding image version.
+
+ ```azurecli-interactive
+ adminUsername=linuxvm
+ az vm create --resource-group MyResourceGroup \
+ --name myTrustedLaunchVM \
+ --image "/subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/MyGallery/images/MyImageDef" \
+ --size Standard_D2s_v5 \
+ --security-type TrustedLaunch \
+ --enable-secure-boot true \
+ --enable-vtpm true \
+ --admin-username $adminUsername \
+ --generate-ssh-keys
+ ```
#### [PowerShell](#tab/powershell3)
-Create an image definition with `TrustedLaunchSupported` security type.
-
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$publisherName = "TrustedlaunchPublisher"
-$offerName = "TrustedlaunchOffer"
-$skuName = "TrustedlaunchSku"
-$description = "My gallery"
-$SecurityType = @{Name='SecurityType';Value='TrustedLaunchSupported'}
-$features = @($SecurityType)
-New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
-```
-
-To create an image version, we can use an existing Gen2 Gallery Image Version, which was generalized during creation.
-
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$galleryImageVersionName = "1.0.0"
-$sourceImageId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myVMRG/providers/Microsoft.Compute/galleries/MyGallery/images/Gen2VMImageDef/versions/0.0.1"
-New-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryImageDefinitionName -Name $galleryImageVersionName -Location $location -SourceImageId $sourceImageId
-```
-Create a Trusted launch VM from the above image version
-
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$vmName = "myVMfromImage"
-$vmSize = "Standard_D2s_v5"
-$imageDefinition = Get-AzGalleryImageDefinition `
- -GalleryName $galleryName `
- -ResourceGroupName $rgName `
- -Name $galleryImageDefinitionName
-$cred = Get-Credential `
- -Message "Enter a username and password for the virtual machine"
-# Network pieces
-$subnetConfig = New-AzVirtualNetworkSubnetConfig `
- -Name mySubnet `
- -AddressPrefix 192.168.1.0/24
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name MYvNET `
- -AddressPrefix 192.168.0.0/16 `
- -Subnet $subnetConfig
-$pip = New-AzPublicIpAddress `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name "mypublicdns$(Get-Random)" `
- -AllocationMethod Static `
- -IdleTimeoutInMinutes 4
-$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
- -Name myNetworkSecurityGroupRuleRDP `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 1000 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389 `
- -Access Deny
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name myNetworkSecurityGroup `
- -SecurityRules $nsgRuleRDP
-$nic = New-AzNetworkInterface `
- -Name myNic `
- -ResourceGroupName $rgName `
- -Location $location `
- -SubnetId $vnet.Subnets[0].Id `
- -PublicIpAddressId $pip.Id `
- -NetworkSecurityGroupId $nsg.Id
-$vm = New-AzVMConfig -vmName $vmName -vmSize $vmSize | `
- Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
- Set-AzVMSourceImage -Id $imageDefinition.Id | `
- Add-AzVMNetworkInterface -Id $nic.Id
-$vm = Set-AzVMSecurityProfile -SecurityType "TrustedLaunch" -VM $vm
-$vm = Set-AzVmUefi -VM $vm `
- -EnableVtpm $true `
- -EnableSecureBoot $true
-New-AzVM `
- -ResourceGroupName $rgName `
- -Location $location `
- -VM $vm
-```
+1. Create an image definition with the `TrustedLaunchSupported` security type.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $publisherName = "TrustedlaunchPublisher"
+ $offerName = "TrustedlaunchOffer"
+ $skuName = "TrustedlaunchSku"
+ $description = "My gallery"
+ $SecurityType = @{Name='SecurityType';Value='TrustedLaunchSupported'}
+ $features = @($SecurityType)
+ New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
+ ```
+
+1. To create an image version, you can use an existing Gen2 Gallery Image version, which was generalized during creation.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $galleryImageVersionName = "1.0.0"
+ $sourceImageId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myVMRG/providers/Microsoft.Compute/galleries/MyGallery/images/Gen2VMImageDef/versions/0.0.1"
+ New-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryImageDefinitionName -Name $galleryImageVersionName -Location $location -SourceImageId $sourceImageId
+ ```
+
+1. Create a Trusted Launch VM from the preceding image version.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $vmName = "myVMfromImage"
+ $vmSize = "Standard_D2s_v5"
+ $imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName $galleryName `
+ -ResourceGroupName $rgName `
+ -Name $galleryImageDefinitionName
+ $cred = Get-Credential `
+ -Message "Enter a username and password for the virtual machine"
+ # Network pieces
+ $subnetConfig = New-AzVirtualNetworkSubnetConfig `
+ -Name mySubnet `
+ -AddressPrefix 192.168.1.0/24
+ $vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name MYvNET `
+ -AddressPrefix 192.168.0.0/16 `
+ -Subnet $subnetConfig
+ $pip = New-AzPublicIpAddress `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name "mypublicdns$(Get-Random)" `
+ -AllocationMethod Static `
+ -IdleTimeoutInMinutes 4
+ $nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
+ -Name myNetworkSecurityGroupRuleRDP `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 1000 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 3389 `
+ -Access Deny
+ $nsg = New-AzNetworkSecurityGroup `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name myNetworkSecurityGroup `
+ -SecurityRules $nsgRuleRDP
+ $nic = New-AzNetworkInterface `
+ -Name myNic `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -SubnetId $vnet.Subnets[0].Id `
+ -PublicIpAddressId $pip.Id `
+ -NetworkSecurityGroupId $nsg.Id
+ $vm = New-AzVMConfig -vmName $vmName -vmSize $vmSize | `
+ Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
+ Set-AzVMSourceImage -Id $imageDefinition.Id | `
+ Add-AzVMNetworkInterface -Id $nic.Id
+ $vm = Set-AzVMSecurityProfile -SecurityType "TrustedLaunch" -VM $vm
+ $vm = Set-AzVmUefi -VM $vm `
+        -EnableVtpm $true `
+        -EnableSecureBoot $true
+ New-AzVM `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -VM $vm
+ ```
-### Trusted launch VM Images
+### Trusted Launch VM images
-For the following image sources, the security type on the image definition should be set to `TrustedLaunch`:
-- Trusted launch VM capture
-- Managed OS disk
+The security type on the image definition should be set to `TrustedLaunch` for the following image sources:
+
+- Trusted Launch VM capture
+- Managed OS disk
- Managed OS disk snapshot
-The resulting image version can be used only to create Azure Trusted launch VMs.
+You can use the resulting image version to create Azure Trusted Launch VMs only.
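Before you reuse an existing image definition, you can confirm which security type it advertises by querying its features. This is a minimal sketch that uses the Azure CLI with the resource names from the examples in this article:

```azurecli-interactive
# Show the features (including SecurityType) declared on an image definition
az sig image-definition show --resource-group MyResourceGroup \
    --gallery-name MyGallery --gallery-image-definition MyImageDef \
    --query "features" --output table
```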
#### [Portal](#tab/portal2)

1. Sign in to the Azure [portal](https://portal.azure.com).
-2. To create an Azure Compute Gallery Image from a VM, open an existing Trusted launch VM and select **Capture**.
-3. In the Create an Image page that follows, allow the image to be shared to the gallery as a VM image version. Creation of Managed Images isn't supported for Trusted Launch VMs.
-4. Create a new target Azure Compute Gallery or select an existing gallery.
-5. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you [generalize the VM to remove machine specific information](generalize.yml) before selecting this option. If Bitlocker based encryption is enabled on your Trusted launch Windows VM, you may not be able to generalize the same.
-6. Create a new image definition by providing a name, publisher, offer, and SKU details. The **Security Type** of the image definition should already be set to **Trusted launch**.
-7. Provide a version number for the image version.
-8. Modify replication options if necessary.
-9. At the bottom of the **Create an Image** page, select **Review + Create** and when validation shows as passed, select **Create**.
-10. Once the image version is created, go the image version directly. Alternatively, you can navigate to the required image version through the image definition.
-11. On the **VM image version** page, select the **+ Create VM** to land on the Create a virtual machine page.
-12. In the Create a virtual machine page, under **Resource group**, select **Create new** and type a name for your resource group or select an existing resource group from the dropdown.
-13. Under **Instance details**, type a name for the virtual machine name and choose a region that supports [trusted launch](trusted-launch.md#virtual-machines-sizes).
-14. The image and the security type are already populated based on the selected image version. The **Secure Boot** and **vTPM** checkboxes are enabled by default.
-15. Fill in the **Administrator account** information and then **Inbound port rules**.
-16. At the bottom of the page, select **Review + Create**
+1. To create an Azure Compute Gallery Image from a VM, open an existing Trusted Launch VM and select **Capture**.
+1. On the **Create an Image** page, allow the image to be shared to the gallery as a VM image version. Creation of managed images isn't supported for Trusted Launch VMs.
+1. Create a new target Azure Compute Gallery or select an existing gallery.
+1. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you [generalize the VM to remove machine-specific information](generalize.yml) before you select this option (a Linux sketch follows this list). If BitLocker-based encryption is enabled on your Trusted Launch Windows VM, you might not be able to generalize it.
+1. Create a new image definition by providing a name, publisher, offer, and SKU details. **Security type** for the image definition should already be set to **Trusted launch**.
+1. Provide a version number for the image version.
+1. Modify replication options, if necessary.
+1. At the bottom of the **Create an Image** page, select **Review + Create**. After validation shows as passed, select **Create**.
+1. After the image version is created, go to the image version directly. Alternatively, you can go to the required image version through the image definition.
+1. On the **VM image version** page, select **+ Create VM** to go to the **Create a virtual machine** page.
+1. On the **Create a virtual machine** page, under **Resource group**, select **Create new**. Enter a name for your resource group or select an existing resource group from the dropdown list.
+1. Under **Instance details**, enter a name for the VM and choose a region that supports [Trusted Launch](trusted-launch.md#virtual-machines-sizes).
+1. The image and the security type are already populated based on the selected image version. The **Secure Boot** and **vTPM** checkboxes are enabled by default.
+1. Fill in the **Administrator account** information and then **Inbound port rules**.
+1. At the bottom of the page, select **Review + Create**.
1. On the validation page, review the details of the VM.
1. After the validation succeeds, select **Create** to finish creating the VM.
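If you're capturing a Linux VM, the following is a minimal generalization sketch. It assumes that the Azure Linux Agent is installed. Deprovisioning is destructive, so run it only on a VM that you intend to capture:

```bash
# Inside the VM: remove machine-specific state before capture
sudo waagent -deprovision+user

# From your workstation: deallocate the VM and mark it as generalized
az vm deallocate --resource-group MyResourceGroup --name myTrustedLaunchVM
az vm generalize --resource-group MyResourceGroup --name myTrustedLaunchVM
```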
-In case you want to use either a managed disk or a managed disk snapshot as a source of the image version (instead of a trusted launch VM), then use the following steps.
-
-1. Sign in to the [portal](https://portal.azure.com)
-2. Search for **VM Image Versions** and select **Create**
-3. Provide the subscription, resource group, region, and image version number
-4. Select the source as **Disks and/or Snapshots**
-5. Select the OS disk as a managed disk or a managed disk snapshot from the dropdown list
-6. Select a **Target Azure Compute Gallery** to create and share the image. If no gallery exists, create a new gallery.
-7. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you generalize the disk or snapshot to remove machine specific information.
-8. For the **Target VM Image Definition** select Create new. In the window that opens, select an image definition name and ensure that the **Security type** is set to **Trusted launch**. Provide the publisher, offer and SKU information and select **OK**.
-9. The **Replication** tab can be used to set the replica count and target regions for image replication, if required.
-10. The **Encryption** tab can also be used to provide SSE encryption related information, if required.
-11. Select **Create** in the **Review + create** tab to create the image
-12. Once the image version is successfully created, select the **+ Create VM** to land on the Create a virtual machine page.
-13. Follow steps 12 to 18 as mentioned earlier to create a trusted launch VM using this image version
-
+If you want to use either a managed disk or a managed disk snapshot as a source of the image version (instead of a Trusted Launch VM), follow these steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **VM Image Versions** and select **Create**.
+1. Provide the subscription, resource group, region, and image version number.
+1. Select the source as **Disks and/or Snapshots**.
+1. Select the OS disk as a managed disk or a managed disk snapshot from the dropdown list.
+1. Select a **Target Azure Compute Gallery** to create and share the image. If no gallery exists, create a new gallery.
+1. Select the **Operating system state** as either **Generalized** or **Specialized**. If you want to create a generalized image, ensure that you generalize the disk or snapshot to remove machine-specific information.
+1. For **Target VM Image Definition**, select **Create new**. In the window that opens, enter an image definition name and ensure that **Security type** is set to **Trusted launch**. Provide the publisher, offer, and SKU information and select **OK**.
+1. The **Replication** tab can be used to set the replica count and target regions for image replication, if required.
+1. The **Encryption** tab can also be used to provide SSE encryption-related information, if required.
+1. Select **Create** on the **Review + create** tab to create the image.
+1. After the image version is successfully created, select **+ Create VM** to go to the **Create a virtual machine** page.
+1. Follow steps 12 to 18 as mentioned earlier to create a Trusted Launch VM by using this image version.
#### [CLI](#tab/cli2)
-Make sure you're running the latest version of Azure CLI.
-
-Sign in to Azure using `az login`.
-
-```azurecli-interactive
-az login
-```
-
-Create an image definition with `TrustedLaunch` security type
-
-```azurecli-interactive
-az sig image-definition create --resource-group MyResourceGroup --location eastus \
-  --gallery-name MyGallery --gallery-image-definition MyImageDef \
-  --publisher TrustedLaunchPublisher --offer TrustedLaunchOffer --sku TrustedLaunchSku \
-  --os-type Linux --os-state Generalized \
-  --hyper-v-generation V2 \
-  --features SecurityType=TrustedLaunch
-```
-
-To create an image version, we can capture an existing Linux based Trusted launch VM. [Generalize the Trusted launch VM](generalize.yml) before creating the image version.
-
-```azurecli-interactive
-az sig image-version create --resource-group MyResourceGroup \
-  --gallery-name MyGallery --gallery-image-definition MyImageDef \
-  --gallery-image-version 1.0.0 \
-  --managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
-```
-
-In case a managed disk or a managed disk snapshot needs to be used as the image source for the image version, replace the --managed-image in the above command with --os-snapshot and provide the disk or the snapshot resource name
-
-Create a Trusted launch VM from the above image version
-
-```azurecli-interactive
-adminUsername=linuxvm
-az vm create --resource-group MyResourceGroup \
- --name myTrustedLaunchVM \
- --image "/subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/MyGallery/images/MyImageDef" \
- --size Standard_D2s_v5 \
- --security-type TrustedLaunch \
- --enable-secure-boot true \
- --enable-vtpm true \
- --admin-username $adminUsername \
- --generate-ssh-keys
-```
+Make sure that you're running the latest version of the Azure CLI.
+
+1. Sign in to Azure by using `az login`.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Create an image definition with the `TrustedLaunch` security type.
+
+ ```azurecli-interactive
+ az sig image-definition create --resource-group MyResourceGroup --location eastus \
+ --gallery-name MyGallery --gallery-image-definition MyImageDef \
+ --publisher TrustedLaunchPublisher --offer TrustedLaunchOffer --sku TrustedLaunchSku \
+ --os-type Linux --os-state Generalized \
+ --hyper-v-generation V2 \
+ --features SecurityType=TrustedLaunch
+ ```
+
+1. To create an image version, you can capture an existing Linux-based Trusted Launch VM. [Generalize the Trusted Launch VM](generalize.yml) before you create the image version.
+
+ ```azurecli-interactive
+ az sig image-version create --resource-group MyResourceGroup \
+ --gallery-name MyGallery --gallery-image-definition MyImageDef \
+ --gallery-image-version 1.0.0 \
+ --managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
+ ```
+
+ If a managed disk or a managed disk snapshot needs to be used as the image source for the image version, replace `--managed-image` in the preceding command with `--os-snapshot` and provide the disk or the snapshot resource name.
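    For example, here's a minimal sketch of the snapshot variant. The snapshot name `myOsSnapshot` is hypothetical:

    ```azurecli-interactive
    az sig image-version create --resource-group MyResourceGroup \
        --gallery-name MyGallery --gallery-image-definition MyImageDef \
        --gallery-image-version 1.0.0 \
        --os-snapshot /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/snapshots/myOsSnapshot
    ```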
+
+1. Create a Trusted Launch VM from the preceding image version.
+
+ ```azurecli-interactive
+ adminUsername=linuxvm
+ az vm create --resource-group MyResourceGroup \
+ --name myTrustedLaunchVM \
+ --image "/subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/MyGallery/images/MyImageDef" \
+ --size Standard_D2s_v5 \
+ --security-type TrustedLaunch \
+ --enable-secure-boot true \
+ --enable-vtpm true \
+ --admin-username $adminUsername \
+ --generate-ssh-keys
+ ```
#### [PowerShell](#tab/powershell2)
-Create an image definition with `TrustedLaunch` security type
-
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$publisherName = "TrustedlaunchPublisher"
-$offerName = "TrustedlaunchOffer"
-$skuName = "TrustedlaunchSku"
-$description = "My gallery"
-$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
-$features = @($SecurityType)
-New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
-```
-
-To create an image version, we can capture an existing Windows based Trusted launch VM. [Generalize the Trusted launch VM](generalize.yml) before creating the image version.
+1. Create an image definition with the `TrustedLaunch` security type.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $publisherName = "TrustedlaunchPublisher"
+ $offerName = "TrustedlaunchOffer"
+ $skuName = "TrustedlaunchSku"
+ $description = "My gallery"
+ $SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
+ $features = @($SecurityType)
+ New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
+ ```
+
+1. To create an image version, you can capture an existing Windows-based Trusted Launch VM. [Generalize the Trusted Launch VM](generalize.yml) before you create the image version.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $galleryImageVersionName = "1.0.0"
+ $sourceImageId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myVMRG/providers/Microsoft.Compute/virtualMachines/myVM"
+ New-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryImageDefinitionName -Name $galleryImageVersionName -Location $location -SourceImageId $sourceImageId
+ ```
+
+1. Create a Trusted Launch VM from the preceding image version.
+
+ ```azurepowershell-interactive
+ $rgName = "MyResourceGroup"
+ $galleryName = "MyGallery"
+ $galleryImageDefinitionName = "MyImageDef"
+ $location = "eastus"
+ $vmName = "myVMfromImage"
+ $vmSize = "Standard_D2s_v5"
+ $imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName $galleryName `
+ -ResourceGroupName $rgName `
+ -Name $galleryImageDefinitionName
+ $cred = Get-Credential `
+ -Message "Enter a username and password for the virtual machine"
+ # Network pieces
+ $subnetConfig = New-AzVirtualNetworkSubnetConfig `
+ -Name mySubnet `
+ -AddressPrefix 192.168.1.0/24
+ $vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name MYvNET `
+ -AddressPrefix 192.168.0.0/16 `
+ -Subnet $subnetConfig
+ $pip = New-AzPublicIpAddress `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name "mypublicdns$(Get-Random)" `
+ -AllocationMethod Static `
+ -IdleTimeoutInMinutes 4
+ $nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
+ -Name myNetworkSecurityGroupRuleRDP `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 1000 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 3389 `
+ -Access Deny
+ $nsg = New-AzNetworkSecurityGroup `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name myNetworkSecurityGroup `
+ -SecurityRules $nsgRuleRDP
+ $nic = New-AzNetworkInterface `
+ -Name myNic `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -SubnetId $vnet.Subnets[0].Id `
+ -PublicIpAddressId $pip.Id `
+ -NetworkSecurityGroupId $nsg.Id
+ $vm = New-AzVMConfig -vmName $vmName -vmSize $vmSize | `
+ Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
+ Set-AzVMSourceImage -Id $imageDefinition.Id | `
+ Add-AzVMNetworkInterface -Id $nic.Id
+ $vm = Set-AzVMSecurityProfile -SecurityType "TrustedLaunch" -VM $vm
+ $vm = Set-AzVmUefi -VM $vm `
+ -EnableVtpm $true `
+ -EnableSecureBoot $true
+ New-AzVM `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -VM $vm
+ ```
+
+## Trusted Launch built-in policies
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$galleryImageVersionName = "1.0.0"
-$sourceImageId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myVMRG/providers/Microsoft.Compute/virtualMachines/myVM"
-New-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryImageDefinitionName -Name $galleryImageVersionName -Location $location -SourceImageId $sourceImageId
-```
-Create a Trusted launch VM from the above image version
+To help resource owners adopt Trusted Launch, Azure provides built-in policies. The main objective is to help convert Generation 1 and Generation 2 VMs that are Trusted Launch capable.
-```azurepowershell-interactive
-$rgName = "MyResourceGroup"
-$galleryName = "MyGallery"
-$galleryImageDefinitionName = "MyImageDef"
-$location = "eastus"
-$vmName = "myVMfromImage"
-$vmSize = "Standard_D2s_v5"
-$imageDefinition = Get-AzGalleryImageDefinition `
- -GalleryName $galleryName `
- -ResourceGroupName $rgName `
- -Name $galleryImageDefinitionName
-$cred = Get-Credential `
- -Message "Enter a username and password for the virtual machine"
-# Network pieces
-$subnetConfig = New-AzVirtualNetworkSubnetConfig `
- -Name mySubnet `
- -AddressPrefix 192.168.1.0/24
-$vnet = New-AzVirtualNetwork `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name MYvNET `
- -AddressPrefix 192.168.0.0/16 `
- -Subnet $subnetConfig
-$pip = New-AzPublicIpAddress `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name "mypublicdns$(Get-Random)" `
- -AllocationMethod Static `
- -IdleTimeoutInMinutes 4
-$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
- -Name myNetworkSecurityGroupRuleRDP `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 1000 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389 `
- -Access Deny
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName $rgName `
- -Location $location `
- -Name myNetworkSecurityGroup `
- -SecurityRules $nsgRuleRDP
-$nic = New-AzNetworkInterface `
- -Name myNic `
- -ResourceGroupName $rgName `
- -Location $location `
- -SubnetId $vnet.Subnets[0].Id `
- -PublicIpAddressId $pip.Id `
- -NetworkSecurityGroupId $nsg.Id
-$vm = New-AzVMConfig -vmName $vmName -vmSize $vmSize | `
- Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
- Set-AzVMSourceImage -Id $imageDefinition.Id | `
- Add-AzVMNetworkInterface -Id $nic.Id
-$vm = Set-AzVMSecurityProfile -SecurityType "TrustedLaunch" -VM $vm
-$vm = Set-AzVmUefi -VM $vm `
- -EnableVtpm $true `
- -EnableSecureBoot $true
-New-AzVM `
- -ResourceGroupName $rgName `
- -Location $location `
- -VM $vm
-```
-
-## Trusted Launch Built-In Policies
+The **Virtual machine should have Trusted launch enabled** policy checks whether the VM is currently enabled with the Trusted Launch security configuration. The **Disks and OS supported for Trusted launch** policy checks whether previously created VMs have the [capable Generation 2 OS and VM size](trusted-launch.md#virtual-machines-sizes) to deploy a Trusted Launch VM.
-To help end-users adopt Trusted Launch, there is Azure policies available to help resource owners adopt Trusted Launch. The main objective being to help convert Generation 1 and 2 Virtual Machines that are Trusted Launch capable. **Virtual Machine should have Trusted Launch enabled** single policy checks if the virtual machine, currently enabled with Trusted Launch security configurations. **Disks and OS Supported for Trusted Launch** checks if previously created virtual machines has the [capable Generation 2 operating system and virtual machine size](trusted-launch.md#virtual-machines-sizes) to deploy a Trusted Launch virtual machines. These two policies come together to make the Trusted Launch policy initative, enabling you to group several related policy definitions to simplify assignments and management resources to include Trusted Launch configuration.
+These two policies combine to form the Trusted Launch policy initiative. The initiative enables you to group several related policy definitions to simplify assignments and the management of resources that should include the Trusted Launch configuration.
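For example, here's a minimal sketch that assigns the VM-level policy at subscription scope by using the Azure CLI. The display name is taken from this article and the assignment name is hypothetical, so verify both in your tenant:

```azurecli-interactive
# Look up the built-in definition by its display name
policyId=$(az policy definition list \
    --query "[?displayName=='Virtual machine should have Trusted launch enabled'].id | [0]" --output tsv)

# Assign the definition at subscription scope
az policy assignment create --name trusted-launch-audit \
    --policy "$policyId" \
    --scope "/subscriptions/00000000-0000-0000-0000-00000000xxxx"
```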
To learn more and start deploying, see [Trusted Launch built-in policies](../governance/policy/samples/built-in-policies.md#trusted-launch).
## Verify or update your settings
-For VMs created with trusted launch enabled, you can view the trusted launch configuration by visiting the **Overview** page for the VM in the Azure portal. The **Properties** tab will show the status of Trusted Launch features:
+For VMs created with Trusted Launch enabled, you can view the Trusted Launch configuration by going to the **Overview** page for the VM in the Azure portal. The **Properties** tab shows the status of Trusted Launch features.
-To change the trusted launch configuration, in the left menu, under the **Settings** section, select **Configuration**. You can enable or disable Secure Boot, vTPM, and Integrity Monitoring from the **Security type** section. Select **Save** at the top of the page when you're done.
+To change the Trusted Launch configuration, on the left menu, under **Settings**, select **Configuration**. In the **Security type** section, you can enable or disable **Secure Boot**, **vTPM**, and **Integrity monitoring**. Select **Save** at the top of the page when you're finished.
-If the VM is running, you receive a message that the VM will be restarted. Select **Yes** then wait for the VM to restart for changes to take effect.
+If the VM is running, you receive a message that the VM will restart. Select **Yes** and then wait for the VM to restart for changes to take effect.
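You can also inspect or change the same settings from the command line. This sketch assumes that `az vm update` exposes the same `--enable-secure-boot` and `--enable-vtpm` flags as `az vm create`:

```azurecli-interactive
# View the Trusted Launch configuration of an existing VM
az vm show --resource-group MyResourceGroup --name myTrustedLaunchVM \
    --query "securityProfile" --output json

# Toggle Secure Boot and vTPM; a running VM restarts to apply the change
az vm update --resource-group MyResourceGroup --name myTrustedLaunchVM \
    --enable-secure-boot true --enable-vtpm true
```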
-## Next steps
+## Related content
-Learn more about [trusted launch](trusted-launch.md) and [boot integrity monitoring](boot-integrity-monitoring-overview.md) VMs.
+Learn more about [Trusted Launch](trusted-launch.md) and [boot integrity monitoring](boot-integrity-monitoring-overview.md) for VMs.
virtual-machines Trusted Launch Secure Boot Custom Uefi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-secure-boot-custom-uefi.md
Title: Secure Boot UEFI Keys
-description: This feature allows customers to bind UEFI keys (db/dbx/pk/kek) for drivers/kernel modules signed using a private key that is owned by Azure partners or customer's third-party vendors
+ Title: Secure Boot UEFI keys
+description: Customers can bind UEFI keys (db/dbx/pk/kek) for drivers/kernel modules that are signed by using a private key that's owned by an Azure partner or a customer's third-party vendor.
Last updated 04/10/2024
-# Secure Boot UEFI Keys
+# Secure Boot UEFI keys
+
+This feature allows you to bind Unified Extensible Firmware Interface (UEFI) keys for driver/kernel modules that are signed by using a private key that's owned by third-party vendors.
## Overview
-When a Trusted Launch VM is deployed, during the boot process, signatures of all the boot components such as UEFI (Unified Extensible Firmware Interface), shim/bootloader, kernel, and kernel modules/drivers are verified against trusted preloaded UEFI keys. Verification failure on any of the boot components results in no-boot of the VM, or no-load of kernel modules/drivers only. Verification can fail due to a component signed by a key not in the preloaded UEFI keys list or an unsigned component.
+When an Azure Trusted Launch virtual machine (VM) is deployed, during the boot process, signatures of all the boot components such as the UEFI, shim/bootloader, kernel, and kernel modules/drivers are verified against trusted preloaded UEFI keys. Verification failure on any of the boot components results in no-boot of the VM or no-load of the kernel modules/drivers only. Verification can fail because of a component signed by a key that's not in the preloaded UEFI keys list or an unsigned component.
+
+Many types of Azure partner-provided or customer-procured software (disaster recovery, network monitoring) install driver/kernel modules as part of their solutions. These driver/kernel modules must be signed for a Trusted Launch VM to boot. Many Azure partners sign their driver/kernel modules with their own private key. This approach requires that the public key (UEFI keys) of the private key pair must be available in the UEFI layer so that the Trusted Launch VM can verify the boot components and boot successfully.
+
+For a Trusted Launch VM, a new feature called Secure Boot UEFI keys is now in preview. With this feature, you can bind UEFI keys (db/dbx/pk/kek) for driver/kernel modules signed by using a private key that's owned by your third-party vendors. In this public preview, you can bind UEFI keys by using the Azure Compute Gallery. Binding UEFI keys for an Azure Marketplace image, or as part of VM deployment parameters, isn't currently supported.
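Before you enroll custom keys, you can confirm from inside a Linux guest that Secure Boot verification is active:

```bash
# Check whether Secure Boot is enforced in the guest
mokutil --sb-state
```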
-Many Azure partners provided or customer procured software (disaster recovery, network monitoring) installs drivers/kernel modules as part of their solution. These drivers/kernel modules must be signed for a Trusted Launch VM to boot. Many Azure partners sign their drivers/kernel modules with their own private key. This requires that the public key (UEFI keys) of the private key pair available in UEFI layer so that the Trusted Launch VM can verify boot components and boot successfully.
+> [!NOTE]
+> Binding UEFI keys mostly applies to Linux-based Trusted Launch VMs.
-For Trusted Launch VM, a new feature called Secure Boot UEFI keys is now in preview. This feature allows customers to bind UEFI keys (db/dbx/pk/kek) for drivers/kernel modules signed using a private key that is owned by Azure partners or customer's third-party vendors. In this public preview, you can bind UEFI keys using Azure compute gallery. Binding UEFI keys for marketplace image, or as part of VM deployment parameters, isn't currently supported.
+## Bind Secure Boot keys to an Azure Compute Gallery image
->[!NOTE]
-> Binding UEFI keys is mostly applicable for Linux based Trusted Launch VMs.
-## Bind secureboot keys to Azure compute gallery image
+Follow the steps in the following procedures to bind and create a Trusted Launch VM.
-To bind and create a Trusted Launch VM, the following steps must be followed.
+### Get the virtual hard disk of an Azure Marketplace image
-1. **Get VHD of marketplace image**
+1. Create a Gen2 VM by using an Azure Marketplace image.
+1. Stop the VM to access the operating system (OS) disk.
-- Create a Gen2 VM using a marketplace image
-- Stop the VM to access OS disk
+ :::image type="content" source="media/trusted-launch/trusted-launch-custom-stop-vm.png" alt-text="Screenshot that shows how to stop a VM.":::
+1. Open the disk from the leftmost pane of a stopped VM.
-- Open disk from the left navigation pane of stopped VM
+ :::image type="content" source="media/trusted-launch/trusted-launch-custom-open-disk.png" alt-text="Screenshot that shows how to access an OS virtual hard disk.":::
+1. Export the disk to access an OS virtual hard disk (VHD) SAS.
-- Export disk to access OS VHD SAS
+ :::image type="content" source="media/trusted-launch/trusted-launch-custom-generate-url.png" alt-text="Screenshot that shows how to generate a URL.":::
+1. Copy an OS VHD by using a SAS URI to the storage account:
-- Copy OS VHD using SAS URI to the storage account
- 1. Use [azcopy](../storage/common/storage-use-azcopy-v10.md) to perform copy operation.
- 2. Use this storage account and the copied VHD as input to SIG creation.
+ 1. Use [azcopy](../storage/common/storage-use-azcopy-v10.md) to perform the copy operation.
+ 1. Use this storage account and the copied VHD as input for the SIG creation.
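    For the copy operation, here's a minimal azcopy sketch. Both SAS URLs are placeholders, and the destination container name `vhds` is hypothetical:

    ```bash
    azcopy copy \
        "<exported-os-disk-sas-url>" \
        "https://mystorageaccount.blob.core.windows.net/vhds/trustedlaunch.vhd?<destination-sas-token>"
    ```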
-2. **Create SIG using VHD**
+### Create a SIG image by using a VHD
-- Create SIG image by deploying the provided ARM template.
+Create a SIG image by deploying the provided Azure Resource Manager template (ARM template).
+ :::image type="content" source="media/trusted-launch/trusted-launch-custom-template.png" alt-text="Screenshot that shows how to use an Azure template.":::
<details>
-<summary> Access the SIG from OS VHD JSON template </summary>
+<summary> Access the SIG from the OS VHD JSON template </summary>
<pre> { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json",
To bind and create a Trusted Launch VM, the following steps must be followed.
</pre>
</details>

-- Use this Azure compute gallery image creation template and provide OS vhd URL and its containing storage account name from previous step.
+Use this Azure Compute Gallery image creation template. Provide the OS VHD URL and its containing storage account name from the previous step.
+
+### Create a VM (deploy an ARM template through the portal)
+
+Create a Trusted Launch or confidential VM by using the Azure Compute Gallery image previously created.
-3. **Create VM (Deploy ARM Template through Portal)**
-- Create a Trusted Launch or Confidential VM using the Azure compute gallery image created in Step 1.
-- Sample TrustedLaunch VM creation template with Azure compute gallery image:
+The following sample shows a `TrustedLaunch` VM creation template with an Azure Compute Gallery image.
<details>
-<summary> Access the deploy TVM from SIG JSON template </summary>
+<summary> Access the deploy TVM from a SIG JSON template </summary>
<pre> { "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
To bind and create a Trusted Launch VM, the following steps must be followed.
</pre>
</details>
-4. **Validate custom UEFI key presence in VM.**
-- Do ssh on Linux VM and run **"mokutil --db"** or **"mokutil --dbx"** to check the corresponding custom UEFI keys in the results.
+### Validate custom UEFI key presence in VM
-## Regions Supported
+Use SSH to connect to the Linux VM, and then run `mokutil --db` or `mokutil --dbx` to check for the corresponding custom UEFI keys in the results.
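For example, to look for the custom certificate enrolled earlier, you can filter the output. The common name `Organization signing key` matches the key pair created in the supplemental information later in this article:

```bash
# List enrolled db keys and search for the custom signing certificate
mokutil --db | grep -i 'Organization signing key'

# List the revocation entries
mokutil --dbx
```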
+
+## Regions supported
| Country | Regions |
|: |: |
To bind and create a Trusted Launch VM, the following steps must be followed.
| United Arab Emirates | UAE North |
| Japan | Japan East |
-## Supplemental Information
+## Supplemental information
> [!IMPORTANT]
> Method to generate base64 public key certificate to insert as custom UEFI db: [Excerpts taken from Chapter 3. Signing a kernel and modules for Secure Boot Red Hat Enterprise Linux 8 | Red Hat Customer Portal](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/signing-a-kernel-and-modules-for-secure-boot_managing-monitoring-and-updating-the-kernel#generating-a-public-and-private-key-pair_signing-a-kernel-and-modules-for-secure-boot)
-**Install dependencies**
+### Installation dependencies
```bash
sudo yum install pesign openssl kernel-devel mokutil keyutils
```
-**Create key pair to sign the kernel module**
+### Create a key pair to sign the kernel module
```bash
sudo efikeygen --dbdir /etc/pki/pesign --self-sign --module --common-name 'CN=Organization signing key' --nickname 'Custom Secure Boot key'
```
-**Export public key to cer file**
+### Export a public key to a .cer file
```bash
sudo certutil -d /etc/pki/pesign -n 'Custom Secure Boot key' -Lr > sb_cert.cer
```
-**Convert to base64 format**
+### Convert to a base64 format
```bash
openssl x509 -inform der -in sb_cert.cer -out sb_cert_base64.cer
```
-**Extract base64 string to use in SIG creation ARM template**
+### Extract a base64 string to use in a SIG creation ARM template
```bash
sed -e '/BEGIN CERTIFICATE/d;/END CERTIFICATE/d' sb_cert_base64.cer
```
+## Method to create Azure Compute Gallery and a corresponding Trusted Launch VM by using the Azure CLI
-## Method to create Azure compute gallery and corresponding TrustedLaunch VM using Azure CLI:
-Example Azure compute gallery template with prefilled entries:
-
+The following example of an Azure Compute Gallery template has prefilled entries.
```json
{
Example Azure compute gallery template with prefilled entries:
}
```
-### Deploy SIG template using az cli
+### Deploy a SIG template by using the Azure CLI
```azurecli-interactive
az deployment group create --resource-group <resourceGroupName> --template-file "<location to template>\SIGWithCustomUEFIKeyExample.json"
```
-### Deploy Trusted Launch VM using Azure compute gallery
+### Deploy a Trusted Launch VM by using the Azure Compute Gallery
```azurecli-interactive
imagDef="/subscriptions/<subscription id>/resourceGroups/<resourcegroup name>/providers/Microsoft.Compute/galleries/customuefigallerytest/images/image_def/versions/1.0.0"
az vm create --resource-group <resourcegroup name> --name <vm name> --image $imagDef --admin-username <username> --generate-ssh-keys --security-type TrustedLaunch
```
-## Useful links:
-1. [Base64 conversion of certificates](https://www.base64encode.org/enc/certificate/)
-2. [X.509 Certificate Public Key in Base64](https://stackoverflow.com/questions/24492981/x-509-certificate-public-key-in-base64)
-3. [UEFI: What is UEFI Secure Boot and how it works?](https://access.redhat.com/articles/5254641)
-4. [Ubuntu: How to sign things for Secure Boot?](https://ubuntu.com/blog/how-to-sign-things-for-secure-boot)
-5. [Redhat: Signing a kernel and modules for Secure Boot](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/signing-a-kernel-and-modules-for-secure-boot_managing-monitoring-and-updating-the-kernel)
+## Related content
+
+- [Base64 conversion of certificates](https://www.base64encode.org/enc/certificate/)
+- [X.509 Certificate Public Key in Base64](https://stackoverflow.com/questions/24492981/x-509-certificate-public-key-in-base64)
+- [UEFI: What is UEFI Secure Boot and how does it work?](https://access.redhat.com/articles/5254641)
+- [Ubuntu: How to sign things for Secure Boot](https://ubuntu.com/blog/how-to-sign-things-for-secure-boot)
+- [Red Hat: Sign a kernel and modules for Secure Boot](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/signing-a-kernel-and-modules-for-secure-boot_managing-monitoring-and-updating-the-kernel)
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Title: Trusted launch for Azure VMs
-description: Learn about trusted launch for Azure virtual machines.
+ Title: Trusted Launch for Azure VMs
+description: Learn about Trusted Launch for Azure virtual machines.
-# Trusted launch for Azure virtual machines
+# Trusted Launch for Azure virtual machines
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Azure offers trusted launch as a seamless way to improve the security of [generation 2](generation-2.md) VMs. Trusted launch protects against advanced and persistent attack techniques. Trusted launch is composed of several, coordinated infrastructure technologies that can be enabled independently. Each technology provides another layer of defense against sophisticated threats.
+Azure offers Trusted Launch as a seamless way to improve the security of [Generation 2](generation-2.md) virtual machines (VMs). Trusted Launch protects against advanced and persistent attack techniques. Trusted Launch is composed of several coordinated infrastructure technologies that can be enabled independently. Each technology provides another layer of defense against sophisticated threats.
> [!IMPORTANT]
>
-> - Trusted Launch is selected as the default state for newly created Azure VMs. If your new VM requires features which are not supported by trusted launch, see the [Trusted Launch FAQs](trusted-launch-faq.md)
-> - Existing [Virtual machines](overview.md) can have trusted launch enabled after being created. For more information, see **[Enable Trusted Launch on existing VMs](trusted-launch-existing-vm.md)**
-> - Existing [Virtual machine scale sets](../virtual-machine-scale-sets/overview.md) can have Trusted launch enabled after being created. For more information, see **[Enable Trusted launch on existing scale sets](trusted-launch-existing-vmss.md)**.
+> - Trusted Launch is selected as the default state for newly created Azure VMs. If your new VM requires features that aren't supported by Trusted Launch, see the [Trusted Launch FAQs](trusted-launch-faq.md).
+> - Existing [VMs](overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing VMs](trusted-launch-existing-vm.md).
+> - Existing [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) can have Trusted Launch enabled after being created. For more information, see [Enable Trusted Launch on existing scale sets](trusted-launch-existing-vmss.md).
## Benefits

-- Securely deploy virtual machines with verified boot loaders, OS kernels, and drivers.
-- Securely protect keys, certificates, and secrets in the virtual machines.
+- Securely deploy VMs with verified boot loaders, operating system (OS) kernels, and drivers.
+- Securely protect keys, certificates, and secrets in the VMs.
- Gain insights and confidence of the entire boot chain's integrity.
-- Ensure workloads are trusted and verifiable.
+- Ensure that workloads are trusted and verifiable.
-## Virtual Machines sizes
+## Virtual machines sizes
| Type | Supported size families | Currently not supported size families | Not supported size families |
|: |: |: |: |
-| [General Purpose](sizes-general.md) |[B-series](sizes-b-series-burstable.md), [DCsv2-series](dcv2-series.md), [DCsv3-series](dcv3-series.md#dcsv3-series), [DCdsv3-series](dcv3-series.md#dcdsv3-series), [Dv4-series](dv4-dsv4-series.md#dv4-series), [Dsv4-series](dv4-dsv4-series.md#dsv4-series), [Dsv3-series](dv3-dsv3-series.md#dsv3-series), [Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Dav4-series](dav4-dasv4-series.md#dav4-series), [Dasv4-series](dav4-dasv4-series.md#dasv4-series), [Ddv4-series](ddv4-ddsv4-series.md#ddv4-series), [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series), [Dv5-series](dv5-dsv5-series.md#dv5-series), [Dsv5-series](dv5-dsv5-series.md#dsv5-series), [Ddv5-series](ddv5-ddsv5-series.md#ddv5-series), [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series), [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series), [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series), [Dlsv5-series](dlsv5-dldsv5-series.md#dlsv5-series), [Dldsv5-series](dlsv5-dldsv5-series.md#dldsv5-series) | [Dpsv5-series](dpsv5-dpdsv5-series.md#dpsv5-series), [Dpdsv5-series](dpsv5-dpdsv5-series.md#dpdsv5-series), [Dplsv5-series](dplsv5-dpldsv5-series.md#dplsv5-series), [Dpldsv5-series](dplsv5-dpldsv5-series.md#dpldsv5-series) | [Av2-series](av2-series.md), [Dv2-series](dv2-dsv2-series.md#dv2-series), [Dv3-series](dv3-dsv3-series.md#dv3-series)
+| [General purpose](sizes-general.md) |[B-series](sizes-b-series-burstable.md), [DCsv2-series](dcv2-series.md), [DCsv3-series](dcv3-series.md#dcsv3-series), [DCdsv3-series](dcv3-series.md#dcdsv3-series), [Dv4-series](dv4-dsv4-series.md#dv4-series), [Dsv4-series](dv4-dsv4-series.md#dsv4-series), [Dsv3-series](dv3-dsv3-series.md#dsv3-series), [Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Dav4-series](dav4-dasv4-series.md#dav4-series), [Dasv4-series](dav4-dasv4-series.md#dasv4-series), [Ddv4-series](ddv4-ddsv4-series.md#ddv4-series), [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series), [Dv5-series](dv5-dsv5-series.md#dv5-series), [Dsv5-series](dv5-dsv5-series.md#dsv5-series), [Ddv5-series](ddv5-ddsv5-series.md#ddv5-series), [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series), [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series), [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series), [Dlsv5-series](dlsv5-dldsv5-series.md#dlsv5-series), [Dldsv5-series](dlsv5-dldsv5-series.md#dldsv5-series) | [Dpsv5-series](dpsv5-dpdsv5-series.md#dpsv5-series), [Dpdsv5-series](dpsv5-dpdsv5-series.md#dpdsv5-series), [Dplsv5-series](dplsv5-dpldsv5-series.md#dplsv5-series), [Dpldsv5-series](dplsv5-dpldsv5-series.md#dpldsv5-series) | [Av2-series](av2-series.md), [Dv2-series](dv2-dsv2-series.md#dv2-series), [Dv3-series](dv3-dsv3-series.md#dv3-series)
| [Compute optimized](sizes-compute.md) |[FX-series](fx-series.md), [Fsv2-series](fsv2-series.md) | All sizes supported. |
-| [Memory optimized](sizes-memory.md) |[Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Esv3-series](ev3-esv3-series.md#esv3-series), [Ev4-series](ev4-esv4-series.md#ev4-series), [Esv4-series](ev4-esv4-series.md#esv4-series), [Edv4-series](edv4-edsv4-series.md#edv4-series), [Edsv4-series](edv4-edsv4-series.md#edsv4-series), [Eav4-series](eav4-easv4-series.md#eav4-series), [Easv4-series](eav4-easv4-series.md#easv4-series), [Easv5-series](easv5-eadsv5-series.md#easv5-series), [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series), [Ebsv5-series](ebdsv5-ebsv5-series.md#ebsv5-series),[Ebdsv5-series](ebdsv5-ebsv5-series.md#ebdsv5-series) ,[Edv5-series](edv5-edsv5-series.md#edv5-series), [Edsv5-series](edv5-edsv5-series.md#edsv5-series) | [Epsv5-series](epsv5-epdsv5-series.md#epsv5-series), [Epdsv5-series](epsv5-epdsv5-series.md#epdsv5-series), [M-series](m-series.md), [Msv2-series](msv2-mdsv2-series.md#msv2-medium-memory-diskless), [Mdsv2 Medium Memory series](msv2-mdsv2-series.md#mdsv2-medium-memory-with-disk), [Mv2-series](mv2-series.md) |[Ev3-series](ev3-esv3-series.md#ev3-series)
+| [Memory optimized](sizes-memory.md) |[Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Esv3-series](ev3-esv3-series.md#esv3-series), [Ev4-series](ev4-esv4-series.md#ev4-series), [Esv4-series](ev4-esv4-series.md#esv4-series), [Edv4-series](edv4-edsv4-series.md#edv4-series), [Edsv4-series](edv4-edsv4-series.md#edsv4-series), [Eav4-series](eav4-easv4-series.md#eav4-series), [Easv4-series](eav4-easv4-series.md#easv4-series), [Easv5-series](easv5-eadsv5-series.md#easv5-series), [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series), [Ebsv5-series](ebdsv5-ebsv5-series.md#ebsv5-series),[Ebdsv5-series](ebdsv5-ebsv5-series.md#ebdsv5-series), [Edv5-series](edv5-edsv5-series.md#edv5-series), [Edsv5-series](edv5-edsv5-series.md#edsv5-series) | [Epsv5-series](epsv5-epdsv5-series.md#epsv5-series), [Epdsv5-series](epsv5-epdsv5-series.md#epdsv5-series), [M-series](m-series.md), [Msv2-series](msv2-mdsv2-series.md#msv2-medium-memory-diskless), [Mdsv2 Medium Memory series](msv2-mdsv2-series.md#mdsv2-medium-memory-with-disk), [Mv2-series](mv2-series.md) |[Ev3-series](ev3-esv3-series.md#ev3-series)
| [Storage optimized](sizes-storage.md) | [Lsv2-series](lsv2-series.md), [Lsv3-series](lsv3-series.md), [Lasv3-series](lasv3-series.md) | All sizes supported. |
| [GPU](sizes-gpu.md) |[NCv2-series](ncv2-series.md), [NCv3-series](ncv3-series.md), [NCasT4_v3-series](nct4-v3-series.md#ncast4_v3-series), [NVv3-series](nvv3-series.md), [NVv4-series](nvv4-series.md), [NDv2-series](ndv2-series.md), [NC_A100_v4-series](nc-a100-v4-series.md#nc-a100-v4-series), [NVadsA10 v5-series](nva10v5-series.md#nvadsa10-v5-series) | [NDasrA100_v4-series](nda100-v4-series.md), [NDm_A100_v4-series](ndm-a100-v4-series.md) | [NC-series](nc-series.md), [NV-series](nv-series.md), [NP-series](np-series.md) |
| [High Performance Compute](sizes-hpc.md) |[HB-series](hb-series.md), [HBv2-series](hbv2-series.md), [HBv3-series](hbv3-series.md), [HBv4-series](hbv4-series.md), [HC-series](hc-series.md), [HX-series](hx-series.md) | All sizes supported. |

> [!NOTE]
-> - Installation of the **CUDA & GRID drivers on Secure Boot enabled Windows VMs** does not require any extra steps.
-> - Installation of the **CUDA driver on Secure Boot enabled Ubuntu VMs** requires extra steps documented at [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md#install-cuda-drivers-on-n-series-vms). Secure Boot should be disabled for installing CUDA Drivers on other Linux VMs.
-> - Installation of the **GRID driver** requires secure boot to be disabled for Linux VMs.
-> - **Not Supported** size families do not support [generation 2](generation-2.md) VMs. Change VM Size to equivalent **Supported size families** for enabling Trusted Launch.
+> - Installation of the *CUDA & GRID drivers on Secure Boot-enabled Windows VMs* doesn't require any extra steps.
+> - Installation of the *CUDA driver on Secure Boot-enabled Ubuntu VMs* requires extra steps. For more information, see [Install NVIDIA GPU drivers on N-series VMs running Linux](./linux/n-series-driver-setup.md#install-cuda-drivers-on-n-series-vms). Secure Boot should be disabled for installing CUDA drivers on other Linux VMs.
+> - Installation of the *GRID driver* requires Secure Boot to be disabled for Linux VMs.
+> - *Not supported* size families don't support [Generation 2](generation-2.md) VMs. Change the VM size to equivalent *supported size families* for enabling Trusted Launch.
## Operating systems supported
Azure offers trusted launch as a seamless way to improve the security of [genera
| Windows Server |2016, 2019, 2022 &#42; |
| Windows Server (Azure Edition) | 2022 |
-&#42; Variations of this operating system are supported.
+&#42; Variations of this OS are supported.
-## Additional information
+## More information
**Regions**:
Azure offers trusted launch as a seamless way to improve the security of [genera
- All Azure China regions

**Pricing**:
-Trusted launch does not increase existing VM pricing costs.
+Trusted Launch doesn't increase existing VM pricing costs.
## Unsupported features
-> [!NOTE]
-> The following Virtual Machine features are currently not supported with Trusted Launch.
+Currently, the following VM features aren't supported with Trusted Launch:
+
+- [Azure Site Recovery](../site-recovery/concepts-trusted-vm.md) (currently in preview).
+- [Managed Image](capture-image-resource.yml) (customers are encouraged to use [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images)).
+- Nested virtualization (v5 VM size families supported).
-- [Azure Site Recovery](../site-recovery/concepts-trusted-vm.md) (**Currently in Preview**)
-- [Managed Image](capture-image-resource.yml) (Customers are encouraged to use [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images))
-- Nested Virtualization (v5 VM size families supported)
+## Secure Boot
-## Secure boot
+At the root of Trusted Launch is Secure Boot for your VM. Secure Boot, which is implemented in platform firmware, protects against the installation of malware-based rootkits and boot kits. Secure Boot works to ensure that only signed operating systems and drivers can boot. It establishes a "root of trust" for the software stack on your VM.
-At the root of trusted launch is Secure Boot for your VM. Secure Boot, which is implemented in platform firmware, protects against the installation of malware-based rootkits and boot kits. Secure Boot works to ensure that only signed operating systems and drivers can boot. It establishes a "root of trust" for the software stack on your VM. With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) require trusted publishers signing. Both Windows and select Linux distributions support Secure Boot. If Secure Boot fails to authenticate that the image is signed by a trusted publisher, the VM fails to boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot).
+With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) require trusted publishers signing. Both Windows and select Linux distributions support Secure Boot. If Secure Boot fails to authenticate that the image is signed by a trusted publisher, the VM fails to boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot).
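If Secure Boot blocks a boot component, the serial console output is the first place to look. This sketch assumes that boot diagnostics is enabled on the VM:

```azurecli-interactive
# Retrieve the serial console log of a VM that fails Secure Boot validation
az vm boot-diagnostics get-boot-log \
    --resource-group MyResourceGroup --name myTrustedLaunchVM
```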
## vTPM
-Trusted launch also introduces vTPM for Azure VMs. vTPM is a virtualized version of a hardware [Trusted Platform Module](/windows/security/information-protection/tpm/trusted-platform-module-overview), compliant with the TPM2.0 spec. It serves as a dedicated secure vault for keys and measurements. Trusted launch provides your VM with its own dedicated TPM instance, running in a secure environment outside the reach of any VM. The vTPM enables [attestation](/windows/security/information-protection/tpm/tpm-fundamentals#measured-boot-with-support-for-attestation) by measuring the entire boot chain of your VM (UEFI, OS, system, and drivers).
+Trusted Launch also introduces virtual Trusted Platform Module (vTPM) for Azure VMs. This virtualized version of a hardware [Trusted Platform Module](/windows/security/information-protection/tpm/trusted-platform-module-overview) is compliant with the TPM2.0 spec. It serves as a dedicated secure vault for keys and measurements.
+
+Trusted Launch provides your VM with its own dedicated TPM instance that runs in a secure environment outside the reach of any VM. The vTPM enables [attestation](/windows/security/information-protection/tpm/tpm-fundamentals#measured-boot-with-support-for-attestation) by measuring the entire boot chain of your VM (UEFI, OS, system, and drivers).
-Trusted launch uses the vTPM to perform remote attestation through the cloud. Attestations enable platform health checks and for making trust-based decisions. As a health check, trusted launch can cryptographically certify that your VM booted correctly. If the process fails, possibly because your VM is running an unauthorized component, Microsoft Defender for Cloud issues integrity alerts. The alerts include details on which components failed to pass integrity checks.
+Trusted Launch uses the vTPM to perform remote attestation through the cloud. Attestations enable platform health checks and are used for making trust-based decisions. As a health check, Trusted Launch can cryptographically certify that your VM booted correctly.
+
+If the process fails, possibly because your VM is running an unauthorized component, Microsoft Defender for Cloud issues integrity alerts. The alerts include details on which components failed to pass integrity checks.
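To confirm how these settings surface on a deployed VM, the following sketch queries the VM's UEFI security settings. The resource names are placeholders, and the output shown is illustrative:

```azurecli
# Minimal sketch (assumed placeholder names): check whether Secure Boot and
# vTPM are enabled on an existing VM.
az vm show \
  --resource-group myResourceGroup \
  --name myTrustedLaunchVM \
  --query "securityProfile.uefiSettings"

# Illustrative output:
# {
#   "secureBootEnabled": true,
#   "vTpmEnabled": true
# }
```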
## Virtualization-based security
-[Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) uses the hypervisor to create a secure and isolated region of memory. Windows uses these regions to run various security solutions with increased protection against vulnerabilities and malicious exploits. Trusted launch lets you enable Hypervisor Code Integrity (HVCI) and Windows Defender Credential Guard.
+[Virtualization-based security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) uses the hypervisor to create a secure and isolated region of memory. Windows uses these regions to run various security solutions with increased protection against vulnerabilities and malicious exploits. Trusted Launch lets you enable hypervisor code integrity (HVCI) and Windows Defender Credential Guard.
-HVCI is a powerful system mitigation that protects Windows kernel-mode processes against injection and execution of malicious or unverified code. It checks kernel mode drivers and binaries before they run, preventing unsigned files from loading into memory. Checks ensure executable code can't be modified once it's allowed to load. For more information about VBS and HVCI, see [Virtualization Based Security (VBS) and Hypervisor Enforced Code Integrity (HVCI)](https://techcommunity.microsoft.com/t5/windows-insider-program/virtualization-based-security-vbs-and-hypervisor-enforced-code/m-p/240571).
+HVCI is a powerful system mitigation that protects Windows kernel-mode processes against injection and execution of malicious or unverified code. It checks kernel mode drivers and binaries before they run, preventing unsigned files from loading into memory. Checks ensure that executable code can't be modified after it's allowed to load. For more information about VBS and HVCI, see [Virtualization-based security and hypervisor-enforced code integrity](https://techcommunity.microsoft.com/t5/windows-insider-program/virtualization-based-security-vbs-and-hypervisor-enforced-code/m-p/240571).
-With trusted launch and VBS, you can enable Windows Defender Credential Guard. Credential Guard isolates and protects secrets so that only privileged system software can access them. It helps prevent unauthorized access to secrets and credential theft attacks, like Pass-the-Hash (PtH) attacks. For more information, see [Credential Guard](/windows/security/identity-protection/credential-guard/credential-guard).
+With Trusted Launch and VBS, you can enable Windows Defender Credential Guard. Credential Guard isolates and protects secrets so that only privileged system software can access them. It helps prevent unauthorized access to secrets and credential theft attacks, like Pass-the-Hash attacks. For more information, see [Credential Guard](/windows/security/identity-protection/credential-guard/credential-guard).
## Microsoft Defender for Cloud integration
-Trusted launch is integrated with Microsoft Defender for Cloud to ensure your VMs are properly configured. Microsoft Defender for Cloud continually assesses compatible VMs and issue relevant recommendations.
+Trusted Launch is integrated with Defender for Cloud to ensure that your VMs are properly configured. Defender for Cloud continually assesses compatible VMs and issues relevant recommendations:
-- **Recommendation to enable Secure Boot** - Secure Boot recommendation only applies for VMs that support trusted launch. Microsoft Defender for Cloud identifies VMs that can enable Secure Boot, but have it disabled. It issues a low severity recommendation to enable it.
-- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Microsoft Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Microsoft Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it issues a low severity recommendation to enable it.
-- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Microsoft Defender for Cloud issues low severity recommendations to install the guest attestation extension on it. This extension allows Microsoft Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
-- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Microsoft Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Microsoft Defender for Cloud issues an assessment, indicating the status of remote attestation.
+- **Recommendation to enable Secure Boot**: The Secure Boot recommendation only applies for VMs that support Trusted Launch. Defender for Cloud identifies VMs that can enable Secure Boot but have it disabled. It issues a low-severity recommendation to enable it.
+- **Recommendation to enable vTPM**: If your VM has vTPM enabled, Defender for Cloud can use it to perform guest attestation and identify advanced threat patterns. If Defender for Cloud identifies VMs that support Trusted Launch and have vTPM disabled, it issues a low-severity recommendation to enable it.
+- **Recommendation to install guest attestation extension**: If your VM has Secure Boot and vTPM enabled but it doesn't have the Guest Attestation extension installed, Defender for Cloud issues low-severity recommendations to install the Guest Attestation extension on it. This extension allows Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
+- **Attestation health assessment or boot integrity monitoring**: If your VM has Secure Boot and vTPM enabled and the Attestation extension installed, Defender for Cloud can remotely validate that your VM booted in a healthy way. This practice is known as boot integrity monitoring. Defender for Cloud issues an assessment that indicates the status of remote attestation.
-If your VMs are properly set up with trusted launch, Microsoft Defender for Cloud can detect and alert you of VM health problems.
+ If your VMs are properly set up with Trusted Launch, Defender for Cloud can detect and alert you to VM health problems.
-- **Alert for VM attestation failure:** Microsoft Defender for Cloud periodically performs attestation on your VMs. The attestation also happens after your VM boots. If the attestation fails, it triggers a medium severity alert.
+- **Alert for VM attestation failure**: Defender for Cloud periodically performs attestation on your VMs. The attestation also happens after your VM boots. If the attestation fails, it triggers a medium-severity alert.
VM attestation can fail for the following reasons:

- The attested information, which includes a boot log, deviates from a trusted baseline. Any deviation can indicate that untrusted modules have been loaded, and the OS could be compromised.
- The attestation quote couldn't be verified to originate from the vTPM of the attested VM. An unverified origin can indicate that malware is present and could be intercepting traffic to the vTPM.

> [!NOTE]
- > Alerts are available for VMs with vTPM enabled and the Attestation extension installed. Secure Boot must be enabled for attestation to pass. Attestation fails if Secure Boot is disabled. If you must disable Secure Boot, you can suppress this alert to avoid false positives.
+ > Alerts are available for VMs with vTPM enabled and the Attestation extension installed. Secure Boot must be enabled for attestation to pass. Attestation fails if Secure Boot is disabled. If you must disable Secure Boot, you can suppress this alert to avoid false positives.
+
+- **Alert for untrusted Linux kernel module**: For Trusted Launch with Secure Boot enabled, it's possible for a VM to boot even if a kernel driver fails validation and is prohibited from loading. If this scenario happens, Defender for Cloud issues low-severity alerts. While there's no immediate threat, because the untrusted driver hasn't been loaded, these events should be investigated. Ask yourself:
-- **Alert for Untrusted Linux Kernel module:** For trusted launch with secure boot enabled, it's possible for a VM to boot even if a kernel driver fails validation and is prohibited from loading. If this happens, Microsoft Defender for Cloud issues low severity alerts. While there's no immediate threat, because the untrusted driver hasn't been loaded, these events should be investigated.
- - Which kernel driver failed? Am I familiar with this driver and expect it to be loaded?
- - Is this the exact version of the driver I'm expecting? Are the driver binaries intact? If this is a third party driver, did the vendor pass the OS compliance tests to get it signed?
+ - Which kernel driver failed? Am I familiar with this driver and do I expect it to load?
+ - Is this the exact version of the driver I'm expecting? Are the driver binaries intact? If this is a third-party driver, did the vendor pass the OS compliance tests to get it signed?
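As an illustration of the extension install step these recommendations refer to, here's a hedged Azure CLI sketch. The extension name and publisher below match the Guest Attestation extension's commonly published identifiers for Linux (Windows VMs use a `WindowsAttestation` publisher), but verify them for your environment; the resource names are placeholders:

```azurecli
# Minimal sketch (assumed identifiers and placeholder names): install the
# Guest Attestation extension so Defender for Cloud can attest boot integrity.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myTrustedLaunchVM \
  --name GuestAttestation \
  --publisher Microsoft.Azure.Security.LinuxAttestation
```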
-## Next steps
+## Related content
-Deploy a [trusted launch VM](trusted-launch-portal.md).
+Deploy a [Trusted Launch VM](trusted-launch-portal.md).
virtual-network Configure Public Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md
In this section, you add a public IP configuration to Azure Firewall. For more i
## Advanced configuration
-This example is a simple deployment of Azure Firewall. For advanced configuration and setup, see [Tutorial: Deploy and configure Azure Firewall and policy by using the Azure portal](../../firewall/tutorial-firewall-deploy-portal-policy.md). You can associate an Azure firewall with a network address translation (NAT) gateway to extend the extensibility of source network address translation (SNAT). A NAT gateway can be used to provide outbound connectivity associated with the firewall. With this configuration, all inbound traffic uses the public IP address or addresses of the NAT gateway. Traffic egresses through the Azure firewall public IP address or addresses. For more information, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md).
+This example is a simple deployment of Azure Firewall. For advanced configuration and setup, see [Tutorial: Deploy and configure Azure Firewall and policy by using the Azure portal](../../firewall/tutorial-firewall-deploy-portal-policy.md). You can associate an Azure firewall with a network address translation (NAT) gateway to scale source network address translation (SNAT) capacity. A NAT gateway can be used to provide outbound connectivity associated with the firewall. With this configuration, all outbound traffic uses the public IP address or addresses of the NAT gateway. For more information, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md).
> [!NOTE]
> Azure Firewall uses the Standard SKU load balancer. Protocols other than Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) in network filter rules are unsupported for SNAT to the public IP of the firewall.
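For illustration, a minimal CLI sketch of the NAT gateway association described above. It assumes the firewall subnet uses the required name `AzureFirewallSubnet` and that the NAT gateway already exists; all other names are placeholders:

```azurecli
# Minimal sketch (assumed placeholder names): attach an existing NAT gateway
# to the firewall's subnet so outbound traffic uses the NAT gateway's
# public IP addresses.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name AzureFirewallSubnet \
  --nat-gateway myNATGateway
```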
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
description: Learn how to create, change, or delete a virtual network peering. W
Previously updated : 03/13/2024 Last updated : 06/27/2024
deselect the **Allow traffic to remote virtual network** setting if you want to
:::image type="content" source="./media/virtual-network-manage-peering/select-peering.png" alt-text="Screenshot of select a peering to delete from the virtual network.":::
-1. On the right side of the peering you want to delete, select the **...** and then select **Delete**.
+1. Select the box next to the peering you want to delete, and then select **Delete**.
:::image type="content" source="./media/virtual-network-manage-peering/delete-peering.png" alt-text="Screenshot of deleting a peering from the virtual network.":::
-1. Select **Yes** to confirm that you want to delete the peering and the corresponding peer.
+1. In **Delete Peerings**, enter **delete** in the confirmation box, and then select **Delete**.
- :::image type="content" source="./media/virtual-network-manage-peering/confirm-deletion.png" alt-text="Screenshot of peering delete confirmation.":::
+ :::image type="content" source="./media/virtual-network-manage-peering/confirm-deletion.png" alt-text="Screenshot of peering delete confirmation entry box.":::
> [!NOTE]
- > When you delete a virtual network peering from a virtual network, the peering from the remote virtual network will also be deleted.
+ > When you delete a virtual network peering from a virtual network, the peering from the remote virtual network will also be deleted.
+
+1. Select **Delete** to confirm the deletion in **Delete confirmation**.
+
+ :::image type="content" source="./media/virtual-network-manage-peering/confirm-deletion-2.png" alt-text="Screenshot of peering delete confirmation.":::
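For comparison with the portal steps above, a minimal CLI sketch of the same operation. Unlike the portal flow, deleting a peering from the CLI typically removes only the local side, so you'd repeat the command against the remote virtual network; all names are placeholders:

```azurecli
# Minimal sketch (assumed placeholder names): delete the local side of a
# virtual network peering.
az network vnet peering delete \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myVNetToRemoteVNet
```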
# [**PowerShell**](#tab/peering-powershell)
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Local circuits can only be connected to ExpressRoute gateways in their correspon
### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?

> [!NOTE]
-> As of January 2024, the Virtual WAN team has started upgrading virtual hubs to the latest version. If you did not upgrade your hub but now notice that your hub's Router Version says "latest", then your hub was upgraded by the Virtual WAN team.
+> As of July 1, 2024, hubs on the old version will be retired in phases and will stop functioning as expected.
> Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets-based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets-based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button isn't visible, open a support case.
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Please plan a maintenance window of at least 30 minutes, as downtime can last up to 30 minutes in the worst-case scenario. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
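As a quick pre-check before starting the upgrade, the following sketch queries provisioning states with the Azure CLI. Resource names are placeholders, the `az network vhub` commands may require the `virtual-wan` CLI extension, and the exact output shape can vary by CLI version:

```azurecli
# Minimal sketch (assumed placeholder names): confirm the hub and its
# connections report a "Succeeded" provisioning state before upgrading.
az network vhub show \
  --resource-group myResourceGroup \
  --name myVirtualHub \
  --query "provisioningState"

az network vhub connection list \
  --resource-group myResourceGroup \
  --vhub-name myVirtualHub \
  --query "[].provisioningState"
```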
There are several things to note with the virtual hub router upgrade: