Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/language-support.md | -> Azure AI Content Safety features not listed in this article, such as Prompt Shields, Protected material detection, Groundedness detection, and Custom categories (rapid) only support English. --## Text moderation --The Azure AI Content Safety text moderation feature supports many languages, but it has been specially trained and tested on a smaller set of languages. +> Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, Portuguese. However, the service can work in many other languages; the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. > [!NOTE] > **Language auto-detection** > > You don't need to specify a language code for text moderation. The service automatically detects your input language. -| Language name | Language code | Text moderation | Specially trained | +| Language name | Language code | Supported Languages | Specially trained languages| |--||--|--| | Afrikaans | `af` | ✔️ | | | Albanian | `sq` | ✔️ | | |
ai-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md | The following regions are supported for Speech service features such as speech t | Asia Pacific | Japan West | `japanwest` <sup>3</sup> | | Asia Pacific | Korea Central | `koreacentral` <sup>2</sup> | | Canada | Canada Central | `canadacentral` <sup>1</sup> |-| Europe | North Europe | `northeurope` <sup>1,2,4,5,7</sup> | +| Europe | North Europe | `northeurope` <sup>1,2,4,5,7,10</sup> | | Europe | West Europe | `westeurope` <sup>1,2,4,5,7,9,10</sup> | | Europe | France Central | `francecentral` | | Europe | Germany West Central | `germanywestcentral` | | Europe | Norway East | `norwayeast` |-| Europe | Sweden Central | `swedencentral`<sup>8</sup> | +| Europe | Sweden Central | `swedencentral`<sup>8,10</sup> | | Europe | Switzerland North | `switzerlandnorth` <sup>6</sup> | | Europe | Switzerland West | `switzerlandwest` <sup>3</sup> | | Europe | UK South | `uksouth` <sup>1,2,4,7</sup> | The following regions are supported for Speech service features such as speech t | US | East US | `eastus` <sup>1,2,4,5,7,9,11</sup> | | US | East US 2 | `eastus2` <sup>1,2,4,5</sup> | | US | North Central US | `northcentralus` <sup>4,6</sup> |-| US | South Central US | `southcentralus` <sup>1,2,4,5,6,7</sup> | +| US | South Central US | `southcentralus` <sup>1,2,4,5,6,7,10</sup> | | US | West Central US | `westcentralus` <sup>3,5</sup> | | US | West US | `westus` <sup>2,5</sup> | | US | West US 2 | `westus2` <sup>1,2,4,5,7,10</sup> | |
ai-services | Avatar Gestures With Ssml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/avatar-gestures-with-ssml.md | In this example, the avatar will start waving their hand at the left after the w :::image type="content" source="./media/gesture.png" alt-text="Screenshot of displaying the prebuilt avatar waving their hand at the left." lightbox="./media/gesture.png"::: -## Supported pre-built avatar characters, styles, and gestures +## Supported prebuilt avatar characters, styles, and gestures The full list of prebuilt avatar supported gestures provided here can also be found in the text to speech avatar portal. | Characters | Styles | Gestures | ||-|--|+| Harry | business | 123<br>calm-down<br>come-on<br>five-star-reviews<br>good<br>hello<br>introduce<br>invite<br>thanks<br>welcome | +| Harry | casual | 123<br>come-on<br>five-star-reviews<br>gong-xi-fa-cai<br>good<br>happy-new-year<br>hello<br>please<br>welcome | +| Harry | youthful | 123<br>come-on<br>down<br>five-star<br>good<br>hello<br>invite<br>show-right-up-down<br>welcome | +| Jeff | business | 123<br>come-on<br>five-star-reviews<br>hands-up<br>here<br>meddle<br>please2<br>show<br>silence<br>thanks | +| Jeff | formal | 123<br>come-on<br>five-star-reviews<br>lift<br>please<br>silence<br>thanks<br>very-good | | Lisa| casual-sitting | numeric1-left-1<br>numeric2-left-1<br>numeric3-left-1<br>thumbsup-left-1<br>show-front-1<br>show-front-2<br>show-front-3<br>show-front-4<br>show-front-5<br>think-twice-1<br>show-front-6<br>show-front-7<br>show-front-8<br>show-front-9 | | Lisa | graceful-sitting | wave-left-1<br>wave-left-2<br>thumbsup-left<br>show-left-1<br>show-left-2<br>show-left-3<br>show-left-4<br>show-left-5<br>show-right-1<br>show-right-2<br>show-right-3<br>show-right-4<br>show-right-5 | | Lisa | graceful-standing | | | Lisa | technical-sitting | wave-left-1<br>wave-left-2<br>show-left-1<br>show-left-2<br>point-left-1<br>point-left-2<br>point-left-3<br>point-left-4<br>point-left-5<br>point-left-6<br>show-right-1<br>show-right-2<br>show-right-3<br>point-right-1<br>point-right-2<br>point-right-3<br>point-right-4<br>point-right-5<br>point-right-6 |-| Lisa | technical-standing | | +| Lisa | technical-standing | +| Lori | casual | 123-left<br>a-little<br>beg<br>calm-down<br>come-on<br>five-star-reviews<br>good<br>hello<br>open<br>please<br>thanks | +| Lori | graceful | 123-left<br>applaud<br>come-on<br>introduce<br>nod<br>please<br>show-left<br>show-right<br>thanks<br>welcome | +| Lori | formal | 123<br>come-on<br>come-on-left<br>down<br>five-star<br>good<br>hands-triangle<br>hands-up<br>hi<br>hopeful<br>thanks | +| Max | business | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-right<br>good-01<br>good-02<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>lower-left<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>push-forward<br>raise-ones-hand<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>thanks<br>the-front<br>top-middle-and-bottom-left<br>top-middle-and-bottom-right<br>upper-left<br>upper-right<br>welcome | +| Max | casual | 
a-little-bit<br>applaud<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>good-1<br>good-2<br>hello<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>introduction-to-products-4<br>left<br>length<br>nodding<br>number-one<br>press-both-hands-down<br>raise-ones-hand<br>right<br>right-front<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>thanks<br>the-front<br>upper-left<br>upper-right<br>welcome | +| Max | formal | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>lower-left<br>lower-right<br>press-both-hands-down<br>push-forward<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>the-front<br>top-middle-and-bottom-right<br>upper-left<br>upper-right | +| Meg | formal | a-little-bit<br>click-the-link<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>hands-forward<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>the-front<br>upper-left<br>upper-right | +| Meg | casual | a-little-bit<br>click-the-link<br>cross-hand<br>display-number<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>handclap<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>length<br>lower-left<br>lower-right<br>number-one<br>press-both-hands-down<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-right-to-left<br>slide-to-the-left<br>spread-hands<br>the-front<br>top-middle-and-bottom-left<br>top-middle-and-bottom-right<br>upper-left<br>upper-right | +| Meg | business | a-little-bit<br>encourage-1<br>encourage-2<br>five-star-praise<br>front-left<br>front-right<br>good-1<br>good-2<br>introduction-to-products-1<br>introduction-to-products-2<br>introduction-to-products-3<br>left<br>length<br>number-one<br>press-both-hands-down-1<br>press-both-hands-down-2<br>raise-ones-hand<br>right<br>say-hi<br>shrug-ones-shoulders<br>slide-from-left-to-right<br>slide-to-the-left<br>spread-hands<br>thanks<br>the-front<br>upper-left | Only the `casual-sitting` style is supported via the real-time text to speech API. Gestures are only supported with the batch synthesis API and aren't supported via the real-time API. |
ai-services | Batch Synthesis Avatar Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar-properties.md | The following table describes the avatar properties. | Property | Description | |||-| avatarConfig.talkingAvatarCharacter | The character name of the talking avatar.<br/><br/>The supported avatar characters can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).<br/><br/>This property is required.| -| avatarConfig.talkingAvatarStyle | The style name of the talking avatar.<br/><br/>The supported avatar styles can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).<br/><br/>This property is required for prebuilt avatar, and optional for customized avatar.| +| avatarConfig.talkingAvatarCharacter | The character name of the talking avatar.<br/><br/>The supported avatar characters can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).<br/><br/>This property is required.| +| avatarConfig.talkingAvatarStyle | The style name of the talking avatar.<br/><br/>The supported avatar styles can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures).<br/><br/>This property is required for prebuilt avatar, and optional for customized avatar.| | avatarConfig.customized | A bool value indicating whether the avatar to be used is customized avatar or not. True for customized avatar, and false for prebuilt avatar.<br/><br/>This property is optional, and the default value is `false`.| | avatarConfig.videoFormat | The format for output video file, could be mp4 or webm.<br/><br/>The `webm` format is required for transparent background.<br/><br/>This property is optional, and the default value is mp4.| | avatarConfig.videoCodec | The codec for output video, could be h264, hevc or vp9.<br/><br/>Vp9 is required for transparent background. The synthesis speed will be slower with vp9 codec, as vp9 encoding is slower.<br/><br/>This property is optional, and the default value is hevc.| |
ai-services | Batch Synthesis Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar.md | To submit a batch synthesis request, construct the HTTP POST request body follow - Set the required `inputKind` property. - If the `inputKind` property is set to `PlainText`, you must also set the `voice` property in the `synthesisConfig`. In the example below, the `inputKind` is set to `SSML`, so the `speechSynthesis` isn't set. - Set the required `SynthesisId` property. Choose a unique `SynthesisId` for the same speech resource. The `SynthesisId` can be a string of 3 to 64 characters, including letters, numbers, '-', or '_', with the condition that it must start and end with a letter or number.-- Set the required `talkingAvatarCharacter` and `talkingAvatarStyle` properties. You can find supported avatar characters and styles [here](./avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).+- Set the required `talkingAvatarCharacter` and `talkingAvatarStyle` properties. You can find supported avatar characters and styles [here](./avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures). - Optionally, you can set the `videoFormat`, `backgroundColor`, and other properties. For more information, see [batch synthesis properties](batch-synthesis-avatar-properties.md). > [!NOTE] |
ai-services | Real Time Synthesis Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md | The default voice is the first voice returned per locale from the [voice list AP ## Select avatar character and style -The supported avatar characters and styles can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures). +The supported avatar characters and styles can be found [here](avatar-gestures-with-ssml.md#supported-prebuilt-avatar-characters-styles-and-gestures). The following code snippet shows how to set avatar character and style: |
ai-services | What Is Text To Speech Avatar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md | Text to speech avatar converts text into a digital video of a photorealistic hum With text to speech avatar's advanced neural network models, the feature empowers users to deliver life-like and high-quality synthetic talking avatar videos for various applications while adhering to [responsible AI practices](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context). -> [!NOTE] -> The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia. - Azure AI text to speech avatar feature capabilities include: - Converts text into a digital video of a photorealistic human speaking with natural-sounding voices powered by Azure AI text to speech. Sample code for text to speech avatar is available on [GitHub](https://github.co - When utilizing the text-to-speech avatar feature, charges will be incurred based on the minutes of video output. However, with the real-time avatar, charges are based on the minutes of avatar activation, irrespective of whether the avatar is actively speaking or remaining silent. To optimize costs for real-time avatar usage, refer to the provided tips in the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample) (search "Use Local Video for Idle"). - Throughout an avatar real-time session or batch content creation, the text-to-speech, speech-to-text, Azure OpenAI, or other Azure services are charged separately.-- For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that avatar pricing will only be visible for service regions where the feature is available, including West US 2, West Europe, and Southeast Asia.+- For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that avatar pricing will only be visible for service regions where the feature is available, including Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, and West US 2. ## Available locations -The text to speech avatar feature is only available in the following service regions: West US 2, West Europe, and Southeast Asia. +The text to speech avatar feature is only available in the following service regions: Southeast Asia, North Europe, West Europe, Sweden Central, South Central US, and West US 2. ### Responsible AI |
ai-services | Install Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/install-run.md | In this article, learn how to install and run the Translator container online wi * **🆕 Text Transliteration**. Convert text from one language script or writing system to another language script or writing system in real time. For more information, *see* [Container: transliterate text](transliterate-text-parameters.md). -* **🆕 Document translation (preview)**. Synchronously translate documents while preserving structure and format in real time. For more information, *see* [Container: translate documents](translate-document-parameters.md). +* **🆕 Document translation**. Synchronously translate documents while preserving structure and format in real time. For more information, *see* [Container: translate documents](translate-document-parameters.md). ## Prerequisites docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \ mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest ``` -The above command: +The Docker command: * Creates a running Translator container from a downloaded container image. * Allocates 12 gigabytes (GB) of memory and four CPU cores. |
ai-services | Translate Document Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translate-document-parameters.md | -# Container: Translate Documents (preview) --> [!IMPORTANT] -> -> * Azure AI Translator public preview releases provide early access to features that are in active development. -> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback. +# Container: Translate Documents **Translate document with source language specified**. -curl -i -X POST "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview" -F "document=@C:\Test\test-file.md;type=text/markdown" -o "C:\translation\translated-file.md" +curl -i -X POST "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2024-05-01" -F "document=@C:\Test\test-file.md;type=text/markdown" -o "C:\translation\translated-file.md" ``` ## Synchronous request headers and parameters For this project, you need a source document to translate. You can download our Here's an example cURL HTTP request using localhost:5000: ```bash-curl -v "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=es&api-version=2023-11-01-preview" -F "document=@document-translation-sample-docx" -o "C:\translation\translated-file.md" +curl -v "http://localhost:5000/translator/document:translate?sourceLanguage=en&targetLanguage=es&api-version=2024-05-01" -F "document=@document-translation-sample-docx" -o "C:\translation\translated-file.md" ``` ***Upon successful completion***: |
ai-services | Client Library Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/client-library-sdks.md | Document Translation is a cloud-based feature of the [Azure AI Translator](../.. > * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource. > * Document Translation is supported in paid tiers. The Language Studio supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Azure AI services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). > * Document Translation public preview releases provide early access to features that are in active development. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.-> * The public preview version of Document Translation client libraries default to REST API version [**2024-05-01**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true). +> * The public preview version of Document Translation client libraries default to REST API version **2024-05-01**. ## Prerequisites |
ai-services | Get Documents Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-documents-status.md | |
ai-services | Rest Api Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/rest-api-guide.md | Document Translation is a cloud-based feature of the Azure AI Translator service | Request|Method| Description|API path| ||:-|-|--| |***Single*** |***Synchronous***|***Document***|***Translation***|-|[**Translate document**](translate-document.md)|POST|Synchronously translate a single document.|`{document-translation-endpoint}/translator/document:translate?sourceLanguage={source language}&targetLanguage={target language}&api-version=2023-11-01-preview" -H "Ocp-Apim-Subscription-Key:{your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -F "glossary={path-to-your-glossary-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"`| +|[**Translate document**](translate-document.md)|POST|Synchronously translate a single document.|`{document-translation-endpoint}/translator/document:translate?sourceLanguage={source language}&targetLanguage={target language}&api-version=2024-05-01" -H "Ocp-Apim-Subscription-Key:{your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -F "glossary={path-to-your-glossary-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"`| ||||| |***Batch***|***Asynchronous***|***Documents***| ***Translation***| |[**Start translation**](start-translation.md)|POST|Start a batch document translation job.|`{document-translation-endpoint}.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`| |
ai-services | Translate Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/translate-document.md | Query string parameters: |Query parameter | Description | | | |-|**api-version** | _Required parameter_.<br>Version of the API requested by the client. Current value is `2023-11-01-preview`. | +|**api-version** | _Required parameter_.<br>Version of the API requested by the client. Current value is `2024-05-01`. | |**targetLanguage**|_Required parameter_.<br>Specifies the language of the output document. The target language must be one of the supported languages included in the translation scope.| |• **document=**<br> • **type=**|_Required parameters_.<br>• Path to the file location for your source document and file format type.</br> • Ex: **"document=@C:\Test\Test-file.txt;type=text/html**| |**--output**|_Required parameter_.<br> • File path for the target file location. Your translated file is printed to the output file.</br> • Ex: **"C:\Test\Test-file-output.txt"**. The file extension should be the same as the source file.| |
ai-studio | Deploy Jais Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-jais-models.md | You can find the JAIS model in the [Model Catalog](model-catalog.md) by filterin ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions will not work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for JAIS is only available with hubs created in these regions: - > [!IMPORTANT] - > For JAIS models, the serverless API model deployment offering is only available with hubs created in East US 2 or Sweden Central region. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). ### JAIS 30b Chat -JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is finetuned with both Arabic and English prompt-response pairs. The finetuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic. +JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is fine-tuned with both Arabic and English prompt-response pairs. The fine-tuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic. *Context length:* JAIS supports a context length of 8K. Models deployed as a service with pay-as-you-go billing are protected by [Azure - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Cohere Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md | The previously mentioned Cohere models can be deployed as a service with pay-as- ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).-- > [!IMPORTANT] - > For Cohere family models, the serverless API model deployment offering is only available with hubs created in **EastUS2** or **Sweden Central** region. +- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Command is only available with hubs created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). Models deployed as a serverless API with pay-as-you-go billing are protected by - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Cohere Embed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-embed.md | The previously mentioned Cohere models can be deployed as a service with pay-as- ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).-- > [!IMPORTANT] - > For Cohere family models, the serverless API model deployment offering is only available with hubs created in **EastUS2** or **Sweden Central** region. +- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Embed is only available with hubs created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). Models deployed as a serverless API are protected by [Azure AI Content Safety](. - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Jamba | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jamba.md | To get started with Jamba Instruct deployed as a serverless API, explore our int ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Jamba Instruct is only available with hubs created in **East US 2** and **Sweden Central**.+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Jamba Instruct is only available with hubs created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An Azure [AI Studio project](../how-to/create-projects.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: Models deployed as a serverless API are protected by Azure AI content safety. Wi - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | If you need to deploy a different model, [deploy it to managed compute](#deploy- # [Meta Llama 3](#tab/llama-three) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).-- > [!IMPORTANT] - > For Meta Llama 3 models, the pay-as-you-go model deployment offering is only available with hubs created in **East US 2** and **Sweden Central** regions. -+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 3 is only available with hubs created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: If you need to deploy a different model, [deploy it to managed compute](#deploy- # [Meta Llama 2](#tab/llama-two) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [AI Studio hub](../how-to/create-azure-ai-resource.md).-- > [!IMPORTANT] - > For Meta Llama 2 models, the pay-as-you-go model deployment offering is only available with hubs created in **East US 2** and **West US 3** regions. -+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Meta Llama 2 is only available with hubs created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: Models deployed as a serverless API with pay-as-you-go are protected by Azure AI - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Phi 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3.md | Certain models in the model catalog can be deployed as a serverless API with pay ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md).+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Phi-3 is only available with hubs created in these regions: - > [!IMPORTANT] - > For Phi-3 family models, the serverless API model deployment offering is only available with hubs created in **East US 2** and **Sweden Central** regions. + * East US 2 + * Sweden Central + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md). - An [Azure AI Studio project](../how-to/create-projects.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok ## Related content + - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
ai-studio | Deploy Models Timegen 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-timegen-1.md | Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok - [What is Azure AI Studio?](../what-is-ai-studio.md) - [Azure AI FAQ article](../faq.yml)+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md) |
aks | Azure Csi Disk Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md | description: Learn how to create a static or dynamic persistent volume with Azur Previously updated : 03/05/2024 Last updated : 06/28/2024 The following table includes parameters you can use to define a custom storage c | | | | | |skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`| |fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|-|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| +|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting](PremiumV2_LRS and UltraSSD_LRS only support `None` caching mode) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`| |resourceGroup | Specify the resource group for the Azure Disks | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster| |DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`| |DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] or [Premium SSD v2][premiumv2_lrs_disks] Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`| |
aks | Azure Csi Files Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md | description: Learn how to create a static or dynamic persistent volume with Azur Previously updated : 03/05/2024 Last updated : 06/28/2024 Kubernetes needs credentials to access the file share created in the previous st storageClassName: azurefile-csi csi: driver: file.csi.azure.com- volumeHandle: unique-volumeid # make sure this volumeid is unique for every identical share in the cluster + volumeHandle: "{resource-group-name}#{account-name}#{file-share-name}" # make sure this volumeid is unique for every identical share in the cluster volumeAttributes: resourceGroup: resourceGroupName # optional, only set this when storage account is not in the same resource group as node shareName: aksshare |
aks | Gpu Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md | To use Azure Linux, you specify the OS SKU by setting `os-sku` to `AzureLinux` d name: nvidia-device-plugin-ds spec: tolerations:- - key: nvidia.com/gpu - operator: Exists - effect: NoSchedule + - key: "sku" + operator: "Equal" + value: "gpu" + effect: "NoSchedule" # Mark this pod as a critical add-on; when enabled, the critical add-on # scheduler reserves resources for critical add-on pods so that they can # be rescheduled after a failure. |
aks | Private Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md | Create a private cluster with default basic networking using the [`az aks create ```azurecli-interactive az aks create \ --name <private-cluster-name> \- --resource-group-name <private-cluster-resource-group> \ + --resource-group <private-cluster-resource-group> \ --load-balancer-sku standard \ --enable-private-cluster \ --generate-ssh-keys |
aks | Use Node Taints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-taints.md | This article assumes you have an existing AKS cluster. If you need an AKS cluste az aks nodepool update \ --cluster-name $CLUSTER_NAME \ --name $NODE_POOL_NAME \+ --resource-group $RESOURCE_GROUP_NAME \ --node-taints "sku=gpu:NoSchedule" ``` This article assumes you have an existing AKS cluster. If you need an AKS cluste ```azurecli-interactive az aks nodepool update \ --cluster-name $CLUSTER_NAME \+ --resource-group $RESOURCE_GROUP_NAME \ --name $NODE_POOL_NAME \ --node-taints "" ``` When you remove all initialization taint occurrences from node pool replicas, th az aks update \ --resource-group $RESOURCE_GROUP_NAME \ --name $CLUSTER_NAME \- --node-init-taints "sku=gpu:NoSchedule" + --node-init-taints "" ``` ## Check that the taint has been removed from the node |
aks | Use Trusted Launch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-trusted-launch.md | In this article, you learned how to enable trusted launch. Learn more about [tru [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add [az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update [azure-generation-two-virtual-machines]: ../virtual-machines/generation-2.md-[verify-secure-boot-failures]: ../virtual-machines/trusted-launch-faq.md#verifying-secure-boot-failures +[verify-secure-boot-failures]: ../virtual-machines/trusted-launch-faq.md#verify-secure-boot-failures [tusted-launch-ephemeral-os-sizes]: ../virtual-machines/ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks [skip-gpu-driver-install]: gpu-cluster.md#skip-gpu-driver-installation-preview |
api-management | Api Management Howto Api Inspector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md | In this tutorial, you learn how to: :::image type="content" source="media/api-management-howto-api-inspector/api-inspector-002.png" alt-text="Screenshot showing the API inspector." lightbox="media/api-management-howto-api-inspector/api-inspector-002.png"::: - ## Prerequisites + Learn the [Azure API Management terminology](api-management-terminology.md). |
api-management | Api Management Howto Setup Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md | The final workflow will be: ### Set up API Management to route requests via delegation endpoint -1. In the Azure portal, search for **Developer portal** in your API Management resource. -1. Click the **Delegation** item. +1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. +1. In the left menu, under **Developer portal**, select **Delegation**. 1. Click the checkbox to enable **Delegate sign-in & sign-up**. :::image type="content" source="media/api-management-howto-setup-delegation/api-management-delegation-signin-up.png" alt-text="Screenshot showing delegation of sign-in and sign-up in the portal."::: |
api-management | Trace Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md | The `trace` policy adds a custom trace into the request tracing output in the te [!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)] - [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] ## Policy statement |
api-management | V2 Service Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md | The following API Management capabilities are currently unavailable in the v2 ti * CA Certificates **Developer portal**-* Delegation of user registration and product subscription * Reports * Custom HTML code widget and custom widget * Self-hosted developer portal The following API Management capabilities are currently unavailable in the v2 ti * Cipher configuration * Client certificate renegotiation * Free, managed TLS certificate-* Request tracing in the test console * Requests to the gateway over localhost ## Resource limits |
app-service | App Service Web Configure Tls Mutual Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md | az webapp update --set clientCertEnabled=true --name <app-name> --resource-group ``` ### [Bicep](#tab/bicep) -For Bicep, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sampe Bicep snippet is provided for you: +For Bicep, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample Bicep snippet is provided for you: ```bicep resource appService 'Microsoft.Web/sites@2020-06-01' = { resource appService 'Microsoft.Web/sites@2020-06-01' = { ### [ARM](#tab/arm) -For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sampe ARM template snippet is provided for you: +For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample ARM template snippet is provided for you: ```ARM { public class ClientCertValidator { } ``` +## Python sample ++The following Flask and Django Python code samples implement a decorator named `authorize_certificate` that can be used on a view function to permit access only to callers that present a valid client certificate. It expects a PEM formatted certificate in the `X-ARR-ClientCert` header and uses the Python [cryptography](https://pypi.org/project/cryptography/) package to validate the certificate based on its fingerprint (thumbprint), subject common name, issuer common name, and beginning and expiration dates. If validation fails, the decorator ensures that an HTTP response with status code 403 (Forbidden) is returned to the client. ++### [Flask](#tab/flask) ++```python +from functools import wraps +from datetime import datetime, timezone +from flask import abort, request +from cryptography import x509 +from cryptography.x509.oid import NameOID +from cryptography.hazmat.primitives import hashes +++def validate_cert(request): ++ try: + cert_value = request.headers.get('X-ARR-ClientCert') + if cert_value is None: + return False + + cert_data = ''.join(['-----BEGIN CERTIFICATE-----\n', cert_value, '\n-----END CERTIFICATE-----\n',]) + cert = x509.load_pem_x509_certificate(cert_data.encode('utf-8')) + + fingerprint = cert.fingerprint(hashes.SHA1()) + if fingerprint != b'12345678901234567890': + return False + + subject = cert.subject + subject_cn = subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value + if subject_cn != "contoso.com": + return False + + issuer = cert.issuer + issuer_cn = issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value + if issuer_cn != "contoso.com": + return False + + current_time = datetime.now(timezone.utc) + + if current_time < cert.not_valid_before_utc: + return False + + if current_time > cert.not_valid_after_utc: + return False + + return True ++ except Exception as e: + # Handle any errors encountered during validation + print(f"Encountered the following error during certificate validation: {e}") + return False + +def authorize_certificate(f): + @wraps(f) + def decorated_function(*args, **kwargs): + if not validate_cert(request): + abort(403) + return f(*args, **kwargs) + return decorated_function +``` ++The following code snippet shows how to use the decorator on a Flask view function.
++```python +@app.route('/hellocert') +@authorize_certificate +def hellocert(): + print('Request for hellocert page received') + return render_template('index.html') +``` ++### [Django](#tab/django) ++```python +from functools import wraps +from datetime import datetime, timezone +from django.core.exceptions import PermissionDenied +from cryptography import x509 +from cryptography.x509.oid import NameOID +from cryptography.hazmat.primitives import hashes +++def validate_cert(request): ++ try: + cert_value = request.headers.get('X-ARR-ClientCert') + if cert_value is None: + return False + + cert_data = ''.join(['-----BEGIN CERTIFICATE-----\n', cert_value, '\n-----END CERTIFICATE-----\n',]) + cert = x509.load_pem_x509_certificate(cert_data.encode('utf-8')) + + fingerprint = cert.fingerprint(hashes.SHA1()) + if fingerprint != b'12345678901234567890': + return False + + subject = cert.subject + subject_cn = subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value + if subject_cn != "contoso.com": + return False + + issuer = cert.issuer + issuer_cn = issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value + if issuer_cn != "contoso.com": + return False + + current_time = datetime.now(timezone.utc) + + if current_time < cert.not_valid_before_utc: + return False + + if current_time > cert.not_valid_after_utc: + return False + + return True ++ except Exception as e: + # Handle any errors encountered during validation + print(f"Encountered the following error during certificate validation: {e}") + return False ++def authorize_certificate(view): + @wraps(view) + def _wrapped_view(request, *args, **kwargs): + if not validate_cert(request): + raise PermissionDenied + return view(request, *args, **kwargs) + return _wrapped_view +``` ++The following code snippet shows how to use the decorator on a Django view function. ++```python +@authorize_certificate +def hellocert(request): + print('Request for hellocert page received') + return render(request, 'hello_azure/index.html') +``` +++ [exclusion-paths]: ./media/app-service-web-configure-tls-mutual-auth/exclusion-paths.png |
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | description: Learn how to migrate your App Service Environment v2 to App Service Previously updated : 6/26/2024 Last updated : 6/28/2024 # Migration to App Service Environment v3 using the side-by-side migration feature-The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). +The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). > [!IMPORTANT] > If you fail to complete all steps described in this tutorial, you'll experience downtime. For example, if you don't update all dependent resources with the new IP addresses or you don't allow access to/from your new subnet, such as the case for your custom domain suffix key vault, you'll experience downtime until that's addressed. Once you're ready to redirect traffic, you can complete the final step of the mi > You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support. > -If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, you can revert all changes and return to your old App Service Environment v2. The revert process takes 3 to 6 hours to complete. Once the revert process completes, your old App Service Environment is back online and your new App Service Environment v3 is deleted. You can then attempt the migration again once you resolve any issues. +If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, contact support. 
## Use the side-by-side migration feature This step is your opportunity to test and validate your new App Service Environm Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. You have 14 days to complete this step. If you don't complete this step in 14 days, your migration is automatically reverted back to an App Service Environment v2. If you need more than 14 days to complete this step, contact support. -If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to revert the migration. Don't run the DNS change command if you need to revert the migration. For more information, see [Revert migration](#redirect-customer-traffic-validate-your-app-service-environment-v3-and-complete-migration). +If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to discuss your options. Don't run the DNS change command since that command completes the migration. ```azurecli az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=DnsChange&api-version=2022-03-01" The App Service plan SKUs available for App Service Environment v3 run on the Is - **What properties of my App Service Environment will change?** You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when using the side-by-side migration feature. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). - **What happens if migration fails or there is an unexpected issue during the migration?** - If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side-by-side migration feature, you can revert all changes if there's any issues. + If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. - **What happens to my old App Service Environment?** If you decide to migrate an App Service Environment using the side-by-side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it get shutdown and deleted. Your old environment is no longer accessible. A revert to the old environment at this point isn't possible. - **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** |
automation | Automation Hrw Run Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md | Title: Run Azure Automation runbooks on a Hybrid Runbook Worker description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 02/20/2024 Last updated : 06/28/2024 For instance, a runbook with `Get-AzVM` can return all the VMs in the subscripti ### Use runbook authentication with Hybrid Worker Credentials +**Prerequisite** +- Hybrid Worker should be deployed and the machine should be in running state before executing a runbook. ++**Hybrid Worker Credentials** Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify a Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores and all runbooks run under these credentials on a Hybrid Runbook Worker in the group. - The user name for the credential must be in one of the following formats: |
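As a companion to the Hybrid Worker Credentials description above, here is a minimal runbook sketch that retrieves a credential asset at runtime and uses it against an on-premises resource. The asset name `LocalAdminCred` and the target machine `fileserver01` are assumptions for illustration, not values from the article.

```powershell
# Minimal runbook sketch, assuming a credential asset named "LocalAdminCred"
# exists in the Automation account and the Hybrid Runbook Worker can reach "fileserver01".
# Get-AutomationPSCredential resolves the credential asset when the runbook executes.
$cred = Get-AutomationPSCredential -Name 'LocalAdminCred'

# Use the credential explicitly instead of relying on the worker's default context.
Invoke-Command -ComputerName 'fileserver01' -Credential $cred -ScriptBlock {
    Get-Service -Name 'Spooler' | Select-Object -Property Name, Status
}
```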
automation | Manage Runtime Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runtime-environment.md | Title: Manage Runtime environment and associated runbooks in Azure Automation description: This article tells how to manage runbooks in Runtime environment and associated runbooks Azure Automation Previously updated : 01/17/2024 Last updated : 06/28/2024 An Azure Automation account in supported public region (except Central India, Ge > [!NOTE] > - When you import a package, it might take several minutes. 100MB is the maximum total size of the files that you can import.- > - Use *.zip* files for PowerShell runbook types. + > - Use *.zip* files for PowerShell runbook types as mentioned [here](https://learn.microsoft.com/powershell/scripting/developer/module/understanding-a-windows-powershell-module?view=powershell-7.4) > - For Python 3.8 packages, use .tar.gz or .whl files targeting cp38-amd64. > - For Python 3.10 (preview) packages, use .whl files targeting cp310 Linux OS. |
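Because PowerShell packages for a Runtime environment are imported as *.zip* files, the archive layout should follow normal PowerShell module conventions (the module folder at the archive root, named after the module, as described in the linked module documentation). The sketch below shows one way to build such an archive; the folder and file names are placeholders.

```powershell
# Minimal packaging sketch, assuming a module folder "MyHelperModule" that contains
# MyHelperModule.psm1 (and optionally a MyHelperModule.psd1 manifest).
$moduleFolder = 'C:\modules\MyHelperModule'

# Passing the folder (not its contents) keeps MyHelperModule\ as the root of the zip,
# so it expands to MyHelperModule\MyHelperModule.psm1 after import.
Compress-Archive -Path $moduleFolder -DestinationPath 'C:\packages\MyHelperModule.zip' -Force
```

Keep the finished archive under the 100 MB import limit noted above.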
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr ### PureStorage -|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version +|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| |--|--|--|--|--|-| Portworx Enterprise 2.7 1.22.5 | 1.20.7 | 1.1.0_2021-11-02 | 15.0.2148.140 | Not validated | -| Portworx Enterprise 2.9 | 1.22.5 | 1.1.0_2021-11-02 | 15.0.2195.191 | 12.3 (Ubuntu 12.3-1) | +|[Portworx Enterprise 3.1](https://www.purestorage.com/products/cloud-native-applications/portworx.html)|1.28.7|1.30.0_2024-06-11|16.0.5349.20214|Not validated| +|Portworx Enterprise 2.7 1.22.5 |1.20.7 |1.1.0_2021-11-02 |15.0.2148.140 |Not validated | +|Portworx Enterprise 2.9 |1.22.5 |1.1.0_2021-11-02 |15.0.2195.191 |12.3 (Ubuntu 12.3-1) | ### Red Hat |
azure-cache-for-redis | Cache Best Practices Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md | Although we don't own or support any client libraries, we do recommend some libr | ioredis | Node.js | [Link](https://github.com/luin/ioredis) | [More information here](https://ioredis.readthedocs.io/en/stable/API/) | > [!NOTE]-> Your application can to connect and use your Azure Cache for Redis instance with any client library that can also communicate with open-source Redis. +> Your application can use any client library that is compatible with open-source Redis to connect to your Azure Cache for Redis instance. ## Client library-specific guidance |
azure-cache-for-redis | Cache Troubleshoot Timeouts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md | There are several changes you can make to mitigate high server load: - Investigate what is causing high server load such as [long-running commands](#long-running-commands), noted in this article, because of high memory pressure. - [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).-- If your production workload on a _C1_ cache is negatively affected by extra latency from virus scanning, you can reduce the effect by to pay for a higher tier offering with multiple CPU cores, such as _C2_.+- If your production workload on a _C1_ cache is negatively affected by extra latency from some internal defender scan runs, you can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_. #### Spikes in server load -On _C0_ and _C1_ caches, you might see short spikes in server load not caused by an increase in requests a couple times a day while virus scanning is running on the VMs. You see higher latency for requests while virus scanning is happening on these tiers. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving virus scanning and Redis requests. +On _C0_ and _C1_ caches, you might see short spikes in server load not caused by an increase in requests a couple times a day while internal defender scanning is running on the VMs. You see higher latency for requests while internal defender scans happen on these tiers. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal defender scanning and Redis requests. ### High memory usage |
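Where the guidance above suggests moving a _C1_ cache to a tier with more CPU cores, the scale operation itself can be scripted. The following is a minimal, hedged Azure PowerShell sketch; the resource group and cache names are placeholders, and _C2_ is only the example target the article mentions.

```powershell
# Minimal scale-up sketch, assuming an existing cache "contoso-cache" in "contoso-rg".
# Moving from C1 to C2 adds CPU cores, which helps when background scans compete
# with Redis for the single core available on C0/C1 caches.
Set-AzRedisCache -ResourceGroupName 'contoso-rg' -Name 'contoso-cache' -Size 'C2'

# Scaling completes asynchronously; check ProvisioningState until it reports "Succeeded".
Get-AzRedisCache -ResourceGroupName 'contoso-rg' -Name 'contoso-cache' |
    Select-Object -Property Name, Sku, Size, ProvisioningState
```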
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Migration is a complex task. Start planning your migration to Azure Monitor Agen > [!IMPORTANT] > The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date.-> - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until Febuary 1 2025. -> - **Installation:** The ability to install the legacy agents will be removed from the Azure Portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perfrom offline installations. +> - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until February 1 2025. +> - **Installation:** The ability to install the legacy agents will be removed from the Azure Portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perform offline installations. > - **Customer Support:** You will not be able to get support for legacy agent issues.-> - **OS Support:** Support for new Linux or Windows distros, incluing service packs won't be added after the deprecation of the legacy agents. +> - **OS Support:** Support for new Linux or Windows distros, including service packs, won't be added after the deprecation of the legacy agents. ## Before you begin |
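Ahead of the retirement dates above, it helps to inventory which machines still run the legacy agent. The sketch below is a minimal, hedged example; the resource group and VM names are placeholders, and the extension types it checks for are the ones commonly associated with the legacy Log Analytics agent, so verify them against your own environment.

```powershell
# Minimal inventory sketch, assuming a placeholder resource group and VM name.
# "MicrosoftMonitoringAgent" (Windows) and "OmsAgentForLinux" (Linux) are the
# extension types commonly associated with the legacy Log Analytics agent.
$legacyTypes = 'MicrosoftMonitoringAgent', 'OmsAgentForLinux'

Get-AzVMExtension -ResourceGroupName 'my-rg' -VMName 'my-vm' |
    Where-Object { $_.ExtensionType -in $legacyTypes } |
    Select-Object -Property VMName, Name, ExtensionType, ProvisioningState
```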
azure-monitor | Data Collection Text Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md | The table created in the script has two columns: - `TimeGenerated` (datetime) [Required] - `RawData` (string) [Optional if table schema provided]-- 'FilePath' (string) [Optional]+- `FilePath` (string) [Optional] +- `Computer` (string) [Optional] - `YourOptionalColumn` (string) [Optional] -The default table schema for log data collected from text files is 'TimeGenerated' and 'RawData'. Adding the 'FilePath' to either team is optional. If you know your final schema or your source is a JSON log, you can add the final columns in the script before creating the table. You can always [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column) later. +The default table schema for log data collected from text files is 'TimeGenerated' and 'RawData'. Adding the 'FilePath' or 'Computer' to either stream is optional. If you know your final schema or your source is a JSON log, you can add the final columns in the script before creating the table. You can always [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column) later. Your column names and JSON attributes must exactly match to automatically parse into the table. Both columns and JSON attributes are case sensitive. For example `Rawdata` will not collect the event data. It must be `RawData`. Ingestion will drop JSON attributes that do not have a corresponding column. $tableParams = @' "name": "FilePath", "type": "String" },- { + { + "name": "Computer", + "type": "String" + }, + { "name": "YourOptionalColumn", "type": "String" } |
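With the `Computer` column added to the table definition above, the completed `$tableParams` JSON still has to be submitted to the workspace Tables API to create the custom table. The sketch below shows one hedged way to do that from Azure PowerShell; the subscription, resource group, workspace, and table names are placeholders, and the api-version should be checked against the current Tables REST API reference.

```powershell
# Minimal sketch for creating the custom table, assuming $tableParams already holds
# the JSON table definition shown above and the placeholder IDs/names are replaced.
$subscriptionId = '00000000-0000-0000-0000-000000000000'
$resourceGroup  = 'my-rg'
$workspaceName  = 'my-workspace'
$tableName      = 'MyTable_CL'   # custom log tables use the _CL suffix

$path = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup" +
        "/providers/Microsoft.OperationalInsights/workspaces/$workspaceName" +
        "/tables/${tableName}?api-version=2022-10-01"   # api-version is an assumption

Invoke-AzRestMethod -Path $path -Method PUT -Payload $tableParams
```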
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | -* [Filtering](./api-filtering-sampling.md#filtering) can modify or discard telemetry before it's sent from the SDK by implementing `ITelemetryProcessor`. For example, you could reduce the volume of telemetry by excluding requests from robots. Unlike sampling, You have full control what is sent or discarded, but it will affect any metrics based on aggregated logs. Depending on how you discard items, you might also lose the ability to navigate between related items. +* [Filtering](./api-filtering-sampling.md#filtering) can modify or discard telemetry before it's sent from the SDK by implementing `ITelemetryProcessor`. For example, you could reduce the volume of telemetry by excluding requests from robots. Unlike sampling, You have full control over what is sent or discarded, but it affects any metrics based on aggregated logs. Depending on how you discard items, you might also lose the ability to navigate between related items. * [Add or Modify properties](./api-filtering-sampling.md#add-properties) to any telemetry sent from your app by implementing an `ITelemetryInitializer`. For example, you could add calculated values or version numbers by which to filter the data in the portal. You can write code to filter, modify, or enrich your telemetry before it's sent > [!NOTE] > [The SDK API](./api-custom-events-metrics.md) is used to send custom events and metrics. -Before you start: +## Prerequisites -* Install the appropriate SDK for your application: [ASP.NET](asp-net.md), [ASP.NET Core](asp-net-core.md), [Non HTTP/Worker for .NET/.NET Core](worker-service.md), or [JavaScript](javascript.md). --<a name="filtering"></a> +Install the appropriate SDK for your application: [ASP.NET](asp-net.md), [ASP.NET Core](asp-net-core.md), [Non-HTTP/Worker for .NET/.NET Core](worker-service.md), or [JavaScript](javascript.md). ## Filtering To filter telemetry, you write a telemetry processor and register it with `Telem > Filtering the telemetry sent from the SDK by using processors can skew the statistics that you see in the portal and make it difficult to follow related items. > > Instead, consider using [sampling](./sampling.md).-> -> -### Create a telemetry processor +### .NET applications -### C# --1. To create a filter, implement `ITelemetryProcessor`. +1. Implement `ITelemetryProcessor`. Telemetry processors construct a chain of processing. When you instantiate a telemetry processor, you're given a reference to the next processor in the chain. When a telemetry data point is passed to the process method, it does its work and then calls (or doesn't call) the next telemetry processor in the chain. To filter telemetry, you write a telemetry processor and register it with `Telem 2. Add your processor. -ASP.NET **apps** --Insert this snippet in ApplicationInsights.config: --```xml -<TelemetryProcessors> - <Add Type="WebApplication9.SuccessfulDependencyFilter, WebApplication9"> - <!-- Set public property --> - <MyParamFromConfigFile>2-beta</MyParamFromConfigFile> - </Add> -</TelemetryProcessors> -``` --You can pass string values from the .config file by providing public named properties in your class. --> [!WARNING] -> Take care to match the type name and any property names in the .config file to the class and property names in the code. If the .config file references a nonexistent type or property, the SDK may silently fail to send any telemetry. 
-> --Alternatively, you can initialize the filter in code. In a suitable initialization class, for example, AppStart in `Global.asax.cs`, insert your processor into the chain: --```csharp -var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder; -builder.Use((next) => new SuccessfulDependencyFilter(next)); --// If you have more processors: -builder.Use((next) => new AnotherProcessor(next)); --builder.Build(); -``` --Telemetry clients created after this point will use your processors. --ASP.NET **Core/Worker service apps** --> [!NOTE] -> Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. --For apps written by using [ASP.NET Core](asp-net-core.md#add-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class. --```csharp + #### [ASP.NET](#tab/dotnet) + + Insert this snippet in ApplicationInsights.config: + + ```xml + <TelemetryProcessors> + <Add Type="WebApplication9.SuccessfulDependencyFilter, WebApplication9"> + <!-- Set public property --> + <MyParamFromConfigFile>2-beta</MyParamFromConfigFile> + </Add> + </TelemetryProcessors> + ``` + + You can pass string values from the .config file by providing public named properties in your class. + + > [!WARNING] + > Take care to match the type name and any property names in the .config file to the class and property names in the code. If the .config file references a nonexistent type or property, the SDK may silently fail to send any telemetry. + > + + Alternatively, you can initialize the filter in code. In a suitable initialization class, for example, AppStart in `Global.asax.cs`, insert your processor into the chain: + + ```csharp + var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder; + builder.Use((next) => new SuccessfulDependencyFilter(next)); + + // If you have more processors: + builder.Use((next) => new AnotherProcessor(next)); + + builder.Build(); + ``` + + Telemetry clients created after this point use your processors. ++ #### [ASP.NET Core/Worker service](#tab/dotnetcore) + + > [!NOTE] + > Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. + + For apps written by using [ASP.NET Core](asp-net-core.md#add-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class. + + ```csharp public void ConfigureServices(IServiceCollection services) { // ... services.AddApplicationInsightsTelemetry(); services.AddApplicationInsightsTelemetryProcessor<SuccessfulDependencyFilter>();-+ // If you have more processors: services.AddApplicationInsightsTelemetryProcessor<AnotherProcessor>(); }-``` --To register telemetry processors that need parameters in ASP.NET Core, create a custom class implementing **ITelemetryProcessorFactory**. 
Call the constructor with the desired parameters in the **Create** method and then use **AddSingleton<ITelemetryProcessorFactory, MyTelemetryProcessorFactory>()**. + ``` + + To register telemetry processors that need parameters in ASP.NET Core, create a custom class implementing **ITelemetryProcessorFactory**. Call the constructor with the desired parameters in the **Create** method and then use **AddSingleton<ITelemetryProcessorFactory, MyTelemetryProcessorFactory>()**. + + ### Example filters #### Synthetic requests Filter out bots and web tests. Although Metrics Explorer gives you the option to ```csharp public void Process(ITelemetry item) {- if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource)) {return;} -- // Send everything else: - this.Next.Process(item); + if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource)) {return;} + + // Send everything else: + this.Next.Process(item); } ``` public void Process(ITelemetry item) <a name="add-properties"></a> -### Java +### Java applications To learn more about telemetry processors and their implementation in Java, reference the [Java telemetry processors documentation](./java-standalone-telemetry-processors.md). ### JavaScript web applications -**Filter by using ITelemetryInitializer** +You can filter telemetry from JavaScript web applications by using ITelemetryInitializer. 1. Create a telemetry initializer callback function. The callback function takes `ITelemetryItem` as a parameter, which is the event that's being processed. Returning `false` from this callback results in the telemetry item to be filtered out. - ```JS - var filteringFunction = (envelope) => { - if (envelope.data.someField === 'tobefilteredout') { - return false; - } - - return true; - }; - ``` + ```js + var filteringFunction = (envelope) => { + if (envelope.data.someField === 'tobefilteredout') { + return false; + } + return true; + }; + ``` 2. Add your telemetry initializer callback: - ```JS + ```js appInsights.addTelemetryInitializer(filteringFunction); ``` To learn more about telemetry processors and their implementation in Java, refer Use telemetry initializers to enrich telemetry with additional information or to override telemetry properties set by the standard telemetry modules. -For example, Application Insights for a web package collects telemetry about HTTP requests. By default, it flags as failed any request with a response code >=400. But if you want to treat 400 as a success, you can provide a telemetry initializer that sets the success property. --If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. This initializer includes `Track()` methods called by the standard telemetry modules. By convention, these modules don't set any property that was already set by an initializer. Telemetry initializers are called before calling telemetry processors. So any enrichments done by initializers are visible to processors. +For example, Application Insights for a web package collects telemetry about HTTP requests. By default, it flags any request with a response code >=400 as failed. If instead you want to treat 400 as a success, you can provide a telemetry initializer that sets the success property. -**Define your initializer** +If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. This initializer includes `Track()` methods called by the standard telemetry modules. 
By convention, these modules don't set any property that was already set by an initializer. Telemetry initializers are called before calling telemetry processors, so any enrichments done by initializers are visible to processors. -*C#* +### .NET applications -```csharp -using System; -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.DataContracts; -using Microsoft.ApplicationInsights.Extensibility; +1. Define your initializer -namespace MvcWebRole.Telemetry -{ - /* - * Custom TelemetryInitializer that overrides the default SDK - * behavior of treating response codes >= 400 as failed requests - * - */ - public class MyTelemetryInitializer : ITelemetryInitializer - { - public void Initialize(ITelemetry telemetry) + ```csharp + using System; + using Microsoft.ApplicationInsights.Channel; + using Microsoft.ApplicationInsights.DataContracts; + using Microsoft.ApplicationInsights.Extensibility; + + namespace MvcWebRole.Telemetry {- var requestTelemetry = telemetry as RequestTelemetry; - // Is this a TrackRequest() ? - if (requestTelemetry == null) return; - int code; - bool parsed = Int32.TryParse(requestTelemetry.ResponseCode, out code); - if (!parsed) return; - if (code >= 400 && code < 500) + /* + * Custom TelemetryInitializer that overrides the default SDK + * behavior of treating response codes >= 400 as failed requests + * + */ + public class MyTelemetryInitializer : ITelemetryInitializer {- // If we set the Success property, the SDK won't change it: - requestTelemetry.Success = true; -- // Allow us to filter these requests in the portal: - requestTelemetry.Properties["Overridden400s"] = "true"; + public void Initialize(ITelemetry telemetry) + { + var requestTelemetry = telemetry as RequestTelemetry; + // Is this a TrackRequest() ? + if (requestTelemetry == null) return; + int code; + bool parsed = Int32.TryParse(requestTelemetry.ResponseCode, out code); + if (!parsed) return; + if (code >= 400 && code < 500) + { + // If we set the Success property, the SDK won't change it: + requestTelemetry.Success = true; + + // Allow us to filter these requests in the portal: + requestTelemetry.Properties["Overridden400s"] = "true"; + } + // else leave the SDK to set the Success property + } }- // else leave the SDK to set the Success property }- } -} -``` --ASP.NET **apps: Load your initializer** --In ApplicationInsights.config: --```xml -<ApplicationInsights> - <TelemetryInitializers> - <!-- Fully qualified type name, assembly name: --> - <Add Type="MvcWebRole.Telemetry.MyTelemetryInitializer, MvcWebRole"/> - ... - </TelemetryInitializers> -</ApplicationInsights> -``` --Alternatively, you can instantiate the initializer in code, for example, in Global.aspx.cs: --```csharp -protected void Application_Start() -{ - // ... - TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer()); -} -``` --See more of [this sample](https://github.com/MohanGsk/ApplicationInsights-Home/tree/master/Samples/AzureEmailService/MvcWebRole). --ASP.NET **Core/Worker service apps: Load your initializer** --> [!NOTE] -> Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. 
+ ``` -For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method. +2. Load your initializer ++ #### [ASP.NET](#tab/dotnet) + + In ApplicationInsights.config: + + ```xml + <ApplicationInsights> + <TelemetryInitializers> + <!-- Fully qualified type name, assembly name: --> + <Add Type="MvcWebRole.Telemetry.MyTelemetryInitializer, MvcWebRole"/> + ... + </TelemetryInitializers> + </ApplicationInsights> + ``` + + Alternatively, you can instantiate the initializer in code, for example, in Global.aspx.cs: + + ```csharp + protected void Application_Start() + { + // ... + TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer()); + } + ``` + + See more of [this sample](https://github.com/MohanGsk/ApplicationInsights-Home/tree/master/Samples/AzureEmailService/MvcWebRole). + + #### [ASP.NET Core/Worker service](#tab/dotnetcore) + + > [!NOTE] + > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. + + For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method. + + ```csharp + using Microsoft.ApplicationInsights.Extensibility; + using CustomInitializer.Telemetry; + public void ConfigureServices(IServiceCollection services) + { + services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>(); + } + ``` + + -```csharp - using Microsoft.ApplicationInsights.Extensibility; - using CustomInitializer.Telemetry; - public void ConfigureServices(IServiceCollection services) -{ - services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>(); -} -``` ### JavaScript telemetry initializers Insert a JavaScript telemetry initializer, if needed. For more information on the telemetry initializers for the Application Insights JavaScript SDK, see [Telemetry initializers](https://github.com/microsoft/ApplicationInsights-JS#telemetry-initializers). 
Insert a telemetry initializer by adding the onInit callback function in the [Ja <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\javascript-feature-extensions.md --> ```html <script type="text/javascript">-!(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var 
n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ +!(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD 
Failure: Failed to load Application Insights SDK script (See stack for details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({ src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",-crossOrigin: "anonymous", +crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag onInit: function (sdk) {- sdk.addTelemetryInitializer(function (envelope) { + sdk.addTelemetryInitializer(function (envelope) { envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer';- }); + }); }, // Once the application insights instance has loaded and 
initialized this method will be called+// sri: false, // Custom optional value to specify whether fetching the snippet from integrity file and do integrity check cfg: { // Application Insights Configuration connectionString: "YOUR_CONNECTION_STRING" }}); cfg: { // Application Insights Configuration #### [npm package](#tab/npmpackage) - ```js - import { ApplicationInsights } from '@microsoft/applicationinsights-web' -- const appInsights = new ApplicationInsights({ config: { - connectionString: 'YOUR_CONNECTION_STRING' - /* ...Other Configuration Options... */ - } }); - appInsights.loadAppInsights(); - // To insert a telemetry initializer, uncomment the following code. - /** var telemetryInitializer = (envelope) => { envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer'; - }; - appInsights.addTelemetryInitializer(telemetryInitializer); **/ - appInsights.trackPageView(); - ``` +```js +import { ApplicationInsights } from '@microsoft/applicationinsights-web' ++const appInsights = new ApplicationInsights({ config: { + connectionString: 'YOUR_CONNECTION_STRING' + /* ...Other Configuration Options... */ +} }); +appInsights.loadAppInsights(); +// To insert a telemetry initializer, uncomment the following code. +/** var telemetryInitializer = (envelope) => { envelope.data = envelope.data || {}; envelope.data.someField = 'This item passed through my telemetry initializer'; + }; +appInsights.addTelemetryInitializer(telemetryInitializer); **/ +appInsights.trackPageView(); +``` The following sample initializer adds a custom property to every tracked telemet ```csharp public void Initialize(ITelemetry item) {- var itemProperties = item as ISupportProperties; - if(itemProperties != null && !itemProperties.Properties.ContainsKey("customProp")) + var itemProperties = item as ISupportProperties; + if(itemProperties != null && !itemProperties.Properties.ContainsKey("customProp")) { itemProperties.Properties["customProp"] = "customValue"; } public void Initialize(ITelemetry telemetry) #### Control the client IP address used for geolocation mappings -The following sample initializer sets the client IP which will be used for geolocation mapping, instead of the client socket IP address, during telemetry ingestion. +The following sample initializer sets the client IP, which is used for geolocation mapping, instead of the client socket IP address, during telemetry ingestion. ```csharp public void Initialize(ITelemetry telemetry) What's the difference between telemetry processors and telemetry initializers? ## <a name="next"></a>Next steps * [Search events and logs](./transaction-search-and-diagnostics.md?tabs=transaction-search) * [sampling](./sampling.md)- |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn how to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 10/11/2023- Last updated : 06/28/2024+ # Migrate to workspace-based Application Insights resources Workspace-based Application Insights resources allow you to take advantage of th * [Customer-managed keys](../logs/customer-managed-keys.md) provide encryption at rest for your data with encryption keys that only you have access to. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link the Azure platform as a service (PaaS) to your virtual network by using private endpoints.-* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over: +* [Profiler and Snapshot Debugger Bring your own storage (BYOS)](./profiler-bring-your-own-storage.md) gives you full control over: - Encryption-at-rest policy. - Lifetime management policy. - Network access for all data associated with Application Insights Profiler and Snapshot Debugger. If you don't need to migrate an existing resource, and instead want to create a ## Prerequisites -- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:+- A Log Analytics workspace with the access control mode set to the **"Use resource or workspace permissions"** setting: - - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode). - - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md). + - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode). + - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md). - **Continuous export** isn't compatible with workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs. - > [!CAUTION] - > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics. - > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export). -+ > [!CAUTION] + > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics. + > * Diagnostic settings export might increase costs. 
For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export). - Check your current retention settings under **Settings** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. - > [!NOTE] - > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level). - > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period. - > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. -+ > [!NOTE] + > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level). + > - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period. + > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. - Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs. +## Find your Classic Application Insights resources ++You can use on of the following methods to find Classic Application Insights resources within your subscription: ++#### Application Insights resource in Azure portal ++Within the Overview of an Application Insights resource, Classic Application Insights resources don't have a linked Workspace and the Classic Application Insights retirement warning banner appears. Workspace-based resources have a linked workspace within the overview section ++Classic resource: ++Workspace-based resource: ++#### Azure Resource Graph ++You can use the Azure Resource Graph (ARG) Explorer and run a query on the 'resources' table to pull this information: ++```kusto +resources +| where subscriptionId == 'Replace with your own subscription ID' +| where type contains 'microsoft.insights/components' +| distinct resourceGroup, name, tostring(properties['IngestionMode']), tostring(properties['WorkspaceResourceId']) +``` +> [!NOTE] +> Classic resources are identified by ΓÇÿApplicationInsightsΓÇÖ, 'N/A', or *Empty* values. 
+#### Azure CLI: ++Run the following script from Cloud Shell in the portal where authentication is built in or anywhere else after authenticating using `az login`: ++```azurecli +$resources = az resource list --resource-type 'microsoft.insights/components' | ConvertFrom-Json ++$resources | Sort-Object -Property Name | Format-Table -Property @{Label="App Insights Resource"; Expression={$_.name}; width = 35}, @{Label="Ingestion Mode"; Expression={$mode = az resource show --name $_.name --resource-group $_.resourceGroup --resource-type microsoft.insights/components --query "properties.IngestionMode" -o tsv; $mode}; width = 45} +``` +> [!NOTE] +> Classic resources are identified by ΓÇÿApplicationInsightsΓÇÖ, 'N/A', or *Empty* values. +The following PowerShell script can be run from the Azure CLI: ++```azurepowershell +$subscription = "SUBSCRIPTION ID GOES HERE" +$token = (Get-AZAccessToken).Token +$header = @{Authorization = "Bearer $token"} +$uri = "https://management.azure.com/subscriptions/$subscription/providers/Microsoft.Insights/components?api-version=2015-05-01" +$RestResult="" +$RestResult = Invoke-RestMethod -Method GET -Uri $uri -Headers $header -ContentType "application/json" -ErrorAction Stop -Verbose + $list=@() +$ClassicList=@() +foreach ($app in $RestResult.value) + { + #"processing: " + $app.properties.WorkspaceResourceId ## Classic Application Insights do not have a workspace. + if ($app.properties.WorkspaceResourceId) + { + $Obj = New-Object -TypeName PSObject + #$app.properties.WorkspaceResourceId + $Obj | Add-Member -Type NoteProperty -Name Name -Value $app.name + $Obj | Add-Member -Type NoteProperty -Name WorkspaceResourceId -Value $app.properties.WorkspaceResourceId + $list += $Obj + } + else + { + $Obj = New-Object -TypeName PSObject + $app.properties.WorkspaceResourceId + $Obj | Add-Member -Type NoteProperty -Name Name -Value $app.name + $ClassicList += $Obj + } + } +$list |Format-Table -Property Name, WorkspaceResourceId -Wrap + "";"Classic:" +$ClassicList | FT +``` + ## Migrate your resource To migrate a classic Application Insights resource to a workspace-based resource: -1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left. +1. From your Application Insights resource, select **"Properties"** under the **"Configure"** heading in the menu on the left. - :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configure heading."::: + :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configured heading."::: 1. Select **Migrate to Workspace-based**. From within the Application Insights resource pane, select **Properties** > **Ch This section provides answers to common questions. -### What will happen if I don't migrate my Application Insights classic resource to a workspace-based resource? +### What happens if I don't migrate my Application Insights classic resource to a workspace-based resource? -Microsoft will begin an automatic phased approach to migrating classic resources to workspace-based resources beginning in May 2024 and this migration will span the course of several months. We can't provide approximate dates that specific resources, subscriptions, or regions will be migrated. 
+Microsoft began a phased approach to migrating classic resources to workspace-based resources in May 2024 and this migration is ongoing for several months. We can't provide approximate dates that specific resources, subscriptions, or regions are migrated. -We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the retirement notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first. +We strongly encourage manual migration to workspace-based resources. This process is initiated by selecting the retirement notice banner. You can find it in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace is used to store your application data. If you use continuous export, you need to additionally migrate to diagnostic settings or disable the feature first. -If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you may delete or manually migrate the resource. +If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you can delete or manually migrate the resource. ### Is there any implication on the cost from migration? There's usually no difference, with two exceptions. -- Application Insights resources that were receiving 1 GB per month free via legacy Application Insights pricing model will no longer receive the free data.-- Application Insights resources that were in the basic pricing tier prior to April 2018 continue to be billed at the same non-regional price point as before April 2018. Application Insights resources created after that time, or those converted to be workspace-based, will receive the current regional pricing. For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/).+- Application Insights resources that were receiving 1 GB per month free via legacy Application Insights pricing model doesn't receive the free data. +- Application Insights resources that were in the basic pricing tier before April 2018 continue to be billed at the same nonregional price point as before April 2018. Application Insights resources created after that time, or those resources converted to be workspace-based, will receive the current regional pricing. For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/). -The migration to workspace-based Application Insights offers a number of options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs). +The migration to workspace-based Application Insights offers many options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs). ### How will telemetry capping work? No. We merge data during query time. 
Yes, they continue to work. -### Will my dashboards that have pinned metric and log charts continue to work after migration? +### Will my dashboards with pinned metric and log charts continue to work after migration? Yes, they continue to work. No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and- ### What happens with continuous export after migration? -To continue with automated exports, you'll need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights. +To continue with automated exports, you need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights. ### How do I ensure a successful migration of my App Insights resource using Terraform? -If you're using Terraform to manage your Azure resources, it's important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Using an older version of the provider, such as version 3.12, may result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. It can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values. +If you're using Terraform to manage your Azure resources, it's important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Use of an older version of the provider, such as version 3.12, can result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. It can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values. -To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher, which performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values. +To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher. It performs the proper migration steps by issuing the appropriate Azure Resource Manager (ARM) call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values. ### Can I still use the old API to create Application Insights resources programmatically? -For backwards compatibility, calls to the old API for creating Application Insights resources will continue to work. Each of these calls will eventually create both a workspace-based Application Insights resource and a Log Analytics workspace to store the data. +For backwards compatibility, calls to the old API for creating Application Insights resources continue to work. 
Each of these calls creates both a workspace-based Application Insights resource and a Log Analytics workspace to store the data. We strongly encourage updating to the [new API](create-workspace-resource.md) for better control over resource creation. Yes, we recommend migrating diagnostic settings on classic Application Insights ## Troubleshooting -This section offers troubleshooting tips for common issues. +This section provides troubleshooting tips. ### Access mode -**Error message:** "The selected workspace is configured with workspace-based access mode. Some Application Performance Monitoring (APM) features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI." +**Error message:** "The selected workspace is configured with workspace-based access mode. Some Application Performance Monitoring (APM) features can be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI." For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience remains blocked. If you can't change the access control mode for security reasons for your curren The legacy **Continuous export** functionality isn't supported for workspace-based resources. Before migrating, you need to enable diagnostic settings and disable continuous export. 1. [Enable Diagnostic Settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) on your classic Application Insights resource.-1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**. +1. From your Application Insights resource view, under the **"Configure"** heading, select **"Continuous export"**. :::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item."::: The legacy **Continuous export** functionality isn't supported for workspace-bas - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings aren't saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export. - - After you've successfully migrated your Application Insights resource to workspace based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md). + - After migrating your Application Insights resource, you can use diagnostic settings to replace the functionality that continuous export used to provide. 
Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md). ### Retention settings -**Warning message:** "Your customized Application Insights retention settings won't apply to data sent to the workspace. You'll need to reconfigure these separately." +**Warning message:** "Your customized Application Insights retention settings doesn't apply to data sent to the workspace. You need to reconfigure them separately." -You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data. +You don't have to make any changes before migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace before migrating and starting to ingest new data. You can check your current retention settings for Log Analytics under **Settings** > **Usage and estimated costs** > **Data Retention** in the Log Analytics UI. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. ## Workspace-based resource changes -Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration. You can analyze data across multiple solutions more easily and use the capabilities of workspaces. +Before the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). Workspace-based Application Insights resources data is stored in a Log Analytics workspace, together with other monitoring data and application data. This arrangement simplifies your configuration. You can analyze data across multiple solutions more easily and use the capabilities of workspaces. 
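Because migrated telemetry now lives in the Log Analytics workspace, it can also be queried programmatically next to other workspace data. The following is a minimal sketch, assuming the `@azure/monitor-query` and `@azure/identity` packages, a placeholder workspace ID, and read access to the workspace; it is not the only way to query the migrated tables.

```javascript
// Minimal sketch; assumes @azure/monitor-query and @azure/identity are installed
// and that the caller can read the Log Analytics workspace.
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

const workspaceId = "<log-analytics-workspace-guid>"; // illustrative placeholder

async function queryMigratedTelemetry() {
  const client = new LogsQueryClient(new DefaultAzureCredential());

  // After migration, application telemetry lands in workspace tables such as AppRequests,
  // so it can be queried with KQL next to any other data in the same workspace.
  const kql = "AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode";
  const result = await client.queryWorkspace(workspaceId, kql, { duration: "P1D" });

  if (result.status === LogsQueryResultStatus.Success) {
    console.log(result.tables[0].rows);
  } else {
    console.error(result.partialError);
  }
}

queryMigratedTelemetry().catch(console.error);
```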
### Classic data structure Legacy table: availabilityResults |customMeasurements|dynamic|Measurements|Dynamic| |duration|real|DurationMs|real| |`id`|string|`Id`|string|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|String| Legacy table: browserTimings |cloud_RoleName|string|AppRoleName|string| |customDimensions|dynamic|Properties|Dynamic| |customMeasurements|dynamic|Measurements|Dynamic|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|string| Legacy table: dependencies |data|string|Data|string| |duration|real|DurationMs|real| |`id`|string|`Id`|string|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|String| Legacy table: customEvents |cloud_RoleName|string|AppRoleName|string| |customDimensions|dynamic|Properties|Dynamic| |customMeasurements|dynamic|Measurements|Dynamic|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|string| Legacy table: customMetrics |cloud_RoleInstance|string|AppRoleInstance|string| |cloud_RoleName|string|AppRoleName|string| |customDimensions|dynamic|Properties|Dynamic|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemId|string|\_ItemId|string| |itemType|string|Type|string| |name|string|Name|string| Legacy table: pageViews |customMeasurements|dynamic|Measurements|Dynamic| |duration|real|DurationMs|real| |`id`|string|`Id`|string|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|String| Legacy table: performanceCounters |cloud_RoleName|string|AppRoleName|string| |counter|string|(removed)|| |customDimensions|dynamic|Properties|Dynamic|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |instance|string|Instance|string| |itemId|string|\_ItemId|string| |itemType|string|Type|string| Legacy table: requests |customMeasurements|dynamic|Measurements|Dynamic| |duration|real|DurationMs|Real| |`id`|string|`Id`|String|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|String| Legacy table: exceptions |customMeasurements|dynamic|Measurements|dynamic| |details|dynamic|Details|dynamic| |handledAt|string|HandledAt|string|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |innermostAssembly|string|InnermostAssembly|string| |innermostMessage|string|InnermostMessage|string| |innermostMethod|string|InnermostMethod|string| Legacy table: traces |cloud_RoleName|string|AppRoleName|string| |customDimensions|dynamic|Properties|dynamic| |customMeasurements|dynamic|Measurements|dynamic|-|iKey|string|IKey|string| +|`iKey`|string|`IKey`|string| |itemCount|int|ItemCount|int| |itemId|string|\_ItemId|string| |itemType|string|Type|string| Legacy table: traces * [Explore metrics](../essentials/metrics-charts.md) * [Write Log Analytics queries](../logs/log-query-overview.md)- |
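The column renames in the mapping tables above are easiest to see side by side. The snippet below is illustrative only: the same aggregation written against the classic `requests` table and the workspace-based `AppRequests` table, kept as plain strings so they can be pasted into Log Analytics or passed to a query client.

```javascript
// Illustrative comparison only; the renames follow the mapping tables above
// (for example, duration -> DurationMs) plus the timestamp -> TimeGenerated change.

// Classic Application Insights query (resource-scoped):
const classicQuery = `
requests
| where timestamp > ago(24h)
| summarize avgDuration = avg(duration) by name
`;

// Equivalent query against the workspace-based table:
//   requests -> AppRequests, timestamp -> TimeGenerated, duration -> DurationMs, name -> Name
const workspaceQuery = `
AppRequests
| where TimeGenerated > ago(24h)
| summarize avgDuration = avg(DurationMs) by Name
`;
```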
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web #### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) -1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights. - <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> - ```html - <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script> - <script type="text/javascript"> - var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin(); - // Click Analytics configuration - var clickPluginConfig = { - autoCapture : true, - dataTags: { - useDefaultContentNameOrId: true - } - } - // Application Insights configuration - var configObj = { - connectionString: "YOUR_CONNECTION_STRING", - // Alternatively, you can pass in the instrumentation key, - // but support for instrumentation key ingestion will end on March 31, 2025. - // instrumentationKey: "YOUR INSTRUMENTATION KEY", - extensions: [ - clickPluginInstance - ], - extensionConfig: { - [clickPluginInstance.identifier] : clickPluginConfig - }, - }; - // Application Insights JavaScript (Web) SDK Loader Script code - !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return 
t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ - src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", - crossOrigin: "anonymous", - cfg: configObj // configObj is defined above. - }); - </script> - ``` --1. To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration). +Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights. 
+<!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> ++```html +<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script> +<script type="text/javascript"> +var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin(); + // Click Analytics configuration +var clickPluginConfig = { + autoCapture : true, + dataTags: { + useDefaultContentNameOrId: true + } +} +// Application Insights configuration +var configObj = { + connectionString: "YOUR_CONNECTION_STRING", + // Alternatively, you can pass in the instrumentation key, + // but support for instrumentation key ingestion will end on March 31, 2025. + // instrumentationKey: "YOUR INSTRUMENTATION KEY", + extensions: [ + clickPluginInstance + ], + extensionConfig: { + [clickPluginInstance.identifier] : clickPluginConfig + }, +}; +// Application Insights JavaScript (Web) SDK Loader Script code +!(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for 
details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({ + src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", + crossOrigin: "anonymous", + // sri: false, // Custom optional value to specify whether fetching the snippet from integrity file and do integrity check + cfg: configObj // configObj is defined above. +}); +</script> +``` ++To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration). 
#### [npm package](#tab/npmpackage) If you have the [`contentName` callback function](#ivaluecallback) in advanced c - For a clicked HTML `<img>` or `<area>` element, the plugin collects the value of its `alt` attribute. - For all other clicked HTML elements, `contentName` is populated based on the following rules, which are listed in order of precedence: - 1. The value of the `value` attribute for the element - 1. The value of the `name` attribute for the element - 1. The value of the `alt` attribute for the element - 1. The value of the innerText attribute for the element - 1. The value of the `id` attribute for the element + 1. The value of the `value` attribute for the element + 1. The value of the `name` attribute for the element + 1. The value of the `alt` attribute for the element + 1. The value of the innerText attribute for the element + 1. The value of the `id` attribute for the element ### `parentId` key Three different `behaviorValidator` callback functions are exposed as part of th #### Passing in string vs. numerical values -To reduce the bytes you pass, pass in the number value instead of the full text string. If cost isnΓÇÖt an issue, you can pass in the full text string (e.g. NAVIGATIONBACK). +To reduce the bytes you pass, pass in the number value instead of the full text string. If cost isnΓÇÖt an issue, you can pass in the full text string (for example, NAVIGATIONBACK). #### Sample usage with behaviorValidator In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*- ```javascript export const clickPluginConfigWithUseDefaultContentNameOrId = {- dataTags : { - customDataPrefix: "", - parentDataTag: "", - dntDataTag: "ai-dnt", - captureAllMetaDataContent:false, - useDefaultContentNameOrId: true, - autoCapture: true - }, + dataTags : { + customDataPrefix: "", + parentDataTag: "", + dntDataTag: "ai-dnt", + captureAllMetaDataContent:false, + useDefaultContentNameOrId: true, + autoCapture: true + }, }; <div className="test1" data-id="test1parent">- <div>Test1</div> - <div>with id, data-id, parent data-id defined</div> - <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button> + <div>Test1</div> + <div>with id, data-id, parent data-id defined</div> + <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button> </div> ``` -For clicked element `<Button>` the value of `parentId` is `ΓÇ£not_specifiedΓÇ¥`, because no `parentDataTag` details are defined and no parent element id is provided within the current element. +For clicked element `<Button>` the value of `parentId` is `ΓÇ£not_specifiedΓÇ¥`, because no `parentDataTag` details are defined and no parent element ID is provided within the current element. ### Example 2 -In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent id details are collected. +In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent ID details are collected. 
```javascript export const clickPluginConfigWithParentDataTag = {- dataTags : { - customDataPrefix: "", - parentDataTag: "group", - ntDataTag: "ai-dnt", - captureAllMetaDataContent:false, - useDefaultContentNameOrId: false, - autoCapture: true - }, + dataTags : { + customDataPrefix: "", + parentDataTag: "group", + ntDataTag: "ai-dnt", + captureAllMetaDataContent:false, + useDefaultContentNameOrId: false, + autoCapture: true + }, }; - <div className="test2" data-group="buttongroup1" data-id="test2parent"> - <div>Test2</div> - <div>with data-id, parentid, parent data-id defined</div> - <Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button> - </div> +<div className="test2" data-group="buttongroup1" data-id="test2parent"> + <div>Test2</div> + <div>with data-id, parentid, parent data-id defined</div> + <Button data-id="test2id" data-parentid = "parentid2" variant="info" onClick={trackEvent}>Test2</Button> +</div> ``` -For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent ids or id details defined in its parent elements. +For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent IDs or ID details defined in its parent elements. ### Example 3 In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-par ```javascript export const clickPluginConfigWithParentDataTag = {- dataTags : { - customDataPrefix: "", - parentDataTag: "group", - dntDataTag: "ai-dnt", - captureAllMetaDataContent:false, - useDefaultContentNameOrId: false, - autoCapture: true - }, + dataTags : { + customDataPrefix: "", + parentDataTag: "group", + dntDataTag: "ai-dnt", + captureAllMetaDataContent:false, + useDefaultContentNameOrId: false, + autoCapture: true + }, }; <div className="test6" data-group="buttongroup1" data-id="test6grandparent"> export const clickPluginConfigWithParentDataTag = { </div> </div> ```-For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared. This declaration allows the plugin to traverse the current element tree and therefore the id of its closest parent will be used when parent id details are not directly provided within the current element. With the `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently. ++For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared. This declaration allows the plugin to traverse the current element tree and therefore the ID of its closest parent will be used when parent ID details aren't directly provided within the current element. With the `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently. If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared. |
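To make the `behaviorValidator` and numeric-code guidance above concrete, here's a minimal sketch for the npm-based setup. It assumes the validator helper exported by `@microsoft/applicationinsights-clickanalytics-js`; the behavior names and numeric codes in the map are illustrative, not a recommended taxonomy.

```javascript
// Sketch only; assumes the BehaviorMapValidator helper from the Click Analytics package.
// The behavior names and numeric codes below are illustrative.
import { ClickAnalyticsPlugin, BehaviorMapValidator } from "@microsoft/applicationinsights-clickanalytics-js";

const behaviorMap = {
  UNDEFINED: 0,
  NAVIGATIONBACK: 1,
  NAVIGATION: 2,
  APPLY: 3
};

const clickPluginInstance = new ClickAnalyticsPlugin();
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    customDataPrefix: "data-",
    parentDataTag: "group",
    useDefaultContentNameOrId: true
  },
  // Sends the compact numeric code from behaviorMap instead of the full text string.
  behaviorValidator: BehaviorMapValidator(behaviorMap)
};
```

The plug-in instance and configuration are then wired into `extensions` and `extensionConfig` the same way as in the loader-script example earlier.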
azure-monitor | Javascript Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md | The Application Insights JavaScript SDK has a base SDK and several plugins for m We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics Auto-Collection plug-in](./javascript-feature-extensions.md): -- If you're adding a [framework extension](./javascript-framework-extensions.md), which you can [add](#optional-add-advanced-sdk-configuration) after you follow the steps to get started below, you can optionally add Click Analytics when you add the framework extension. +- If you're adding a [framework extension](./javascript-framework-extensions.md), which you can [add](#optional-add-advanced-sdk-configuration) after you follow the steps to [get started](#get-started), you can optionally add Click Analytics when you add the framework extension. - If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md) after you follow the steps to get started. We provide the [Debug plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/main/extensions/applicationinsights-debugplugin-js/README.md) and [Performance plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/main/extensions/applicationinsights-perfmarkmeasure-js/README.md) for debugging/testing. In rare cases, it's possible to build your own extension by adding a [custom plugin](https://github.com/microsoft/ApplicationInsights-JS/blob/e4be62c0aa9318b540157118b729bb0c4d8b6c6e/API-reference.md#custom-extension). Two methods are available to add the code to enable Application Insights via the 1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights. - Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies. -- If Internet Explorer 8 is detected, JavaScript SDK v2.x is automatically loaded. 
- <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-feature-extensions.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> - ```html - <script type="text/javascript"> - !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var 
n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ - src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", - // name: "appInsights", - // ld: 0, - // useXhr: 1, - crossOrigin: "anonymous", - // onInit: null, - // cr: 0, - cfg: { // Application Insights Configuration - connectionString: "YOUR_CONNECTION_STRING" - }}); - </script> - ``` --1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), depending on if you need to optimize the loading of your web page or resolve loading errors. -- :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png"::: + Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies. + + If Internet Explorer 8 is detected, JavaScript SDK v2.x is automatically loaded. 
+ <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-feature-extensions.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> + ```html + <script type="text/javascript"> + !(function (cfg){function e(){cfg.onInit&&cfg.onInit(n)}var x,w,D,t,E,n,C=window,O=document,b=C.location,q="script",I="ingestionendpoint",L="disableExceptionTracking",j="ai.device.";"instrumentationKey"[x="toLowerCase"](),w="crossOrigin",D="POST",t="appInsightsSDK",E=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=E),n=C[E]||function(g){var f=!1,m=!1,h={initialize:!0,queue:[],sv:"8",version:2,config:g};function v(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[j+"id"]=i[x](),n[j+"type"]=i,n["ai.operation.name"]=b&&b.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(h.sv||h.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:undefined,seq:"1",aiDataContract:undefined}}var n,i,t,a,y=-1,T=0,S=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],o=g.url||cfg.src,r=function(){return s(o,null)};function s(d,t){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~d.indexOf("ai.3")&&(d=d.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<S.length;e++)if(0<d.indexOf(S[e])){y=e;break}var n,i=function(e){var a,t,n,i,o,r,s,c,u,l;h.queue=[],m||(0<=y&&T+1<S.length?(a=(y+T+1)%S.length,p(d.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+S[a]+i})),T+=1):(f=m=!0,s=d,!0!==cfg.dle&&(c=(t=function(){var e,t={},n=g.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][x]()]=o[1])}return t[I]||(e=(n=t.endpointsuffix)?t.location:null,t[I]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||g.instrumentationKey||"",t=(t=(t=t[I])&&"/"===t.slice(-1)?t.slice(0,-1):t)?t+"/v2/track":g.endpointUrl,t=g.userOverrideEndpointUrl||t,(n=[]).push((i="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",o=s,u=t,(l=(r=v(c,"Exception")).data).baseType="ExceptionData",l.baseData.exceptions=[{typeName:"SDKLoadFailed",message:i.replace(/\./g,"-"),hasFullStack:!1,stack:i+"\nSnippet failed to load ["+o+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(b&&b.pathname||"_unknown_")+"\nEndpoint: "+u,parsedStack:[]}],r)),n.push((l=s,i=t,(u=(o=v(c,"Message")).data).baseType="MessageData",(r=u.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+l+")").replace(/\"/g,"")+'"',r.properties={endpoint:i},o)),s=n,c=t,JSON&&((u=C.fetch)&&!cfg.useXhr?u(c,{method:D,body:JSON.stringify(s),mode:"cors"}):XMLHttpRequest&&((l=new 
XMLHttpRequest).open(D,c),l.setRequestHeader("Content-type","application/json"),l.send(JSON.stringify(s)))))))},a=function(e,t){m||setTimeout(function(){!t&&h.core||i()},500),f=!1},p=function(e){var n=O.createElement(q),e=(n.src=e,t&&(n.integrity=t),n.setAttribute("data-ai-name",E),cfg[w]);return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?O.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){O.getElementsByTagName(q)[0].parentNode.appendChild(n)},cfg.ld||0),n};p(d)}cfg.sri&&(n=o.match(/^((http[s]?:\/\/.*\/)\w+(\.\d+){1,5})\.(([\w]+\.){0,2}js)$/))&&6===n.length?(d="".concat(n[1],".integrity.json"),i="@".concat(n[4]),l=window.fetch,t=function(e){if(!e.ext||!e.ext[i]||!e.ext[i].file)throw Error("Error Loading JSON response");var t=e.ext[i].integrity||null;s(o=n[2]+e.ext[i].file,t)},l&&!cfg.useXhr?l(d,{method:"GET",mode:"cors"}).then(function(e){return e.json()["catch"](function(){return{}})}).then(t)["catch"](r):XMLHttpRequest&&((a=new XMLHttpRequest).open("GET",d),a.onreadystatechange=function(){if(a.readyState===XMLHttpRequest.DONE)if(200===a.status)try{t(JSON.parse(a.responseText))}catch(e){r()}else r()},a.send())):o&&r();try{h.cookie=O.cookie}catch(k){}function e(e){for(;e.length;)!function(t){h[t]=function(){var e=arguments;f||h.queue.push(function(){h[t].apply(h,e)})}}(e.pop())}var c,u,l="track",d="TrackPage",p="TrackEvent",l=(e([l+"Event",l+"PageView",l+"Exception",l+"Trace",l+"DependencyData",l+"Metric",l+"PageViewPerformance","start"+d,"stop"+d,"start"+p,"stop"+p,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),h.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(g.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==g[L]&&!0!==l[L]&&(e(["_"+(c="onerror")]),u=C[c],C[c]=function(e,t,n,i,a){var o=u&&u(e,t,n,i,a);return!0!==o&&h["_"+c]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},g.autoExceptionInstrumented=!0),h}(cfg.cfg),(C[E]=n).queue&&0===n.queue.length?(n.queue.push(e),n.trackPageView({})):e();})({ + src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", + // name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied + // ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout, + // useXhr: 1, // Use XHR instead of fetch to report failures (if available), + // dle: true, // Prevent the SDK from reporting load failure log + crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag + // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DON'T ADD anything to the sdk.queue -- As they won't get called) + // sri: false, // Custom optional value to specify whether fetching the snippet from integrity file and do integrity check + cfg: { // Application Insights Configuration + connectionString: "YOUR_CONNECTION_STRING" + }}); + </script> + ``` ++1. (Optional) Add or update optional [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration), if you need to optimize the loading of your web page or resolve loading errors. 
++ :::image type="content" source="media/javascript-sdk/sdk-loader-script-configuration.png" alt-text="Screenshot of the JavaScript (Web) SDK Loader Script. The parameters for configuring the JavaScript (Web) SDK Loader Script are highlighted." lightbox="media/javascript-sdk/sdk-loader-script-configuration.png"::: #### JavaScript (Web) SDK Loader Script configuration - | Name | Type | Required? | Description - |||--| - | src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added <script /> tag. You can use the public CDN location or your own privately hosted one. - | name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance has been assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated. - | ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the `<head>` region of the page and blocks the page load event until the script is loaded or fails. - | useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is only required if your application is being used in an environment where fetch would fail to send the failure events such as if the JavaScript (Web) SDK Loader Script isn't loading successfully. - | crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation. - | onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script has been successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. 
(Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). | - | cr | boolean | Optional | If the SDK fails to load and the endpoint value defined for `src` is the public CDN location, this configuration option attempts to immediately load the SDK from one of the following backup CDN endpoints:<ul><li>js.monitor.azure.com</li><li>js.cdn.applicationinsights.io</li><li>js.cdn.monitor.azure.com</li><li>js0.cdn.applicationinsights.io</li><li>js0.cdn.monitor.azure.com</li><li>js2.cdn.applicationinsights.io</li><li>js2.cdn.monitor.azure.com</li><li>az416426.vo.msecnd.net</li></ul>NOTE: az416426.vo.msecnd.net is partially supported, so it's not recommended.<br><br>If the SDK successfully loads from a backup CDN endpoint, it loads from the first available one, which is determined when the server performs a successful load check. If the SDK fails to load from any of the backup CDN endpoints, the SDK Failure error message appears.<br><br>When not defined, the default value is `true`. If you donΓÇÖt want to load the SDK from the backup CDN endpoints, set this configuration option to `false`.<br><br>If youΓÇÖre loading the SDK from your own privately hosted CDN endpoint, this configuration option is not applicable. +| Name | Type | Required? | Description +|||--| +| src | string | Required | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added <script /> tag. You can use the public CDN location or your own privately hosted one. +| name | string | Optional | The global name for the initialized SDK. Use this setting if you need to initialize two different SDKs at the same time.<br><br>The default value is appInsights, so ```window.appInsights``` is a reference to the initialized instance.<br><br> Note: If you assign a name value or if a previous instance is assigned to the global name appInsightsSDK, the SDK initialization code requires it to be in the global namespace as `window.appInsightsSDK=<name value>` to ensure the correct JavaScript (Web) SDK Loader Script skeleton, and proxy methods are initialized and updated. +| ld | number in ms | Optional | Defines the load delay to wait before attempting to load the SDK. Use this setting when the HTML page is failing to load because the JavaScript (Web) SDK Loader Script is loading at the wrong time.<br><br>The default value is 0ms after timeout. If you use a negative value, the script tag is immediately added to the `<head>` region of the page and blocks the page load event until the script is loaded or fails. +| useXhr | boolean | Optional | This setting is used only for reporting SDK load failures. For example, this setting is useful when the JavaScript (Web) SDK Loader Script is preventing the HTML page from loading, causing fetch() to be unavailable.<br><br>Reporting first attempts to use fetch() if available and then fallback to XHR. Set this setting to `true` to bypass the fetch check. This setting is necessary only in environments where fetch cannot transmit failure events, for example, when the JavaScript (Web) SDK Loader Script fails to load successfully. +| crossOrigin | string | Optional | By including this setting, the script tag added to download the SDK includes the crossOrigin attribute with this string value. Use this setting when you need to provide support for CORS. When not defined (the default), no crossOrigin attribute is added. Recommended values aren't defined (the default), "", or "anonymous". 
For all valid values, see the [cross origin HTML attribute](https://developer.mozilla.org/docs/Web/HTML/Attributes/crossorigin) documentation. +| onInit | function(aiSdk) { ... } | Optional | This callback function is called after the main SDK script is successfully loaded and initialized from the CDN (based on the src value). This callback function is useful when you need to insert a telemetry initializer. It's passed one argument, which is a reference to the SDK instance that's being called for and is also called before the first initial page view. If the SDK has already been loaded and initialized, this callback is still called. NOTE: During the processing of the sdk.queue array, this callback is called. You CANNOT add any more items to the queue because they're ignored and dropped. (Added as part of JavaScript (Web) SDK Loader Script version 5--the sv:"5" value within the script). | +| cr | boolean | Optional | If the SDK fails to load and the endpoint value defined for `src` is the public CDN location, this configuration option attempts to immediately load the SDK from one of the following backup CDN endpoints:<ul><li>js.monitor.azure.com</li><li>js.cdn.applicationinsights.io</li><li>js.cdn.monitor.azure.com</li><li>js0.cdn.applicationinsights.io</li><li>js0.cdn.monitor.azure.com</li><li>js2.cdn.applicationinsights.io</li><li>js2.cdn.monitor.azure.com</li><li>az416426.vo.msecnd.net</li></ul>NOTE: az416426.vo.msecnd.net is partially supported, so it's not recommended.<br><br>If the SDK successfully loads from a backup CDN endpoint, it loads from the first available one, which is determined when the server performs a successful load check. If the SDK fails to load from any of the backup CDN endpoints, the SDK Failure error message appears.<br><br>When not defined, the default value is `true`. If you donΓÇÖt want to load the SDK from the backup CDN endpoints, set this configuration option to `false`.<br><br>If youΓÇÖre loading the SDK from your own privately hosted CDN endpoint, this configuration option isn't applicable. #### [npm package](#tab/npmpackage) 1. Use the following command to install the Microsoft Application Insights JavaScript SDK - Web package. - ```sh - npm i --save @microsoft/applicationinsights-web - ``` + ```sh + npm i --save @microsoft/applicationinsights-web + ``` - *Typings are included with this package*, so you do *not* need to install a separate typings package. + *Typings are included with this package*, so you *don't* need to install a separate typings package. 1. Add the following JavaScript to your application's code. - Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it. + Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it. + + ```js + import { ApplicationInsights } from '@microsoft/applicationinsights-web' - ```js - import { ApplicationInsights } from '@microsoft/applicationinsights-web' -- const appInsights = new ApplicationInsights({ config: { - connectionString: 'YOUR_CONNECTION_STRING' - /* ...Other Configuration Options... */ - } }); - appInsights.loadAppInsights(); - appInsights.trackPageView(); - ``` + const appInsights = new ApplicationInsights({ config: { + connectionString: 'YOUR_CONNECTION_STRING' + /* ...Other Configuration Options... 
*/ + } }); + appInsights.loadAppInsights(); + appInsights.trackPageView(); + ``` Two methods are available to add the code to enable Application Insights via the To paste the connection string in your environment, follow these steps: - 1. Navigate to the **Overview** pane of your Application Insights resource. - 1. Locate the **Connection String**. - 1. Select the **Copy to clipboard** icon to copy the connection string to the clipboard. +1. Navigate to the **Overview** pane of your Application Insights resource. +1. Locate the **Connection String**. +1. Select the **Copy to clipboard** icon to copy the connection string to the clipboard. - :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png"::: + :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png"::: - 1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your [connection string](./sdk-connection-string.md) copied to the clipboard. +1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your [connection string](./sdk-connection-string.md) copied to the clipboard. - The `connectionString` format must follow "InstrumentationKey=xxxx;....". If the string provided does not meet this format, the SDK load process fails. -- The connection string isn't considered a security token or key. For more information, see [Do new Azure regions require the use of connection strings?](./sdk-connection-string.md#do-new-azure-regions-require-the-use-of-connection-strings). + The `connectionString` format must follow "InstrumentationKey=xxxx;....". If the string provided doesn't meet this format, the SDK load process fails. + + The connection string isn't considered a security token or key. For more information, see [Do new Azure regions require the use of connection strings?](./sdk-connection-string.md#do-new-azure-regions-require-the-use-of-connection-strings). ### (Optional) Add SDK configuration If you want to use the extra features provided by plugins for specific framework 1. Open the **Event types** dropdown menu and select **Select all** to clear the checkboxes in the menu. 1. From the **Event types** dropdown menu, select: - - **Page View** for Azure Monitor Application Insights Real User Monitoring - - **Custom Event** for the Click Analytics Auto-Collection plug-in. -- It might take a few minutes for data to show up in the portal. If the only data you see showing up is a load failure exception, see [Troubleshoot SDK load failure for JavaScript web apps](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting#troubleshoot-sdk-load-failure-for-javascript-web-apps). -- In some cases, if multiple instances of different versions of Application Insights are running on the same page, errors can occur during initialization. 
For these cases and the error message that appears, see [Running multiple versions of the Application Insights JavaScript SDK in one session](https://github.com/microsoft/ApplicationInsights-JS/blob/main/versionConflict.md). If you've encountered one of these errors, try changing the namespace by using the `name` setting. For more information, see [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration). -- :::image type="content" source="media/javascript-sdk/confirm-data-flowing.png" alt-text="Screenshot of the Application Insights Transaction search pane in the Azure portal with the Page View option selected. The page views are highlighted." lightbox="media/javascript-sdk/confirm-data-flowing.png"::: + - **Page View** for Azure Monitor Application Insights Real User Monitoring + - **Custom Event** for the Click Analytics Auto-Collection plug-in. + + It might take a few minutes for data to show up in the portal. If the only data you see showing up is a load failure exception, see [Troubleshoot SDK load failure for JavaScript web apps](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting#troubleshoot-sdk-load-failure-for-javascript-web-apps). + + In some cases, if multiple instances of different versions of Application Insights are running on the same page, errors can occur during initialization. For these cases and the error message that appears, see [Running multiple versions of the Application Insights JavaScript SDK in one session](https://github.com/microsoft/ApplicationInsights-JS/blob/main/versionConflict.md). If you've encountered one of these errors, try changing the namespace by using the `name` setting. For more information, see [JavaScript (Web) SDK Loader Script configuration](#javascript-web-sdk-loader-script-configuration). + + :::image type="content" source="media/javascript-sdk/confirm-data-flowing.png" alt-text="Screenshot of the Application Insights Transaction search pane in the Azure portal with the Page View option selected. The page views are highlighted." lightbox="media/javascript-sdk/confirm-data-flowing.png"::: 1. If you want to query data to confirm data is flowing: - 1. Select **Logs** in the left pane. -- When you select Logs, the [Queries dialog](../logs/queries.md#queries-dialog) opens, which contains sample queries relevant to your data. - - 1. Select **Run** for the sample query you want to run. - - 1. If needed, you can update the sample query or write a new query by using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). - - For essential KQL operators, see [Learn common KQL operators](/azure/data-explorer/kusto/query/tutorials/learn-common-operators). + 1. Select **Logs** in the left pane. + + When you select Logs, the [Queries dialog](../logs/queries.md#queries-dialog) opens, which contains sample queries relevant to your data. + + 1. Select **Run** for the sample query you want to run. + + 1. If needed, you can update the sample query or write a new query by using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). + + For essential KQL operators, see [Learn common KQL operators](/azure/data-explorer/kusto/query/tutorials/learn-common-operators). ## Frequently asked questions |
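A common follow-up to the loader-script configuration table is using `onInit` to register a telemetry initializer before the first page view is sent. The object below is a sketch of the final argument passed to the loader script shown earlier; the `appVersion` property added by the initializer is illustrative.

```javascript
// Sketch of the loader-script argument object; only onInit is new relative to the
// snippet above. The appVersion property is an illustrative custom dimension.
const loaderArgs = {
  src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js",
  crossOrigin: "anonymous",
  onInit: function (sdk) {
    // Called once the SDK has loaded and initialized, before the first page view.
    sdk.addTelemetryInitializer(function (item) {
      item.data = item.data || {};
      item.data.appVersion = "1.0.0";
    });
  },
  cfg: {
    connectionString: "YOUR_CONNECTION_STRING"
  }
};
```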
azure-monitor | Opentelemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry.md | The **.NET** OpenTelemetry implementation uses logging, metrics, and activity AP **Azure Monitor pipeline at edge** is a powerful solution designed to facilitate high-scale data ingestion and routing from edge environments to seamlessly enable observability across cloud, edge, and multicloud. It uses the OpenTelemetry Collector. Currently, in public preview, it can be deployed on a single Arc-enabled Kubernetes cluster, and it can collect OpenTelemetry Protocol (OTLP) logs. -- [Accelerate your observability journey with Azure Monitor pipeline (preview)](https://devblogs.microsoft.com/dotnet/introducing-dotnet-aspire-simplifying-cloud-native-development-with-dotnet-8/)-- [Configure Azure Monitor pipeline for edge and multicloud](/dotnet/aspire/fundamentals/dashboard/overview)+- [Accelerate your observability journey with Azure Monitor pipeline (preview)](https://techcommunity.microsoft.com/t5/azure-observability-blog/accelerate-your-observability-journey-with-azure-monitor/ba-p/4124852) +- [Configure Azure Monitor pipeline for edge and multicloud](../essentials/edge-pipeline-configure.md) **OpenTelemetry Collector Azure Data Explorer Exporter** is a data exporter component that can be plugged into the OpenTelemetry Collector. It supports ingestion of data from many receivers into to Azure Data Explorer, Azure Synapse Data Explorer, and Real-Time Analytics in Fabric. |
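As a rough sketch of how an Azure Data Explorer exporter can slot into an OpenTelemetry Collector pipeline (assuming the contrib distribution of the collector), the outline below receives OTLP logs and forwards them to a cluster. The exporter setting names and placeholder values are assumptions to verify against the exporter's current README.

```yaml
# Sketch only: receive OTLP logs and export them to Azure Data Explorer.
# Setting names under the exporter are assumptions; confirm them in the exporter documentation.
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  azuredataexplorer:
    cluster_uri: "https://<cluster-name>.<region>.kusto.windows.net"  # hypothetical placeholder
    application_id: "<application-id>"
    application_key: "<application-key>"
    tenant_id: "<tenant-id>"
    db_name: "oteldb"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [azuredataexplorer]
```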
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | A daily cap on a Log Analytics workspace allows you to avoid unexpected increase ## How the daily cap works Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created. +The data size used for the daily cap is the size after customer-defined data transformations. (Learn more about data [transformations in Data Collection Rules](../essentials/data-collection-transformations.md).) + Data collection resumes at the reset time, which is a different hour of the day for each workspace. This reset hour can't be configured. > [!NOTE] |
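If you want to build that alert rule, a hedged sketch of a starting-point log query is shown below; the table, column names, and matched text are assumptions to validate against your own workspace before wiring them into an alert.

```kusto
// Sketch: look for the operation event written when the daily cap is reached.
// The "OverQuota" detail text is an assumption; confirm the exact wording in your workspace.
Operation
| where TimeGenerated > ago(1d)
| where Detail has "OverQuota"
| project TimeGenerated, OperationCategory, Detail
```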
azure-resource-manager | Bicep Config Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md | Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 02/16/2024 Last updated : 06/28/2024 # Add module settings in the Bicep config file For a template spec, use: module stgModule 'ts/CoreSpecs:storage:v1' = { ``` -An alias has been predefined for the [public module registry](./modules.md#path-to-module). To reference a public module, you can use the format: +An alias has been predefined for [public modules](./modules.md#file-in-registry). To reference a public module, you can use the format: ```bicep br/public:<file>:<tag> ``` -You can override the public module registry alias definition in the bicepconfig.json file: +> [!NOTE] +> Non-AVM (Azure Verified Modules) modules are retired from the public module registry with most of them available as AVM modules. ++You can override the public module registry alias definition in the [bicepconfig.json file](./bicep-config.md): ```json { |
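For reference, a minimal sketch of what an override of the public alias in bicepconfig.json can look like; the registry and module path shown here are the conventional public-registry values, included only for illustration.

```json
{
  "moduleAliases": {
    "br": {
      "public": {
        "registry": "mcr.microsoft.com",
        "modulePath": "bicep"
      }
    }
  }
}
```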
azure-resource-manager | Bicep Using | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-using.md | Title: Using statement description: Describes how to use the using statement in Bicep. Previously updated : 10/11/2023 Last updated : 06/28/2024 # Using statement The `using` statement in [Bicep parameter files](./parameter-files.md) ties the using '<path>/<file-name>.json' ``` -- To use public module:+- To use [public modules](./modules.md#path-to-module): ```bicep using 'br/public:<file-path>:<tag>' The `using` statement in [Bicep parameter files](./parameter-files.md) ties the For example: ```bicep- using 'br/public:storage/storage-account:3.0.1' + using 'br/public:avm/res/storage/storage-account:0.9.0' param name = 'mystorage' ``` |
azure-resource-manager | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md | Title: Bicep modules description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 02/02/2024 Last updated : 06/28/2024 # Bicep modules -Bicep enables you to organize deployments into modules. A module is a Bicep file (or an ARM JSON template) that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments. +Bicep enables you to organize deployments into modules. A module is a Bicep file (or an Azure Resource Manager JSON template) that is deployed from another Bicep file. With modules, you improve the readability of your Bicep files by encapsulating complex details of your deployment. You can also easily reuse modules for different deployments. -To share modules with other people in your organization, create a [template spec](../bicep/template-specs.md), [public registry](https://github.com/Azure/bicep-registry-modules), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions. +To share modules with other people in your organization, create a [template spec](../bicep/template-specs.md), or [private registry](private-module-registry.md). Template specs and modules in the registry are only available to users with the correct permissions. > [!TIP] > The choice between module registry and template specs is mostly a matter of preference. There are a few things to consider when you choose between the two: To share modules with other people in your organization, create a [template spec > - Content in the Bicep module registry can only be deployed from another Bicep file. Template specs can be deployed directly from the API, Azure PowerShell, Azure CLI, and the Azure portal. You can even use [`UiFormDefinition`](../templates/template-specs-create-portal-forms.md) to customize the portal deployment experience. > - Bicep has some limited capabilities for embedding other project artifacts (including non-Bicep and non-ARM-template files. For example, PowerShell scripts, CLI scripts and other binaries) by using the [`loadTextContent`](./bicep-functions-files.md#loadtextcontent) and [`loadFileAsBase64`](./bicep-functions-files.md#loadfileasbase64) functions. Template specs can't package these artifacts. -Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template). For more information about how Bicep resolves configuration files and how Bicep merge user-defined configuration file with the default configuration file, see [Configuration file resolution process](./bicep-config.md#understand-the-file-resolution-process) and [Configuration file merge process](./bicep-config.md#understand-the-merge-process). +Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template). 
For more information about how Bicep resolves configuration files and how Bicep merges user-defined configuration file with the default configuration file, see [Configuration file resolution process](./bicep-config.md#understand-the-file-resolution-process) and [Configuration file merge process](./bicep-config.md#understand-the-merge-process). ### Training resources Like resources, modules are deployed in parallel unless they depend on other mod ## Path to module -The file for the module can be either a local file or an external file. The external file can be in template spec or a Bicep module registry. All of these options are shown below. +The file for the module can be either a local file or an external file. The external file can be in template spec or a Bicep module registry. ### Local file For example, to deploy a file that is up one level in the directory from your ma #### Public module registry -The public module registry is hosted in a Microsoft container registry (MCR). The source code and the modules are stored in [GitHub](https://github.com/azure/bicep-registry-modules). To view the available modules and their versions, see [Bicep registry Module Index](https://aka.ms/br-module-index). +> [!NOTE] +> Non-AVM (Azure Verified Modules) modules are retired from the public module registry. ++[Azure Verified Modules](https://azure.github.io/Azure-Verified-Modules/) are prebuilt, pretested, and preverified modules for deploying resources on Azure. Created and owned by Microsoft employees, these modules are designed to simplify and accelerate the deployment process for common Azure resources and configurations whilst also aligning to best practices; such as the Well-Architected Framework. +Browse to the [Azure Verified Modules Bicep Index](https://azure.github.io/Azure-Verified-Modules/indexes/bicep/)to see the list of modules available, select the highlighted numbers in the following screenshot to be taken directly to that filtered view. -Select the versions to see the available versions. You can also select **Source code** to see the module source code, and open the Readme files. -There are only a few published modules currently. More modules are coming. If you like to contribute to the registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md). +The module list shows the latest version. Select the version number to see a list of available versions: -To link to a public registry module, specify the module path with the following syntax: ++To link to a public module, specify the module path with the following syntax: ```bicep module <symbolic-name> 'br/public:<file-path>:<tag>' = {} ``` -- **br/public** is the alias for the public module registry. This alias is predefined in your configuration.+- **br/public** is the alias for public modules. You can customize this alias in the [Bicep configuration file](./bicep-config-modules.md). - **file path** can contain segments that can be separated by the `/` character. - **tag** is used for specifying a version for the module. For example: +```bicep +module storage 'br/public:avm/res/storage/storage-account:0.9.0' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` > [!NOTE]-> **br/public** is the alias for the public registry. It can also be written as +> **br/public** is the alias for public modules. 
It can also be written as: > > ```bicep > module <symbolic-name> 'br:mcr.microsoft.com/bicep/<file-path>:<tag>' = {} The full path for a module in a registry can be long. Instead of providing the f An alias for the public module registry has been predefined: +```bicep +module storage 'br/public:avm/res/storage/storage-account:0.9.0' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` You can override the public alias in the bicepconfig.json file. |
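To make the full-path form concrete, a small sketch referencing the same AVM storage module without the br/public alias; the module path and version simply mirror the example already used in this row.

```bicep
// Same module as the alias-based example, written with the full registry path (sketch).
module storage 'br:mcr.microsoft.com/bicep/avm/res/storage/storage-account:0.9.0' = {
  name: 'myStorage'
  params: {
    name: 'store${resourceGroup().name}'
  }
}
```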
azure-resource-manager | Parameter Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md | Title: Create parameters files for Bicep deployment description: Create parameters file for passing in values during deployment of a Bicep file. Previously updated : 04/01/2024 Last updated : 06/28/2024 # Create parameters files for Bicep deployment using './azuredeploy.json' ``` ```bicep-using 'br/public:storage/storage-account:3.0.1' +using 'br/public:avm/res/storage/storage-account:0.9.0' ... ``` |
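For context, a minimal sketch of a complete .bicepparam file built around a using statement; the file name and the storageAccountName parameter are hypothetical and assume the referenced template declares a matching parameter.

```bicep
// main.bicepparam (sketch; assumes main.bicep declares a storageAccountName parameter)
using './main.bicep'

param storageAccountName = 'mystorageaccount'
```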
azure-resource-manager | Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md | Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 05/10/2024 Last updated : 06/28/2024 # Create private registry for Bicep modules -To share [modules](modules.md) within your organization, you can create a private module registry. You publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files. To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md). +To share [modules](modules.md) within your organization, you can create a private module registry. You can then publish modules to that registry and give read access to users who need to deploy the modules. After the modules are shared in the registries, you can reference them from your Bicep files. To use public modules, see [Bicep Modules](./modules.md#file-in-registry). To work with module registries, you must have [Bicep CLI](./install.md) version **0.4.1008 or later**. To use with Azure CLI, you must also have version **2.31.0 or later**; to use with Azure PowerShell, you must also have version **7.0.0** or later. A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r ## Publish files to registry -After setting up the container registry, you can publish files to it. Use the [publish](bicep-cli.md#publish) command and provide any Bicep files you intend to use as modules. Specify the target location for the module in your registry. The publish command will create an ARM template which will be stored in the registry. This means if publishing a Bicep file that references other local modules, these modules will be fully expanded as one JSON file and published to the registry. +After setting up the container registry, you can publish files to it. Use the [publish](bicep-cli.md#publish) command and provide any Bicep files you intend to use as modules. Specify the target location for the module in your registry. The publish command creates an ARM template, which is stored in the registry. This means if publishing a Bicep file that references other local modules, these modules are fully expanded as one JSON file and published to the registry. # [PowerShell](#tab/azure-powershell) az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bic -With the with source switch, you see an additional layer in the manifest: +With the with source switch, you see another layer in the manifest: :::image type="content" source="./media/private-module-registry/bicep-module-with-source-manifest.png" lightbox="./media/private-module-registry/bicep-module-with-source-manifest.png" alt-text="Screenshot of bicep module registry with source."::: -Note that if the Bicep module references a module in a Private Registry, the ACR endpoint will be visible. To hide the full endpoint, you can configure an alias for the private registry. +If the Bicep module references a module in a Private Registry, the ACR endpoint is visible. To hide the full endpoint, you can configure an alias for the private registry. ## View files in registry To see the published module in the portal: 1. Search for **container registries**. 1. Select your registry. 
1. Select **Services** -> **Repositories** from the left menu.-1. Select the module path (repository). In the preceding example, the module path name is **bicep/modules/storage**. +1. Select the module path (repository). In the preceding example, the module path name is **bicep/modules/storage**. 1. Select the tag. In the preceding example, the tag is **v1**.-1. The **Artifact reference** matches the reference you'll use in the Bicep file. +1. The **Artifact reference** matches the reference you use in the Bicep file. ![Bicep module registry artifact reference](./media/private-module-registry/bicep-module-registry-artifact-reference.png) You're now ready to reference the file in the registry from a Bicep file. For ex ## Working with Bicep registry files -When leveraging bicep files that are hosted in a remote registry, it's important to understand how your local machine will interact with the registry. When you first declare the reference to the registry, your local editor will try to communicate with the Azure Container Registry and download a copy of the registry to your local cache. +When using bicep files that are hosted in a remote registry, it's important to understand how your local machine interacts with the registry. When you first declare the reference to the registry, your local editor tries to communicate with the Azure Container Registry and download a copy of the registry to your local cache. The local cache is found in: The local cache is found in: ~/.bicep ``` -Any changes made to the remote registry will not be recognized by your local machine until a `restore` has been ran with the specified file that includes the registry reference. +Your local machine can recognize any changes made to the remote registry until you run a `restore` with the specified file that includes the registry reference. ```azurecli az bicep restore --file <bicep-file> [--force] ``` -For more information refer to the [`restore` command.](bicep-cli.md#restore) -+For more information, see the [`restore` command.](bicep-cli.md#restore) ## Next steps -* To learn about modules, see [Bicep modules](modules.md). -* To configure aliases for a module registry, see [Add module settings in the Bicep config file](bicep-config-modules.md). -* For more information about publishing and restoring modules, see [Bicep CLI commands](bicep-cli.md). +- To learn about modules, see [Bicep modules](modules.md). +- To configure aliases for a module registry, see [Add module settings in the Bicep config file](bicep-config-modules.md). +- For more information about publishing and restoring modules, see [Bicep CLI commands](bicep-cli.md). |
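To round out the publish-and-consume flow described in this row, a short sketch of referencing the published module from a Bicep file; the registry, repository path, and tag reuse the example values above, and the params block assumes the module exposes a storageAccountName parameter.

```bicep
// Consume the module published to the private registry (sketch; names are the example values).
module stg 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
  name: 'storageDeploy'
  params: {
    storageAccountName: 'examplestore01'
  }
}
```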
azure-resource-manager | Quickstart Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md | Title: Publish modules to private module registry description: Publish Bicep modules to private module registry and use the modules. Previously updated : 06/20/2024 Last updated : 06/28/2024 #Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry. -Learn how to publish Bicep modules to private modules registry, and how to call the modules from your Bicep files. Private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md). To contribute to the public module registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md). +Learn how to publish Bicep modules to private modules registry, and how to call the modules from your Bicep files. Private module registry allows you to share Bicep modules within your organization. To learn more, see [Create private registry for Bicep modules](./private-module-registry.md). ## Prerequisites |
batch | Batch Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md | Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 06/11/2024 Last updated : 06/27/2024 You can get the value of these service-defined variables to make adjustments tha | $UsableNodeCount | The number of usable compute nodes. | | $PreemptedNodeCount | The number of nodes in the pool that are in a preempted state. | -> [!WARNING] -> Select service-defined variables will be retired after **31 March 2024** as noted in the table above. After the retirement -> date, these service-defined variables will no longer be populated with sample data. Please discontinue use of these variables -> before this date. - > [!NOTE] > Use `$RunningTasks` when scaling based on the number of tasks running at a point in time, and `$ActiveTasks` when scaling based on the number of tasks that are queued up to run. In Batch .NET, the [CloudPool.AutoScaleRun](/dotnet/api/microsoft.azure.batch.cl - [AutoScaleRun.Results](/dotnet/api/microsoft.azure.batch.autoscalerun.results) - [AutoScaleRun.Error](/dotnet/api/microsoft.azure.batch.autoscalerun.error) -In the REST API, the [Get information about a pool request](/rest/api/batchservice/get-information-about-a-pool) returns information about the pool, which includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property. +In the REST API, [information about a pool](/rest/api/batchservice/get-information-about-a-pool) includes the latest automatic scaling run information in the [autoScaleRun](/rest/api/batchservice/get-information-about-a-pool) property. The following C# example uses the Batch .NET library to print information about the last autoscaling run on pool *myPool*. |
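Separately from that C# example, as an illustration of scaling on `$ActiveTasks`, a minimal autoscale formula sketch follows; the five-minute sampling window and the cap of 10 dedicated nodes are arbitrary values chosen only for the example.

```
$tasks = max(0, avg($ActiveTasks.GetSample(TimeInterval_Minute * 5)));
$TargetDedicatedNodes = min(10, $tasks);
$NodeDeallocationOption = taskcompletion;
```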
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 05/31/2024 Last updated : 06/27/2024 This article discusses best practices and useful tips for using the Azure Batch - **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode). -- **`virtualMachineConfiguration` or `cloudServiceConfiguration`:** While you can currently create pools using either-configuration, new pools should be configured using `virtualMachineConfiguration` and not `cloudServiceConfiguration`. -All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Service Configuration -pools don't support all features and no new capabilities are planned. You won't be able to create new -`cloudServiceConfiguration` pools or add new nodes to existing pools -[after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). -For more information, see -[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). - - **`classic` or `simplified` node communication mode:** Pools can be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md). In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes also require communicating to Azure Storage. In the simplified Before you recreate or resize your pool, you should download any node agent logs #### Operating system updates It's recommended that the VM image selected for a Batch pool should be up-to-date with the latest publisher provided security updates.-Some images may perform automatic updates upon boot (or shortly thereafter), which may interfere with certain user directed actions such +Some images may perform automatic package updates upon boot (or shortly thereafter), which may interfere with certain user directed actions such as retrieving package repository updates (for example, `apt update`) or installing packages during actions such as a [StartTask](jobs-and-tasks.md#start-task). +It's recommended to enable [Auto OS upgrade for Batch pools](batch-upgrade-policy.md), which allows the underlying +Azure infrastructure to coordinate updates across the pool. This option can be configured to be nondisrupting for task +execution. Automatic OS upgrade doesn't support all operating systems that Batch supports. For more information, see the +[Virtual Machine Scale Sets Auto OS upgrade Support Matrix](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#supported-os-images). 
+For Windows operating systems, ensure that you aren't enabling the property +`virtualMachineConfiguration.windowsConfiguration.enableAutomaticUpdates` when using Auto OS upgrade on the Batch pool. + Azure Batch doesn't verify or guarantee that images allowed for use with the service have the latest security updates. Updates to images are under the purview of the publisher of the image, and not that of Azure Batch. For certain images published under `microsoft-azure-batch`, there's no guarantee that these images are kept up-to-date with their upstream derived image. Pools can be created using third-party images published to Azure Marketplace. Wi ### Container pools -When specifying a Batch pool with a [virtual network](batch-virtual-network.md), there can be interaction +When you create a Batch pool with a [virtual network](batch-virtual-network.md), there can be interaction side effects between the specified virtual network and the default Docker bridge. Docker, by default, will create a network bridge with a subnet specification of `172.17.0.0/16`. Ensure that there are no conflicting IP ranges between the Docker network bridge and your virtual network. Tasks that only run for one to two seconds aren't ideal. Try to do a significant ### Use pool scope for short tasks on Windows nodes -When scheduling a task on Batch nodes, you can choose whether to run it with task scope or pool scope. If the task will only run for a short time, task scope can be inefficient due to the resources needed to create the auto-user account for that task. For greater efficiency, consider setting these tasks to pool scope. For more information, see [Run a task as an auto-user with pool scope](batch-user-accounts.md#run-a-task-as-an-auto-user-with-pool-scope). +When scheduling a task on Batch nodes, you can choose whether to run it with task scope or pool scope. If the task will only run for a short time, task scope can be inefficient due to the resources needed to create the autouser account for that task. For greater efficiency, consider setting these tasks to pool scope. For more information, see [Run a task as an autouser with pool scope](batch-user-accounts.md#run-a-task-as-an-auto-user-with-pool-scope). ## Nodes promotion into production use. If you notice a problem involving the behavior of a node or tasks running on a node, collect the Batch agent logs prior to deallocating the nodes in question. The Batch agent logs can be collected using the Upload Batch service logs API. These logs can be supplied as part of a support ticket to Microsoft and will help with issue troubleshooting and resolution. -### Manage OS upgrades --For user subscription mode Batch accounts, automated OS upgrades can interrupt task progress, especially if the tasks are long-running. [Building idempotent tasks](#build-durable-tasks) can help to reduce errors caused by these interruptions. We also recommend [scheduling OS image upgrades for times when tasks aren't expected to run](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#manually-trigger-os-image-upgrades). --For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but you can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly. 
- ## Batch API ### Timeout Failures Ensure that your Batch service clients have appropriate retry policies in place Typically, virtual machines in a Batch pool are accessed through public IP addresses that can change over the lifetime of the pool. This dynamic nature can make it difficult to interact with a database or other external service that limits access to certain IP addresses. To address this concern, you can create a pool using a set of static public IP addresses that you control. For more information, see [Create an Azure Batch pool with specified public IP addresses](create-pool-public-ip.md). -### Testing connectivity with Cloud Services configuration --You can't use the normal "ping"/ICMP protocol with cloud services, because the ICMP protocol isn't permitted through the Azure load balancer. For more information, see [Connectivity and networking for Azure Cloud Services](../cloud-services/cloud-services-connectivity-and-networking-faq.yml#can-i-ping-a-cloud-service-). - ## Batch node underlying dependencies Consider the following dependencies and restrictions when designing your Batch solutions. |
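Returning to the retry guidance earlier in this row, a hedged C# sketch that attaches an exponential retry policy to a Batch .NET client is shown below; the endpoint and credential values are placeholders, and the back-off interval and retry count are arbitrary example choices.

```csharp
using System;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Sketch: open a Batch client with placeholder credentials and add an exponential retry policy.
var credentials = new BatchSharedKeyCredentials(
    "https://<account>.<region>.batch.azure.com",  // placeholder account URL
    "<account-name>",
    "<account-key>");

using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // Retry transient request failures up to 3 times, starting from a 5-second back-off.
    batchClient.CustomBehaviors.Add(
        RetryPolicyProvider.ExponentialRetryProvider(TimeSpan.FromSeconds(5), 3));

    // ... issue Batch service calls here ...
}
```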
batch | Security Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md | Title: Batch security and compliance best practices description: Learn best practices and useful tips for enhancing security with your Azure Batch solutions. Previously updated : 09/13/2023 Last updated : 06/27/2024 -By default, Azure Batch accounts have a public endpoint and are publicly accessible. When an Azure Batch pool is created, the pool is provisioned in a specified subnet of an Azure virtual network. Virtual machines in the Batch pool are accessed through public IP addresses that are created by Batch. Compute nodes in a pool can communicate with each other when needed, such as to run multi-instance tasks, but nodes in a pool can't communicate with virtual machines outside of the pool. +By default, Azure Batch accounts have a public endpoint and are publicly accessible. When an Azure Batch pool is created, +the pool is provisioned in a specified subnet of an Azure virtual network. Virtual machines in the Batch pool are accessed, +by default, through public IP addresses that Batch creates. Compute nodes in a pool can communicate with each other when needed, +such as to run multi-instance tasks, but nodes in a pool can't communicate with virtual machines outside of the pool. :::image type="content" source="media/security-best-practices/typical-environment.png" alt-text="Diagram showing a typical Batch environment."::: Many features are available to help you create a more secure Azure Batch deploym ### Pool configuration -Many security features are only available for pools configured using [Virtual Machine Configuration](nodes-and-pools.md#configurations), and not for pools with Cloud Services Configuration. We recommend using Virtual Machine Configuration pools, which utilize [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), whenever possible. --Pools can also be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md). +Pools can be configured in one of two node communication modes, classic or [simplified](simplified-compute-node-communication.md). In the classic node communication model, the Batch service initiates communication to the compute nodes, and compute nodes also require communicating to Azure Storage. In the simplified node communication model, compute nodes initiate communication with the Batch service. Due to the reduced scope of inbound/outbound connections required, and not requiring Azure Storage node communication model will be Batch account access supports two methods of authentication: Shared Key and [Microsoft Entra ID](batch-aad-auth.md). -We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the [allowedAuthenticationModes](/rest/api/batchmanagement/batch-account/create) property. When this property is set, API calls using Shared Key authentication will be rejected. +We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. 
The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the [allowedAuthenticationModes](/rest/api/batchmanagement/batch-account/create) property. When this property is set, API calls using Shared Key authentication is rejected. ### Batch account pool allocation mode When creating a Batch account, you can choose between two [pool allocation modes](accounts.md#batch-accounts): -- **Batch service**: The default option, where the underlying Cloud Service or Virtual Machine Scale Set resources used to allocate and manage pool nodes are created on Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible.-- **User subscription**: The underlying Cloud Service or Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources.+- **Batch service**: The default option, where the underlying Virtual Machine Scale Set resources used to allocate and manage pool nodes are created on Batch-owned subscriptions, and aren't directly visible in the Azure portal. Only the Batch pools and nodes are visible. +- **User subscription**: The underlying Virtual Machine Scale Set resources are created in the same subscription as the Batch account. These resources are therefore visible in the subscription, in addition to the corresponding Batch resources. With user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription mode is required if you want to create Batch pools using Azure Reserved VM Instances, use Azure Policy on Virtual Machine Scale Set resources, and/or manage the core quota on the subscription (shared across all Batch accounts in the subscription). To create a Batch account in user subscription mode, you must also register your subscription with Azure Batch, and associate the account with an Azure Key Vault. Batch supports both Linux and Windows operating systems. Batch supports Linux wi distributions. It's recommended that the operating system is kept up-to-date with the latest patches provided by the OS publisher. +It's recommended to enable [Auto OS upgrade for Batch pools](batch-upgrade-policy.md), which allows the underlying +Azure infrastructure to coordinate updates across the pool. This option can be configured to be nondisrupting for task +execution. Automatic OS upgrade doesn't support all operating systems that Batch supports. For more information, see the +[Virtual Machine Scale Sets Auto OS upgrade Support Matrix](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#supported-os-images). +For Windows operating systems, ensure that you aren't enabling the property +`virtualMachineConfiguration.windowsConfiguration.enableAutomaticUpdates` when using Auto OS upgrade on the Batch pool. + Batch support for images and node agents phase out over time, typically aligned with publisher support timelines. It's recommended to avoid using images with impending end-of-life (EOL) dates or images that are past their EOL date. It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads at any time. EOL dates can be discovered via the The Batch node agent doesn't modify operating system level defaults for SSL/TLS versions or cipher suite ordering. 
In Windows, SSL/TLS versions and cipher suite order is controlled at the operating system level, and therefore the Batch node agent adopts the settings set by the image used by each compute node. Although the Batch node agent attempts to utilize the-most secure settings available when possible, it can still be limited by operating system level settings. We recommend that +most secure settings available when possible, it can still be limited by operating system level settings. We recommend that you review your OS level defaults and set them appropriately for the most secure mode that is amenable for your workflow and organizational requirements. For more information, please visit [Manage TLS](/windows-server/security/tls/manage-tls) for cipher suite order enforcement and |
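As a hedged illustration of reviewing those OS-level defaults on a Windows image, the built-in TLS cmdlets can list and prune the cipher suite order; the suite named here is only an example, and any change should follow your own security baseline.

```powershell
# List the cipher suites currently enabled on this Windows image (sketch).
Get-TlsCipherSuite | Format-Table Name

# Example only: remove a legacy cipher suite from the negotiation list.
Disable-TlsCipherSuite -Name "TLS_RSA_WITH_3DES_EDE_CBC_SHA"
```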
cloud-services-extended-support | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md | Title: Deploy Azure Cloud Services (extended support) - Azure portal -description: Deploy an Azure Cloud Service (extended support) using the Azure portal -+description: Deploy Azure Cloud Services (extended support) by using the Azure portal. + Previously updated : 10/13/2020 Last updated : 06/18/2024 -# Deploy Azure Cloud Services (extended support) using the Azure portal -This article explains how to use the Azure portal to create a Cloud Service (extended support) deployment. +# Deploy Cloud Services (extended support) by using the Azure portal -## Before you begin +This article shows you how to use the Azure portal to create an Azure Cloud Services (extended support) deployment. -Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources. +## Prerequisites ++Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources. ++## Deploy Cloud Services (extended support) ++To deploy Cloud Services (extended support) by using the portal: -## Deploy a Cloud Services (extended support) 1. Sign in to the [Azure portal](https://portal.azure.com). -2. Using the search bar located at the top of the Azure portal, search for and select **Cloud Services (extended support)**. +1. In the search bar, enter **Cloud Services (extended support)**, and then select it in the search results. ++ :::image type="content" source="media/deploy-portal-1.png" alt-text="Screenshot that shows a Cloud Services (extended support) search in the Azure portal, and selecting the result."::: ++1. On the **Cloud services (extended support)** services pane, select **Create**. ++ :::image type="content" source="media/deploy-portal-2.png" alt-text="Screenshot that shows selecting Create in the menu to create a new instance of Cloud Services (extended support)."::: ++ The **Create a cloud service (extended support)** pane opens. ++1. On the **Basics** tab, select or enter the following information: ++ - **Subscription**: Select a subscription to use for the deployment. + - **Resource group**: Select an existing resource group, or create a new one. + - **Cloud service name**: Enter a name for your Cloud Services (extended support) deployment. + - The DNS name of the cloud service is separate and is specified by the DNS name label of the public IP address. You can modify the DNS name in **Public IP** on the **Configuration** tab. + - **Region**: Select the region to deploy the service to. ++ :::image type="content" source="media/deploy-portal-3.png" alt-text="Image shows the Cloud Services (extended support) Basics tab."::: ++1. On the **Basics** tab under **Cloud service configuration, package, and service definition**, add your package (.cspkg or .zip) file, configuration (.cscfg) file, and definition (.csdef) file for the deployment. You can add existing files from blob storage or upload the files from your local machine. If you upload the files from your local machine, the files are then stored in a storage account in Azure. - :::image type="content" source="media/deploy-portal-1.png" alt-text="Image shows the all resources blade in the Azure portal."::: - -3. In the Cloud Services (extended support) pane select **Create**. 
+ :::image type="content" source="media/deploy-portal-4.png" alt-text="Screenshot that shows the section of the Basics tab where you upload files and select storage."::: - :::image type="content" source="media/deploy-portal-2.png" alt-text="Image shows purchasing a cloud service from the marketplace."::: +1. Select the **Configuration** tab, and then select or enter the following information: -4. The Cloud Services (extended support) creation window will open to the **Basics** tab. - - Select a Subscription. - - Choose a resource group or create a new one. - - Enter the desired name for your Cloud Service (extended support) deployment. - - The DNS name of the cloud service is separate and specified by the DNS name label of the public IP address and can be modified in the public IP section in the configuration tab. - - Select the region to deploy to. + - **Virtual network**: Select a virtual network to associate with the cloud service, or create a new virtual network. - :::image type="content" source="media/deploy-portal-3.png" alt-text="Image shows the Cloud Services (extended support) home blade."::: + - Cloud Services (extended support) deployments *must* be in a virtual network. + - The virtual network *must* also be referenced in the configuration (.cscfg) file under `NetworkConfiguration`. -5. Add your cloud service configuration, package and definition files. You can add existing files from blob storage or upload these from your local machine. If uploading from your local machine, these will be then be stored in a storage account. + - **Public IP**: Select an existing public IP address to associate with the cloud service, or create a new one. - :::image type="content" source="media/deploy-portal-4.png" alt-text="Image shows the upload section of the basics tab during creation."::: + - If you have IP input endpoints defined in your definition (.csdef) file, create a public IP address for your cloud service. + - Cloud Services (extended support) supports only a Basic SKU public IP address. + - If your configuration (.cscfg) file contains a reserved IP address, set the allocation type for the public IP address to **Static**. + - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address that's associated with the cloud service. + - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it's deployed. + - **Key vault**: Select a key vault. + - A key vault is required when you specify one or more certificates in your configuration (.cscfg) file. When you select a key vault, we attempt to find the selected certificates that are defined in your configuration (.cscfg) file based on the certificate thumbprints. If any certificates are missing from your key vault, you can upload them now , and then select **Refresh**. -6. Once all fields have been completed, move to and complete the **Configuration** tab. - - Select a virtual network to associate with the Cloud Service or create a new one. - - Cloud Service (extended support) deployments **must** be in a virtual network. The virtual network **must** also be referenced in the Service Configuration (.cscfg) file under the `NetworkConfiguration` section. - - Select an existing public IP address to associate with the Cloud Service or create a new one. - - If you have **IP Input Endpoints** defined in your Service Definition (.csdef) file, a public IP address will need to be created for your Cloud Service. 
- - Cloud Services (extended support) only supports the Basic IP address SKU. - - If your Service Configuration (.cscfg) contains a reserved IP address, the allocation type for the public IP must be set tp **Static**. - - Optionally, assign a DNS name for your cloud service endpoint by updating the DNS label property of the Public IP address that is associated with the cloud service. - - (Optional) Start Cloud Service. Choose start or not start the service immediately after creation. - - Select a Key Vault - - Key Vault is required when you specify one or more certificates in your Service Configuration (.cscfg) file. When you select a key vault we will try to find the selected certificates from your Service Configuration (.cscfg) file based on their thumbprints. If any certificates are missing from your key vault you can upload them now and click **Refresh**. + :::image type="content" source="media/deploy-portal-5.png" alt-text="Screenshot that shows the Configuration tab in the Azure portal when you create a Cloud Services (extended support) deployment."::: - :::image type="content" source="media/deploy-portal-5.png" alt-text="Image shows the configuration blade in the Azure portal when creating a Cloud Services (extended support)."::: +1. When all information is entered or selected, select the **Review + Create** tab to validate your deployment configuration and create your Cloud Services (extended support) deployment. -7. Once all fields have been completed, move to the **Review and Create** tab to validate your deployment configuration and create your Cloud Service (extended support). +## Related content -## Next steps - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)+- Deploy Cloud Services (extended support) by using [Azure PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md). +- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support). |
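Because the configuration (.cscfg) file must reference the virtual network, as noted in the Configuration step above, a hedged sketch of that NetworkConfiguration section follows; the virtual network, subnet, and role names are hypothetical and must match the resources and roles in your own deployment.

```xml
<!-- Sketch of the NetworkConfiguration section in a .cscfg file (names are hypothetical). -->
<NetworkConfiguration>
  <VirtualNetworkSite name="ContosoVNet" />
  <AddressAssignments>
    <InstanceAddress roleName="WebRole1">
      <Subnets>
        <Subnet name="ContosoWebTier1" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```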
cloud-services-extended-support | Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-powershell.md | Title: Deploy a Cloud Service (extended support) - PowerShell -description: Deploy a Cloud Service (extended support) using PowerShell -+ Title: Deploy Azure Cloud Services (extended support) - Azure PowerShell +description: Deploy Azure Cloud Services (extended support) by using Azure PowerShell. + Previously updated : 10/13/2020 Last updated : 06/18/2024 -# Deploy a Cloud Service (extended support) using Azure PowerShell +# Deploy Cloud Services (extended support) by using Azure PowerShell -This article shows how to use the `Az.CloudService` PowerShell module to deploy Cloud Services (extended support) in Azure that has multiple roles (WebRole and WorkerRole). +This article shows you how to use the Az.CloudService Azure PowerShell module to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole). -## Pre-requisites +## Prerequisites -1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources. -2. Install Az.CloudService PowerShell module. +Complete the following steps as prerequisites to creating your deployment by using Azure PowerShell. ++1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources. ++1. Install the Az.CloudService PowerShell module: ```azurepowershell-interactive Install-Module -Name Az.CloudService ``` -3. Create a new resource group. This step is optional if using an existing resource group. +1. Create a new resource group. This step is optional if you use an existing resource group. ```azurepowershell-interactive New-AzResourceGroup -ResourceGroupName "ContosOrg" -Location "East US" ``` -4. Create a storage account and container, which will be used to store the Cloud Service package (.cspkg) and Service Configuration (.cscfg) files. A unique name for storage account name is required. This step is optional if using an existing storage account. +1. Create a storage account and container in Azure to store the package (.cspkg or .zip) file and configuration (.cscfg) file for the Cloud Services (extended support) deployment. You must use a unique name for the storage account name. This step is optional if you use an existing storage account. ```azurepowershell-interactive $storageAccount = New-AzStorageAccount -ResourceGroupName "ContosOrg" -Name "contosostorageaccount" -Location "East US" -SkuName "Standard_RAGRS" -Kind "StorageV2" $container = New-AzStorageContainer -Name "contosocontainer" -Context $storageAccount.Context -Permission Blob ```- -## Deploy a Cloud Services (extended support) -Use any of the following PowerShell cmdlets to deploy Cloud Services (extended support): +## Deploy Cloud Services (extended support) ++Use any of the following PowerShell cmdlet options to deploy Cloud Services (extended support): ++- Quick-create a deployment by using a [storage account](#quick-create-a-deployment-by-using-a-storage-account) ++ - This parameter set inputs the package (.cspkg or .zip) file, the configuration (.cscfg) file, and the definition (.csdef) file for the deployment as inputs with the storage account. + - The Cloud Services (extended support) role profile, network profile, and OS profile are created by the cmdlet with minimal input. 
+ - To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. -1. [**Quick Create Cloud Service using a Storage Account**](#quick-create-cloud-service-using-a-storage-account) +- Quick-create a deployment by using a [shared access signature URI](#quick-create-a-deployment-by-using-an-sas-uri) - - This parameter set inputs the .cscfg, .cspkg and .csdef files as inputs along with the storage account. - - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user. - - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file. - - 2. [**Quick Create Cloud Service using a SAS URI**](#quick-create-cloud-service-using-a-sas-uri) - - - This parameter set inputs the SAS URI of the .cspkg along with the local paths of .csdef and .cscfg files. There is no storage account input required. - - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input from the user. - - For certificate input, the keyvault name is to be specified. The certificate thumbprints in the keyvault are validated against those specified in the .cscfg file. - -3. [**Create Cloud Service with role, OS, network and extension profile and SAS URIs**](#create-cloud-service-using-profile-objects--sas-uris) + - This parameter set inputs the shared access signature (SAS) URI of the package (.cspkg or .zip) file with the local paths to the configuration (.cscfg) file and definition (.csdef) file. No storage account input is required. + - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input. + - To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. - - This parameter set inputs the SAS URIs of the .cscfg and .cspkg files. - - The role, network, OS, and extension profile must be specified by the user and must match the values in the .cscfg and .csdef. +- Create a deployment by using a [role profile, OS profile, network profile, and extension profile with shared access signature URIs](#create-a-deployment-by-using-profile-objects-and-sas-uris) -### Quick Create Cloud Service using a Storage Account + - This parameter set inputs the SAS URIs of the package (.cspkg or .zip) file and configuration (.cscfg) file. + - You must specify profile objects: role profile, network profile, OS profile, and extension profile. The profiles must match the values that you set in the configuration (.cscfg) file and definition (.csdef) file. -Create Cloud Service deployment using .cscfg, .csdef and .cspkg files. 
+### Quick-create a deployment by using a storage account ++Create a Cloud Services (extended support) deployment by using the package (.cspkg or .zip) file, configuration (.cscfg) file, and definition (.csdef) files: ```azurepowershell-interactive-$cspkgFilePath = "<Path to cspkg file>" -$cscfgFilePath = "<Path to cscfg file>" -$csdefFilePath = "<Path to csdef file>" +$cspkgFilePath = "<Path to .cspkg file>" +$cscfgFilePath = "<Path to .cscfg file>" +$csdefFilePath = "<Path to .csdef file>" -# Create Cloud Service +# Create a Cloud Services (extended support) deployment New-AzCloudService -Name "ContosoCS" ` -ResourceGroupName "ContosOrg" ` New-AzCloudService [-KeyVaultName <string>] ``` -### Quick Create Cloud Service using a SAS URI +### Quick-create a deployment by using an SAS URI -1. Upload your Cloud Service package (cspkg) to the storage account. +1. Upload the package (.cspkg or .zip) file for the deployment to the storage account: ```azurepowershell-interactive $tokenStartTime = Get-Date New-AzCloudService $csdefFilePath = "<Path to csdef file>" ``` - 2. Create Cloud Service deployment using .cscfg, .csdef and .cspkg SAS URI. +1. Create the Cloud Services (extended support) deployment by using the package (.cspkg or .zip) file, configuration (.cscfg) file, and definition (.csdef) file SAS URI: ```azurepowershell-interactive New-AzCloudService New-AzCloudService -PackageURL $cspkgUrl ` [-KeyVaultName <string>] ```- -### Create Cloud Service using profile objects & SAS URIs -1. Upload your cloud service configuration (cscfg) to the storage account. +### Create a deployment by using profile objects and SAS URIs ++1. Upload your Cloud Services (extended support) configuration (.cscfg) file to the storage account: ```azurepowershell-interactive $cscfgBlob = Set-AzStorageBlobContent -File ΓÇ£./ContosoApp/ContosoApp.cscfgΓÇ¥ -Container contosocontainer -Blob ΓÇ£ContosoApp.cscfgΓÇ¥ -Context $storageAccount.Context $cscfgToken = New-AzStorageBlobSASToken -Container ΓÇ£contosocontainerΓÇ¥ -Blob $cscfgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context $cscfgUrl = $cscfgBlob.ICloudBlob.Uri.AbsoluteUri + $cscfgToken ```-2. Upload your Cloud Service package (cspkg) to the storage account. ++1. Upload your Cloud Services (extended support) package (.cspkg or .zip) file to the storage account: ```azurepowershell-interactive $tokenStartTime = Get-Date New-AzCloudService $cspkgToken = New-AzStorageBlobSASToken -Container ΓÇ£contosocontainerΓÇ¥ -Blob $cspkgBlob.Name -Permission rwd -StartTime $tokenStartTime -ExpiryTime $tokenEndTime -Context $storageAccount.Context $cspkgUrl = $cspkgBlob.ICloudBlob.Uri.AbsoluteUri + $cspkgToken ```- -3. Create a virtual network and subnet. This step is optional if using an existing network and subnet. This example uses a single virtual network and subnet for both cloud service roles (WebRole and WorkerRole). ++1. Create a virtual network and subnet. This step is optional if you use an existing network and subnet. This example uses a single virtual network and subnet for both Cloud Services (extended support) roles (WebRole and WorkerRole). ```azurepowershell-interactive $subnet = New-AzVirtualNetworkSubnetConfig -Name "ContosoWebTier1" -AddressPrefix "10.0.0.0/24" -WarningAction SilentlyContinue $virtualNetwork = New-AzVirtualNetwork -Name ΓÇ£ContosoVNetΓÇ¥ -Location ΓÇ£East USΓÇ¥ -ResourceGroupName ΓÇ£ContosOrgΓÇ¥ -AddressPrefix "10.0.0.0/24" -Subnet $subnet ```- -4. 
Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services. -If you are using a Static IP, you need to reference it as a Reserved IP in Service Configuration (.cscfg) file. ++1. Create a public IP address and set a DNS label value for the public IP address. Cloud Services (extended support) supports only a [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU public IP address. Standard SKU public IP addresses don't work with Cloud Services (extended support). ++ If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file for the deployment. ```azurepowershell-interactive $publicIp = New-AzPublicIpAddress -Name ΓÇ£ContosIpΓÇ¥ -ResourceGroupName ΓÇ£ContosOrgΓÇ¥ -Location ΓÇ£East USΓÇ¥ -AllocationMethod Dynamic -IpAddressVersion IPv4 -DomainNameLabel ΓÇ£contosoappdnsΓÇ¥ -Sku Basic ``` -5. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in Azure Resource Manager. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef). +1. Create a network profile object, and then associate the public IP address to the front end of the load balancer. The Azure platform automatically creates a Classic SKU load balancer resource in the same subscription as the Cloud Services (extended support) resource. The load balancer is a read-only resource in Azure Resource Manager. You can update resources only via the Cloud Services (extended support) configuration (.cscfg) file and deployment (.csdef) file. ```azurepowershell-interactive $publicIP = Get-AzPublicIpAddress -ResourceGroupName ContosOrg -Name ContosIp If you are using a Static IP, you need to reference it as a Reserved IP in Servi $loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name 'ContosoLB' -FrontendIPConfiguration $feIpConfig $networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig} ```- -6. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md). ++1. Create a key vault. The key vault stores certificates that are associated with Cloud Services (extended support) roles. The key vault must be in the same region and subscription as the Cloud Services (extended support) deployment and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). ```azurepowershell-interactive New-AzKeyVault -Name "ContosKeyVaultΓÇ¥ -ResourceGroupName ΓÇ£ContosOrgΓÇ¥ -Location ΓÇ£East USΓÇ¥ ``` -7. Update the Key Vault access policy and grant certificate permissions to your user account. +1. 
Update the key vault access policy and grant certificate permissions to your user account: ```azurepowershell-interactive Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -EnabledForDeployment Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete ``` - Alternatively, set access policy via ObjectId (which can be obtained by running `Get-AzADUser`). - + Alternatively, set the access policy by using the `ObjectId` value. To get the `ObjectId` value, run `Get-AzADUser`: + ```azurepowershell-interactive Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete ```- -8. In this example, we will add a self-signed certificate to a Key Vault. The certificate thumbprint needs to be added in Cloud Service Configuration (.cscfg) file for deployment on cloud service roles. +1. The following example adds a self-signed certificate to a key vault. You must add the certificate thumbprint via the configuration (.cscfg) file for Cloud Services (extended support) roles. ```azurepowershell-interactive $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" -SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy ```- -9. Create an OS Profile in-memory object. OS Profile specifies the certificates, which are associated to cloud service roles. This will be the same certificate created in the previous step. ++1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. This is the certificate that you created in the preceding step. ```azurepowershell-interactive $keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault If you are using a Static IP, you need to reference it as a Reserved IP in Servi $osProfile = @{secret = @($secretGroup)} ``` -10. Create a Role Profile in-memory object. Role profile defines a role sku specific properties such as name, capacity, and tier. In this example, we have defined two roles: frontendRole and backendRole. Role profile information should match the role configuration defined in configuration (cscfg) file and service definition (csdef) file. +1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration that's defined in the deployment configuration (.cscfg) file and definition (.csdef) file. ```azurepowershell-interactive $frontendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2 If you are using a Static IP, you need to reference it as a Reserved IP in Servi $roleProfile = @{role = @($frontendRole, $backendRole)} ``` -11. (Optional) Create an Extension Profile in-memory object that you want to add to your cloud service. For this example we will add RDP extension. +1. (Optional) Create an extension profile in-memory object to add to your Cloud Services (extended support) deployment. 
This example adds a Remote Desktop Protocol (RDP) extension: ```azurepowershell-interactive $credential = Get-Credential If you are using a Static IP, you need to reference it as a Reserved IP in Servi $wadExtension = New-AzCloudServiceDiagnosticsExtension -Name "WADExtension" -ResourceGroupName "ContosOrg" -CloudServiceName "ContosCS" -StorageAccountName "contosostorageaccount" -StorageAccountKey $storageAccountKey[0].Value -DiagnosticsConfigurationPath $configFile -TypeHandlerVersion "1.5" -AutoUpgradeMinorVersion $true $extensionProfile = @{extension = @($rdpExtension, $wadExtension)} ```- - ConfigFile should have only PublicConfig tags and should contain a namespace as following: - ++ The configuration (.cscfg) file should have only `PublicConfig` tags and should contain a namespace as shown in the following example: + ```xml <?xml version="1.0" encoding="utf-8"?> <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> ............... </PublicConfig> ```- -12. (Optional) Define Tags as PowerShell hash table that you want to add to your cloud service. ++1. (Optional) In a PowerShell hash table, you can define tags to add to your deployment: ```azurepowershell-interactive $tag=@{"Owner" = "Contoso"} ``` -13. Create Cloud Service deployment using profile objects & SAS URLs. +1. Create the Cloud Services (extended support) deployment by using the profile objects and SAS URIs that you defined: ```azurepowershell-interactive $cloudService = New-AzCloudService ` If you are using a Static IP, you need to reference it as a Reserved IP in Servi -Tag $tag ``` -## Next steps +## Related content + - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md). - Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support). |
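After `New-AzCloudService` returns, a quick way to confirm the result is to query the new resource and its role instances. The following is a minimal sketch, not part of the source article, assuming the `Az.CloudService` module is installed and reusing the example names `ContosOrg` and `ContosoCS` from the steps above:

```azurepowershell-interactive
# Minimal sketch: confirm the Cloud Services (extended support) resource exists
# and list the role instances that the deployment created.
Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"

Get-AzCloudServiceRoleInstance -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"
```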
cloud-services-extended-support | Deploy Prerequisite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md | Title: Prerequisites for deploying Azure Cloud Services (extended support) -description: Prerequisites for deploying Azure Cloud Services (extended support) + Title: Prerequisites for deploying Cloud Services (extended support) +description: Learn about the prerequisites for deploying Azure Cloud Services (extended support). Previously updated : 10/13/2020 Last updated : 06/16/2024 # Prerequisites for deploying Azure Cloud Services (extended support) -To ensure a successful Cloud Services (extended support) deployment review the below steps and complete each item prior to attempting any deployments. +To help ensure a successful Azure Cloud Services (extended support) deployment, review the following steps. Complete each prerequisitive before you begin to create a deployment. -## Required Service Configuration (.cscfg) file updates +## Required configuration file updates -### 1) Virtual Network -Cloud Service (extended support) deployments must be in a virtual network. Virtual network can be created through [Azure portal](../virtual-network/quick-create-portal.md), [PowerShell](../virtual-network/quick-create-powershell.md), [Azure CLI](../virtual-network/quick-create-cli.md) or [ARM Template](../virtual-network/quick-create-template.md). The virtual network and subnets must also be referenced in the Service Configuration (.cscfg) under the [NetworkConfiguration](schema-cscfg-networkconfiguration.md) section. +Use the information in the following sections to make required updates to the configuration (.cscfg) file for your Cloud Services (extended support) deployment. -For a virtual networks belonging to the same resource group as the cloud service, referencing only the virtual network name in the Service Configuration (.cscfg) file is sufficient. If the virtual network and cloud service are in two different resource groups, then the complete Azure Resource Manager ID of the virtual network needs to be specified in the Service Configuration (.cscfg) file. +### Virtual network ++Cloud Services (extended support) deployments must be in a virtual network. You can create a virtual network by using the [Azure portal](../virtual-network/quick-create-portal.md), [Azure PowerShell](../virtual-network/quick-create-powershell.md), the [Azure CLI](../virtual-network/quick-create-cli.md), or an [Azure Resource Manager template (ARM template)](../virtual-network/quick-create-template.md). The virtual network and subnets must be referenced in the [NetworkConfiguration](schema-cscfg-networkconfiguration.md) section of the configuration (.cscfg) file. ++For a virtual network that is in the same resource group as the cloud service, referencing only the virtual network name in the configuration (.cscfg) file is sufficient. If the virtual network and Cloud Services (extended support) are in two different resource groups, specify the complete Azure Resource Manager ID of the virtual network in the configuration (.cscfg) file. > [!NOTE]-> Virtual Network and cloud service located in a different resource groups is not supported in Visual Studio 2019. 
Please consider using the ARM template or Portal for successful deployments in such scenarios - -#### Virtual Network located in same resource group +> If the virtual network and Cloud Services (extended support) are located in different resource groups, you can't use Visual Studio 2019 for your deployment. For this scenario, consider using an ARM template or the Azure portal to create your deployment. ++#### Virtual network in the same resource group + ```xml <VirtualNetworkSite name="<vnet-name>"/> <AddressAssignments> For a virtual networks belonging to the same resource group as the cloud service </AddressAssignments> ``` -#### Virtual network located in different resource group +#### Virtual network in a different resource group + ```xml <VirtualNetworkSite name="/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"/> <AddressAssignments> For a virtual networks belonging to the same resource group as the cloud service </InstanceAddress> </AddressAssignments> ```-### 2) Remove the old plugins -Remove old remote desktop settings from the Service Configuration (.cscfg) file. +### Remove earlier versions of plugins ++Remove earlier versions of remote desktop settings from the configuration (.cscfg) file: ```xml <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> Remove old remote desktop settings from the Service Configuration (.cscfg) file. <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2021-12-17T23:59:59.0000000+05:30" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> ```-Remove old diagnostics settings for each role in the Service Configuration (.cscfg) file. ++Remove earlier versions of diagnostics settings for each role in the configuration (.cscfg) file: ```xml <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> ``` -## Required Service Definition file (.csdef) updates +## Required definition file updates > [!NOTE]-> Changes in service definition file (.csdef) requires the package file (.cspkg) to be generated again. Please build and repackage your .cspkg post making the following changes in the .csdef file to get the latest settings for your cloud service +> If you make changes to the definition (.csdef) file, you must generate the package (.cspkg or .zip) file again. Build and repackage your package (.cspkg or .zip) file after you make the following changes in the definition (.csdef) file to get the latest settings for your cloud service. -### 1) Virtual Machine sizes -The sizes listed in the left column below are deprecated in Azure Resource Manager. However, if you want to continue to use them update the `vmsize` name with the associated Azure Resource Manager naming convention. +### Virtual machine sizes -| Previous size name | Updated size name | +The following table lists deprecated virtual machine sizes and updated naming conventions through which you can continue to use the sizes. ++The sizes listed in the left column of the table are deprecated in Azure Resource Manager. If you want to continue to use the virtual machine sizes, update the `vmsize` value to use the new naming convention from the right column. 
++| Previous size name | Updated size name | |||-| ExtraSmall | Standard_A1_v2 | +| ExtraSmall | Standard_A1_v2 | | Small | Standard_A1_v2 |-| Medium | Standard_A2_v2 | -| Large | Standard_A4_v2 | -| ExtraLarge | Standard_A8_v2 | -| A5 | Standard_A2m_v2 | -| A6 | Standard_A4m_v2 | +| Medium | Standard_A2_v2 | +| Large | Standard_A4_v2 | +| ExtraLarge | Standard_A8_v2 | +| A5 | Standard_A2m_v2 | +| A6 | Standard_A4m_v2 | | A7 | Standard_A8m_v2 | -| A8 | Deprecated | +| A8 | Deprecated | | A9 | Deprecated |-| A10 | Deprecated | -| A11 | Deprecated | -| MSODSG5 | Deprecated | +| A10 | Deprecated | +| A11 | Deprecated | +| MSODSG5 | Deprecated | ++For example, `<WorkerRole name="WorkerRole1" vmsize="Medium">` becomes `<WorkerRole name="WorkerRole1" vmsize="Standard_A2">`. - For example, `<WorkerRole name="WorkerRole1" vmsize="Medium"` would become `<WorkerRole name="WorkerRole1" vmsize="Standard_A2"`. - > [!NOTE]-> To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters: <br> -`ResourceType = virtualMachines ` <br> -`VMDeploymentTypes = PaaS ` +> To retrieve a list of available sizes, see the [list of resource SKUs](/rest/api/compute/resourceskus/list). Apply the following filters: +> +> `ResourceType = virtualMachines` +> `VMDeploymentTypes = PaaS` +### Remove earlier versions of remote desktop plugins -### 2) Remove old remote desktop plugins -Deployments that utilized the old remote desktop plugins need to have the modules removed from the Service Definition (.csdef) file and any associated certificates. +For deployments that use earlier versions of remote desktop plugins, remove the modules from the definition (.csdef) file and from any associated certificates: ```xml <Imports> Deployments that utilized the old remote desktop plugins need to have the module <Import moduleName="RemoteForwarder" /> </Imports> ```-Deployments that utilized the old diagnostics plugins need the settings removed for each role from the Service Definition (.csdef) file ++For deployments that use earlier versions of diagnostics plugins, remove the settings for each role from the definition (.csdef) file: ```xml <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" /> ```-## Access Control -The subscription containing networking resources needs to have [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) access or above for Cloud Services (extended support). For more details on please refer to [RBAC built in roles](../role-based-access-control/built-in-roles.md) +## Access control ++The subscription that contains networking resources must have the [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) or greater role for Cloud Services (extended support). For more information, see [RBAC built-in roles](../role-based-access-control/built-in-roles.md). ++## Key vault creation -## Key Vault creation +Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificate that's stored as secrets in the key vault. 
You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). -Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault 'Access policies' (in portal) for 'Azure Virtual Machines for deployment' so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md). +## Related content -## Next steps -- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).+- Deploy a Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md). - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support). |
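The size guidance above points to the Resource SKUs REST API with the filters `ResourceType = virtualMachines` and `VMDeploymentTypes = PaaS`. A rough PowerShell sketch for browsing candidate sizes follows; it assumes the `Az.Compute` module and only approximates the REST filter, because the `VMDeploymentTypes` filter is applied on the REST query rather than on the returned objects:

```azurepowershell-interactive
# Minimal sketch: list virtual machine SKU names available in a region.
# This approximates the ResourceType filter; the VMDeploymentTypes=PaaS filter
# from the REST API is not applied here.
Get-AzComputeResourceSku -Location "eastus" |
    Where-Object { $_.ResourceType -eq "virtualMachines" } |
    Select-Object -ExpandProperty Name
```

For the key vault prerequisite, the access policy for **Azure Virtual Machines for deployment** can also be enabled from PowerShell rather than the portal. A minimal sketch, reusing the illustrative names `ContosKeyVault` and `ContosOrg` that appear elsewhere in these articles:

```azurepowershell-interactive
# Minimal sketch: create the key vault in the same region and subscription as the
# cloud service, then enable it for deployment so Cloud Services (extended support)
# can retrieve certificates stored as secrets.
New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
Set-AzKeyVaultAccessPolicy -VaultName "ContosKeyVault" -ResourceGroupName "ContosOrg" -EnabledForDeployment
```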
cloud-services-extended-support | Deploy Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-sdk.md | Title: Deploy Cloud Services (extended support) - SDK -description: Deploy Cloud Services (extended support) by using the Azure SDK -+ Title: Deploy Azure Cloud Services (extended support) - SDK +description: Deploy Azure Cloud Services (extended support) by using the Azure SDK. + Previously updated : 10/13/2020 Last updated : 06/18/2024 # Deploy Cloud Services (extended support) by using the Azure SDK -This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to deploy a Cloud Services (extended support) instance that has multiple roles (web role and worker role) and the remote desktop extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager. +This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole) and the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager. -## Before you begin +## Prerequisites -Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create associated resources. +Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources. ## Deploy Cloud Services (extended support)-1. Install the [Azure Compute SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Compute/43.0.0-preview) and initialize the client by using a standard authentication mechanism. ++To deploy Cloud Services (extended support) by using the SDK: ++1. Install the [Azure Compute SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Management.Compute/43.0.0-preview) and initialize the client by using a standard authentication method: ```csharp public class CustomLoginCredentials : ServiceClientCredentials Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services m_SrpClient.SubscriptionId = m_subId; ``` -2. Create a new resource group by installing the Azure Resource Manager NuGet package. +1. Create a new resource group by installing the Azure Resource Manager NuGet package: - ```csharp + ```csharp var resourceGroups = m_ResourcesClient.ResourceGroups; var m_location = ΓÇ£East USΓÇ¥; var resourceGroupName = "ContosoRG";//provide existing resource group name, if created already Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup); ``` -3. Create a storage account and container where you'll store the service package (.cspkg) and service configuration (.cscfg) files. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique. +1. Create a storage account and container where you'll store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique. 
```csharp string storageAccountName = ΓÇ£ContosoSASΓÇ¥ Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services sasConstraints.Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write; ``` -4. Upload the service package (.cspkg) file to the storage account. The package URL can be a shared access signature (SAS) URI from any storage account. +1. Upload the package (.cspkg or .zip) file to the storage account. The package URL can be a shared access signature (SAS) URI from any storage account. - ```csharp - CloudBlockBlob cspkgblockBlob = container.GetBlockBlobReference(ΓÇ£ContosoApp.cspkgΓÇ¥); - cspkgblockBlob.UploadFromFileAsync(ΓÇ£./ContosoApp/ContosoApp.cspkgΓÇ¥). Wait(); + ```csharp + CloudBlockBlob cspkgblockBlob = container.GetBlockBlobReference(ΓÇ£ContosoApp.cspkgΓÇ¥); + cspkgblockBlob.UploadFromFileAsync(ΓÇ£./ContosoApp/ContosoApp.cspkgΓÇ¥). Wait(); - //Generate the shared access signature on the blob, setting the constraints directly on the signature. - string cspkgsasContainerToken = cspkgblockBlob.GetSharedAccessSignature(sasConstraints); + //Generate the shared access signature on the blob, setting the constraints directly on the signature. + string cspkgsasContainerToken = cspkgblockBlob.GetSharedAccessSignature(sasConstraints); - //Return the URI string for the container, including the SAS token. - string cspkgSASUrl = cspkgblockBlob.Uri + cspkgsasContainerToken; - ``` + //Return the URI string for the container, including the SAS token. + string cspkgSASUrl = cspkgblockBlob.Uri + cspkgsasContainerToken; + ``` -5. Upload your service configuration (.cscfg) file to the storage account. Specify service configuration as either string XML or URL format. +1. Upload the configuration (.cscfg) file to the storage account. Specify the service configuration as either string XML or URL format. ```csharp CloudBlockBlob cscfgblockBlob = container.GetBlockBlobReference(ΓÇ£ContosoApp.cscfgΓÇ¥); Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services string cscfgSASUrl = cscfgblockBlob.Uri + sasCscfgContainerToken; ``` -6. Create a virtual network and subnet. Install the [Azure Network NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Network/). This step is optional if you're using an existing network and subnet. +1. Create a virtual network and subnet. Install the [Azure Network NuGet package](https://www.nuget.org/packages/Azure.ResourceManager.Network/). This step is optional if you're using an existing network and subnet. ```csharp VirtualNetwork vnet = new VirtualNetwork(name: vnetName) Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services m_NrpClient.VirtualNetworks.CreateOrUpdate(resourceGroupName, ΓÇ£ContosoVNetΓÇ¥, vnet); ``` -7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services. -If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file +1. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) supports only a [Basic](../virtual-network/ip-services/public-ip-addresses.md#sku) SKU public IP address. Standard SKU public IP addresses do not work with Cloud Services (extended support). 
- ```csharp + If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file. ++ ```csharp PublicIPAddress publicIPAddressParams = new PublicIPAddress(name: ΓÇ£ContosIpΓÇ¥) { Location = m_location, If you are using a Static IP you need to reference it as a Reserved IP in Servic PublicIPAddress publicIpAddress = m_NrpClient.PublicIPAddresses.CreateOrUpdate(resourceGroupName, publicIPAddressName, publicIPAddressParams); ``` -8. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. The Azure platform automatically creates a 'Classic' SKU load balancer resource in the same subscription as the cloud service resource. The load balancer resource is a read-only resource in ARM. Any updates to the resource are supported only via the cloud service deployment files (.cscfg & .csdef) +1. Create a network profile object and associate the public IP address with the front end of the load balancer. The Azure platform automatically creates a Classic SKU load balancer resource in the same subscription as the deployment. The load balancer resource is read-only in Azure Resource Manager. You can update the resource only via the Cloud Services (extended support) configuration (.cscfg) file and definition (.csdef) file. ```csharp LoadBalancerFrontendIPConfiguration feipConfiguration = new LoadBalancerFrontendIPConfiguration() If you are using a Static IP you need to reference it as a Reserved IP in Servic ``` -9. Create a key vault. This key vault will be used to store certificates that are associated with the Cloud Services (extended support) roles. The key vault must be located in the same region and subscription as the Cloud Services (extended support) instance and have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md). +1. Create a key vault. This key vault stores certificates that are associated with the Cloud Services (extended support) roles. The key vault must be in the same region and subscription as the Cloud Services (extended support) resource and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). - ```powershell - New-AzKeyVault -Name "ContosKeyVaultΓÇ¥ -ResourceGroupName ΓÇ£ContosoOrgΓÇ¥ -Location ΓÇ£East USΓÇ¥ - ``` + ```powershell + New-AzKeyVault -Name "ContosKeyVaultΓÇ¥ -ResourceGroupName ΓÇ£ContosoOrgΓÇ¥ -Location ΓÇ£East USΓÇ¥ + ``` -10. Update the key vault's access policy and grant certificate permissions to your user account. +1. Update the key vault's access policy and grant certificate permissions to your user account: - ```powershell - Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosoOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete - ``` + ```powershell + Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosoOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete + ``` - Alternatively, set the access policy via object ID (which you can get by running `Get-AzADUser`). 
+ Alternatively, set the access policy via object ID (which you can get by running `Get-AzADUser`): ```powershell- Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' - ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete + Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -ObjectId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -PermissionsToCertificates create,get,list,delete ``` -11. In this example, we'll add a self-signed certificate to a key vault. The certificate thumbprint needs to be added in the service configuration (.cscfg) file for deployment on Cloud Services (extended support) roles. +1. The following example adds a self-signed certificate to a key vault. The certificate thumbprint must be added in the configuration (.cscfg) file for Cloud Services (extended support) roles. ```powershell- $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" - SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal - Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" - CertificatePolicy $Policy + $Policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" - SubjectName "CN=contoso.com" -IssuerName "Self" -ValidityInMonths 6 -ReuseKeyOnRenewal + Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy ``` -12. Create an OS profile object. The OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. Here, it's the same certificate that we created in the previous step. +1. Create an OS profile object. The OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. You use the same certificate that you created in the preceding step. ```csharp CloudServiceOsProfile cloudServiceOsProfile = If you are using a Static IP you need to reference it as a Reserved IP in Servic }; ``` -13. Create a role profile object. A role profile defines role-specific properties for a SKU, such as name, capacity, and tier. +1. Create a role profile object. A role profile defines role-specific properties for a SKU, such as name, capacity, and tier. - In this example, we define two roles: ContosoFrontend and ContosoBackend. Role profile information should match the role configuration defined in the service configuration (.cscfg) file and the service definition (.csdef) file. + This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role that's defined in the configuration (.cscfg) file and definition (.csdef) file. ```csharp CloudServiceRoleProfile cloudServiceRoleProfile = new CloudServiceRoleProfile() If you are using a Static IP you need to reference it as a Reserved IP in Servic } ``` -14. (Optional) Create an extension profile object that you want to add to your Cloud Services (extended support) instance. In this example, we add an RDP extension. +1. (Optional) Create an extension profile object to add to your Cloud Services (extended support) deployment. This example adds a Remote Desktop Protocol (RDP) extension: ```csharp string rdpExtensionPublicConfig = "<PublicConfig>" + If you are using a Static IP you need to reference it as a Reserved IP in Servic }; ``` -15. Create the deployment of the Cloud Services (extended support) instance. +1. 
Create the Cloud Services (extended support) deployment: ```csharp CloudService cloudService = new CloudService If you are using a Static IP you need to reference it as a Reserved IP in Servic CloudService createOrUpdateResponse = m_CrpClient.CloudServices.CreateOrUpdate(ΓÇ£ContosOrgΓÇ¥, ΓÇ£ContosoCSΓÇ¥, cloudService); ``` -## Next steps +## Related content + - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md).-- Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [Azure PowerShell](deploy-powershell.md), an [ARM template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md). +- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support). |
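Because the SDK walkthrough above applies the RDP extension, a natural follow-up is retrieving the .rdp connection file for a role instance. The Compute SDK isn't required for that step; the following is a minimal Azure PowerShell sketch, assuming the `Az.CloudService` module and the article's example names `ContosOrg` and `ContosoCS`. The role instance name `ContosoFrontend_IN_0` is an assumed example:

```azurepowershell-interactive
# Minimal sketch: list role instances, then download the .rdp file for one of them.
Get-AzCloudServiceRoleInstance -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"

# "ContosoFrontend_IN_0" is an assumed instance name; use a name from the list above.
Get-AzCloudServiceRoleInstanceRemoteDesktopFile `
    -ResourceGroupName "ContosOrg" `
    -CloudServiceName "ContosoCS" `
    -RoleInstanceName "ContosoFrontend_IN_0" `
    -OutFile ".\ContosoFrontend_IN_0.rdp"
```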
cloud-services-extended-support | Deploy Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md | Title: Deploy Azure Cloud Services (extended support) - Templates -description: Deploy Azure Cloud Services (extended support) by using ARM templates -+ Title: Deploy Azure Cloud Services (extended support) - ARM template +description: Deploy Azure Cloud Services (extended support) by using an ARM template. + Previously updated : 10/13/2020 Last updated : 06/18/2024 -# Deploy a Cloud Service (extended support) using ARM templates +# Deploy Cloud Services (extended support) by using an ARM template -This tutorial explains how to create a Cloud Service (extended support) deployment using [ARM templates](../azure-resource-manager/templates/overview.md). +This article shows you how to use an [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) to create an Azure Cloud Services (extended support) deployment. -## Before you begin +## Prerequisites -1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the associated resources. +Complete the following steps as prerequisites to creating your deployment by using ARM templates. -2. Create a new resource group using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you are using an existing resource group. - -3. Create a new storage account using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you are using an existing storage account. +1. Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support) and create the required resources. -4. Upload your Package (.cspkg) and Service Configuration (.cscfg) files to the storage account using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob), or [PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Obtain the SAS URIs of both files to be added to the ARM template later in this tutorial. +1. Create a new resource group by using the [Azure portal](../azure-resource-manager/management/manage-resource-groups-portal.md) or [Azure PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md). This step is optional if you use an existing resource group. -5. (Optional) Create a key vault and upload the certificates. +1. Create a new storage account by using the [Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal) or [Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). This step is optional if you use an existing storage account. - - Certificates can be attached to cloud services to enable secure communication to and from the service. In order to use certificates, their thumbprints must be specified in your Service Configuration (.cscfg) file and uploaded to a key vault. A key vault can be created through the [Azure portal](../key-vault/general/quick-create-portal.md) or [PowerShell](../key-vault/general/quick-create-powershell.md). - - The associated key vault must be located in the same region and subscription as cloud service. 
- - The associated key vault for must be enabled appropriate permissions so that Cloud Services (extended support) resource can retrieve certificates from Key Vault. For more information, see [Certificates and Key Vault](certificates-and-key-vault.md) - - The key vault needs to be referenced in the OsProfile section of the ARM template shown in the below steps. +1. Upload the package (.cspkg or .zip) file and configuration (.cscfg) file to the storage account by using the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob) or [Azure PowerShell](../storage/blobs/storage-quickstart-blobs-powershell.md#upload-blobs-to-the-container). Save the shared access signature (SAS) URIs for both files to add to the ARM template in a later step. -## Deploy a Cloud Service (extended support) +1. (Optional) Create a key vault and upload the certificates. ++ - You can attach certificates to your deployment for secure communication to and from the service. If you use certificates, the certificate thumbprints must be specified in your configuration (.cscfg) file and be uploaded to a key vault. You can create a key vault by using the [Azure portal](../key-vault/general/quick-create-portal.md) or [Azure PowerShell](../key-vault/general/quick-create-powershell.md). + - The associated key vault must be in the same region and subscription as your Cloud Services (extended support) deployment. + - The associated key vault must have the relevant permissions so that Cloud Services (extended support) resources can retrieve certificates from the key vault. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). + - The key vault must be referenced in the `osProfile` section of the ARM template as shown in a later step. ++## Deploy Cloud Services (extended support) ++To deploy Cloud Services (extended support) by using a template: > [!NOTE]-> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via PowerShell - -1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template. +> An easier and faster way to generate your ARM template and parameter file is by using the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) in the portal to create your Cloud Services (extended support) via Azure PowerShell. ++1. Create a virtual network. The name of the virtual network must match virtual network references in the configuration (.cscfg) file. If you use an existing virtual network, omit this section from the ARM template. ```json "resources": [ This tutorial explains how to create a Cloud Service (extended support) deployme } ] ```- - If creating a new virtual network, add the following to the `dependsOn` section to ensure the platform creates the virtual network prior to creating the cloud service. + + If you create a new virtual network, add the following lines to the `dependsOn` section to ensure that the platform creates the virtual network before it creates the Cloud Services (extended support) instance: ```json "dependsOn": [ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]" ] ```- -2. 
Create a public IP address and (optionally) set the DNS label property of the public IP address. If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file. If using an existing IP address, skip this step and add the IP address information directly into the load balancer configuration settings of your ARM template. - ++1. Create a public IP address and (optionally) set the DNS label property of the public IP address. If you use a static IP address, you must reference it as a reserved IP address in the configuration (.cscfg) file. If you use an existing IP address, skip this step and add the IP address information directly in the load balancer configuration settings in your ARM template. + ```json "resources": [ { This tutorial explains how to create a Cloud Service (extended support) deployme } ] ```- - If creating a new IP address, add the following to the `dependsOn` section to ensure the platform creates the IP address prior to creating the cloud service. - ++ If you create a new IP address, add the following lines to the `dependsOn` section to ensure that the platform creates the IP address before it creates the Cloud Services (extended support) instance: + ```json "dependsOn": [ "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]" ] ```- -3. Create a Cloud Service (Extended Support) object, adding appropriate `dependsOn` references if you are deploying Virtual Networks or Public IP within your template. ++1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you are deploying virtual networks or public IP addresses in your template. ```json { This tutorial explains how to create a Cloud Service (extended support) deployme } } ```-4. Create a Network Profile Object for your Cloud Service and associate the public IP address to the frontend of the load balancer. A Load balancer is automatically created by the platform. ++1. Create a network profile object for your deployment, and associate the public IP address with the front end of the load balancer. The Azure platform automatically creates a load balancer. ```json "networkProfile": { This tutorial explains how to create a Cloud Service (extended support) deployme ] } ```- -5. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration (.cscfg) file. You also need to enable Key Vault 'Access policies' for 'Azure Virtual Machines for deployment'(on portal) so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. The key vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md). - +1. Add your key vault reference in the `osProfile` section of the ARM template. A key vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to the key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file. Also, set the key vault access policy for **Azure Virtual Machines for deployment** in the Azure portal so that the Cloud Services (extended support) resource can retrieve the certificates that are stored as secrets in the key vault. 
The key vault must be in the same region and subscription as your Cloud Services (extended support) resource and have a unique name. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). + ```json "osProfile": { "secrets": [ This tutorial explains how to create a Cloud Service (extended support) deployme ``` > [!NOTE]- > SourceVault is the ARM Resource ID to your key vault. You can find this information by locating the Resource ID in the properties section of your key vault. - > - certificateUrl can be found by navigating to the certificate in the key vault labeled as **Secret Identifier**.  - > - certificateUrl should be of the form https://{keyvault-endpoin}/secrets/{secretname}/{secret-id} + > `sourceVault`in the ARM template  is the value of the resource ID for your key vault. You can get this information by finding **Resource ID** in the **Properties** section of your key vault. + > - You can get the value for `certificateUrl` by going to the certificate in the key vault that's labeled **Secret Identifier**.  + > - `certificateUrl` should be of the form of `https://{keyvault-endpoint}/secrets/{secret-name}/{secret-id}`. -6. Create a Role Profile. Ensure that the number of roles, role names, number of instances in each role and sizes are the same across the Service Configuration (.cscfg), Service Definition (.csdef) and role profile section in ARM template. - +1. Create a role profile. Ensure that the number of roles, the number of instances in each role, role names, and role sizes are the same across the configuration (.cscfg) file, the definition (.csdef) file, and the `roleProfile` section in the ARM template. + ```json "roleProfile": { "roles": { This tutorial explains how to create a Cloud Service (extended support) deployme } ``` -7. (Optional) Create an extension profile to add extensions to your cloud service. For this example, we are adding the remote desktop and Windows Azure diagnostics extension. - > [!Note] - > The password for remote desktop must be between 8-123 characters long and must satisfy at least 3 of password complexity requirements from the following: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed +1. (Optional) Create an extension profile to add extensions to your Cloud Services (extended support) deployment. The following example adds the Remote Desktop Protocol (RDP) extension and the Azure Diagnostics extension. ++ > [!NOTE] + > The password for RDP must from 8 to 123 characters and must satisfy at least *three* of the following password-complexity requirements: + > + > Contains an uppercase character. + > Contains a lowercase character. + > Contains a numeric digit. + > Contains a special character. + > Cannot contain a control character. ```json "extensionProfile": { This tutorial explains how to create a Cloud Service (extended support) deployme } ``` -8. Review the full template. +1. 
Review the full template: ```json { This tutorial explains how to create a Cloud Service (extended support) deployme "packageSasUri": { "type": "securestring", "metadata": {- "description": "SAS Uri of the CSPKG file to deploy" + "description": "SAS URI of the package (.cspkg) file to deploy" } }, "configurationSasUri": { "type": "securestring", "metadata": {- "description": "SAS Uri of the service configuration (.cscfg)" + "description": "SAS URI of the configuration (.cscfg) file" } }, "roles": { This tutorial explains how to create a Cloud Service (extended support) deployme "wadPublicConfig_WebRole1": { "type": "string", "metadata": {- "description": "Public configuration of Windows Azure Diagnostics extension" + "description": "Public configuration of the Azure Diagnostics extension" } }, "wadPrivateConfig_WebRole1": { "type": "securestring", "metadata": {- "description": "Private configuration of Windows Azure Diagnostics extension" + "description": "Private configuration of the Azure Diagnostics extension" } }, "vnetName": { This tutorial explains how to create a Cloud Service (extended support) deployme } ``` -9. Deploy the template and parameter file (defining parameters in template file) to create the Cloud Service (extended support) deployment. Please refer these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support) as required. +1. Deploy the template and parameter file (to define parameters in the template file) to create the Cloud Services (extended support) deployment. You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support). ```powershell New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file" ``` -## Next steps +## Related content - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)+- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [Azure PowerShell](deploy-powershell.md), or [Visual Studio](deploy-visual-studio.md). +- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support). |
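Before running `New-AzResourceGroupDeployment` as shown in the final step, the template and parameter file can be validated first. A minimal sketch, assuming the same placeholder file paths and the `ContosOrg` resource group used above:

```azurepowershell-interactive
# Minimal sketch: validate the ARM template before creating the deployment.
Test-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" `
    -TemplateFile "file path to your template file" `
    -TemplateParameterFile "file path to your parameter file"

# If validation succeeds, create the deployment and check its provisioning state.
$deployment = New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" `
    -TemplateFile "file path to your template file" `
    -TemplateParameterFile "file path to your parameter file"
$deployment.ProvisioningState
```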
cloud-services-extended-support | In Place Migration Technical Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md | These are top scenarios involving combinations of resources, features and Cloud | Migration of deployments containing both production and staging slot deployment using Reserved IP addresses | Not supported. | | Migration of production and staging deployment in different virtual network|Migration of a two slot cloud service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent cloud service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with swappable property enabled. Deployments files of the old staging slot deployment can be reused to create this new swappable deployment. | | Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. | -| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-service-definition-file-csdef-updates) for use on Cloud Services (extended support).| +| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).| | Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. | Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)| | Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. | |
cloud-services-extended-support | Post Migration Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md | -# Post migration changes -The Cloud Services (classic) deployment is converted to a Cloud Service (extended support) deployment. For more information, see [Cloud Services (extended support) documentation](deploy-prerequisite.md). +# Post-migration changes ++The Cloud Services (classic) deployment is converted to a Cloud Services (extended support) deployment. For more information, see [Cloud Services (extended support) documentation](deploy-prerequisite.md). ## Changes to deployment files Minor changes are made to customerΓÇÖs .csdef and .cscfg file to make the deploy - Virtual Network uses full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name. -- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates)+- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-definition-file-updates) - Use the Get API to get the latest copy of the deployment files. - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [REST API](/rest/api/resources/resourcegroups/exporttemplate) Customers need to update their tooling and automation to start using the new API - As part of migration, the names of few resources like the Cloud Service, public IP addresses, etc. change. These changes might need to be reflected in deployment files before update of Cloud Service. [Learn More about the names of resources changing](in-place-migration-technical-details.md#translation-of-resources-and-naming-convention-post-migration). - Recreate rules and policies required to manage and scale cloud services - - [Auto Scale rules](configure-scaling.md) are not migrated. After migration, recreate the auto scale rules. - - [Alerts](enable-alerts.md) are not migrated. After migration, recreate the alerts. + - [Auto Scale rules](configure-scaling.md) aren't migrated. After migration, recreate the auto scale rules. + - [Alerts](enable-alerts.md) aren't migrated. After migration, recreate the alerts. - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates will be visible under settings on the tab called secrets. 
Customers need to update their tooling and automation to start using the new API As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, PowerShell or REST API. -Currently, Azure Portal does a validation for you to check if all the required Certificates are uploaded in certificate store in Key Vault and warns if a certificate is not found. However, if you are planning to use Certificates as secrets, then these certificates cannot be validated for their thumbprint and any update operation which involves addition of secrets would fail via Portal. Customers are reccomended to use PowerShell or RestAPI to continue updates involving Secrets. +Currently, the Azure portal does a validation for you to check if all the required Certificates are uploaded in certificate store in Key Vault and warns if a certificate isn't found. However, if you're planning to use Certificates as secrets, then these certificates can't be validated for their thumbprint and any update operation that involves addition of secrets would fail via Portal. Customers are recommended to use PowerShell or RestAPI to continue updates involving Secrets. ## Changes for Update via Visual Studio-If you were publishing updates via Visual Studio directly, then you would need to first download the latest CSCFG file from your deployment post migration. Use this file as reference to add Network Configuration details to your current CSCFG file in Visual Studio project. Then build the solution and publish it. You may have to choose the Key Vault and Resource Group for this update. +If you were publishing updates via Visual Studio directly, then you would need to first download the latest CSCFG file from your deployment post migration. Use this file as reference to add Network Configuration details to your current CSCFG file in Visual Studio project. Then build the solution and publish it. You might have to choose the Key Vault and Resource Group for this update. ## Next steps |
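The post-migration guidance above recommends pulling the latest copy of the template so that deployment files and automation can be updated against the new resource names. A minimal PowerShell sketch, assuming the migrated resources live in a resource group named `ContosOrg` (an illustrative name):

```azurepowershell-interactive
# Minimal sketch: export the ARM template for the migrated resource group so the
# new resource names and network configuration can be copied into local files.
Export-AzResourceGroup -ResourceGroupName "ContosOrg" -Path ".\ContosOrg-template.json"
```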
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **June 27, 2024** +The June Guest OS has released. + ###### **June 1, 2024** The May Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.42_202406-01 | June 27, 2024 | Post 7.45 | | WA-GUEST-OS-7.41_202405-01 | June 1, 2024 | Post 7.44 | | WA-GUEST-OS-7.40_202404-01 | April 19, 2024 | Post 7.43 |-| WA-GUEST-OS-7.39_202403-02 | April 9, 2024 | Post 7.42 | +|~~WA-GUEST-OS-7.39_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-7.38_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-7.37_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-7.36_202312-01~~| January 16, 2024 | April 9, 2024 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.72_202406-01 | June 27, 2024 | Post 6.75 | | WA-GUEST-OS-6.71_202405-01 | June 1, 2024 | Post 6.74 | | WA-GUEST-OS-6.70_202404-01 | April 19, 2024 | Post 6.73 |-| WA-GUEST-OS-6.69_202403-02 | April 9, 2024 | Post 6.72 | +|~~WA-GUEST-OS-6.69_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-6.68_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-6.67_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-6.66_202312-01~~| January 16, 2024 | April 9, 2024 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.96_202406-01 | June 27, 2024 | Post 5.99 | | WA-GUEST-OS-5.95_202405-01 | June 1, 2024 | Post 5.98 | | WA-GUEST-OS-5.94_202404-01 | April 19, 2024 | Post 5.97 |-| WA-GUEST-OS-5.93_202403-02 | April 9, 2024 | Post 5.96 | +|~~WA-GUEST-OS-5.93_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-5.92_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-5.91_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-5.90_202312-01~~| January 16, 2024 | April 9, 2024 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.132_202406-01 | June 27, 2024 | Post 4.135 | | WA-GUEST-OS-4.131_202405-01 | June 1, 2024 | Post 4.134 | | WA-GUEST-OS-4.130_202404-01 | April 19, 2024 | Post 4.133 |-| WA-GUEST-OS-4.129_202403-02 | April 9, 2024 | Post 4.132 | +|~~WA-GUEST-OS-4.129_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-4.128_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-4.127_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-4.126_202312-01~~| January 16, 2024 | April 9, 2024 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.140_202406-01 | June 27, 2024 | Post 3.143 | | WA-GUEST-OS-3.139_202405-01 | June 1, 2024 | Post 3.142 | | WA-GUEST-OS-3.138_202404-01 | April 19, 2024 | Post 3.141 |-| WA-GUEST-OS-3.137_202403-02 | April 9, 2024 | Post 3.140 | +|~~WA-GUEST-OS-3.137_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-3.136_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-3.135_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-3.134_202312-01~~| January 16, 2024 | April 9, 2024 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.152_202406-01 | June 27, 2024 | Post 2.155 | | WA-GUEST-OS-2.151_202405-01 | June 1, 2024 | Post 2.154 | | WA-GUEST-OS-2.150_202404-01 | April 19, 2024 | Post 2.153 |-| WA-GUEST-OS-2.149_202403-02 | April 9, 2024 | Post 2.152 | +|~~WA-GUEST-OS-2.149_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-2.148_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-2.147_202401-01~~| January 22, 2024 | April 19, 2024 | |~~WA-GUEST-OS-2.146_202312-01~~| January 16, 2024 | April 9, 2024 | |
communication-services | Incoming Call Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md | Title: Incoming call concepts description: Learn about Azure Communication Services IncomingCall notification-+ Last updated 09/26/2022-+ # Incoming call concepts |
communication-services | Teams Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md | Tenant configurations are organization-wide settings that impact everyone in the |Setting name | Description| Tenant configuration |Property | |--|--|--|--|-|Enable federation with Azure Communication Services| If enabled, Azure Communication Services users can join Teams meeting as Communication Services users even if Teams anonymous users are not allowed| [CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| EnableAcsUsers| -|List federated Azure Communication Services resources | Users from listed Azure Communication Services resources can join Teams meeting if Teams anonymous users are not allowed to join. |[CsTeamsAcsFederationConfiguration](/PowerShell/module/teams/set-csteamsacsfederationconfiguration)| AllowedAcsResources | |[Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | If disabled, Teams external users can't join Teams meetings. | [CsTeamsMeetingConfiguration](/PowerShell/module/skype/set-csteamsmeetingconfiguration) | DisableAnonymousJoin | Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users. |
communication-services | Room Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md | Azure Communication Services provides a concept of a room for developers who are Here are the main scenarios where rooms are useful: - **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.-- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call.+- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This allows only a subset of users with assigned Communication Services identities to join a room call. - **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference. - **Add PSTN participants.** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC). Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in Use the [Calling SDKs](../voice-video-calling/calling-sdk-features.md) to join the room call. Room calls can be joined using the Web, iOS or Android Calling SDKs. You can find quick start samples for joining room calls [here](../../quickstarts/rooms/join-rooms-call.md). -Rooms can also be accessed using the [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/docs/rooms--page). The UI Library enables developers to add a call client that is Rooms enabled into their application with only a couple lines of code. +Rooms can also be accessed using the [Azure Communication Services UI Library](../../concepts/ui-library/ui-library-overview.md). The UI Library enables developers to add a call client that is Rooms enabled into their application with only a couple lines of code. ## Predefined participant roles and permissions |
communication-services | Classification Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/classification-concepts.md | Title: Job classification concepts for Azure Communication Services description: Learn about the Azure Communication Services Job Router classification concepts.-+ -+ Last updated 10/14/2021 |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md | Title: Job Router overview for Azure Communication Services description: Learn about the Azure Communication Services Job Router.-+ -+ Last updated 10/14/2021 |
communication-services | Router Rule Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md | Title: Job Router rule engines description: Learn about the Azure Communication Services Job Router rules engine concepts.-+ -- + + Last updated 10/14/2021 |
communication-services | Escalate Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/escalate-job.md | Title: Escalate a Job in Job Router description: Use Azure Communication Services SDKs to escalate a Job--++ Last updated 10/14/2021 |
communication-services | Job Classification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/job-classification.md | Title: Classify a Job description: Use Azure Communication Services SDKs to change the properties of a job--++ Last updated 10/14/2021 |
communication-services | Manage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/manage-queue.md | Title: Manage a queue in Job Router description: Use Azure Communication Services SDKs to manage the behavior of a queue--++ Last updated 10/14/2021 |
communication-services | Subscribe Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md | Title: Subscribe to events in Job Router description: Use Azure Communication Services SDKs to subscribe to Job Router events from Event Grid--++ Last updated 10/14/2021 |
communication-services | Theming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md | The Azure Communication Services UI Library is a set of components, icons, and c In this article, you learn how to change the theme for UI Library components as you configure an application. -The UI Library is fully documented for developers on a separate site. The documentation is interactive and helps you understand how the APIs work by giving you the ability to try them directly from a webpage. For more information, see the [UI Library documentation](https://azure.github.io/communication-ui-library/?path=/docs/overview--page). +The UI Library is fully documented for developers on a separate site. The documentation is interactive and helps you understand how the APIs work by giving you the ability to try them directly from a webpage. For more information, see the [UI Library documentation](../../concepts/ui-library/ui-library-overview.md). ## Prerequisites |
communication-services | Get Started Router | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/router/get-started-router.md | Title: Quickstart - Submit a Job for queuing and routing description: In this quickstart, you'll learn how to create a Job Router client, Distribution Policy, Queue, and Job within your Azure Communication Services resource.-+ -+ Last updated 10/18/2021 |
communication-services | Understanding Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/general-troubleshooting-strategies/understanding-error-codes.md | There are different explanations for why a call ended. Here are the meanings of ||||--|--| | 0 | 0 | Call ended successfully by local participant. | Success | | | 0 | 487 | Call ended successfully as caller canceled the call. | Success | |-| 0 | 603 | Call ended successfully as it was declined from callee. | Success | | +| 0 | 603 | Call ended successfully as it was declined by the callee. | Success | | +| 3100 | 410 | Call setup failed due to an unexpected network problem on the client. Check the client's network and retry. | UnexpectedClientError | - Ensure that you're using the latest SDK in a supported environment.<br> | +| 3101 | 410 | Call dropped due to an unexpected network problem on the client. Check the client's network and retry. | UnexpectedClientError | | +| 3112 | 410 | Call setup failed due to a network configuration problem on the client side. Check the client's network configuration and retry. | ExpectedError | | | 4097 | 0 | Call ended for all users by the meeting organizer. | Success | |-| 4507 | 495 | Call ended as application didn't provide valid Azure Communication Services token. | UnexpectedClientError |- Ensure that your application implements token refresh mechanism correctly. | +| 4507 | 495 | Call ended as the application didn't provide a valid Azure Communication Services token. | UnexpectedClientError |- Ensure that your application implements the token refresh mechanism correctly. | +| 4521 | 0 | Call ended because the user disconnected from the call abruptly. This might happen when the user closes the application that hosted the call, for example by terminating the application or closing the browser or browser tab without a proper hang-up. | ExpectedError | | | 5000 | 0 | Call ended for this participant as it was removed from the conversation by another participant. | Success | | | 5003 | 0 | Call ended successfully, as all callee endpoints declined the call. | Success | | | 5300 | 0 | Call ended for this participant as it was removed from the conversation by another participant. | Success | | | 7000 | 0 | Call ended by Azure Communication Services platform. | Success | | | 10003 | 487 | Call was accepted elsewhere, by another endpoint of this user. | Success | | | 10004 | 487 | Call was canceled on timeout, no callee endpoint accepted on time. Ensure that the user saw the notification and try to initiate that call again. | ExpectedError | |-| 10024 | 487 | Call ended successfully as it was declined by all callee endpoint. | Success | - Try to place the call again. | +| 10024 | 487 | Call ended successfully as it was declined by all callee endpoints. | Success | - Try to place the call again. | +| 10057 | 408 | Call failed because the callee didn't finalize the call setup, most likely because the callee lost network connectivity or terminated the application abruptly. Ensure that clients are connected and available. | ExpectedError | | | 301005 | 410 | Participant was removed from the call by the Azure Communication Services infrastructure due to loss of media connectivity with the Azure Communication Services infrastructure. This usually happens if the participant leaves the call abruptly or loses network connectivity. If the participant wants to continue the call, they should reconnect. 
| UnexpectedClientError | - Ensure that you're using the latest SDK in a supported environment.<br> | | 510403 | 403 | Call ended, as it was marked as spam and blocked. | ExpectedError | - Ensure that your Communication Services token is valid and not expired.<br> - Ensure that you pass AlternateId in the call options.<br> | | 540487 | 487 | Call ended successfully as caller canceled the call. | Success | | | 560000 | 0 | Call ended successfully by remote PSTN participant. | Success |Possible causes:<br> - User ended the call.<br> - Call was ended by media agent.<br> |-| 560486 | 486 | Call ended because remote PSTN participant was busy. The number called was already in a call or having technical iss +| 560486 | 486 | Call ended because remote PSTN participant was busy. The number called was already in a call or having technical issues. | Success | - For Direct Routing calls, check your Session Border Controller logs, settings, and timeout configuration.<br> Possible causes: <br> - The number called was already in a call or having technical issues.<br> | ## Azure Communication Services Calling SDK client error codes and subcodes For client errors, if the resultCategories property is `ExpectedError`, the error is expected from the SDK's perspective. Such errors are commonly encountered in precondition failures, such as incorrect arguments passed by the app, or when the current system state doesn't allow the API call. The application should check the error reason and the logic for invoking the API. |
communication-services | Events Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md | Title: Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services -description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform. +description: Learn how to use Microsoft Teams, Graph, and Azure Communication Services to build a custom event management platform. The goal of this document is to reduce the time it takes for Event Management Pl ## What are virtual events and event management platforms? -Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. +Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars, and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. ## What are the building blocks of an event management platform? To get started, event organizers must schedule and configure the event. This pro ### 2. Attendee experience -For event attendees, they are presented with an experience that enables them to attend, participate, and engage with an eventΓÇÖs content. This experience might include capabilities like watching content, sharing their camera stream, asking questions, responding to polls, and more. Microsoft provides two options for attendees to consume events powered by Teams and Azure Communication +For event attendees, they're presented with an experience that enables them to attend, participate, and engage with an eventΓÇÖs content. This experience might include capabilities like watching content, sharing their camera stream, asking questions, responding to polls, and more. Microsoft provides two options for attendees to consume events powered by Teams and Azure Communication - Teams Client (Web or Desktop): Attendees can directly join events using a Teams Client by using a provided join link. 
They get access to the full Teams experience. Event hosts and organizers require the ability to present content, manage attend ## Building a custom solution for event management with Azure Communication Services and Microsoft Graph -Throughout the rest of this tutorial, we will focus on how using Azure Communication Services and Microsoft Graph to build a custom event management platform. We will be using the sample architecture below. Based on that architecture we will be focusing on setting up scheduling and registration flows and embedding the attendee experience right on the event platform to join the event. +Throughout the rest of this tutorial, we'll focus on how to use Azure Communication Services and Microsoft Graph to build a custom event management platform. We'll be using the sample architecture below. Based on that architecture, we'll focus on setting up scheduling and registration flows and on embedding the attendee experience for joining the event directly in the event platform. :::image type="content" source="./media/event-management-platform-architecture.svg" alt-text="Diagram showing sample architecture for event management platform"::: Microsoft Graph enables event management platforms to empower organizers to sche 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and which will receive notifications for it. We recommend not using a personal production account, given the overhead it might incur in the form of reminders. - 2. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](/entra/identity-platform/access-tokens). and [refresh tokens](/entra/identity-platform/refresh-tokens). + 2. As part of the application setup, the service account is used to log in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](/entra/identity-platform/access-tokens) and [refresh tokens](/entra/identity-platform/refresh-tokens). 3. The application will require "on behalf of" permissions with the [offline scope](/entra/identity-platform/permissions-consent-overview#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes; learn more in the links detailed below as we introduce the required APIs. Through Azure Communication Services, developers can use SMS and Email capabilit >[!NOTE] > Limitations when using Azure Communication Services as part of a Teams Webinar experience. Please visit our [documentation for more details.](../concepts/join-teams-meeting.md#limitations-and-known-issues) -Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. 
It provides low-level calling and chat SDKs which support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios. +Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs that support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library, which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios. 1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta&preserve-view=true). Alternatively, it can be [requested for a given meeting](/graph/api/onlinemeeting-get?tabs=http&view=graph-rest-beta&preserve-view=true). Attendee experience can be directly embedded into an application or platform usi 3. Once a resource is created, developers must [generate access tokens](../quickstarts/identity/access-tokens.md?pivots=programming-language-javascript&preserve-view=true) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](../concepts/client-and-server-architecture.md). -4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below: +4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](../concepts/ui-library/ui-library-overview.md) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below: |Headless SDKs | UI Library | |-||-| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript&preserve-view=true) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript&preserve-view=true) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. 
Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.| +| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript&preserve-view=true) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript&preserve-view=true) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](../concepts/ui-library/ui-library-overview.md) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](../concepts/ui-library/ui-library-use-cases.md) to build a custom Teams interop experience.| >[!NOTE] |
container-apps | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md | To get a token for a resource, make an HTTP `GET` request to the endpoint, inclu -## Use managed identity for scale rules +## <a name="scale-rules"></a>Use managed identity for scale rules -Starting in API version `2024-02-02-preview`, you can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity in your scale rule, use the `identity` property instead of the `auth` property in your scale rule. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity, or `system` to use a system-assigned identity +You can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity in your scale rule, use the `identity` property instead of the `auth` property in your scale rule. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity, or `system` to use a system-assigned identity. -The following example shows how to use a managed identities with an Azure Queue Storage scale rule. The queue storage account uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You do not need to use the `auth` property. +> [!NOTE] +> Managed identity authentication in scale rules is in public preview. It's available in API version `2024-02-02-preview`. ++The following ARM template example shows how to use a managed identity with an Azure Queue Storage scale rule: ++The queue storage account uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You do not need to use the `auth` property. ```json "scale": { The following example shows how to use a managed identities with an Azure Queue }] } ```+To learn more about using managed identity with scale rules, see [Set scaling rules in Azure Container Apps](scale-app.md?pivots=azure-portal#authentication-2). ## Control managed identity availability -Container Apps allow you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. However, in some situations only the init container or the main container require access tokens for a managed identity. Other times, you may use a managed identity only to access your Azure Container Registry to pull the container image, and your application itself doesn't need to have access to your Azure Container Registry. +Container Apps allows you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. 
However, in some situations only the init container or the main container require access tokens for a managed identity. Other times, you may use a managed identity only to access your Azure Container Registry to pull the container image, and your application itself doesn't need to have access to your Azure Container Registry. Starting in API version `2024-02-02-preview`, you can control which managed identities are available to your container app during the init and main phases to follow the security principle of least privilege. The following options are available: -- `Init`: available only to init containers. Use this when you want to perform some intilization work that requires a managed identity, but you no longer need the managed identity in the main container. This option is currently only supported in [workload profile consumption environments](environment.md#types)-- `Main`: available only to main containers. Use this if your init container does not need managed identity.-- `All`: available to all containers. This is the default setting.-- `None`: not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers.+- `Init`: Available only to init containers. Use this when you want to perform some initialization work that requires a managed identity, but you no longer need the managed identity in the main container. This option is currently only supported in [workload profile consumption environments](environment.md#types) +- `Main`: Available only to main containers. Use this if your init container does not need managed identity. +- `All`: Available to all containers. This value is the default setting. +- `None`: Not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers. -The following example shows how to configure a container app on a workload profile consumption environment that: +The following ARM template example shows how to configure a container app on a workload profile consumption environment that: - Restricts the container app's system-assigned identity to main containers only. - Restricts a specific user-assigned identity to init containers only. This approach limits the resources that can be accessed if a malicious actor wer "identitySettings":[ { "identity": "ACR_IMAGEPULL_IDENTITY_RESOURCE_ID",- "lifecycle": "none" + "lifecycle": "None" }, { "identity": "<IDENTITY1_RESOURCE_ID>",- "lifecycle": "init" + "lifecycle": "Init" }, { "identity": "system",- "lifecycle": "main" + "lifecycle": "Main" }] }, "template": { |
container-apps | Scale App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md | The following example demonstrates how to create a custom scale rule. This example shows how to convert an [Azure Service Bus scaler](https://keda.sh/docs/latest/scalers/azure-service-bus/) to a Container Apps scale rule, but you use the same process for any other [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/)-based [KEDA scaler](https://keda.sh/docs/latest/scalers/) specification. -For authentication, KEDA scaler authentication parameters convert into [Container Apps secrets](manage-secrets.md). +For authentication, KEDA scaler authentication parameters take [Container Apps secrets](manage-secrets.md) or [managed identity](managed-identity.md#scale-rules). ::: zone pivot="azure-resource-manager" First, you define the type and metadata of the scale rule. ### Authentication -A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule. +Container Apps scale rules support secrets-based authentication. Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app. -> [!NOTE] -> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported. +#### Use secrets ++To use secrets for authentication, you need to create a secret in the container app's `secrets` array. The secret value is used in the `auth` array of the scale rule. ++KEDA scalers can use secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule. 1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification. A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s Refer to the [considerations section](#considerations) for more security related information. +#### Using managed identity ++Container Apps scale rules can use managed identity to authenticate with Azure services. The following ARM template passes in system-based managed identity to authenticate for an Azure Queue scaler. ++``` +"scale": { + "minReplicas": 0, + "maxReplicas": 4, + "rules": [ + { + "name": "azure-queue", + "custom": { + "type": "azure-queue", + "metadata": { + "accountName": "apptest123", + "queueName": "queue1", + "queueLength": "1" + }, + "identity": "system" + } + } + ] +} +``` ++To learn more about using managed identity with scale rules, see [Managed identity](managed-identity.md#scale-rules). + ::: zone-end ::: zone pivot="azure-cli" A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s ### Authentication -A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the authenticationRef property. You can map the TriggerAuthentication object to the Container Apps scale rule. +Container Apps scale rules support secrets-based authentication. 
Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app. -> [!NOTE] -> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported. +#### Use secrets ++To configure secrets-based authentication for a Container Apps scale rule, you configure the secrets in the container app and reference them in the scale rule. ++A KEDA scaler supports secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) which the `authenticationRef` property uses for reference. You can map the `TriggerAuthentication` object to the Container Apps scale rule. 1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification. Identify each `secretTargetRef` of the `TriggerAuthentication` object. A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s 1. Create an authentication entry with the `--scale-rule-auth` parameter. If there are multiple entries, separate them with a space. :::code language="bash" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-cli.bash" highlight="8,14":::+ +#### Using managed identity ++Container Apps scale rules can use managed identity to authenticate with Azure services. The following command creates a container app with a user-assigned managed identity and uses it to authenticate for an Azure Queue scaler. ++```bash +az containerapp create \ + --resource-group <RESOURCE_GROUP> \ + --name <APP_NAME> \ + --environment <ENVIRONMENT_ID> \ + --user-assigned <USER_ASSIGNED_IDENTITY_ID> \ + --scale-rule-name azure-queue \ + --scale-rule-type azure-queue \ + --scale-rule-metadata "accountName=<AZURE_STORAGE_ACCOUNT_NAME>" "queueName=queue1" "queueLength=1" \ + --scale-rule-identity <USER_ASSIGNED_IDENTITY_ID> +``` ++Replace placeholders with your values. ::: zone-end A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s ### Authentication -A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the authenticationRef property. You can map the TriggerAuthentication object to the Container Apps scale rule. +Container Apps scale rules support secrets-based authentication. Scale rules for Azure resources, including Azure Queue Storage, Azure Service Bus, and Azure Event Hubs, also support managed identity. Where possible, use managed identity authentication to avoid storing secrets within the app. -> [!NOTE] -> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported. +#### Use secrets 1. In your container app, create the [secrets](./manage-secrets.md) that you want to reference. A KEDA scaler supports using secrets in a [TriggerAuthentication](https://keda.s 1. In the *Authentication* section, select **Add** to create an entry for each KEDA `secretTargetRef` parameter. +#### Using managed identity ++Managed identity authentication is not supported in the Azure portal. Use the [Azure CLI](scale-app.md?pivots=azure-cli#authentication) or [Azure Resource Manager](scale-app.md?&pivots=azure-resource-manager#authentication) to authenticate using managed identity. + ::: zone-end ## Default scale rule |
container-registry | Tasks Agent Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md | This feature is available in the **Premium** container registry service tier. Fo Agent pool tiers provide the following resources per instance in the pool. -|Tier | Type | CPU |Memory (GB) | -||||| -|S1 | standard | 2 | 3 | -|S2 | standard | 4 | 8 | -|S3 | standard | 8 | 16 | -|I6 | isolated | 64 | 216 | +| Tier | Type | CPU | Memory (GB) | +| - | -- | | -- | +| S1 | standard | 2 | 3 | +| S2 | standard | 4 | 8 | +| S3 | standard | 8 | 16 | +| I6 | isolated | 64 | 216 | ## Create and manage a task agent pool az acr agentpool update \ Task agent pools require access to the following Azure services. The following firewall rules must be added to any existing network security groups or user-defined routes. | Direction | Protocol | Source | Source Port | Destination | Dest Port | Used |-|--|-|-|-|-|--|| +| | -- | -- | -- | -- | | - | | Outbound | TCP | VirtualNetwork | Any | AzureKeyVault | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | Storage | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | EventHub | 443 | Default | | Outbound | TCP | VirtualNetwork | Any | AzureActiveDirectory | 443 | Default |-| Outbound | TCP | VirtualNetwork | Any | AzureMonitor | 443 | Default | +| Outbound | TCP | VirtualNetwork | Any | AzureMonitor | 443,12000 | Default | > [!NOTE] > If your tasks require additional resources from the public internet, add the corresponding rules. For example, additional rules are needed to run a docker build task that pulls the base images from Docker Hub, or restores a NuGet package. |
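For context on the tier table above, here's a minimal sketch of creating an agent pool with the Azure CLI; the registry and pool names are hypothetical, and the tier value maps to the CPU/memory sizes listed earlier:

```bash
# Hypothetical names: create a standard S2 agent pool (4 CPU, 8 GB) in a Premium registry.
az acr agentpool create \
  --registry myregistry \
  --name myagentpool \
  --tier S2
```

Tasks can then be routed to the pool, for example by passing `--agent-pool myagentpool` to the relevant `az acr` task commands.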
cosmos-db | Ai Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md | Unlike standalone large language models (LLMs) or rule-based software/hardware s - [Planning](#reasoning-and-planning). AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities. - [Tool usage](#frameworks). Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling. - [Perception](#frameworks). AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.-- [Memory](#agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.+- [Memory](#ai-agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time. > [!NOTE] > The usage of the term "memory" in the context of AI agents should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory). For advanced and autonomous planning and execution workflows, [AutoGen](https:// > [!TIP] > See the implementation sample section at the end of this article for tutorial on building a simple multi-agent system using one of the popular frameworks and a unified agent memory system. -### Agent memory system +### AI agent memory system The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management. However, this practice of using a complex web of standalone databases can hurt AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems: -**In-memory databases** are excellent for speed but may struggle with the large-scale data persistence that AI agents require. +#### In-memory databases +In-memory databases are excellent for speed but may struggle with the large-scale data persistence that AI agents require. -**Relational databases** are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding. 
+#### Relational databases +Relational databases are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding. -**Pure vector databases** tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer +#### Pure vector databases +Pure vector databases tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer - no guarantee on reads & writes - limited ingestion throughput - low availability (below 99.9%, or annualized outage of almost 9 hours or more) However, this practice of using a complex web of standalone databases can hurt A The next section dives deeper into what makes a robust AI agent memory system. -## Memory can make or break AI agents +## Memory can make or break agents Just as efficient database management systems are critical to software applications' performances, it is critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing different kinds of information that the agents can retrieve at inference time. Currently, LLM-powered applications often use [retrieval-augmented generation](v For example, if the task is to write code, vector search may not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search may not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data. -Weaving together [a web of standalone in-memory, relational, and vector databases](#agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. +Weaving together [a web of standalone in-memory, relational, and vector databases](#ai-agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. Therefore, a robust memory system should have the following characteristics: At the macro level, memory systems should enable multiple AI agents to collabora Not only are memory systems critical to AI agents; they are also important for the humans who develop, maintain, and use these agents. For example, humans may need to supervise agents' planning and execution workflows in near real-time. While supervising, humans may interject with guidance or make in-line edits of agents' dialogues or monologues. Humans may also need to audit the reasoning and actions of agents to verify the validity of the final output. Human-agent interactions are likely in natural or programming languages, while agents "think," "learn," and "remember" through embeddings. This data modal difference poses another requirement on memory systems' consistency across data modalities. 
-## Infastructure for a robust memory system +## Building a robust AI agent memory system -The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. +The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#ai-agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB enjoy speed, scale, and simplicity. The five available [consistency levels](consistency-levels.md) (from strong to e This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a CruiseLine travel application. -Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language, traditionally requiring coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-ai-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#infastructure-for-a-robust-memory-system). It operates within a Python FastAPI backend and support user interactions through a React JS user interface. +Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language, traditionally requiring coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#building-a-robust-ai-agent-memory-system). It operates within a Python FastAPI backend and support user interactions through a React JS user interface. 
### Prerequisites from langchain_core.runnables.history import RunnableWithMessageHistory from langchain.agents import AgentExecutor, create_openai_tools_agent from service import TravelAgentTools as agent_tools -load_dotenv(override=True) +load_dotenv(override=False) chat : ChatOpenAI | None=None def LLM_init(): LLM_init() ``` -The **init.py** file commences by initiating the loading of environment variables from a **.env** file utilizing the ```load_dotenv(override=True)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI Chat object is instantiated using the GPT-3.5 model, incorporating specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI Travel Agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (MongoDBChatMessageHistory), enabling it to maintain a complete conversation history via Azure Cosmos DB. +The **init.py** file commences by initiating the loading of environment variables from a **.env** file utilizing the ```load_dotenv(override=False)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI Chat object is instantiated using the GPT-3.5 model, incorporating specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI Travel Agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (MongoDBChatMessageHistory), enabling it to maintain a complete conversation history via Azure Cosmos DB. #### Prompt from model.prompt import PromptResponse import time from dotenv import load_dotenv -load_dotenv(override=True) +load_dotenv(override=False) def agent_chat(input:str, session_id:str)->str: |
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md | Today's applications are required to be highly responsive and always online. The The surge of AI-powered applications created another layer of complexity, because many of these applications integrate a multitude of data stores. For example, some organizations built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications. -Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like AI, digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table. +Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like [AI agents](ai-agents.md), digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table. ## An AI database providing industry-leading capabilities... |
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter ## Example using an IVF Index +Inverted File (IVF) Indexing is a method that organizes vectors into clusters. During a vector search, the query vector is first compared against the centers of these clusters. The search is then conducted within the cluster whose center is closest to the query vector. ++The `numLists` parameter determines the number of clusters to be created. A single cluster implies that the search is conducted against all vectors in the database, akin to a brute-force or kNN search. This setting provides the highest accuracy but also the highest latency. ++Increasing the `numLists` value results in more clusters, each containing fewer vectors. For instance, if `numLists=2`, each cluster contains more vectors than if `numLists=3`, and so on. Fewer vectors per cluster speed up the search (lower latency, higher queries per second). However, this increases the likelihood of missing the most similar vector in your database to the query vector. This is due to the imperfect nature of clustering, where the search might focus on one cluster while the actual “closest” vector resides in a different cluster. ++The `nProbes` parameter controls the number of clusters to be searched. By default, it’s set to 1, meaning it searches only the cluster with the center closest to the query vector. Increasing this value allows the search to cover more clusters, improving accuracy but also increasing latency (thus decreasing queries per second) as more clusters and vectors are being searched. + The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration. ### Create a vector index |
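To make the `numLists`/`nProbes` trade-off concrete, here's a hedged sketch using `pymongo` against an Azure Cosmos DB for MongoDB vCore cluster. The collection and field names (`products`, `contentVector`), the 1536-dimension and cosine-similarity settings, and the connection-string environment variable are illustrative assumptions; the `createIndexes` and `cosmosSearch` shapes follow the article's documented examples.

```python
# Illustrative sketch (assumed names/values): create an IVF vector index, then query it.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["COSMOS_MONGO_CONNECTION_STRING"])
db = client["vectordemo"]

# Create the IVF index: numLists controls how many clusters the vectors are grouped into.
db.command({
    "createIndexes": "products",
    "indexes": [{
        "name": "vectorSearchIndex",
        "key": {"contentVector": "cosmosSearch"},
        "cosmosSearchOptions": {
            "kind": "vector-ivf",
            "numLists": 3,          # more lists = smaller clusters = faster, less exhaustive search
            "similarity": "COS",    # cosine similarity
            "dimensions": 1536
        }
    }]
})

# Query: nProbes controls how many of those clusters are scanned (higher = better recall, more latency).
query_vector = [0.0] * 1536  # stand-in for a real embedding
results = db["products"].aggregate([
    {"$search": {
        "cosmosSearch": {
            "vector": query_vector,
            "path": "contentVector",
            "k": 5,
            # "nProbes": 2  # uncomment to widen the search beyond the single closest cluster
        },
        "returnStoredSource": True
    }},
    {"$project": {"similarityScore": {"$meta": "searchScore"}, "document": "$$ROOT"}}
])
for r in results:
    print(r["similarityScore"])
```

Keeping `numLists` small and raising `nProbes` trades latency for recall; the right balance depends on how many vectors you store and how tolerant your application is of approximate results.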
cosmos-db | Try Free | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md | This article walks you through how to create your account, limits, and upgrading If you decide that Azure Cosmos DB is right for you, you can receive up to 63% discount on [Azure Cosmos DB prices through Reserved Capacity](reserved-capacity.md). +<br> ++> [!VIDEO https://www.youtube.com/embed/7EFcxFGRB5Y?si=e7BiJ-JGK7WH79NG] + ## Limits to free account ### [NoSQL / Cassandra/ Gremlin / Table](#tab/nosql+cassandra+gremlin+table) |
cosmos-db | Vector Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md | There are two common types of vector database implementations - pure vector data A pure vector database is designed to efficiently store and manage vector embeddings, along with a small amount of metadata; it is separate from the data source from which the embeddings are derived. -A vector database that is integrated in a highly performant NoSQL or relational database provides additional capabilities. The integrated vector database in a NoSQL or relational database can store, index, and query embeddings alongside the corresponding original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, keeping the vector embeddings and original data together better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance. +A vector database that is integrated in a highly performant NoSQL or relational database provides additional capabilities. The integrated vector database in a NoSQL or relational database can store, index, and query embeddings alongside the corresponding original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, keeping the vector embeddings and original data together better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance. A highly performant database with schema flexibility and integrated vector database is especially optimal for [AI agents](ai-agents.md). ### Vector database use cases |
cost-management-billing | Manage Billing Across Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-across-tenants.md | -You can simplify billing management for your organization by creating multi-tenant billing relationships using associated billing tenants. A multi-tenant billing relationship lets you securely share your organization’s billing account with other tenants, while maintaining control over your billing data. You can move subscriptions in different tenants and provide users in those tenants with access to your organization’s billing account. This relationship lets users on those tenants do billing activities like viewing and downloading invoices or managing licenses. +You can simplify billing management for your organization by creating multitenant billing relationships using associated billing tenants. A multitenant billing relationship lets you securely share your organization’s billing account with other tenants, while maintaining control over your billing data. You can move subscriptions in different tenants and provide users in those tenants with access to your organization’s billing account. This relationship lets users on those tenants do billing activities like viewing and downloading invoices or managing licenses. ## Understand tenant types Primary billing tenant: The primary billing tenant is the tenant used when the b Associated billing tenants: An associated billing tenant is a tenant that is linked to your primary billing tenant’s billing account. You can move Microsoft 365 subscriptions to these tenants. You can also assign billing account roles to users in associated billing tenants. -> [!IMPORTANT] -> Adding associated billing tenants, moving subscriptions and assigning roles to users in associated billing tenants are only available for billing accounts of type Microsoft Customer Agreement that are created by working with a Microsoft sales representative. To learn more about types of billing accounts, see [Billing accounts and scopes in the Azure portal](view-all-accounts.md). +## Prerequisites ++You must have a Microsoft Customer Agreement - enterprise billing account to use associated billing tenants. An enterprise billing account is a billing account that is created by working with a Microsoft sales representative. ++If you don't have one, you don't see the **Associated billing tenants** option in the Azure portal. You also can't move subscriptions to other tenants or assign roles to users in other tenants. ++To learn more about types of billing accounts, see [Billing accounts and scopes in the Azure portal](view-all-accounts.md). + ## Access settings for associated billing tenants Before assigning roles, make sure you [add a tenant as an associated billing ten 1. Select **Access control (IAM)** on the left side of the page. 1. On the Access control (IAM) page, select **Add** at the top of the page. :::image type="content" source="./media/manage-billing-across-tenants/access-management-add-role-assignment-button.png" alt-text="Screenshot showing access control page while assigning roles." lightbox="./media/manage-billing-across-tenants/access-management-add-role-assignment-button.png" :::-1. In the Add role assignment pane, select a role, select the associated billing tenant from the tenant dropdown, then enter the email address of the users, groups or apps to whom you want to assign roles. +1. 
In the Add role assignment pane, select a role, select the associated billing tenant from the tenant dropdown, then enter the email address of the users, groups, or apps to whom you want to assign roles. 1. Select **Add**. :::image type="content" source="./media/manage-billing-across-tenants/associated-tenants-add-role-assignment.png" alt-text="Screenshot showing saving a role assignment." lightbox="./media/manage-billing-across-tenants/associated-tenants-add-role-assignment.png" ::: 1. The users receive an email with a link to review the role assignment request. After they accept the role, they have access to your billing account. Choosing to assign roles to users from associated billing tenants might be the r | Consideration |Associated billing tenants |Azure B2B | ||||-|Security | The users that you invite to share your billing account will follow their tenant's security policies. | The users that you invite to share your billing account will follow your tenant's security policies. | +|Security | The users that you invite to share your billing account follow their tenant's security policies. | The users that you invite to share your billing account follow your tenant's security policies. | |Access | The users get access to your billing account in their own tenant and can manage billing and make purchases without switching tenants. | External guest identities are created for users in your tenant and these identities get access to your billing account. Users would have to switch tenant to manage billing and make purchases. | ## Move Microsoft 365 subscriptions to a billing tenant |
cost-management-billing | How To View Csp Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/how-to-view-csp-reservations.md | -Roles assigned with Azure Lighthouse aren't supported by reservations. To view reservations, you need to be a global admin or an admin agent in the customer's tenant. +Reservations don't support roles assigned with Azure Lighthouse. To view reservations, you need to be a global admin or an admin agent in the customer's tenant. ## View reservations Roles assigned with Azure Lighthouse aren't supported by reservations. To view r 1. In the Azure portal, go to **Reservations**. > [!NOTE]-> Being a guest in the customer's tenant prevents you from viewing reservations. If you have guest access, you need to remove it from the tenant. Admin agent privilege doesn't override guest access. +> Being a guest in the customer's tenant allows you to view reservations. However, guest access prevents you from refunding or exchanging reservations. To make changes to reservations, you must remove guest access from the tenant. Admin agent privilege doesn't override guest access. - To remove your guest access in the Partner Center, navigate to **My Account** > **[Organizations](https://myaccount.microsoft.com/organizations)** and then select **Leave organization**. Alternately, ask another user who can access the reservation to add your guest account to the reservation order. -## Next steps +## Related content - [View Azure reservations](view-reservations.md) |
cost-management-billing | Review Enterprise Agreement Bill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md | You receive an Azure invoice when any of the following events occur during your Your invoice shows Azure usage charges with costs associated to them first, followed by any marketplace charges. If you have a credit balance, it gets applied to Azure usage and your invoice shows Azure usage and marketplace usage without any cost, last in the list. +If an invoice includes over 1,000 line items, it gets split into multiple invoices. + Compare your combined total amount shown in the Azure portal in **Usage & Charges** with your Azure invoice. The amounts in the **Total Charges** don't include tax. 1. Sign in to the [Azure portal](https://portal.azure.com). |
databox-online | Azure Stack Edge Gpu Create Virtual Machine Marketplace Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md | Below is a list of URNs for some of the most commonly used images. If you just w | Windows Desktop | Windows 10 20H2 Pro | 19042.928.2104091209 | MicrosoftWindowsDesktop:Windows-10:20h2-pro:19042.928.2104091209 | | Ubuntu Server | Canonical Ubuntu Server 18.04 LTS | 18.04.202002180 | Canonical:UbuntuServer:18.04-LTS:18.04.202002180 | | Ubuntu Server | Canonical Ubuntu Server 16.04 LTS | 16.04.202104160 | Canonical:UbuntuServer:16.04-LTS:16.04.202104160 |-| CentOS | CentOS 8.1 | 8.1.2020062400 | OpenLogic:CentOS:8_1:8.1.2020062400 | ## Create a new managed disk from the Marketplace image |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md | This article describes how to install GPU driver extension to install appropriat Before you install GPU extension on the GPU VMs running on your device, make sure that: -1. You have access to an Azure Stack Edge device on which you've deployed one or more GPU VMs. See how to [Deploy a GPU VM on your device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md). +1. You have access to an Azure Stack Edge device on which you deploy one or more GPU VMs. See how to [Deploy a GPU VM on your device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md). - Make sure that the port enabled for compute network on your device is connected to Internet and has access. The GPU drivers are downloaded through the internet access. Here's an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes isn't deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment. ![Screenshot of the Compute pane for an Azure Stack Edge device. Compute settings for Port 2 are highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension/enable-compute-network-1.png)-1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory youΓÇÖll use as a working directory. -1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You'll need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md). +1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you use as a working directory. +1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You must sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md). ## Edit parameters file Here's a sample Ubuntu parameter file that was used in this article: If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscription image (RHEL BYOS), make sure that: -- You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md).+- You follow the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md). - After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM isn't properly registered, installation doesn't proceed as the VM isn't entitled. 
See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.-- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).+- You install the `vulkan-filesystem` package, as the installation script looks for a `vulkan-filesystem` package. PS C:\WINDOWS\system32> Extension execution output is logged to the following file. Refer to this file `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status` to track the status of installation. -A successful install is indicated by a `message` as `Enable Extension` and `status` as `success`. +A successful install displays a `message` with `Enable Extension` and `status` of `success`. ```powershell "status": { Follow these steps to verify the driver installation: Administrator@VM1:~$ ``` -2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you'll be able to run the utility and see the following output: +2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you are able to run the utility and see the following output: ```powershell Administrator@VM1:~$ nvidia-smi |
defender-for-cloud | Recommendations Reference Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md | DevOps recommendations don't affect your [secure score](secure-score-security-co **Severity**: Medium +### [(Preview) Azure DevOps repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/470742ea-324a-406c-b91f-fc1da6a27c0c) ++**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. ++**Severity**: High ++### [(Preview) Azure DevOps repositories should not allow requestors to approve their own Pull Requests](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98b5895a-0ad8-4ed9-8c9d-d654f5bda816) ++**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. ++**Severity**: High + ### GitHub recommendations ### [GitHub repositories should have secret scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/b6ad173c-0cc6-4d44-b954-8217c8837a8e/showSecurityCenterCommandBar~/false) DevOps recommendations don't affect your [secure score](secure-score-security-co **Severity**: Medium +### [(Preview) GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) ++**Description**: For secrets used in GitHub Action workflows that are stored at the GitHub organization-level, you can use access policies to control which repositories can use organization secrets. Organization-level secrets let you share secrets between multiple repositories, which reduces the need for creating duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only. ++**Severity**: High + ### GitLab recommendations ### [GitLab projects should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/867001c3-2d01-4db7-b513-5cb97638f23d/showSecurityCenterCommandBar~/false) |
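For teams that want to remediate the new Azure DevOps two-reviewer recommendation described above programmatically rather than in the portal, the sketch below shows one hedged approach using the Azure DevOps Policy Configurations REST API via the `requests` library: look up the "Minimum number of reviewers" policy type, then create a blocking configuration on the default branch that requires two reviewers and doesn't count the creator's own vote. The organization URL, project name, repository ID, and PAT environment variable are assumptions for illustration, and the exact API versions and settings schema should be confirmed against the Policy Configurations REST reference.

```python
# Hedged sketch: enforce "minimum two reviewers" on a repo's default branch via the
# Azure DevOps Policy Configurations REST API. Org/project/repo/PAT values are assumed.
import os
import requests

org = "https://dev.azure.com/contoso"                 # assumed organization URL
project = "MyProject"                                 # assumed project name
repo_id = "00000000-0000-0000-0000-000000000000"      # assumed repository ID
auth = ("", os.environ["AZDO_PAT"])                   # personal access token

# 1. Find the "Minimum number of reviewers" policy type instead of hard-coding its GUID.
types = requests.get(
    f"{org}/{project}/_apis/policy/types?api-version=7.1-preview.1", auth=auth
).json()["value"]
reviewer_type = next(t for t in types if t["displayName"] == "Minimum number of reviewers")

# 2. Create a blocking policy configuration on refs/heads/main.
payload = {
    "isEnabled": True,
    "isBlocking": True,
    "type": {"id": reviewer_type["id"]},
    "settings": {
        "minimumApproverCount": 2,     # at least two reviewers, per the recommendation
        "creatorVoteCounts": False,    # requestors can't approve their own pull requests
        "resetOnSourcePush": True,
        "scope": [{"repositoryId": repo_id, "refName": "refs/heads/main", "matchKind": "exact"}],
    },
}
resp = requests.post(
    f"{org}/{project}/_apis/policy/configurations?api-version=7.1-preview.1",
    json=payload, auth=auth,
)
resp.raise_for_status()
print("Created policy configuration", resp.json()["id"])
```

The same pattern, with a different policy type and settings, can be repeated per repository, which is useful when an organization has many repositories flagged by the recommendation.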
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | If you're looking for items older than six months, you can find them in the [Arc |Date | Update | |--|--|-| June 27 | [General Availability of Checkov IaC Scanning in Defender for Cloud](#general-availability-of-checkov-iac-scanning-in-defender-for-cloud) | -| June 27 | [Four security incidents have been deprecated](#four-security-incidents-have-been-deprecated) | +| June 28 | [New DevOps security recommendations](#new-devops-security-recommendations) | +| June 27 | [General Availability of Checkov IaC Scanning in Defender for Cloud](#general-availability-of-checkov-iac-scanning-in-defender-for-cloud) | +| June 27 | [Four security incidents have been deprecated](#four-security-incidents-have-been-deprecated) | | June 24 | [Change in pricing for Defender for Containers in multicloud](#change-in-pricing-for-defender-for-containers-in-multicloud) | | June 10 | [Copilot for Security in Defender for Cloud (Preview)](#copilot-for-security-in-defender-for-cloud-preview) | +### New DevOps security recommendations ++June 28, 2024 ++We're announcing DevOps security recommendations that improve the security posture of Azure DevOps and GitHub environments. If issues are found, these recommendations offer remediation steps. ++The following new recommendations are supported if you have connected Azure DevOps or GitHub to Microsoft Defender for Cloud. All recommendations are included in Foundational Cloud Security Posture Management. ++| Recommendation name | Description | Severity | +|--|--|--| +| [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/470742ea-324a-406c-b91f-fc1da6a27c0c) | To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. | High | +| [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98b5895a-0ad8-4ed9-8c9d-d654f5bda816) | To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. | High | +| [GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) | For secrets used in GitHub Action workflows that are stored at the GitHub organization-level, you can use access policies to control which repositories can use organization secrets. 
Organization-level secrets let you share secrets between multiple repositories, which reduces the need for creating duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only. | High | + ### General Availability of Checkov IaC Scanning in Defender for Cloud June 27, 2024 |
dms | Howto Sql Server To Azure Sql Managed Instance Powershell Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md | To complete these steps, you need: * To ensure that the credentials used to connect to target SQL Managed Instance has the CONTROL DATABASE permission on the target SQL Managed Instance databases. > [!IMPORTANT]- > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). + > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal). ## Create a resource group |
dms | Known Issues Azure Sql Migration Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md | This article provides a list of known issues and troubleshooting steps associate - **Cause**: Before migrating data, you need to migrate the certificate of the source SQL Server instance from a database that is protected by Transparent Data Encryption (TDE) to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine. -- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](/azure/dms/tutorial-transparent-data-encryption-migration-ads).+- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](tutorial-transparent-data-encryption-migration-ads.md). - **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3169 The database was backed up on a server running version %ls. That version is incompatible with this server, which is running version %ls. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.` |
dms | Migration Using Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md | For information about specific migration scenarios and Azure SQL targets, see th | Migration scenario | Migration mode |||-SQL Server to Azure SQL Managed Instance| [Online](./tutorial-sql-server-managed-instance-online-ads.md) / [Offline](./tutorial-sql-server-managed-instance-offline-ads.md) -SQL Server to SQL Server on an Azure virtual machine|[Online](./tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md) -SQL Server to Azure SQL Database | [Offline](./tutorial-sql-server-azure-sql-database-offline.md) +SQL Server to Azure SQL Managed Instance| [Online](/data-migration/sql-server/managed-instance/database-migration-service) / [Offline](/data-migration/sql-server/managed-instance/database-migration-service) +SQL Server to SQL Server on an Azure virtual machine|[Online](/data-migration/sql-server/virtual-machines/database-migration-service) / [Offline](/data-migration/sql-server/virtual-machines/database-migration-service) +SQL Server to Azure SQL Database | [Offline](/data-migration/sql-server/database/database-migration-service) > [!IMPORTANT] > If your target is Azure SQL Database, you can migrate database Schema and data both using Database Migration Service via Azure Portal. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio to deploy the database schema before you begin the data migration. |
dms | Resource Custom Roles Sql Database Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md | - Title: "Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio"- -description: Learn how to use custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio. -- Previously updated : 09/28/2022---- - sql-migration-content ---# Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio --This article explains how to set up a custom role in Azure for SQL Server database migrations. A custom role will have only the permissions that are required to create and run an instance of Azure Database Migration Service with Azure SQL Database as a target. --Use the AssignableScopes section of the role definition JSON string to control where the permissions appear in the **Add role assignment** UI in the Azure portal. To avoid cluttering the UI with extra roles, you might want to define the role at the level of the resource group, or even the level of the resource. The resource that the custom role applies to doesn't perform the actual role assignment. --```json -{ - "properties": { - "roleName": "DmsCustomRoleDemoForSqlDB", - "description": "", - "assignableScopes": [ - "/subscriptions/<SQLDatabaseSubscription>/resourceGroups/<SQLDatabaseResourceGroup>", - "/subscriptions/<DatabaseMigrationServiceSubscription>/resourceGroups/<DatabaseMigrationServiceResourceGroup>" - ], - "permissions": [ - { - "actions": [ - "Microsoft.Sql/servers/read", - "Microsoft.Sql/servers/write", - "Microsoft.Sql/servers/databases/read", - "Microsoft.Sql/servers/databases/write", - "Microsoft.Sql/servers/databases/delete", - "Microsoft.DataMigration/locations/operationResults/read", - "Microsoft.DataMigration/locations/operationStatuses/read", - "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read", - "Microsoft.DataMigration/databaseMigrations/write", - "Microsoft.DataMigration/databaseMigrations/read", - "Microsoft.DataMigration/databaseMigrations/delete", - "Microsoft.DataMigration/databaseMigrations/cancel/action", - "Microsoft.DataMigration/sqlMigrationServices/write", - "Microsoft.DataMigration/sqlMigrationServices/delete", - "Microsoft.DataMigration/sqlMigrationServices/read", - "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action", - "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action", - "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read", - "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read" - ], - "notActions": [], - "dataActions": [], - "notDataActions": [] - } - ] - } -} -``` --You can use either the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles. --For more information, see [Create custom roles by using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md). --## Permissions required to migrate to Azure SQL Database --| Permission action | Description | -| - | --| -| Microsoft.Sql/servers/read | Return the list of SQL database resources or get the properties for the specified SQL database. 
| -| Microsoft.Sql/servers/write | Create a SQL database with the specified parameters or update the properties or tags for the specified SQL database. | -| Microsoft.Sql/servers/databases/read | Get an existing SQL database. | -| Microsoft.Sql/servers/databases/write | Create a new database or update an existing database. | -| Microsoft.Sql/servers/databases/delete | Delete an existing SQL database. | -| Microsoft.DataMigration/locations/operationResults/read | Get the results of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve service operation results. | -| Microsoft.DataMigration/databaseMigrations/write | Create or update a database migration resource. | -| Microsoft.DataMigration/databaseMigrations/read | Retrieve a database migration resource. | -| Microsoft.DataMigration/databaseMigrations/delete | Delete a database migration resource. | -| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. | -| Microsoft.DataMigration/sqlMigrationServices/write | Create a new service or change the properties of an existing service. | -| Microsoft.DataMigration/sqlMigrationServices/delete | Delete an existing service. | -| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve the details of the migration service. | -| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the list of authentication keys. | -| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate authentication keys. | -| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | Deregister the integration runtime node. | -| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | List the monitoring data for all migrations. | -| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. | -| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the monitoring data. | --## Assign a role --To assign a role to a user or an app ID: --1. In the Azure portal, go to the resource. --1. In the left menu, select **Access control (IAM)**, and then scroll to find the custom roles you created. --1. Select the roles to assign, select the user or app ID, and then save the changes. -- The user or app ID now appears on the **Role assignments** tab. --## Next steps --- Review the [migration guidance for your scenario](/data-migration/). |
dms | Resource Custom Roles Sql Db Managed Instance Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md | - Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS"- -description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance migrations. -- Previously updated : 05/02/2022---- - sql-migration-content ---# Custom roles for SQL Server to Azure SQL Managed Instance migrations using ADS --This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with SQL Managed Instance as a target. --The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. This doesn't perform the actual role assignment. --```json -{ - "properties": { - "roleName": "DmsCustomRoleDemoForMI", - "description": "", - "assignableScopes": [ - "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>", - "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<managedInstanceRG>", - "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>" - ], - "permissions": [ - { - "actions": [ - "Microsoft.Storage/storageAccounts/read", - "Microsoft.Storage/storageAccounts/listkeys/action", - "Microsoft.Storage/storageAccounts/blobServices/read", - "Microsoft.Storage/storageAccounts/blobServices/write", - "Microsoft.Storage/storageAccounts/blobServices/containers/read", - "Microsoft.Sql/managedInstances/read", - "Microsoft.Sql/managedInstances/write", - "Microsoft.Sql/managedInstances/databases/read", - "Microsoft.Sql/managedInstances/databases/write", - "Microsoft.Sql/managedInstances/databases/delete", - "Microsoft.DataMigration/locations/operationResults/read", - "Microsoft.DataMigration/locations/operationStatuses/read", - "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read", - "Microsoft.DataMigration/databaseMigrations/write", - "Microsoft.DataMigration/databaseMigrations/read", - "Microsoft.DataMigration/databaseMigrations/delete", - "Microsoft.DataMigration/databaseMigrations/cancel/action", - "Microsoft.DataMigration/databaseMigrations/cutover/action", - "Microsoft.DataMigration/sqlMigrationServices/write", - "Microsoft.DataMigration/sqlMigrationServices/delete", - "Microsoft.DataMigration/sqlMigrationServices/read", - "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action", - "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action", - "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read", - "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read" - ], - "notActions": [], - "dataActions": [], - "notDataActions": [] - } - ] - } -} -``` -You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure REST API to create the roles. --For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md). 
--## Description of permissions needed to migrate to Azure SQL Managed Instance --| Permission Action | Description | -| - | --| -| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. | -| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. | -| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. | -| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. | -| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. | -| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. | -| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. | -| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. | -| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. | -| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. | -| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. | -| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. | -| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. | -| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service | -| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. | -| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. | -| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. | -| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. | -| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. | -| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. | -| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. | -| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. | -| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. | -| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine. | --## Role assignment --To assign a role to users/APP ID, open the Azure portal, perform the following steps: --1. 
Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created. --2. Select the appropriate role, select the User or APP ID, and then save the changes. -- The user or APP ID(s) now appears listed on the **Role assignments** tab. --## Next steps --* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/). |
dms | Resource Custom Roles Sql Db Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md | We currently recommend creating a minimum of two custom roles for the APP ID, on > [!NOTE] > The last custom role requirement may eventually be removed, as new SQL Managed Instance code is deployed to Azure. -**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md)). +**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal)). ```json { |
dms | Resource Custom Roles Sql Db Virtual Machine Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md | - Title: "Custom roles: Online SQL Server to Azure Virtual Machines migrations with ADS"- -description: Learn to use the custom roles for SQL Server to Azure VM's migrations. -- Previously updated : 05/02/2022---- - sql-migration-content ---# Custom roles for SQL Server to Azure Virtual Machines migrations using ADS --This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with an Azure Virtual Machine as a target. --The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. This doesn't perform the actual role assignment. --```json -{ - "properties": { - "roleName": "DmsCustomRoleDemoForVM", - "description": "", - "assignableScopes": [ - "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>", - "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<virtualMachineRG>", - "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>" - ], - "permissions": [ - { - "actions": [ - "Microsoft.Storage/storageAccounts/read", - "Microsoft.Storage/storageAccounts/listkeys/action", - "Microsoft.Storage/storageAccounts/blobServices/read", - "Microsoft.Storage/storageAccounts/blobServices/write", - "Microsoft.Storage/storageAccounts/blobServices/containers/read", - "Microsoft.SqlVirtualMachine/sqlVirtualMachines/read", - "Microsoft.SqlVirtualMachine/sqlVirtualMachines/write", - "Microsoft.DataMigration/locations/operationResults/read", - "Microsoft.DataMigration/locations/operationStatuses/read", - "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read", - "Microsoft.DataMigration/databaseMigrations/write", - "Microsoft.DataMigration/databaseMigrations/read", - "Microsoft.DataMigration/databaseMigrations/delete", - "Microsoft.DataMigration/databaseMigrations/cancel/action", - "Microsoft.DataMigration/databaseMigrations/cutover/action", - "Microsoft.DataMigration/sqlMigrationServices/write", - "Microsoft.DataMigration/sqlMigrationServices/delete", - "Microsoft.DataMigration/sqlMigrationServices/read", - "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action", - "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action", - "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action", - "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read", - "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read" - ], - "notActions": [], - "dataActions": [], - "notDataActions": [] - } - ] - } -} -``` -You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure REST API to create the roles. --For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md). 
--## Description of permissions needed to migrate to a virtual machine --| Permission Action | Description | -| - | --| -| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. | -| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. | -| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. | -| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. | -| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. | -| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. | -| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. | -| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. | -| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. | -| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. | -| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. | -| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. | -| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. | -| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. | -| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. | -| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service | -| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. | -| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. | -| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. | -| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. | -| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. | -| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. | -| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. | -| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. | -| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. | -| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine. | --## Role assignment --To assign a role to users/APP ID, open the Azure portal, perform the following steps: --1. 
Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created. --2. Select the appropriate role, select the User or APP ID, and then save the changes. -- The user or APP ID(s) now appears listed on the **Role assignments** tab. --## Next steps --* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/). |
dms | Tutorial Login Migration Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md | Before you begin the tutorial: | Migration scenario | Migration mode | | | |- | SQL Server to Azure SQL Managed Instance | [Online](tutorial-sql-server-managed-instance-online-ads.md) / [Offline](tutorial-sql-server-managed-instance-offline-ads.md) | - | SQL Server to SQL Server on an Azure virtual machine | [Online](tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md) | + | SQL Server to Azure SQL Managed Instance | [Online](/data-migration/sql-server/managed-instance/database-migration-service) / [Offline](/data-migration/sql-server/managed-instance/database-migration-service) | + | SQL Server to SQL Server on an Azure virtual machine | [Online](/data-migration/sql-server/virtual-machines/database-migration-service) / [Offline](/data-migration/sql-server/virtual-machines/database-migration-service) | > [!IMPORTANT] > If you haven't completed the database migration and the login migration process is started, the migration of logins and server roles will still happen, but login/role mappings won't be performed correctly. The following table describes the current status of the Login migration support ## Next steps - [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)-- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](./tutorial-sql-server-azure-sql-database-offline.md)-- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](./tutorial-sql-server-managed-instance-online-ads.md)-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/data-migration/sql-server/database/database-migration-service) +- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/data-migration/sql-server/managed-instance/database-migration-service) +- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/data-migration/sql-server/virtual-machines/database-migration-service) |
dms | Tutorial Sql Server Azure Sql Database Offline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline.md | - Title: "Tutorial: Migrate SQL Server to Azure SQL Database (offline)"- -description: Learn how to migrate on-premises SQL Server to Azure SQL Database offline by using Azure Database Migration Service. --- Previously updated : 10/10/2023---- - sql-migration-content ---# Tutorial: Migrate SQL Server to Azure SQL Database (offline) --You can use Azure Database Migration Service via the Azure SQL Migration extension for Azure Data Studio, or the Azure portal, to migrate databases from an on-premises instance of SQL Server to Azure SQL Database (offline). --In this tutorial, learn how to migrate the sample `AdventureWorks2019` database from an on-premises instance of SQL Server to an instance of Azure SQL Database, by using Database Migration Service. This tutorial uses offline migration mode, which considers an acceptable downtime during the migration process. --In this tutorial, you learn how to: -> [!div class="checklist"] -> - Open the Migrate to Azure SQL wizard in Azure Data Studio -> - Run an assessment of your source SQL Server databases -> - Collect performance data from your source SQL Server instance -> - Get a recommendation of the Azure SQL Database SKU that will work best for your workload -> - Create an instance of Azure Database Migration Service -> - Start your migration and monitor progress to completion ---> [!IMPORTANT] -> Currently, *online* migrations for Azure SQL Database targets aren't available. --## Migration options --The following section describes how to use Azure Database Migration Service with the Azure SQL Migration extension, or in the Azure portal. --## [Migrate using Azure SQL Migration extension](#tab/azure-data-studio) --### Prerequisites --Before you begin the tutorial: --- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.-- Have an Azure account that's assigned to one of the following built-in roles:-- - Contributor for the target instance of Azure SQL Database - - Reader role for the Azure resource group that contains the target instance of Azure SQL Database - - Owner or Contributor role for the Azure subscription (required if you create a new instance of Azure Database Migration Service) -- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md). -- > [!IMPORTANT] - > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio. 
--- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).--- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role.--- To migrate the database schema from source to target Azure SQL DB by using the Database Migration Service, the minimum supported [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) required is 5.37 or above.- -- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).--> [!NOTE] -> Now, you can migrate database Schema and data both using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate schema before selecting the list of tables to migrate. -> -> If no table exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task. If no table exists on target then you must select the Schema migration option to move forward. --### Open the Migrate to Azure SQL wizard in Azure Data Studio --To open the Migrate to Azure SQL wizard: --1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine. --1. Right-click the server connection and select **Manage**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/azure-data-studio-manage-panel.png" alt-text="Screenshot that shows a server connection and the Manage option in Azure Data Studio." lightbox="media/tutorial-sql-server-azure-sql-database-offline/azure-data-studio-manage-panel.png"::: --1. In the server menu under **General**, select **Azure SQL Migration**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot that shows the Azure Data Studio server menu."::: --1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/launch-migrate-to-azure-sql-wizard-2.png" alt-text="Screenshot that shows the Migrate to Azure SQL wizard."::: --1. On the first page of the wizard, start a new session or resume a previously saved session. --### Run database assessment, collect performance data, and get Azure recommendations --1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-database-selection.png" alt-text="Screenshot that shows selecting a database for assessment."::: --1. In **Step 2: Assessment results and recommendations**, complete the following steps: -- 1. In **Choose your Azure SQL target**, select **Azure SQL Database**. 
-- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-target-selection.png" alt-text="Screenshot that shows selecting the Azure SQL Database target."::: -- 1. Select **View/Select** to view the assessment results. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment.png" alt-text="Screenshot that shows view/select assessment results."::: -- 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/assessment-issues-details.png" alt-text="Screenshot that shows the assessment report."::: -- 1. Select **Get Azure recommendation** to open the recommendations pane. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation.png" alt-text="Screenshot that shows Azure recommendations."::: -- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-zoom.png" alt-text="Screenshot that shows performance data collection."::: -- Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio. -- After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Database. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-collected.png" alt-text="Screenshot that shows performance data collected."::: -- 1. In the selected **Azure SQL Database** target, select **View details** to open the detailed SKU recommendation report: -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/get-azure-recommendation-view-details.png" alt-text="Screenshot that shows the View details link for the target database recommendations."::: -- 1. In **Review Azure SQL Database Recommendations**, review the recommendation. To save a copy of the recommendation, select **Save recommendation report**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/azure-sku-recommendation-zoom.png" alt-text="Screenshot that shows SKU recommendation details."::: --1. Select **Close** to close the recommendations pane. --1. Select **Next** to continue your database migration in the wizard. --### Configure migration settings --1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete these steps for your target Azure SQL Database instance: -- 1. Select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the Azure SQL Database deployment. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-account.png" alt-text="Screenshot that shows Azure account details."::: -- 1. For **Azure SQL Database Server**, select the target Azure SQL Database server (logical server). Enter a username and password for the target database deployment. Then, select **Connect**. 
Enter the credentials to verify connectivity to the target database. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-database.png" alt-text="Screenshot that shows Azure SQL Database details."::: -- 1. Next, map the source database and the target database for the migration. For **Target database**, select the Azure SQL Database target. Then, select **Next** to move to the next step in the migration wizard. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/configuration-azure-target-map.png" alt-text="Screenshot that shows source and target mapping."::: --1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-mode.png" alt-text="Screenshot that shows offline migrations selection."::: --1. In **Step 5: Data source configuration**, complete the following steps: -- 1. Under **Source credentials**, enter the source SQL Server credentials. -- 1. Under **Select tables**, select the **Edit** pencil icon. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-source-credentials.png" alt-text="Screenshot that shows source SQL Server credentials."::: -- 1. In **Select tables for \<database-name\>**, select the tables to migrate to the target. The **Has rows** column indicates whether the target table has rows in the target database. You can select one or more tables. Then, select **Update**. -- You can update the list of selected tables anytime before you start the migration. -- In the following example, a text filter is applied to select tables that contain the word `Employee`. Select a list of tables based on your migration needs. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-source-tables.png" alt-text="Screenshot that shows the table selection."::: --1. Review your table selections, and then select **Next** to move to the next step in the migration wizard. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/migration-target-tables.png" alt-text="Screenshot that shows selected tables to migrate."::: --> [!NOTE] -> If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select. -> -> Now, you can migrate database Schema and data both using Database Migration Service. Also, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate Schema before selecting the list of tables to migrate. --### Create a Database Migration Service instance --In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Database Migration Service, or reuse an existing instance that you created earlier. --> [!NOTE] -> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio. --#### Use an existing instance of Database Migration Service --To use an existing instance of Database Migration Service: --1. 
In **Resource group**, select the resource group that contains an existing instance of Database Migration Service. --1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group. --1. Select **Next**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms.png" alt-text="Screenshot that shows Database Migration Service selection."::: --#### Create a new instance of Database Migration Service --To create a new instance of Database Migration Service: --1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service. --1. Under **Azure Database Migration Service**, select **Create new**. --1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**. --1. Under **Set up integration runtime**, complete the following steps: -- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites for connecting to the source SQL Server instance. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-download.png" alt-text="Screenshot that shows the Download and install integration runtime link."::: -- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process. -- 1. In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-authentication-key.png" alt-text="Screenshot that highlights the authentication key table in the wizard."::: -- If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**. -- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager. -- > [!NOTE] - > For more information about the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md). --1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/create-dms-integration-runtime-connected.png" alt-text="Screenshot that shows IR connectivity test."::: --1. Return to the migration wizard in Azure Data Studio. --### Start the database migration --In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration. ---### Monitor the database migration --1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL Database migrations. -- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations. 
-- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard.png" alt-text="Screenshot that shows monitor migration dashboard." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard.png"::: --1. Select **Database migrations in progress** to view active migrations. -- To get more information about a specific migration, select the database name. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-details.png" alt-text="Screenshot that shows database migration details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-details.png"::: -- Database Migration Service returns the latest known migration status each time migration status refreshes. The following table describes possible statuses: -- | Status | Description | - | | | - | Preparing for copy | The service is disabling autostats, triggers, and indexes in the target table. | - | Copying | Data is being copied from the source database to the target database. | - | Copy finished | Data copy is finished. The service is waiting on other tables to finish copying to begin the final steps to return tables to their original schema. | - | Rebuilding indexes | The service is rebuilding indexes on target tables. | - | Succeeded | All data is copied and the indexes are rebuilt. | --1. Check the migration details page to view the current status for each database. -- Here's an example of the `AdventureWorks2019` database migration with the status **Creating**: -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-creating.png" alt-text="Screenshot that shows a creating migration status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-creating.png"::: --1. In the menu bar, select **Refresh** to update the migration status. -- After migration status is refreshed, the updated status for the example `AdventureWorks2019` database migration is **In progress**: -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-in-progress.png" alt-text="Screenshot that shows a migration in progress status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-dashboard-in-progress.png"::: --1. Select a database name to open the table view. In this view, you see the current status of the migration, the number of tables that currently are in that status, and a detailed status of each table. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-in-progress.png" alt-text="Screenshot that shows monitoring table migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-in-progress.png"::: -- When all table data is migrated to the Azure SQL Database target, Database Migration Service updates the migration status from **In progress** to **Succeeded**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-succeeded.png" alt-text="Screenshot that shows succeeded migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/monitor-migration-monitoring-panel-succeeded.png"::: --> [!NOTE] -> Database Migration Service optimizes migration by skipping tables with no data (0 rows). 
Tables that don't have data don't appear in the list, even if you select the tables when you create the migration. --You've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure that everything functions smoothly and efficiently. --> [!IMPORTANT] -> Be sure to take advantage of the advanced cloud-based features of Azure SQL Database. The features include [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview). --## [Migrate using Azure portal](#tab/portal) --### Prerequisites --Before you begin the tutorial: --- Ensure that you can access the [Azure portal](https://portal.azure.com).--- Have an Azure account that's assigned to one of the following built-in roles:- - Contributor for the target instance of Azure SQL Database - - Reader role for the Azure resource group that contains the target instance of Azure SQL Database - - Owner or Contributor role for the Azure subscription (required if you create a new instance of Azure Database Migration Service) -- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md). --- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).--- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the **db_datareader** role, and that the login for the target SQL Server instance is a member of the **db_owner** role.--- To migrate the database schema from source to target Azure SQL Database by using Database Migration Service, the minimum supported [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) is 5.37 or later.--- If you're using Database Migration Service for the first time, make sure that the `Microsoft.DataMigration` [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).--> [!NOTE] -> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate. -> -> If no tables exist on the Azure SQL Database target, or if no tables are selected before starting the migration, the **Next** button isn't available to initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward. ---### Start a new migration --1. In **Step 2**, to start a new migration by using Database Migration Service from the Azure portal, under **Azure Database Migration Services**, select an existing instance of Database Migration Service that you want to use, and then select either **New Migration** or **Start migrations**. --1. Under **Select new migration** scenario, choose your source, target server type, and migration mode, and then choose **Select**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-select-migration.png" alt-text="Screenshot that shows new migration scenario details."::: --1.
In the Azure SQL Database Offline Migration wizard, complete the following steps: -- 1. Provide the following details to **connect to source SQL server**, and then select **Next**: -- - Source server name - - Authentication type - - User name and password - - Connection properties -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect.png" alt-text="Screenshot that shows source SQL server details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect.png"::: -- 1. On the next page, **select databases for migration**. This page might take some time to populate the list of databases from the source. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-database.png" alt-text="Screenshot that shows list of databases from source."::: -- 1. Assuming you have already provisioned the target based on the assessment results, provide the target details on the **Connect to target Azure SQL Database** page, and then select **Next**: -- - Azure subscription - - Azure resource group - - Target Azure SQL Database server - - Authentication type - - User name and password -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-connect-target.png" alt-text="Screenshot that shows details for target."::: -- 1. Under **Map source and target databases**, map the databases between source and target. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-map-target.png" alt-text="Screenshot that shows list of mapping between source and target."::: -- 1. Before moving to this step, make sure that you've migrated the schema from source to target for all selected databases. Then, under **Select database tables to migrate**, select the tables whose data you want to migrate for each selected database. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-table.png" alt-text="Screenshot that shows the list of tables in the source database selected to migrate data to the target." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-select-table.png"::: -- 1. Review all the inputs on the **Database migration summary** page, and then select **Start migration** to start the database migration. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-summary.png" alt-text="Screenshot that shows summary of the migration configuration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-sql-database-summary.png"::: -- > [!NOTE] - > In an offline migration, application downtime starts when the migration starts. - > - > You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate. --### Monitor the database migration --1. In the Database Migration Service instance overview, select **Monitor migrations** to view the details of your database migrations. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-overview.png" alt-text="Screenshot that shows monitor migration dashboard."
lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-overview.png"::: --1. Under the **Migrations** tab, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations. In the menu bar, select **Refresh** to update the migration status. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-in-progress.png" alt-text="Screenshot that shows database migration details." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-in-progress.png"::: -- Database Migration Service returns the latest known migration status each time migration status refreshes. The following table describes possible statuses: -- | Status | Description | - | | | - | Preparing for copy | The service is disabling autostats, triggers, and indexes in the target table. | - | Copying | Data is being copied from the source database to the target database. | - | Copy finished | Data copy is finished. The service is waiting on other tables to finish copying to begin the final steps to return tables to their original schema. | - | Rebuilding indexes | The service is rebuilding indexes on target tables. | - | Succeeded | All data is copied and the indexes are rebuilt. | --1. Under **Source name** , select a database name to open the table view. In this view, you see the current status of the migration, the number of tables that currently are in that status, and a detailed status of each table. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-copy.png" alt-text="Screenshot that shows a migration status." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-copy.png"::: --1. When all table data is migrated to the Azure SQL Database target, Database Migration Service updates the migration status from **In progress** to **Succeeded**. -- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-succeeded.png" alt-text="Screenshot that shows succeeded migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline/dms-portal-monitor-succeeded.png"::: --> [!NOTE] -> Database Migration Service optimizes migration by skipping tables with no data (0 rows). Tables that don't have data don't appear in the list, even if you select the tables when you create the migration. --You've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure that everything functions smoothly and efficiently. ----## Limitations ---## Next steps --- [Create an Azure SQL database](/azure/azure-sql/database/single-database-create-quickstart)-- [Azure SQL Database overview](/azure/azure-sql/database/sql-database-paas-overview)-- [Connect apps to Azure SQL Database](/azure/azure-sql/database/connect-query-content-reference-guide)-- [Known issues](known-issues-azure-sql-migration-azure-data-studio.md) |
dms | Tutorial Sql Server Managed Instance Offline Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md | - Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio"- -description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance offline by using Azure Data Studio and Azure Database Migration Service. -- Previously updated : 06/07/2023---- - sql-migration-content ---# Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio --You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to Azure SQL Managed Instance offline and with minimal downtime. --For database migration methods that might require some manual configuration, see [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide). --In this tutorial, learn how to migrate the AdventureWorks database from an on-premises instance of SQL Server to an instance of Azure SQL Managed Instance by using Azure Data Studio and Database Migration Service. This tutorial uses offline migration mode, which considers an acceptable downtime during the migration process. --In this tutorial, you learn how to: -> [!div class="checklist"] -> -> - Open the Migrate to Azure SQL wizard in Azure Data Studio -> - Run an assessment of your source SQL Server databases -> - Collect performance data from your source SQL Server instance -> - Get a recommendation of the Azure SQL Managed Instance SKU that will work best for your workload -> - Specify details of your source SQL Server instance, backup location, and target instance of Azure SQL Managed Instance -> - Create an instance of Azure Database Migration Service -> - Start your migration and monitor progress to completion ---This tutorial describes an offline migration from SQL Server to Azure SQL Managed Instance. For an online migration, see [Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md). --## Prerequisites --Before you begin the tutorial: --- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.-- Have an Azure account that's assigned to one of the following built-in roles:-- - Contributor for the target instance of Azure SQL Managed Instance and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share - - Reader role for the Azure resource groups that contain the target instance of Azure SQL Managed Instance or your Azure storage account - - Owner or Contributor role for the Azure subscription (required if you create a new Database Migration Service instance) -- As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md). -- > [!IMPORTANT] - > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio. 
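A later prerequisite in this list requires that the login you use against the source SQL Server instance is a member of the sysadmin server role or has CONTROL SERVER permission. If you want to verify that up front, here's a minimal sketch; it assumes the `pyodbc` package, and the connection values are placeholders.

```python
# Minimal sketch: confirm that the login used against the source SQL Server is
# either a sysadmin member or holds CONTROL SERVER, as required by the
# prerequisites that follow. Assumes: pip install pyodbc; values are placeholders.
import pyodbc

conn_str = ("Driver={ODBC Driver 18 for SQL Server};Server=<source-server>;"
            "Database=master;UID=<login>;PWD=<password>;TrustServerCertificate=yes;")

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute(
        "SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin, "
        "HAS_PERMS_BY_NAME(NULL, NULL, 'CONTROL SERVER') AS has_control_server;"
    ).fetchone()
    if row.is_sysadmin == 1 or row.has_control_server == 1:
        print("Login meets the source permission prerequisite.")
    else:
        print("Login is neither a sysadmin member nor granted CONTROL SERVER.")
```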
--- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).--- Ensure that the logins that you use to connect the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.--- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.-- > [!IMPORTANT] - > - > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, or neither initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration. - > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service. - > - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported. - > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. --- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.--- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to your target managed instance before you restore the database. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](./tutorial-transparent-data-encryption-migration-ads.md).-- > [!TIP] - > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target managed instance. --- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.-- In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled: -- | Domain names | Outbound port | Description | - | -- | -- | | - | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Microsoft Azure operated by 21Vianet: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. 
<br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. | - | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. | - | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share | -- > [!TIP] - > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process. --- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.--- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).--- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration resource provider is registered in your subscription. You can complete the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).--## Open the Migrate to Azure SQL wizard in Azure Data Studio --To open the Migrate to Azure SQL wizard: --1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine. --1. Right-click the server connection and select **Manage**. --1. In the server menu, under **General**, select **Azure SQL Migration**. --1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard. -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: --1. On the first page of the wizard, start a new session or resume a previously saved session. --## Run a database assessment, collect performance data, and get Azure recommendations --1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**. --1. In **Step 2: Assessment results and recommendations**, complete the following steps: -- 1. In **Choose your Azure SQL target**, select **Azure SQL Managed Instance**. -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation"::: --1. Select **View/Select** to view the assessment results. --1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found. -- 1. Select **Get Azure recommendation** to open the recommendations pane. -- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**. 
-- Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio. -- After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Managed Instance. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time. -- 1. In the selected **Azure SQL Managed Instance** target, select **View details** to open the detailed SKU recommendation report: -- 1. In **Review Azure SQL Managed Instance Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox. --1. Select **Close** to close the recommendations pane. --1. Select **Next** to continue your database migration in the wizard. --## Configure migration settings --1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target instance of Azure SQL Managed Instance. Then, select **Next**. --1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**. -- > [!NOTE] - > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backups are restored on a target instance of Azure SQL Managed Instance. Application downtime needs to be considered until the migration is finished. --1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container. --- For backups that are located on a network share, enter or select the following information:-- |Name |Description | - ||-| - |**Source Credentials - Username** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Source Credentials - Password** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. | - |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | - |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | - |**Target database name** |You can modify the target database name during the migration process. | - |**Storage account details** |The resource group and storage account where backup files are uploaded. You don't need to create a container. Database Migration Service automatically creates a blob container in the specified storage account during the upload process. | --- For backups that are stored in an Azure storage blob container, enter or select the following information:-- |Name |Description | - ||-| - |**Target database name** |You can modify the target database name during the migration process. 
| - |**Storage account details** |The resource group, storage account, and container where backup files are located. - |**Last Backup File** |The file name of the last backup of the database you're migrating. - - > [!IMPORTANT] - > If loopback check functionality is enabled and the source SQL Server instance and file share are on the same computer, the source can't access the file share by using an FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd). --- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:-- | Scenario | SMB network share | Azure Storage account container | - | | | | - | Enabled from all networks | No extra steps | No extra steps | - | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)| - | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) | -- ### 1a - Azure Blob storage network configuration - If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If you have your Self-Hosted Integration Runtime (SHIR) installed on your on-premises network, you need to add your client IP address of the hosting machine in your Azure Storage account as so: - - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details"::: - - To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps. - - ### 1b - Azure Blob storage network configuration - If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section. - - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration"::: - - To apply this specific configuration, locate your Azure Storage account, from the **Data storage** panel select **Networking**, then mark the **Add existing virtual network** checkbox. A new panel opens up, select the subscription, virtual network, and subnet of the Azure VM hosting the Integration Runtime. 
This information can be found on the **Overview** page of the Azure virtual machine. The subnet might say **Service endpoint required**; if so, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps. - - ### 2a - Azure Blob storage network configuration (Private endpoint) - If your backups are placed directly in an Azure Storage container, all the preceding steps are unnecessary because there's no integration runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance virtual network when filling out the "Add existing virtual network" popup. - - ### 2b - Azure Blob storage network configuration (Private endpoint) - If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure that the private endpoint is hosted in the same virtual network as the target SQL Server instance. If it isn't, create another private endpoint by using the process in the Azure Storage account configuration section. --## Create a Database Migration Service instance --In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier. --> [!NOTE] -> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio. --### Use an existing instance of Database Migration Service --To use an existing instance of Database Migration Service: --1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service. --1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group. --1. Select **Next**. --### Create a new instance of Database Migration Service --To create a new instance of Database Migration Service: --1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service. - -1. Under **Azure Database Migration Service**, select **Create new**. --1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**. --1. Under **Set up integration runtime**, complete the following steps: -- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance. -- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process. -- 1.
In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**. -- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager. -- > [!NOTE] - > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md). --1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime. -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime"::: --1. Return to the migration wizard in Azure Data Studio. --## Start the database migration --In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration. --## Monitor the database migration --1. In Azure Data Studio, in the server menu, under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations. -- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations. -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard"::: --1. Select **Database migrations in progress** to view active migrations. -- To get more information about a specific migration, select the database name. -- The migration details pane displays the backup files and their corresponding status: -- | Status | Description | - |--|-| - | Arrived | The backup file arrived in the source backup location and was validated. | - | Uploading | The integration runtime is uploading the backup file to the Azure storage account. | - | Uploaded | The backup file was uploaded to the Azure storage account. | - | Restoring | The service is restoring the backup file to Azure SQL Managed Instance. | - | Restored | The backup file is successfully restored in Azure SQL Managed Instance. | - | Canceled | The migration process was canceled. | - | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. | --After all database backups are restored on the instance of Azure SQL Managed Instance, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**. --> [!IMPORTANT] -> After the migration, the availability of SQL Managed Instance with Business Critical service tier might take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration). 
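Once the automatic cutover completes, a quick way to confirm the result is to connect to the managed instance and check that the migrated database reports ONLINE. The following sketch assumes the `pyodbc` package; the host name and credentials are placeholders.

```python
# Minimal sketch: after the automatic cutover, confirm that the migrated
# database is online on the target Azure SQL Managed Instance.
# Assumes: pip install pyodbc; the host name and credentials are placeholders.
import pyodbc

conn_str = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=<managed-instance-name>.<dns-zone>.database.windows.net;"
            "Database=master;UID=<user>;PWD=<password>;Encrypt=yes;")

with pyodbc.connect(conn_str) as conn:
    # database_id > 4 skips the system databases; migrated databases should
    # report ONLINE once the cutover has finished.
    for name, state in conn.cursor().execute(
            "SELECT name, state_desc FROM sys.databases WHERE database_id > 4;"):
        print(f"{name}: {state}")
```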
--## Limitations --Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations: ----## Next steps --- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).-- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).-- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).-- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md). |
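The first item in the next steps above points to the native T-SQL `RESTORE` route, which is an alternative to Database Migration Service for one-off restores. As a rough illustration only, the following sketch drives that T-SQL from Python with `pyodbc`; the container URL, SAS token, backup file name, database name, and credentials are all placeholders, and this is not what the migration service itself runs on your behalf.

```python
# Minimal sketch of the native restore-from-URL route mentioned in the next
# steps: restore a backup stored in Azure Blob Storage to a managed instance
# with T-SQL, driven from Python. Assumes: pip install pyodbc; all values in
# angle brackets are placeholders.
import pyodbc

conn_str = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=<managed-instance-name>.<dns-zone>.database.windows.net;"
            "Database=master;UID=<user>;PWD=<password>;Encrypt=yes;")
container_url = "https://<storage-account>.blob.core.windows.net/<container>"
sas_token = "<sas-token-without-leading-question-mark>"

# RESTORE can't run inside a transaction, so open the connection with autocommit.
with pyodbc.connect(conn_str, autocommit=True) as conn:
    cursor = conn.cursor()
    # One-time, server-level credential so the instance can read the backup blob.
    cursor.execute(
        f"CREATE CREDENTIAL [{container_url}] "
        f"WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '{sas_token}';")
    # Restore the full backup directly from the blob container; this statement
    # can take a while for large backups.
    cursor.execute(
        f"RESTORE DATABASE [AdventureWorks] "
        f"FROM URL = '{container_url}/AdventureWorks.bak';")
    print("Restore statement completed; check sys.databases for the state.")
```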
dms | Tutorial Sql Server Managed Instance Online Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md | - Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online by using Azure Data Studio"- -description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance only by using Azure Data Studio and Azure Database Migration Service. --- Previously updated : 06/07/2023---- - sql-migration-content ---# Tutorial: Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio --Use the Azure SQL migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For methods that might require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide). --In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the online migration mode where application downtime is limited to a short cutover at the end of the migration. --In this tutorial, you learn how to: -> [!div class="checklist"] -> -> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio -> * Run an assessment of your source SQL Server database(s) -> * Collect performance data from your source SQL Server -> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload -> * Specify details of your source SQL Server, backup location and your target Azure SQL Managed Instance -> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups -> * Start and monitor the progress for your migration -> * Perform the migration cutover when you are ready --> [!IMPORTANT] -> Prepare for migration and reduce the duration of the online migration process as much as possible to minimize the risk of interruption caused by instance reconfiguration or planned maintenance. In case of such an event, migration process will start from the beginning. In case of planned maintenance, there is a grace period of 36 hours where the target Azure SQL Managed Instance configuration or maintenance will be held before migration process is restarted. ---This article describes an online database migration from SQL Server to Azure SQL Managed Instance. For an offline database migration, see [Migrate SQL Server to a SQL Managed Instance offline using Azure Data Studio with DMS](tutorial-sql-server-managed-instance-offline-ads.md). --## Prerequisites --To complete this tutorial, you need to: --* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio) -* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace -* Have an Azure account that is assigned to one of the built-in roles listed below: - - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. 
- - Owner or Contributor role for the Azure subscription (required if creating a new DMS service). - - As an alternative to using the above built-in roles you can assign a custom role as defined in [this article.](resource-custom-roles-sql-db-managed-instance-ads.md) - > [!IMPORTANT] - > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard. -* Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart). -* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. -* Use one of the following storage options for the full database and transaction log backup files: - - SMB network share - - Azure storage account file share or blob container -- > [!IMPORTANT] - > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, or neither initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration. - > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created. - > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported. - > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. -* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files. -* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure virtual machine before you migrate data. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](./tutorial-transparent-data-encryption-migration-ads.md). - > [!TIP] - > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process that uses Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure virtual machine. --* If your database backups are in a network file share, provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. 
In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled: -- | Domain names | Outbound ports | Description | - | -- | -- | | - | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For a newly created data factory in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in format `{datafactory}.{region}.datafactory.azure.net`. For the old data factory, if you don't see the FQDN in your self-hosted integration key, use *.frontend.clouddatahub.net instead. | - | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. | - | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share | -- > [!TIP] - > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime is not required during the migration process. --* When you're using a self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations) -* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider) --## Launch the Migrate to Azure SQL wizard in Azure Data Studio --1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure virtual machine). -1. On the server connection, right-click and select **Manage**. -1. On the server's home page, select **Azure SQL Migration** extension. -1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard. - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: -1. The first page of the wizard allows you to start a new session or resume a previously saved one. Pick the first option to start a new session. -## Run database assessment, collect performance data and get Azure recommendation --1. Select the database(s) to run assessment and select **Next**. -1. Select Azure SQL Managed Instance as the target. - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation"::: -1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**. 
If any issues are displayed in the assessment results, they need to be remediated before proceeding with the next steps. - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-issues-details.png" alt-text="Database assessment details"::: -1. Select the **Get Azure recommendation** button. -2. Pick the **Collect performance data now** option, enter a path where the performance logs should be collected, and select the **Start** button. -3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard, or close Azure Data Studio. -4. After 10 minutes, you see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the extra data collected. -5. In the **Azure SQL Managed Instance** box above, select the **View details** button for more information about your recommendation. -6. Close the view details box and press the **Next** button. --## Configure migration settings --1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**. -1. Select **Online migration** as the migration mode. - > [!NOTE] - > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target Azure SQL Managed Instance. Application downtime is limited to the duration of the cutover at the end of the migration. -1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. - > [!NOTE] - > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. The self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to your Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you don't need to set up a self-hosted integration runtime. --- For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and Azure storage account for the backup files to be uploaded to:-- |Field |Description | - ||-| - |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. | - |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | - |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files.
| - |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | - |**Storage account details** |The resource group and storage account where backup files are uploaded. You don't need to create a container, because DMS automatically creates a blob container in the specified storage account during the upload process. --- For backups stored in an Azure storage blob container, specify the target database name, resource group, Azure storage account, and blob container from the corresponding drop-down lists. -- |Field |Description | - ||-| - |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | - |**Storage account details** |The resource group, storage account, and container where backup files are located. - - > [!IMPORTANT] - > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share by using its FQDN. To fix this issue, disable loopback check functionality by using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd) --- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:-- | Scenario | SMB network share | Azure Storage account container | - | | | | - | Enabled from all networks | No extra steps | No extra steps | - | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)| - | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) | -- ### 1a - Azure Blob storage network configuration - If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If you have your Self-Hosted Integration Runtime (SHIR) installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account, as follows: - - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details"::: - - To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps.
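Before you move on to the remaining scenarios, you can sanity-check the outbound connectivity that the self-hosted integration runtime machine needs: port 443 to the Data Factory, download, and storage endpoints listed in the table earlier, and port 445 to the host of the SMB share. The following is a minimal Python sketch using only the standard library; the host names are hypothetical placeholders for your own Data Factory FQDN, storage account, and file server, not values defined in this article.

```python
import socket

# Placeholder endpoints - replace with your own Data Factory FQDN,
# storage account, and SMB file server host.
CHECKS = [
    ("mydatafactory.westeurope.datafactory.azure.net", 443),  # DMS / Data Factory endpoint
    ("download.microsoft.com", 443),                          # integration runtime updates
    ("mystorageaccount.blob.core.windows.net", 443),          # backup upload target
    ("fileserver.contoso.local", 445),                        # SMB share with backup files
]

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "OK" if can_connect(host, port) else "BLOCKED"
    print(f"{host}:{port} -> {status}")
```

Run the script on the machine where you plan to install the runtime; any `BLOCKED` result points to a firewall rule or proxy that still needs to be opened.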
- - ### 1b - Azure Blob storage network configuration - If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section. - - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration."::: - - To apply this specific configuration, locate your Azure Storage account, from the **Data storage** panel select **Networking**, then mark the **Add existing virtual network** checkbox. A new panel opens; select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. This information can be found on the **Overview** page of the Azure Virtual Machine. The subnet might say **Service endpoint required**; if so, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps. - - ### 2a - Azure Blob storage network configuration (Private endpoint) - If your backups are placed directly into an Azure Storage Container, all the above steps are unnecessary since there's no Integration Runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance Virtual Network when filling out the "Add existing virtual network" popup. - - ### 2b - Azure Blob storage network configuration (Private endpoint) - If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section. --## Create Azure Database Migration Service --1. Create a new Azure Database Migration Service or reuse an existing service that you previously created. - > [!NOTE] - > If you previously created a DMS instance by using the Azure portal, you can't reuse it in the migration wizard in Azure Data Studio. Only a DMS instance created by using Azure Data Studio can be reused. -1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown lists any existing DMS in the selected resource group. -1. To reuse an existing DMS, select it from the dropdown list, and the status of the self-hosted integration runtime is displayed at the bottom of the page. -1. To create a new DMS, select **Create new**. On the **Create Azure Database Migration Service** screen, provide the name for your DMS and select **Create**. -1. After successful creation of DMS, you'll be provided with details to set up the **integration runtime**. -1.
Select **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the prerequisites of connecting to the source SQL Server and the location containing the source backups. -1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process. -1. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager, indicating that you can continue to **Register**. -1. After successfully completing the registration of the self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio. -1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime. - :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime"::: -1. Review the migration summary and select **Done** to start the database migration. --## Monitor your migration --1. On the **Database Migration Status** page, you can track the migrations in progress, migrations completed, and migrations failed (if any). -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard"::: -1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name. -1. The migration details page displays the backup files and the corresponding status: -- | Status | Description | - |--|-| - | Arrived | Backup file arrived in the source backup location and validated | - | Uploading | Integration runtime is currently uploading the backup file to Azure storage| - | Uploaded | Backup file is uploaded to Azure storage | - | Restoring | Azure Database Migration Service is currently restoring the backup file to Azure SQL Managed Instance| - | Restored | Backup file is successfully restored on Azure SQL Managed Instance | - | Canceled | Migration process was canceled | - | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain | -- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/online-to-mi-migration-details-all-backups-restored.png" alt-text="backup restore details"::: --## Complete migration cutover --The final step of the tutorial is to complete the migration cutover to ensure the migrated database in Azure SQL Managed Instance is ready for use. This process is the only part that requires downtime for applications that connect to the database, so the timing of the cutover needs to be planned carefully with business or application stakeholders. --To complete the cutover: --1. Stop all incoming transactions to the source database. -2. Make application configuration changes to point to the target database in Azure SQL Managed Instance. -3. Take a final log backup of the source database in the specified backup location. -4. Put the source database in read-only mode, so that users can read data from the database but not modify it. (A scripted sketch of steps 3 and 4 appears after the Next steps section below.) -5.
Ensure that all database backups have the status *Restored* on the monitoring details page. -6. Select *Complete cutover* on the monitoring details page. --During the cutover process, the migration status changes from *in progress* to *completing*. When the cutover process is completed, the migration status changes to *succeeded* to indicate that the database migration is successful and that the migrated database is ready for use. --> [!IMPORTANT] -> After the cutover, availability of SQL Managed Instance with the Business Critical service tier can take significantly longer than with General Purpose, because three secondary replicas have to be seeded for the Always On high availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration). --## Limitations --Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations: ---## Next steps --* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart). -* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). -* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance). -* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md). |
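The cutover steps in the tutorial above (a final transaction log backup followed by setting the source database to read-only) can also be scripted. The following is a minimal sketch using `pyodbc`; the server name, database name, and backup path are hypothetical placeholders, and the backup destination should be the same location that DMS is monitoring. It is an illustration under those assumptions, not part of the documented procedure.

```python
import pyodbc

# Placeholder connection details - replace with your own source SQL Server instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sourcesql.contoso.local;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;",
    autocommit=True,  # BACKUP and ALTER DATABASE can't run inside a user transaction
)
cursor = conn.cursor()

database = "AdventureWorks"
# Hypothetical path; must be inside the backup location that DMS is watching.
final_log_backup = r"\\fileserver\backups\AdventureWorks_final.trn"

# Step 3: take the final transaction log backup into the monitored backup location.
cursor.execute(
    f"BACKUP LOG [{database}] TO DISK = N'{final_log_backup}' WITH NOFORMAT, NOINIT, COMPRESSION;"
)
# Drain informational result sets so the backup is fully written before continuing.
try:
    while cursor.nextset():
        pass
except pyodbc.ProgrammingError:
    pass

# Step 4: put the source database in read-only mode so no further changes occur.
cursor.execute(f"ALTER DATABASE [{database}] SET READ_ONLY WITH ROLLBACK IMMEDIATE;")

conn.close()
```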
dms | Tutorial Sql Server Managed Instance Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md | -> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md). +> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service). > > To compare features between versions, review [compare versions](dms-overview.md#compare-versions). To complete this tutorial, you need to: * Provide an SMB network share that contains all of your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration. * Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share. * Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to the Azure Storage container for the restore operation.-* Create a Microsoft Entra Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure SQL Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). +* Create a Microsoft Entra Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure SQL Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal). > [!NOTE] > The Application ID used by the Azure Database Migration Service supports secret (password-based) authentication for service principals. It does not support certificate-based authentication. After an instance of the service is created, locate it within the Azure portal, 1. On the **Select target** screen, specify the **Application ID** and **Key** that the DMS instance can use to connect to the target instance of SQL Managed Instance and the Azure Storage Account. - For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). + For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal). 2. Select the **Subscription** containing the target instance of SQL Managed Instance, and then choose the target SQL Managed Instance. |
dms | Tutorial Sql Server To Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md | -> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline.md). +> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/database/database-migration-service). > > To compare features between versions, review [compare versions](dms-overview.md#compare-versions). |
dms | Tutorial Sql Server To Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md | -> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md). +> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service). > > To compare features between versions, review [compare versions](dms-overview.md#compare-versions). |
dms | Tutorial Sql Server To Virtual Machine Offline Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md | - Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio"- -description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines offline by using Azure Data Studio and Azure Database Migration Service. --- Previously updated : 06/07/2023---- - sql-migration-content ---# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio --You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to [SQL Server on Azure Virtual Machines (SQL Server 2016 and later)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) offline and with minimal downtime. --For database migration methods that might require some manual configuration, see [SQL Server instance migration to SQL Server on Azure Virtual Machines](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview). --In this tutorial, learn how to migrate the example AdventureWorks database from an on-premises instance of SQL Server to an instance of SQL Server on Azure Virtual Machines by using Azure Data Studio and Azure Database Migration Service. This tutorial uses offline migration mode, which considers an acceptable downtime during the migration process. --In this tutorial, you learn how to: -> [!div class="checklist"] -> -> - Open the Migrate to Azure SQL wizard in Azure Data Studio -> - Run an assessment of your source SQL Server databases -> - Collect performance data from your source SQL Server instance -> - Get a recommendation of the SQL Server on Azure Virtual Machines SKU that will work best for your workload -> - Set the details of your source SQL Server instance, backup location, and target instance of SQL Server on Azure Virtual Machines -> - Create an instance of Azure Database Migration Service -> - Start your migration and monitor progress to completion --This tutorial describes an offline migration from SQL Server to SQL Server on Azure Virtual Machines. For an online migration, see [Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio](tutorial-sql-server-to-virtual-machine-online-ads.md). --## Prerequisites --Before you begin the tutorial: --- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.-- Have an Azure account that's assigned to one of the following built-in roles:-- - Contributor for the target instance of SQL Server on Azure Virtual Machines and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share - - Reader role for the Azure resource group that contains the target instance of SQL Server on Azure Virtual Machines or for your Azure Storage account - - Owner or Contributor role for the Azure subscription - - As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md). -- > [!IMPORTANT] - > An Azure account is required only when you configure the migration steps. 
An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio. --- Create a target instance of [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).-- > [!IMPORTANT] - > If you have an existing Azure virtual machine, it should be registered with the [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes). --- Ensure that the logins that you use to connect to the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.--- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.-- > [!IMPORTANT] - > - > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration. - > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service. - > - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported. - > - You can provide compressed backups to reduce the likelihood of issues when migrating large backups. --- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.--- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to SQL Server on Azure Virtual Machines before you migrate data. To learn more, see [Move a TDE-protected database to another SQL Server instance](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).-- > [!TIP] - > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target instance of SQL Server on Azure Virtual Machines. --- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups.
The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.-- In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled: -- | Domain names | Outbound port | Description | - | -- | -- | | - | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Microsoft Azure operated by 21Vianet: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. <br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. | - | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. | - | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share | -- > [!TIP] - > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process. --- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.--- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).--- If you're using Azure Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).--## Open the Migrate to Azure SQL wizard in Azure Data Studio --To open the Migrate to Azure SQL wizard: --1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine. --1. Right-click the server connection and select **Manage**. --1. In the server menu under **General**, select **Azure SQL Migration**. --1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard. -- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Screenshot that shows how to open the Migrate to Azure SQL wizard."::: --1. On the first page of the wizard, start a new session or resume a previously saved session. --## Run a database assessment, collect performance data, and get Azure recommendations --1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**. --1. 
In **Step 2: Assessment results and recommendations**, complete the following steps: -- 1. In **Choose your Azure SQL target**, select **SQL Server on Azure Virtual Machine**. -- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Screenshot that shows an assessment confirmation."::: -- 1. Select **View/Select** to view the assessment results. -- 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found. -- 1. Select **Get Azure recommendation** to open the recommendations pane. -- 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**. -- Azure Data Studio collects performance data until you either stop data collection or close Azure Data Studio. -- After 10 minutes, Azure Data Studio indicates that a recommendation is available for SQL Server on Azure Virtual Machines. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time. -- 1. In the selected **SQL Server on Azure Virtual Machines** target, select **View details** to open the detailed SKU recommendation report: -- 1. In **Review SQL Server on Azure Virtual Machines Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox. --1. Select **Close** to close the recommendations pane. --1. Select **Next** to continue your database migration in the wizard. --## Configure migration settings --1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target SQL Server on Azure Virtual Machines instance. Then, select **Next**. --1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**. -- > [!NOTE] - > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backup files are restored on the target instance of SQL Server on Azure Virtual Machines. Application downtime persists from the start of the migration process until it's finished. - -1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container. -- > [!NOTE] - > If your database backups are provided in an on-premises network share, you must set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload backups to the Azure storage account. - > - > If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime. - -- For backups that are located on a network share, enter or select the following information:-- |Name |Description | - ||-| - |**Source Credentials - Username** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files.
| - |**Source Credentials - Password** |The credential (Windows and SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. | - |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | - |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | - |**Target database name** |You can modify the target database name during the migration process. | --- For backups that are stored in an Azure storage blob container, enter or select the following information:-- |Name |Description | - ||-| - |**Target database name** |You can modify the target database name during the migration process. | - |**Storage account details** |The resource group, storage account, and container where backup files are located. - |**Last Backup File** |The file name of the last backup of the database you're migrating. - - > [!IMPORTANT] - > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using the FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd). --- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:-- | Scenario | SMB network share | Azure Storage account container | - | | | | - | Enabled from all networks | No extra steps | No extra steps | - | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)| - | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) | -- ### 1a - Azure Blob storage network configuration - If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). 
If you have your Self-Hosted Integration Runtime (SHIR) installed on your on-premises network, you need to add the client IP address of the hosting machine to your Azure Storage account, as follows: - - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details"::: - - To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps. - - ### 1b - Azure Blob storage network configuration - If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section. - - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration."::: - - To apply this specific configuration, locate your Azure Storage account, from the **Data storage** panel select **Networking**, then mark the **Add existing virtual network** checkbox. A new panel opens; select the subscription, virtual network, and subnet of the Azure VM that hosts the integration runtime. This information can be found on the **Overview** page of the Azure Virtual Machine. The subnet might say **Service endpoint required**; if so, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps. - - ### 2a - Azure Blob storage network configuration (Private endpoint) - If your backups are placed directly into an Azure Storage Container, all the above steps are unnecessary since there's no Integration Runtime communicating with the Azure Storage account. However, you still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance Virtual Network when filling out the "Add existing virtual network" popup. - - ### 2b - Azure Blob storage network configuration (Private endpoint) - If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section.
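After applying whichever of the network configurations above matches your scenario, you can verify from the relevant machine (the SHIR host, or a VM in the allowed virtual network) that the storage account is reachable and that the backup container can be listed. The following is a minimal sketch using the `azure-identity` and `azure-storage-blob` Python packages; the account URL and container name are placeholders, and it assumes the signed-in identity has a data-plane role such as Storage Blob Data Reader on the account.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Placeholder values - replace with your own storage account and backup container.
ACCOUNT_URL = "https://mystorageaccount.blob.core.windows.net"
CONTAINER = "migration-backups"

# DefaultAzureCredential picks up Azure CLI, managed identity, or environment credentials.
credential = DefaultAzureCredential()
service = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)

try:
    container = service.get_container_client(CONTAINER)
    blobs = list(container.list_blobs())
    print(f"Reached {ACCOUNT_URL}; {len(blobs)} blob(s) visible in '{CONTAINER}'.")
    for blob in blobs[:10]:
        print(f"  {blob.name} ({blob.size} bytes)")
except Exception as exc:  # storage network rules or missing RBAC surface here
    print(f"Storage account not reachable from this machine: {exc}")
```

If the listing fails from the machine that should have access, revisit the firewall, virtual network, or private endpoint configuration described in the sections above.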
--## Create a Database Migration Service instance --In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier. --> [!NOTE] -> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio. --### Use an existing instance of Database Migration Service --To use an existing instance of Database Migration Service: --1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service. --1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group. --1. Select **Next**. --### Create a new instance of Database Migration Service --To create a new instance of Database Migration Service: --1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service. - -1. Under **Azure Database Migration Service**, select **Create new**. --1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**. --1. Under **Set up integration runtime**, complete the following steps: -- 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance. -- When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process. -- 1. In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**. -- After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager. -- > [!NOTE] - > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md). --1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime. --1. Return to the migration wizard in Azure Data Studio. --## Start the database migration --In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration. --## Monitor the database migration --1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations. -- Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations. -- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard"::: --1. 
Select **Database migrations in progress** to view active migrations. -- To get more information about a specific migration, select the database name. -- The migration details pane displays the backup files and their corresponding status: -- | Status | Description | - |--|-| - | Arrived | The backup file arrived in the source backup location and was validated. | - | Uploading | The integration runtime is uploading the backup file to Azure storage. | - | Uploaded | The backup file has been uploaded to Azure storage. | - | Restoring | The service is restoring the backup file to SQL Server on Azure Virtual Machines. | - | Restored | The backup file was successfully restored on SQL Server on Azure Virtual Machines. | - | Canceled | The migration process was canceled. | - | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. | --After all database backups are restored on the instance of SQL Server on Azure Virtual Machines, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**. --## Limitations --Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations: ----## Next steps --- Complete a quickstart to [migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).-- Learn more about [SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).-- Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).-- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md). |
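Because backup files that don't belong to a valid database backup chain are marked *Ignored* during migration, it can help to confirm on the source instance which full and transaction log backups exist for a database before you start. The following sketch queries the backup history in `msdb` with `pyodbc`; the connection string and database name are hypothetical placeholders, and this is an optional pre-check rather than part of the documented procedure.

```python
import pyodbc

# Placeholder connection details - point this at your source SQL Server instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sourcesql.contoso.local;DATABASE=msdb;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# List full ('D') and transaction log ('L') backups recorded for the database,
# with the files they were written to, ordered by LSN.
sql = """
SELECT bs.type, bs.backup_start_date, bs.first_lsn, bs.last_lsn, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = ?
  AND bs.type IN ('D', 'L')
ORDER BY bs.last_lsn;
"""

for row in conn.cursor().execute(sql, "AdventureWorks"):
    kind = "FULL" if row.type == "D" else "LOG"
    print(f"{kind:4} {row.backup_start_date} lsn {row.first_lsn}-{row.last_lsn} -> {row.physical_device_name}")

conn.close()
```

A contiguous sequence of LSNs from the most recent full backup through the latest log backup indicates the chain is intact; gaps usually mean a backup file is missing from the share or container.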
dms | Tutorial Sql Server To Virtual Machine Online Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md | - Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio"- -description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines online by using Azure Data Studio and Azure Database Migration Service. --- Previously updated : 06/07/2023---- - sql-migration-content ---# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio --Use the Azure SQL migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview). --In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service. --In this tutorial, you learn how to: -> [!div class="checklist"] -> -> * Launch the Migrate to Azure SQL wizard in Azure Data Studio -> * Run an assessment of your source SQL Server database(s) -> * Collect performance data from your source SQL Server -> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload -> * Specify details of your source SQL Server, backup location, and your target SQL Server on Azure Virtual Machine -> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access the source server and backups -> * Start and monitor the progress of your migration -> * Perform the migration cutover when you are ready --This article describes an online migration from SQL Server to a SQL Server on Azure Virtual Machine. For an offline migration, see [Migrate SQL Server to a SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS](tutorial-sql-server-to-virtual-machine-offline-ads.md). --## Prerequisites --To complete this tutorial, you need to: --* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio) -* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace -* Have an Azure account that is assigned to one of the built-in roles listed below: - - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share). - - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account. - - Owner or Contributor role for the Azure subscription. - - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-virtual-machine-ads.md). - > [!IMPORTANT] - > An Azure account is required only when you configure the migration steps. It isn't required for the assessment or Azure recommendation steps in the migration wizard.
-* Create a target [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal). -- > [!IMPORTANT] - > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes). -* Ensure that the logins used to connect to the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. -* Use one of the following storage options for the full database and transaction log backup files: - - SMB network share - - Azure storage account file share or blob container -- > [!IMPORTANT] - > - The Azure SQL Migration extension for Azure Data Studio doesn't take database backups, nor does it initiate any database backups on your behalf. Instead, the service uses existing database backup files for the migration. - > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure storage account in the same region where the Azure Database Migration Service instance is created. - > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. - > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and transaction log) into a single backup media is not supported. - > - Use compressed backups to reduce the likelihood of issues when migrating large backups. -* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files. -* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). - > [!TIP] - > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process that uses Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target SQL Server on Azure Virtual Machine. --* If your database backups are in a network file share, provide a machine to install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime.
In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled: -- | Domain names | Outbound ports | Description | - | -- | -- | | - | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For a newly created data factory in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in the format `{datafactory}.{region}.datafactory.azure.net`. For an older data factory, if you don't see the FQDN in your self-hosted integration runtime key, use `*.frontend.clouddatahub.net` instead. | - | `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled autoupdate, you can skip configuring this domain. | - | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share. | -- > [!TIP] - > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime is not required during the migration process. --* When you're using a self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations). -* If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). --## Launch the Migrate to Azure SQL wizard in Azure Data Studio --1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine). -1. On the server connection, right-click and select **Manage**. -1. On the server's home page, select the **Azure SQL Migration** extension. -1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard. - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: -1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio. --## Run database assessment, collect performance data, and get Azure recommendation --1. Select the database(s) to run the assessment on, and select **Next**. -1. Select SQL Server on Azure Virtual Machine as the target. - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Screenshot of assessment confirmation."::: -1. Select the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**. -1. Select the **Get Azure recommendation** button. -2.
Pick the **Collect performance data now** option, enter a path where the performance logs should be collected, and select the **Start** button. -3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard, or close Azure Data Studio. -4. After 10 minutes, you see a recommended configuration for your Azure SQL VM. You can also press the **Refresh recommendation** link after the initial 10 minutes to refresh the recommendation with the extra data collected. -5. In the **SQL Server on Azure Virtual Machine** box above, select the **View details** button for more information about your recommendation. -6. Close the view details box and press the **Next** button. --## Configure migration settings --1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**. -2. Select **Online migration** as the migration mode. - > [!NOTE] - > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to the duration of the cutover at the end of the migration. -3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. - > [!NOTE] - > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. The self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to your Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you don't need to set up a self-hosted integration runtime. --- For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and Azure storage account for the backup files to be uploaded to:-- |Field |Description | - ||-| - |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. | - |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. | - |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | - |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | - |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | --- For backups stored in an Azure storage blob container, specify the target database name, resource group, Azure storage account, and blob container from the corresponding drop-down lists.
-- |Field |Description | - ||-| - |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | - |**Storage account details** |The resource group, storage account and container where backup files are located. --4. Select **Next** to continue. - > [!IMPORTANT] - > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then source won't be able to access the files hare using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd) --- The [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) no longer requires specific configurations on your Azure Storage account network settings to migrate your SQL Server databases to Azure. However, depending on your database backup location and desired storage account network settings, there are a few steps needed to ensure your resources can access the Azure Storage account. See the following table for the various migration scenarios and network configurations:-- | Scenario | SMB network share | Azure Storage account container | - | | | | - | Enabled from all networks | No extra steps | No extra steps | - | Enabled from selected virtual networks and IP addresses | [See 1a](#1aazure-blob-storage-network-configuration) | [See 2a](#2aazure-blob-storage-network-configuration-private-endpoint)| - | Enabled from selected virtual networks and IP addresses + private endpoint | [See 1b](#1bazure-blob-storage-network-configuration) | [See 2b](#2bazure-blob-storage-network-configuration-private-endpoint) | -- ### 1a - Azure Blob storage network configuration - If you have your Self-Hosted Integration Runtime (SHIR) installed on an Azure VM, see section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration). If you have your Self-Hosted Integration Runtime (SHIR) installed on your on-premises network, you need to add your client IP address of the hosting machine in your Azure Storage account as so: - - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/storage-networking-details.png" alt-text="Screenshot that shows the storage account network details."::: - - To apply this specific configuration, connect to the Azure portal from the SHIR machine, open the Azure Storage account configuration, select **Networking**, and then mark the **Add your client IP address** checkbox. Select **Save** to make the change persistent. See section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining steps. - - ### 1b - Azure Blob storage network configuration - If your SHIR is hosted on an Azure VM, you need to add the virtual network of the VM to the Azure Storage account since the Virtual Machine has a nonpublic IP address that can't be added to the IP address range section. - - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/storage-networking-firewall.png" alt-text="Screenshot that shows the storage account network firewall configuration."::: - - To apply this specific configuration, locate your Azure Storage account, from the **Data storage** panel select **Networking**, then mark the **Add existing virtual network** checkbox. 
A new panel opens up, select the subscription, virtual network, and subnet of the Azure VM hosting the Integration Runtime. This information can be found on the **Overview** page of the Azure Virtual Machine. The subnet may say **Service endpoint required** if so, select **Enable**. Once everything is ready, save the updates. Refer to section [2a - Azure Blob storage network configuration (Private endpoint)a](#2aazure-blob-storage-network-configuration-private-endpoint) for the remaining required steps. - - ### 2a - Azure Blob storage network configuration (Private endpoint) - If your backups are placed directly into an Azure Storage Container, all the above steps are unnecessary since there's no Integration Runtime communicating with the Azure Storage account. However, we still need to ensure that the target SQL Server instance can communicate with the Azure Storage account to restore the backups from the container. To apply this specific configuration, follow the instructions in section [1b - Azure Blob storage network configuration](#1bazure-blob-storage-network-configuration), specifying the target SQL instance Virtual Network when filling out the "Add existing virtual network" popup. - - ### 2b - Azure Blob storage network configuration (Private endpoint) - If you have a private endpoint set up on your Azure Storage account, follow the steps outlined in section [2a - Azure Blob storage network configuration (Private endpoint)](#2aazure-blob-storage-network-configuration-private-endpoint). However, you need to select the subnet of the private endpoint, not just the target SQL Server subnet. Ensure the private endpoint is hosted in the same VNet as the target SQL Server instance. If it isn't, create another private endpoint using the process in the Azure Storage account configuration section. --## Create Azure Database Migration Service --1. Create a new Azure Database Migration Service or reuse an existing Service that you previously created. - > [!NOTE] - > If you had previously created DMS using the Azure Portal, you cannot reuse it in the migration wizard in Azure Data Studio. Only DMS created previously using Azure Data Studio can be reused. -1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown lists any existing DMS in the selected resource group. -1. To reuse an existing DMS, select it from the dropdown list and the status of the self-hosted integration runtime will be displayed at the bottom of the page. -1. To create a new DMS, select on **Create new**. -1. On the **Create Azure Database Migration Service**, screen provide the name for your DMS and select **Create**. -1. After successful creation of DMS, you'll be provided with details to **Setup integration runtime**. -1. Select on **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the prerequisites of connecting to source SQL Server and the location containing the source backup. -1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process. -1. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager indicating that you can continue to **Register**. -1. 
After successfully completing the registration of self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio. -1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime and select **Done**. - :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime"::: -1. Review the summary and select **Done** to start the database migration. --## Monitor your migration --1. On the **Database Migration Status**, you can track the migrations in progress, migrations completed, and migrations failed (if any). -- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard"::: -1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name. -1. The migration details page displays the backup files and the corresponding status: -- | Status | Description | - |--|-| - | Arrived | Backup file arrived in the source backup location and validated | - | Uploading | Integration runtime is currently uploading the backup file to Azure storage| - | Uploaded | Backup file is uploaded to Azure storage | - | Restoring | Azure Database Migration Service is currently restoring the backup file to SQL Server on Azure Virtual Machine| - | Restored | Backup file is successfully restored on SQL Server on Azure Virtual Machine | - | Canceled | Migration process was canceled | - | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain | -- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/online-to-vm-migration-status-detailed.png" alt-text="online vm backup restore details"::: --## Complete migration cutover --The final step of the tutorial is to complete the migration cutover. The completion ensures the migrated database in SQL Server on Azure Virtual Machine is ready for use. Downtime is required for applications that connect to the database and the timing of the cutover needs to be carefully planned with business or application stakeholders. --To complete the cutover: --1. Stop all incoming transactions to the source database. -2. Make application configuration changes to point to the target database in SQL Server on Azure Virtual Machines. -3. Take a final log backup of the source database in the backup location specified -4. Put the source database in read-only mode. Therefore, users can read data from the database but not modify it. -5. Ensure all database backups have the status *Restored* in the monitoring details page. -6. Select *Complete cutover* in the monitoring details page. --During the cutover process, the migration status changes from *in progress* to *completing*. The migration status changes to *succeeded* when the cutover process is completed. The database migration is successful and that the migrated database is ready for use. 
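If you prefer to script the monitoring and cutover steps rather than use the Azure Data Studio wizard, the `az datamigration` CLI extension exposes equivalent operations. The following is only a minimal sketch: the resource names are placeholders, and the command and parameter names shown (`sql-vm show`, `sql-vm cutover`, `--expand MigrationStatusDetails`, `--migration-operation-id`) are assumptions that you should verify against the extension's reference documentation before relying on them.

```azurecli
# Assumption: the datamigration extension is installed (az extension add --name datamigration).
# Check the status of an in-progress migration to SQL Server on an Azure VM (placeholder names).
az datamigration sql-vm show \
    --resource-group myResourceGroup \
    --sql-vm-name mySqlVm \
    --target-db-name AdventureWorks \
    --expand MigrationStatusDetails

# After all backups show as restored, complete the cutover by using the
# migration operation ID returned in the show output.
az datamigration sql-vm cutover \
    --resource-group myResourceGroup \
    --sql-vm-name mySqlVm \
    --target-db-name AdventureWorks \
    --migration-operation-id "<operation ID from the show output>"
```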
--## Limitations --Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations: ----## Next steps --* To learn how to migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server). -* For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview). -* For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql). -* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md). |
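The prerequisites earlier in this tutorial call for the Microsoft.DataMigration resource provider to be registered in your subscription before you run a migration. A minimal Azure CLI sketch for checking and registering it, assuming you're already signed in and have selected the correct subscription:

```azurecli
# Check whether the Microsoft.DataMigration resource provider is registered in the current subscription
az provider show --namespace Microsoft.DataMigration --query registrationState --output tsv

# Register it if the state isn't "Registered"
az provider register --namespace Microsoft.DataMigration
```

Registration can take a few minutes; rerun the `show` command until it reports `Registered`.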
dms | Tutorial Transparent Data Encryption Migration Ads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md | Before you begin the tutorial: - Contributor for the target managed instance (and Storage Account to upload your backups of the TDE certificate files from SMB network share). - Reader role for the Azure Resource Groups containing the target managed instance or the Azure storage account. - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).- - As an alternative to using the above built-in roles, you can assign a custom role. For more information, see [Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS](resource-custom-roles-sql-db-managed-instance-ads.md). + - As an alternative to using the above built-in roles, you can assign a custom role. For more information, see [Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS](/data-migration/sql-server/managed-instance/custom-roles). - Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart). In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete the Check the following step-by-step tutorials for more information about migrating databases online or offline to Azure SQL Managed Instance targets: - - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance online](./tutorial-sql-server-managed-instance-offline-ads.md) - - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline](./tutorial-sql-server-managed-instance-offline-ads.md) + - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance online](/data-migration/sql-server/managed-instance/database-migration-service) + - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline](/data-migration/sql-server/managed-instance/database-migration-service) ## Post-migration steps The following table describes the current status of the TDE-enabled database mig ## Related content - [Migrate databases with Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md)-- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](tutorial-sql-server-azure-sql-database-offline.md)-- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](tutorial-sql-server-managed-instance-online-ads.md)-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](tutorial-sql-server-to-virtual-machine-online-ads.md)+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/data-migration/sql-server/database/database-migration-service) +- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/data-migration/sql-server/managed-instance/database-migration-service) +- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/data-migration/sql-server/virtual-machines/database-migration-service) |
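The role prerequisites listed in this tutorial (Contributor on the target managed instance and storage account, Reader on the containing resource groups, and Owner or Contributor on the subscription when creating a new DMS service) can also be granted from the Azure CLI. This is a minimal sketch with placeholder assignee, resource, and scope names; adjust them to your environment.

```azurecli
# Contributor on the target managed instance (placeholder names)
az role assignment create \
    --assignee "migration-operator@contoso.com" \
    --role "Contributor" \
    --scope "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/managedInstances/myManagedInstance"

# Reader on the resource group that contains the target managed instance
az role assignment create \
    --assignee "migration-operator@contoso.com" \
    --role "Reader" \
    --scope "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup"
```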
education-hub | Custom Tenant Set Up Classroom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/custom-tenant-set-up-classroom.md | - Title: Create a custom Azure Classroom tenant and billing profile -description: This article shows you how to make a custom tenant and billing profile for educators in your organization. ---- Previously updated : 2/22/2024----# Create a custom tenant and billing profile for Azure Classroom --This article is for IT admins who use Azure Classroom (subject to regional availability). When you sign up for this offer, you should already have a tenant and billing profile created. But this article shows you how to create a custom tenant and billing profile and then associate them with an educator. --## Prerequisites --You must be signed up for Azure Classroom. --## Create a new tenant --1. Go to the [Azure portal](https://ms.portal.azure.com/), search for **entra**, and select the **Microsoft Entra ID** result. -1. On the **Manage tenants** tab, select **Create**. -1. Complete the tenant information. -1. On the **Tenant details** pane, copy the **Tenant ID** value for the newly created tenant. You'll use it in the next procedure. -- :::image type="content" source="media/custom-tenant-set-up-classroom/save-tenant-id.png" alt-text="Screenshot that shows tenant details and the button for copying the tenant ID." border="true"::: --## Associate the new tenant with a university tenant --1. Go to **Cost Management** and select **Access control (IAM)**. -1. Select **Associated billing tenants**. -1. Select **Add** and paste the tenant ID of the newly created tenant. -1. Select the box for billing management. -1. Select **Add** to complete the association between the newly created tenant and university tenant. --## Invite an educator to the newly created tenant --1. Switch to the newly created tenant. -1. Go to **Users**, and then select **New user**. -1. On the **New user** pane, select **Invite user**, fill in the **Identity** information, and change the role to **Global Administrator**. Then select **Invite**. -- :::image type="content" source="media/custom-tenant-set-up-classroom/add-user.png" alt-text="Screenshot of selections for inviting an existing user to a tenant." border="true"::: -1. Tell the educator to accept the invitation to this tenant. -1. After the educator joins the tenant, go to the tenant properties and select **Yes** under **Access management for Azure resources**. --## Next step --Now that you've created a custom tenant, you can go to the Azure Education Hub and begin distributing credit to educators to use in labs. --> [!div class="nextstepaction"] -> [Create an assignment and allocate credit](create-assignment-allocate-credit.md) |
firewall | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md | Forced Tunnel mode can't be configured at run time. You can either redeploy the ## Outbound SNAT support -All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. Azure Firewall doesn't SNAT when the destination IP is a private IP range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918). +All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. When Azure Firewall has multiple public IPs configured for providing outbound connectivity, it uses those IPs as needed, based on available SNAT ports. It moves to the next available public IP only after connections can no longer be made from the current public IP. ++In scenarios where you have high throughput or dynamic traffic patterns, it's recommended to use an [Azure NAT Gateway](/azure/nat-gateway/nat-overview). Azure NAT Gateway dynamically selects SNAT ports for providing outbound connectivity, so all the SNAT ports provided by its associated IP addresses are available on demand. To learn more about how to integrate NAT Gateway with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](/azure/firewall/integrate-with-nat-gateway). ++Azure NAT Gateway can be used with Azure Firewall by associating the NAT gateway with the Azure Firewall subnet. See the [Integrate NAT gateway with Azure Firewall](/azure/nat-gateway/tutorial-hub-spoke-nat-firewall) tutorial for guidance on this configuration. ++Azure Firewall doesn't SNAT when the destination IP is a private IP range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918). If your organization uses a public IP address range for private networks, Azure Firewall will SNAT the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. You can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md). |
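To illustrate the NAT gateway integration described above, here's a minimal Azure CLI sketch. It assumes an existing resource group `myResourceGroup` and a virtual network `myVNet` that already contains the `AzureFirewallSubnet`; all names are placeholders.

```azurecli
# Create a static Standard public IP and a NAT gateway (placeholder names)
az network public-ip create --resource-group myResourceGroup --name myNatGatewayIp \
    --sku Standard --allocation-method Static

az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
    --public-ip-addresses myNatGatewayIp --idle-timeout 4

# Associate the NAT gateway with the firewall subnet so outbound SNAT flows through the NAT gateway
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet \
    --name AzureFirewallSubnet --nat-gateway myNatGateway
```

Once the subnet association is in place, outbound connections from the firewall draw on the NAT gateway's SNAT ports, as described in the integration tutorial linked above.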
firewall | Integrate With Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md | -One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes. +One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. When Azure Firewall is associated with multiple public IP addresses, you need to apply the filtering requirements across all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes. A better option to scale and dynamically allocate outbound SNAT ports is to use an [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md). It provides 64,512 SNAT ports per public IP address and supports up to 16 public IP addresses. This effectively provides up to 1,032,192 outbound SNAT ports. Azure NAT Gateway also [dynamically allocates SNAT ports](/azure/nat-gateway/nat-gateway-resource#nat-gateway-dynamically-allocates-snat-ports) on a subnet level, so all the SNAT ports provided by its associated IP addresses are available on demand to provide outbound connectivity. |
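As a small illustration of the scaling point above, a NAT gateway can draw its SNAT ports from a public IP prefix rather than from many individual addresses. The following sketch uses placeholder names and a /31 prefix (two addresses, roughly 129,000 SNAT ports); adjust the prefix length to your own port requirements.

```azurecli
# Create a public IP prefix and a NAT gateway that uses it (placeholder names)
az network public-ip prefix create --resource-group myResourceGroup --name myPublicIpPrefix --length 31

az network nat gateway create --resource-group myResourceGroup --name myNatGateway \
    --public-ip-prefixes myPublicIpPrefix --idle-timeout 4
```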
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md | Title: Troubleshoot common errors description: Learn how to troubleshoot problems with creating policy definitions, the various SDKs, and the add-on for Kubernetes. Previously updated : 10/26/2022 Last updated : 06/27/2024 + # Troubleshoot errors with using Azure Policy -When you create policy definitions, work with SDKs, or set up the -[Azure Policy for Kubernetes](../concepts/policy-for-kubernetes.md) add-on, you might run into -errors. This article describes various general errors that might occur, and it suggests ways to -resolve them. +When you create policy definitions, work with SDKs, or set up the [Azure Policy for Kubernetes](../concepts/policy-for-kubernetes.md) add-on, you might run into errors. This article describes various general errors that might occur, and it suggests ways to resolve them. ## Find error details The location of the error details depends on what aspect of Azure Policy you're working with. -- If you're working with a custom policy, go to the Azure portal to get linting feedback about the- schema, or review resulting [compliance data](../how-to/get-compliance-data.md) to see how - resources were evaluated. -- If you're working with any of the various SDKs, the SDK provides details about why the function- failed. -- If you're working with the add-on for Kubernetes, start with the- [logging](../concepts/policy-for-kubernetes.md#logging) in the cluster. +- If you're working with a custom policy, go to the Azure portal to get linting feedback about the schema, or review resulting [compliance data](../how-to/get-compliance-data.md) to see how resources were evaluated. +- If you're working with any of the various SDKs, the SDK provides details about why the function failed. +- If you're working with the add-on for Kubernetes, start with the [logging](../concepts/policy-for-kubernetes.md#logging) in the cluster. ## General errors The location of the error details depends on what aspect of Azure Policy you're #### Issue -An incorrect or nonexistent alias is used in a policy definition. Azure Policy uses -[aliases](../concepts/definition-structure.md#aliases) to map to Azure Resource Manager properties. +An incorrect or nonexistent alias is used in a policy definition. Azure Policy uses [aliases](../concepts/definition-structure-alias.md) to map to Azure Resource Manager properties. #### Cause An incorrect or nonexistent alias is used in a policy definition. #### Resolution -First, validate that the Resource Manager property has an alias. To look up the available aliases, -go to [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) or the SDK. -If the alias for a Resource Manager property doesn't exist, create a support ticket. +First, validate that the Resource Manager property has an alias. To look up the available aliases, go to [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) or the SDK. If the alias for a Resource Manager property doesn't exist, create a support ticket. ### Scenario: Evaluation details aren't up to date A resource is in the _Not Started_ state, or the compliance details aren't curre #### Cause -A new policy or initiative assignment takes about five minutes to be applied. New or updated -resources within scope of an existing assignment become available in about 15 minutes. A -standard compliance scan occurs every 24 hours. 
For more information, see -[evaluation triggers](../how-to/get-compliance-data.md#evaluation-triggers). +A new policy or initiative assignment takes about five minutes to be applied. New or updated resources within scope of an existing assignment become available in about 15 minutes. A standard compliance scan occurs every 24 hours. For more information, see [evaluation triggers](../how-to/get-compliance-data.md#evaluation-triggers). #### Resolution -First, wait an appropriate amount of time for an evaluation to finish and compliance results to -become available in the Azure portal or the SDK. To start a new evaluation scan with Azure -PowerShell or the REST API, see -[On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). +First, wait an appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or the SDK. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). ### Scenario: Compliance isn't as expected #### Issue -A resource isn't in either the _Compliant_ or _Not-Compliant_ evaluation state that's expected for -the resource. +A resource isn't in either the _Compliant_ or _Not-Compliant_ evaluation state expected for the resource. #### Cause -The resource isn't in the correct scope for the policy assignment, or the policy definition doesn't -operate as intended. +The resource isn't in the correct scope for the policy assignment, or the policy definition doesn't operate as intended. #### Resolution -To troubleshoot your policy definition, do the following: --1. First, wait the appropriate amount of time for an evaluation to finish and compliance results - to become available in the Azure portal or SDK. +To troubleshoot your policy definition, do the following steps: -1. To start a new evaluation scan with Azure PowerShell or the REST API, see - [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). +1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or SDK. +1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). 1. Ensure that the assignment parameters and assignment scope are set correctly.-1. Check the [policy definition mode](../concepts/definition-structure.md#mode): +1. Check the [policy definition mode](../concepts/definition-structure-basics.md#mode): - The mode should be `all` for all resource types. - The mode should be `indexed` if the policy definition checks for tags or location.-1. Ensure that the scope of the resource isn't - [excluded](../concepts/assignment-structure.md#excluded-scopes) or - [exempt](../concepts/exemption-structure.md). -1. If compliance for a policy assignment shows `0/0` resources, no resources were determined to be - applicable within the assignment scope. Check both the policy definition and the assignment - scope. +1. Ensure that the scope of the resource isn't [excluded](../concepts/assignment-structure.md#excluded-scopes) or [exempt](../concepts/exemption-structure.md). +1. If compliance for a policy assignment shows `0/0` resources, no resources were determined to be applicable within the assignment scope. Check both the policy definition and the assignment scope. 1. 
For a noncompliant resource that was expected to be compliant, see [determine the reasons for noncompliance](../how-to/determine-non-compliance.md). The comparison of the definition to the evaluated property value indicates why a resource was noncompliant. - If the **target value** is wrong, revise the policy definition. - If the **current value** is wrong, validate the resource payload through `resources.azure.com`.-1. For a [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes) - definition that supports a RegEx string parameter (such as `Microsoft.Kubernetes.Data` and the - built-in definition "Container images should be deployed from trusted registries only"), validate - that the [RegEx string](/dotnet/standard/base-types/regular-expression-language-quick-reference) - parameter is correct. -1. For other common issues and solutions, see - [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected). +1. For a [Resource Provider mode](../concepts/definition-structure-basics.md#resource-provider-modes) definition that supports a RegEx string parameter (such as `Microsoft.Kubernetes.Data` and the built-in definition "Container images should be deployed from trusted registries only"), validate that the [RegEx string](/dotnet/standard/base-types/regular-expression-language-quick-reference) parameter is correct. +1. For other common issues and solutions, see [Troubleshoot: Enforcement not as expected](#scenario-enforcement-not-as-expected). -If you still have an issue with your duplicated and customized built-in policy definition or custom -definition, create a support ticket under **Authoring a policy** to route the issue correctly. +If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly. ### Scenario: Enforcement not as expected #### Issue -A resource that you expect Azure Policy to act on isn't being acted on, and there's no entry in the -[Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md). +A resource that you expect Azure Policy to act on isn't being acted on, and there's no entry in the [Azure Activity log](../../../azure-monitor/data-sources.md#azure-resources). #### Cause -The policy assignment has been configured for an -[**enforcementMode**](../concepts/assignment-structure.md#enforcement-mode) setting of _Disabled_. -While **enforcementMode** is disabled, the policy effect isn't enforced, and there's no entry in the -Activity log. +The policy assignment was configured for an [enforcementMode](../concepts/assignment-structure.md#enforcement-mode) setting of _Disabled_. While `enforcementMode` is disabled, the policy effect isn't enforced, and there's no entry in the Activity log. #### Resolution -Troubleshoot your policy assignment's enforcement by doing the following: --1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to - become available in the Azure portal or the SDK. +Troubleshoot your policy assignment's enforcement by doing the following steps: -1. To start a new evaluation scan with Azure PowerShell or the REST API, see - [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). -1. Ensure that the assignment parameters and assignment scope are set correctly and that - **enforcementMode** is _Enabled_. -1. 
Check the [policy definition mode](../concepts/definition-structure.md#mode): +1. First, wait the appropriate amount of time for an evaluation to finish and compliance results to become available in the Azure portal or the SDK. +1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan). +1. Ensure that the assignment parameters and assignment scope are set correctly and that `enforcementMode` is _Enabled_. +1. Check the [policy definition mode](../concepts/definition-structure-basics.md#mode): - The mode should be `all` for all resource types. - The mode should be `indexed` if the policy definition checks for tags or location.-1. Ensure that the scope of the resource isn't - [excluded](../concepts/assignment-structure.md#excluded-scopes) or - [exempt](../concepts/exemption-structure.md). -1. Verify that the resource payload matches the policy logic. This can be done by - [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or - reviewing the Azure Resource Manager template (ARM template) properties. -1. For other common issues and solutions, see - [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected). --If you still have an issue with your duplicated and customized built-in policy definition or custom -definition, create a support ticket under **Authoring a policy** to route the issue correctly. +1. Ensure that the scope of the resource isn't [excluded](../concepts/assignment-structure.md#excluded-scopes) or [exempt](../concepts/exemption-structure.md). +1. Verify that the resource payload matches the policy logic. This verification can be done by [capturing an HTTP Archive (HAR) trace](../../../azure-portal/capture-browser-trace.md) or reviewing the Azure Resource Manager template (ARM template) properties. +1. For other common issues and solutions, see [Troubleshoot: Compliance not as expected](#scenario-compliance-isnt-as-expected). ++If you still have an issue with your duplicated and customized built-in policy definition or custom definition, create a support ticket under **Authoring a policy** to route the issue correctly. ### Scenario: Denied by Azure Policy Creation or update of a resource is denied. #### Cause -A policy assignment to the scope of your new or updated resource meets the criteria of a policy -definition with a [Deny](../concepts/effects.md#deny) effect. Resources that meet these definitions -are prevented from being created or updated. +A policy assignment to the scope of your new or updated resource meets the criteria of a policy definition with a [Deny](../concepts/effect-deny.md) effect. Resources that meet these definitions are prevented from being created or updated. #### Resolution -The error message from a deny policy assignment includes the policy definition and policy assignment -IDs. If the error information in the message is missed, it's also available in the -[Activity log](../../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log). Use this -information to get more details to understand the resource restrictions and adjust the resource -properties in your request to match allowed values. +The error message from a deny policy assignment includes the policy definition and policy assignment IDs. If the error information in the message is missed, it's also available in the [Activity log](../../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log). 
Use this information to get more details to understand the resource restrictions and adjust the resource properties in your request to match allowed values. ### Scenario: Definition targets multiple resource types #### Issue -A policy definition that includes multiple resource types fails validation during creation or update -with the following error: +A policy definition that includes multiple resource types fails validation during creation or update with the following error: ```error The policy definition '{0}' targets multiple resource types, but the policy rule is authored in a way that makes the policy not applicable to the target resource types '{1}'. The policy definition '{0}' targets multiple resource types, but the policy rule #### Cause -The policy definition rule has one or more conditions that don't get evaluated by the target -resource types. +The policy definition rule has one or more conditions that don't get evaluated by the target resource types. #### Resolution -If an alias is used, make sure that the alias gets evaluated against only the resource type it -belongs to by adding a type condition before it. An alternative is to split the policy definition -into multiple definitions to avoid targeting multiple resource types. +If an alias is used, make sure that the alias gets evaluated against only the resource type it belongs to by adding a type condition before it. An alternative is to split the policy definition into multiple definitions to avoid targeting multiple resource types. ### Scenario: Subscription limit exceeded #### Issue -An error message on the compliance page in Azure portal is shown when retrieving compliance for -policy assignments. +An error message on the compliance page in Azure portal is shown when retrieving compliance for policy assignments. #### Cause -The number of subscriptions under the selected scopes in the request has exceeded the limit of 5000 -subscriptions. The compliance results may be partially displayed. +The number of subscriptions under the selected scopes in the request exceeded the limit of 5,000 subscriptions. The compliance results might be partially displayed. #### Resolution -Select a more granular scope with fewer child subscriptions to see the complete results. +To see the complete results, select a more granular scope with fewer child subscriptions. ## Template errors Select a more granular scope with fewer child subscriptions to see the complete #### Issue -Azure Policy supports a number of ARM template functions and functions that are available only in a -policy definition. Resource Manager processes these functions as part of a deployment instead of as -part of a policy definition. +Azure Policy supports many ARM template functions and functions that are available only in a policy definition. Resource Manager processes these functions as part of a deployment instead of as part of a policy definition. #### Cause -Using supported functions, such as `parameter()` or `resourceGroup()`, results in the processed -outcome of the function at deployment time instead of allowing the function for the policy -definition and Azure Policy engine to process. +Using supported functions, such as `parameter()` or `resourceGroup()`, results in the processed outcome of the function at deployment time instead of allowing the function for the policy definition and Azure Policy engine to process. 
#### Resolution -To pass a function through as part of a policy definition, escape the entire string with `[` such -that the property looks like `[[resourceGroup().tags.myTag]`. The escape character causes Resource -Manager to treat the value as a string when it processes the template. Azure Policy then places the -function into the policy definition, which allows it to be dynamic as expected. For more -information, see -[Syntax and expressions in Azure Resource Manager templates](../../../azure-resource-manager/templates/template-expressions.md). +To pass a function through as part of a policy definition, escape the entire string with `[` such that the property looks like `[[resourceGroup().tags.myTag]`. The escape character causes Resource Manager to treat the value as a string when it processes the template. Azure Policy then places the function into the policy definition, which allows it to be dynamic as expected. For more information, see [Syntax and expressions in Azure Resource Manager templates](../../../azure-resource-manager/templates/template-expressions.md). ## Add-on for Kubernetes installation errors The generated password includes a comma (`,`), which the Helm Chart is splitting #### Resolution -When you run `helm install azure-policy-addon`, escape the comma (`,`) in the password value with a -backslash (`\`). +When you run `helm install azure-policy-addon`, escape the comma (`,`) in the password value with a backslash (`\`). ### Scenario: Installation by using a Helm Chart fails because the name already exists The `helm install azure-policy-addon` command fails, and it returns the followin #### Cause -The Helm Chart with the name `azure-policy-addon` has already been installed or partially installed. +The Helm Chart with the name `azure-policy-addon` was already installed or partially installed. #### Resolution -Follow the instructions to -[remove the Azure Policy for Kubernetes add-on](../concepts/policy-for-kubernetes.md#remove-the-add-on), -then rerun the `helm install azure-policy-addon` command. +Follow the instructions to [remove the Azure Policy for Kubernetes add-on](../concepts/policy-for-kubernetes.md#remove-the-add-on), then rerun the `helm install azure-policy-addon` command. ### Scenario: Azure virtual machine user-assigned identities are replaced by system-assigned managed identities #### Issue -After you assign Guest Configuration policy initiatives to audit settings inside a machine, the -user-assigned managed identities that were assigned to the machine are no longer assigned. Only a -system-assigned managed identity is assigned. +After you assign Guest Configuration policy initiatives to audit settings inside a machine, the user-assigned managed identities that were assigned to the machine are no longer assigned. Only a system-assigned managed identity is assigned. #### Cause -The policy definitions that were previously used in Guest Configuration DeployIfNotExists -definitions ensured that a system-assigned identity is assigned to the machine, but they also -removed the user-assigned identity assignments. +The policy definitions that were previously used in Guest Configuration `deployIfNotExists` definitions ensured that a system-assigned identity is assigned to the machine. But they also removed the user-assigned identity assignments. #### Resolution -The definitions that previously caused this issue appear as _\[Deprecated\]_, and they're replaced -by policy definitions that manage prerequisites without removing user-assigned managed identities. 
A -manual step is required. Delete any existing policy assignments that are marked as -_\[Deprecated\]_, and replace them with the updated prerequisite policy initiative and policy -definitions that have the same name as the original. +The definitions that previously caused this issue appear as `\[Deprecated\]`, and are replaced by policy definitions that manage prerequisites without removing user-assigned managed identities. A manual step is required. Delete any existing policy assignments that are marked as `\[Deprecated\]`, and replace them with the updated prerequisite policy initiative and policy definitions that have the same name as the original. -For a detailed narrative, see the blog post -[Important change released for Guest Configuration audit policies](https://techcommunity.microsoft.com/t5/azure-governance-and-management/important-change-released-for-guest-configuration-audit-policies/ba-p/1655316). +For a detailed narrative, see the blog post [Important change released for Guest Configuration audit policies](https://techcommunity.microsoft.com/t5/azure-governance-and-management/important-change-released-for-guest-configuration-audit-policies/ba-p/1655316). ## Add-on for Kubernetes general errors Ensure that the domains and ports mentioned in the following article are open: #### Issue -The add-on can't reach the Azure Policy service endpoint, and it returns one of the following -errors: +The add-on can't reach the Azure Policy service endpoint, and it returns one of the following errors: - `azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://gov-prod-policy-data.trafficmanager.net/checkDataPolicyCompliance?api-version=2019-01-01-preview: StatusCode=404` - `adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod kube-system/azure-policy-8c785548f-r882p in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>` #### Cause -This error occurs when _aad-pod-identity_ is installed on the cluster and the _kube-system_ pods -aren't excluded in _aad-pod-identity_. +This error occurs when `aad-pod-identity` is installed on the cluster and the _kube-system_ pods aren't excluded in `aad-pod-identity`. -The _aad-pod-identity_ component Node Managed Identity (NMI) pods modify the nodes' iptables to -intercept calls to the Azure instance metadata endpoint. This setup means that any request that's -made to the metadata endpoint is intercepted by NMI, even if the pod doesn't use _aad-pod-identity_. -The _AzurePodIdentityException_ CustomResourceDefinition (CRD) can be configured to inform -_aad-pod-identity_ that any requests to a metadata endpoint that originate from a pod matching the -labels defined in the CRD should be proxied without any processing in NMI. +The `aad-pod-identity` component Node Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure instance metadata endpoint. This setup means that any request made to the metadata endpoint is intercepted by NMI, even if the pod doesn't use `aad-pod-identity`. The `AzurePodIdentityException` CustomResourceDefinition (CRD) can be configured to inform `aad-pod-identity` that any requests to a metadata endpoint that originate from a pod matching the labels defined in the CRD should be proxied without any processing in NMI. 
#### Resolution -Exclude the system pods that have the `kubernetes.azure.com/managedby: aks` label in _kube-system_ -namespace in _aad-pod-identity_ by configuring the _AzurePodIdentityException_ CRD. +Exclude the system pods that have the `kubernetes.azure.com/managedby: aks` label in _kube-system_ namespace in `aad-pod-identity` by configuring the `AzurePodIdentityException` CRD. -For more information, see -[Disable the Azure Active Directory (Azure AD) pod identity for a specific pod/application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception). +For more information, see [Disable the Azure Active Directory (Azure AD) pod identity for a specific pod/application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception). To configure an exception, follow this example: spec: #### Issue -The add-on can reach the Azure Policy service endpoint, but the add-on logs display one of the -following errors: +The add-on can reach the Azure Policy service endpoint, but the add-on logs display one of the following errors: - `The resource provider 'Microsoft.PolicyInsights' is not registered in subscription '{subId}'. See https://aka.ms/policy-register-subscription for how to register subscriptions.` following errors: #### Cause -The 'Microsoft.PolicyInsights' resource provider isn't registered. It must be registered for the -add-on to get policy definitions and return compliance data. +The `Microsoft.PolicyInsights` resource provider isn't registered. It must be registered for the add-on to get policy definitions and return compliance data. #### Resolution -Register the 'Microsoft.PolicyInsights' resource provider in the cluster subscription. For -instructions, see -[Register a resource provider](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). +Register the `Microsoft.PolicyInsights` resource provider in the cluster subscription. For instructions, see [Register a resource provider](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). ### Scenario: The subscription is disabled The add-on can reach the Azure Policy service endpoint, but the following error #### Cause -This error means that the subscription was determined to be problematic, and the feature flag -`Microsoft.PolicyInsights/DataPlaneBlocked` was added to block the subscription. +This error means that the subscription was determined to be problematic, and the feature flag `Microsoft.PolicyInsights/DataPlaneBlocked` was added to block the subscription. #### Resolution To investigate and resolve this issue, [contact the feature team](mailto:azuredg #### Issue -When attempting to create a custom policy definition from the Azure portal page for policy -definitions, you select the "Duplicate definition" button. After assigning the policy, you -find machines are _NonCompliant_ because no guest configuration assignment resource exists. +When attempting to create a custom policy definition from the Azure portal page for policy definitions, you select the **Duplicate definition** button. After assigning the policy, you find machines are _NonCompliant_ because no guest configuration assignment resource exists. #### Cause -Guest configuration relies on custom metadata added to policy definitions when -creating guest configuration assignment resources. The "Duplicate definition" activity in -the Azure portal does not copy custom metadata. 
+Guest configuration relies on custom metadata added to policy definitions when creating guest configuration assignment resources. The _Duplicate definition_ activity in the Azure portal doesn't copy custom metadata. #### Resolution New-AzPolicyDefinition -name (new-guid).guid -DisplayName "$($def.DisplayName) ( #### Issue -In the event of a Kubernetes cluster connectivity failure, evaluation for newly created or updated resources may be bypassed due to Gatekeeper's fail-open behavior. +If there's a Kubernetes cluster connectivity failure, evaluation for newly created or updated resources might be bypassed due to Gatekeeper's fail-open behavior. #### Cause The GK fail-open model is by design and based on community feedback. Gatekeeper #### Resolution -In the above event, the error case can be monitored from the [admission webhook metrics](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhook-metrics) provided by the kube-apiserver. And even if evaluation is bypassed at creation time and an object is created, it will still be reported on Azure Policy compliance as non-compliant as a flag to customers. +In the prior event, the error case can be monitored from the [admission webhook metrics](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhook-metrics) provided by the `kube-apiserver`. If evaluation is bypassed at creation time and an object is created, it's reported on Azure Policy compliance as non-compliant as a flag to customers. -Regardless of the above, in such a scenario, Azure policy will still retain the last known policy on the cluster and keep the guardrails in place. +Regardless of the scenario, Azure policy retains the last known policy on the cluster and keeps the guardrails in place. ## Next steps -If your problem isn't listed in this article or you can't resolve it, get support by visiting one of -the following channels: --- Get answers from experts through- [Microsoft Q&A](/answers/topics/azure-policy.html). -- Connect with [@AzureSupport](https://twitter.com/azuresupport). This official Microsoft Azure- resource on Twitter helps improve the customer experience by connecting the Azure community to the - right answers, support, and experts. -- If you still need help, go to the- [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support - request**. +If your problem isn't listed in this article or you can't resolve it, get support by visiting one of the following channels: ++- Get answers from experts through [Microsoft Q&A](/answers/topics/azure-policy.html). +- Connect with [@AzureSupport](https://twitter.com/azuresupport). This official Microsoft Azure resource on Twitter helps improve the customer experience by connecting the Azure community to the right answers, support, and experts. +- If you still need help, go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support ticket**. |
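Two of the scenarios covered in this troubleshooting article, the unregistered `Microsoft.PolicyInsights` resource provider and compliance results that aren't up to date, can be addressed directly from the Azure CLI. A minimal sketch with a placeholder resource group name:

```azurecli
# Check and register the Microsoft.PolicyInsights resource provider required for compliance data
az provider show --namespace Microsoft.PolicyInsights --query registrationState --output tsv
az provider register --namespace Microsoft.PolicyInsights

# Trigger an on-demand compliance evaluation instead of waiting for the standard 24-hour scan
az policy state trigger-scan --resource-group myResourceGroup
```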
governance | First Query Azurecli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-azurecli.md | Title: "Quickstart: Run Resource Graph query using Azure CLI" description: In this quickstart, you run a Resource Graph query using Azure CLI and the resource-graph extension. Previously updated : 06/26/2024 Last updated : 06/27/2024 This quickstart describes how to run an Azure Resource Graph query using the Azu - [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension. - A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session. -## Connect to Azure --From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID. --```azurecli -az login --# Run these commands if you have multiple subscriptions -az account list --output table -az account set --subscription <subscriptionID> -``` - ## Install the extension To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph` a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation. To enable Azure CLI to query resources using Azure Resource Graph, the Resource For more information about Azure CLI extensions, go to [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). +## Connect to Azure ++From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID. ++```azurecli +az login ++# Run these commands if you have multiple subscriptions +az account list --output table +az account set --subscription <subscriptionID> +``` + ## Run a query After the Azure CLI extension is added to your environment, you can run a tenant-based query. The query in this example returns five Azure resources with the `name` and `type` of each resource. To query by [management group](../management-groups/overview.md) or subscription, use the `--management-groups` or `--subscriptions` arguments. az logout ## Next steps -In this quickstart, you ran Azure Resource Graph queries using the extension for Azure CLI. To learn more, go to the query language details article. +In this quickstart, you ran Azure Resource Graph queries using the extension for Azure CLI. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"] > [Understanding the Azure Resource Graph query language](./concepts/query-language.md) |
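Building on the tenant-scoped query in this quickstart, ordering and limiting can be combined either inside the query text itself or with the extension's `--first` argument. A minimal sketch (the query text is only an example):

```azurecli
# Return the first five resources, ordered by name, showing only the name and type columns
az graph query -q "Resources | project name, type | order by name asc" --first 5 --output table
```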
governance | First Query Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md | Title: "Quickstart: Run Resource Graph query using Azure PowerShell" description: In this quickstart, you run an Azure Resource Graph query using the module for Azure PowerShell. Previously updated : 04/24/2024 Last updated : 06/27/2024 # Quickstart: Run Resource Graph query using Azure PowerShell -This quickstart describes how to run an Azure Resource Graph query using the `Az.ResourceGraph` module for Azure PowerShell. The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions. When you're finished, you can remove the module. +This quickstart describes how to run an Azure Resource Graph query using the `Az.ResourceGraph` module for Azure PowerShell. The module is included with the latest version of Azure PowerShell and adds [cmdlets](/powershell/module/az.resourcegraph) for Resource Graph. ++The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions. ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- [PowerShell](/powershell/scripting/install/installing-powershell).-- [Azure PowerShell](/powershell/azure/install-azure-powershell).+- Latest versions of [PowerShell](/powershell/scripting/install/installing-powershell) and [Azure PowerShell](/powershell/azure/install-azure-powershell). - [Visual Studio Code](https://code.visualstudio.com/). ## Install the module -Install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher. If you installed the latest versions of PowerShell and Azure PowerShell, you already have the required version. +If you installed the latest versions of PowerShell and Azure PowerShell, you already have the `Az.ResourceGraph` module and required version of PowerShellGet. ++### Optional module installation ++Use the following steps to install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher. 1. Verify your PowerShellGet version: If a query doesn't return results from a subscription you already have access to ## Clean up resources +To sign out of your Azure PowerShell session: ++```azurepowershell +Disconnect-AzAccount +``` ++### Optional clean up steps ++If you installed the latest version of Azure PowerShell, the `Az.ResourceGraph` module is included and shouldn't be removed. The following steps are optional if you did a manual install of the `Az.ResourceGraph` module and want to remove the module. + To remove the `Az.ResourceGraph` module from your PowerShell session, run the following command: ```azurepowershell Uninstall-Module -Name Az.ResourceGraph A message might be displayed that _module Az.ResourceGraph is currently in use_. If so, you need to shut down your PowerShell session and start a new session. Then run the command to uninstall the module from your computer. 
-To sign out of your Azure PowerShell session: --```azurepowershell -Disconnect-AzAccount -``` - ## Next steps In this quickstart, you added the Resource Graph module to your Azure PowerShell environment and ran a query. To learn more, go to the query language details page. |
governance | Shared Query Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-cli.md | Title: "Quickstart: Create Resource Graph shared query using Azure CLI" description: In this quickstart, you create an Azure Resource Graph shared query using Azure CLI and the resource-graph extension. Previously updated : 06/26/2024 Last updated : 06/27/2024 A shared query can be run from Azure CLI with the _experimental_ feature's comma - [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension. - A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session. -## Connect to Azure --From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID. --```azurecli -az login --# Run these commands if you have multiple subscriptions -az account list --output table -az account set --subscription <subscriptionID> -``` - ## Install the extension To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph`, a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation. To enable Azure CLI to query resources using Azure Resource Graph, the Resource For more information about Azure CLI extensions, go to [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview). +## Connect to Azure ++From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID. ++```azurecli +az login ++# Run these commands if you have multiple subscriptions +az account list --output table +az account set --subscription <subscriptionID> +``` + ## Create a shared query Create a resource group and a shared query that summarizes the count of all resources grouped by location. You can verify the shared query works using Azure Resource Graph Explorer. To ch 1. Change **Type** to _Shared queries_. 1. Select the query _Summarize resources by location_. 1. Select **Run query** and view the output in the **Results** tab.+1. Select **Charts** and then select **Map** to view the location map. You can also run the query from your resource group. 1. In Azure, go to the resource group, _demoSharedQuery_. 1. From the **Overview** tab, select the query _Summarize resources by location_. 1. Select the **Results** tab.+1. Select **Charts** and then select **Map** to view the location map. ## Clean up resources -To remove the resource group and shared query: +To remove the shared query: ++```azurecli +az graph shared-query delete --name "Summarize resources by location" --resource-group demoSharedQuery +``` ++When a resource group is deleted, the resource group and all its resources are deleted. To remove the resource group: ```azurecli az group delete --name demoSharedQuery az logout ## Next steps -In this quickstart, you added the Resource Graph extension to your Azure CLI environment and -created a shared query. To learn more about the Resource Graph language, continue to the query -language details page. 
+In this quickstart, you added the Resource Graph extension to your Azure CLI environment and created a shared query. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"] > [Understanding the Azure Resource Graph query language](./concepts/query-language.md) |
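The resource group and shared query creation commands for the Azure CLI quickstart above are elided from this digest. A minimal sketch of those steps, assuming the `demoSharedQuery` resource group name used in the article's other commands; the exact parameter names of the experimental `az graph shared-query` command group can vary by extension version.

```azurecli
# Create a resource group to hold the shared query (the location is an assumption)
az group create --name demoSharedQuery --location westus2

# Create a shared query that summarizes the count of all resources grouped by location
az graph shared-query create --name "Summarize resources by location" \
  --resource-group demoSharedQuery \
  --description "Count of all resources grouped by location" \
  --graph-query "Resources | summarize count() by location"
```

You can confirm the query exists with `az graph shared-query list --resource-group demoSharedQuery` before running it in Azure Resource Graph Explorer.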
governance | Shared Query Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-powershell.md | Title: 'Quickstart: Create a shared query with Azure PowerShell' -description: In this quickstart, you follow the steps to create a Resource Graph shared query using Azure PowerShell. Previously updated : 11/09/2022+ Title: "Quickstart: Create a Resource Graph shared query using Azure PowerShell" +description: In this quickstart, you create a Resource Graph shared query using Azure PowerShell. Last updated : 06/27/2024 + # Quickstart: Create a Resource Graph shared query using Azure PowerShell -This article describes how you can create an Azure Resource Graph shared query using the -[Az.ResourceGraph](/powershell/module/az.resourcegraph) PowerShell module. +In this quickstart, you create an Azure Resource Graph shared query using the `Az.ResourceGraph` Azure PowerShell module. The module is included with the latest version of Azure PowerShell and adds [cmdlets](/powershell/module/az.resourcegraph) for Resource Graph. ++A shared query can be run from Azure PowerShell or from the Azure portal. A shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. When you finish, you can optionally remove the `Az.ResourceGraph` module. ## Prerequisites -- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account-before you begin. +- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- Latest versions of [PowerShell](/powershell/scripting/install/installing-powershell) and [Azure PowerShell](/powershell/azure/install-azure-powershell). +- [Visual Studio Code](https://code.visualstudio.com/). ++## Install the module +If you installed the latest versions of PowerShell and Azure PowerShell, you already have the `Az.ResourceGraph` module and required version of PowerShellGet. - > [!IMPORTANT] - > While the **Az.ResourceGraph** PowerShell module is in preview, you must install it separately - > using the `Install-Module` cmdlet. +### Optional module installation - ```azurepowershell-interactive - Install-Module -Name Az.ResourceGraph -Scope CurrentUser -Repository PSGallery -Force - ``` +Use the following steps to install the `Az.ResourceGraph` module so that you can use Azure PowerShell to run Azure Resource Graph queries. The Azure Resource Graph module requires PowerShellGet version 2.0.1 or higher. -- If you have multiple Azure subscriptions, choose the appropriate subscription in which the- resources should be billed. Select a specific subscription using the - [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. +1. Verify your PowerShellGet version: - ```azurepowershell-interactive - Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 - ``` + ```azurepowershell + Get-Module -Name PowerShellGet + ``` -## Create a Resource Graph shared query + If you need to update, go to [PowerShellGet](/powershell/gallery/powershellget/install-powershellget). -With the **Az.ResourceGraph** PowerShell module added to your environment of choice, it's time to create -a Resource Graph shared query. The shared query is an Azure Resource Manager object that you can -grant permission to or run in Azure Resource Graph Explorer. 
The query summarizes the count of all -resources grouped by _location_. +1. Install the module: ++ ```azurepowershell + Install-Module -Name Az.ResourceGraph -Repository PSGallery -Scope CurrentUser + ``` -1. Create a resource group with - [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) to store the Azure - Resource Graph shared query. This resource group is named `resource-graph-queries` and the - location is `westus2`. + The command installs the module in the `CurrentUser` scope. If you need to install in the `AllUsers` scope, run the installation from an administrative PowerShell session. - ```azurepowershell-interactive - # Login first with `Connect-AzAccount` if not using Cloud Shell +1. Verify the module was installed: - # Create the resource group - New-AzResourceGroup -Name resource-graph-queries -Location westus2 + ```azurepowershell + Get-Command -Module Az.ResourceGraph -CommandType Cmdlet ``` -1. Create the Azure Resource Graph shared query using the **Az.ResourceGraph** PowerShell module and - [New-AzResourceGraphQuery](/powershell/module/az.resourcegraph/new-azresourcegraphquery) - cmdlet: + The command displays the `Search-AzGraph` cmdlet version and loads the module into your PowerShell session. ++## Connect to Azure ++From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID. ++```azurepowershell +Connect-AzAccount ++# Run these commands if you have multiple subscriptions +Get-AzSubScription +Set-AzContext -Subscription <subscriptionID> +``` - ```azurepowershell-interactive - # Create the Azure Resource Graph shared query - $Params = @{ +## Create a shared query ++The shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. The query summarizes the count of all resources grouped by location. ++1. Create a resource group to store the Azure Resource Graph shared query. ++ ```azurepowershell + New-AzResourceGroup -Name demoSharedQuery -Location westus2 + ``` ++1. Create the Azure Resource Graph shared query. ++ ```azurepowershell + $params = @{ Name = 'Summarize resources by location'- ResourceGroupName = 'resource-graph-queries' + ResourceGroupName = 'demoSharedQuery' Location = 'westus2' Description = 'This shared query summarizes resources by location for a pinnable map graphic.' Query = 'Resources | summarize count() by location' }- New-AzResourceGraphQuery @Params ++ New-AzResourceGraphQuery @params ``` -1. List the shared queries in the new resource group. The - [Get-AzResourceGraphQuery](/powershell/module/az.resourcegraph/get-azresourcegraphquery) - cmdlet returns an array of values. + The `$params` variable uses PowerShell [splatting](/powershell/module/microsoft.powershell.core/about/about_splatting) to improve readability for the parameter values used in the command to create the shared query. - ```azurepowershell-interactive - # List all the Azure Resource Graph shared queries in a resource group - Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries +1. List all shared queries in the resource group. ++ ```azurepowershell + Get-AzResourceGraphQuery -ResourceGroupName demoSharedQuery ``` -1. To get just a single shared query result, use `Get-AzResourceGraphQuery` with its `Name` parameter. +1. Limit the results to a specific shared query. 
- ```azurepowershell-interactive - # Show a specific Azure Resource Graph shared query - Get-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location' + ```azurepowershell + Get-AzResourceGraphQuery -ResourceGroupName demoSharedQuery -Name 'Summarize resources by location' ``` +## Run the shared query ++You can verify the shared query works using Azure Resource Graph Explorer. To change the scope, use the **Scope** menu on the left side of the page. ++1. Sign in to [Azure portal](https://portal.azure.com). +1. Enter _resource graph_ into the search field at the top of the page. +1. Select **Resource Graph Explorer**. +1. Select **Open query**. +1. Change **Type** to _Shared queries_. +1. Select the query _Summarize resources by location_. +1. Select **Run query** and view the output in the **Results** tab. +1. Select **Charts** and then select **Map** to view the location map. ++You can also run the query from your resource group. ++1. In Azure, go to the resource group, _demoSharedQuery_. +1. From the **Overview** tab, select the query _Summarize resources by location_. +1. Select the **Results** tab to view a list. +1. Select **Charts** and then select **Map** to view the location map. + ## Clean up resources -If you wish to remove the Resource Graph shared query and resource group from your Azure -environment, you can do so by using the following commands: +When you finish, you can remove the Resource Graph shared query and resource group from your Azure environment. When a resource group is deleted, the resource group and all its resources are deleted. ++Remove the shared query: ++```azurepowershell +Remove-AzResourceGraphQuery -ResourceGroupName demoSharedQuery -Name 'Summarize resources by location' +``` ++Delete the resource group: ++```azurepowershell +Remove-AzResourceGroup -Name demoSharedQuery +``` ++To sign out of your Azure PowerShell session: ++```azurepowershell +Disconnect-AzAccount +``` ++### Optional clean up steps -If you installed the latest version of Azure PowerShell, the `Az.ResourceGraph` module is included and shouldn't be removed. The following steps are optional if you did a manual install of the `Az.ResourceGraph` module and want to remove the module. -- [Remove-AzResourceGraphQuery](/powershell/module/az.resourcegraph/remove-azresourcegraphquery)-- [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup)+If you installed the latest version of Azure PowerShell, the `Az.ResourceGraph` module is included and shouldn't be removed. The following steps are optional if you did a manual install of the `Az.ResourceGraph` module and want to remove the module. -```azurepowershell-interactive -# Delete the Azure Resource Graph shared query -Remove-AzResourceGraphQuery -ResourceGroupName resource-graph-queries -Name 'Summarize resources by location' +To remove the `Az.ResourceGraph` module from your PowerShell session, run the following command: -# Remove the resource group -# WARNING: This command deletes ALL resources you've added to this resource group -Remove-AzResourceGroup -Name resource-graph-queries +```azurepowershell +Remove-Module -Name Az.ResourceGraph ``` +To uninstall the `Az.ResourceGraph` module from your computer, run the following command: ++```azurepowershell +Uninstall-Module -Name Az.ResourceGraph +``` ++A message might be displayed that _module Az.ResourceGraph is currently in use_. If so, you need to shut down your PowerShell session and start a new session. Then run the command to uninstall the module from your computer. + ## Next steps -In this quickstart, you've created a Resource Graph shared query using Azure PowerShell. To learn -more about the Resource Graph language, continue to the query language details page. 
+In this quickstart, you created a Resource Graph shared query using Azure PowerShell. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"]-> [Get more information about the query language](./concepts/query-language.md) +> [Understanding the Azure Resource Graph query language](./concepts/query-language.md) |
governance | Shared Query Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-bicep.md | Create a resource group and deploy the Bicep file with Azure CLI or Azure PowerS # [Azure CLI](#tab/azure-cli) ```azurecli-az group create --name exampleRG --location eastus -az deployment group create --resource-group exampleRG --template-file main.bicep +az group create --name demoSharedQuery --location eastus +az deployment group create --resource-group demoSharedQuery --template-file main.bicep ``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-New-AzResourceGroup -Name exampleRG -Location eastus -New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile main.bicep +New-AzResourceGroup -Name demoSharedQuery -Location eastus +New-AzResourceGroupDeployment -ResourceGroupName demoSharedQuery -TemplateFile main.bicep ``` Use Azure CLI or Azure PowerShell to list the deployed resources in the resource # [Azure CLI](#tab/azure-cli) ```azurecli-az resource list --resource-group exampleRG +az resource list --resource-group demoSharedQuery ``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-Get-AzResource -ResourceGroupName exampleRG +Get-AzResource -ResourceGroupName demoSharedQuery ``` You can verify the shared query works using Azure Resource Graph Explorer. To ch You can also run the query from your resource group. -1. In Azure, go to the resource group, _exampleRG_. +1. In Azure, go to the resource group, _demoSharedQuery_. 1. From the **Overview** tab, select the query _Count VMs by OS_. 1. Select the **Results** tab. ## Clean up resources -When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell. And if you signed into Azure portal to run the query, be sure to sign out. +When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell. When a resource group is deleted, the resource group and all its resources are deleted. And if you signed into Azure portal to run the query, be sure to sign out. # [Azure CLI](#tab/azure-cli) ```azurecli-az group delete --name exampleRG +az group delete --name demoSharedQuery ``` To sign out of your Azure CLI session: az logout # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-Remove-AzResourceGroup -Name exampleRG +Remove-AzResourceGroup -Name demoSharedQuery ``` To sign out of your Azure PowerShell session: Disconnect-AzAccount ## Next steps -In this quickstart, you created a Resource Graph shared query using Bicep. --To learn more about shared queries, continue to the tutorial for: +In this quickstart, you created a Resource Graph shared query using Bicep. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"]-> [Tutorial: Create and share an Azure Resource Graph query in the Azure portal](./tutorials/create-share-query.md) +> [Understanding the Azure Resource Graph query language](./concepts/query-language.md) |
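Besides listing the deployed resources with `az resource list`, the shared query that the Bicep file deploys can be inspected directly with the resource-graph extension. A sketch, assuming the query name shown in the article's verification steps and the `demoSharedQuery` resource group used in the deployment commands:

```azurecli
# Show the shared query deployed by main.bicep (query name assumed from the verification steps)
az graph shared-query show --name "Count VMs by OS" --resource-group demoSharedQuery
```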
governance | Shared Query Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-template.md | Title: 'Quickstart: Create Resource Graph shared query using ARM template' + Title: "Quickstart: Create Resource Graph shared query using ARM template" description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a Resource Graph shared query that counts virtual machines by OS. Last updated 06/26/2024 To remove the shared query created, follow these steps: ## Next steps -In this quickstart, you created a Resource Graph shared query. --To learn more about shared queries, continue to the tutorial for: +In this quickstart, you created a Resource Graph shared query. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"]-> [Manage queries in Azure portal](./tutorials/create-share-query.md) +> [Understanding the Azure Resource Graph query language](./concepts/query-language.md) |
hdinsight | Hdinsight Management Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-management-ip-addresses.md | description: Learn which IP addresses you must allow inbound traffic from, in or Previously updated : 07/12/2023 Last updated : 06/28/2024 # HDInsight management IP addresses Allow traffic from the following IP addresses for Azure HDInsight health and man Allow traffic from the IP addresses listed for the Azure HDInsight health and management services in the specific Azure region where your resources are located, refer to the following note: > [!IMPORTANT] -> We recommend to use [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region specific service tags, please refer the [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/confirmation.aspx?id=56519) +> We recommend using the [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region-specific service tags, refer to the [Azure IP Ranges and Service Tags – Public Cloud](https://download.microsoft.com/download/7/1/D/71D86715-5596-4529-9B13-DA13A5DE5B63/ServiceTags_Public_20240624.json) For information on the IP addresses to use for Azure Government, see the [Azure Government Intelligence + Analytics](../azure-government/compare-azure-government-global-azure.md) document. |
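The note above recommends service tags for network security groups. A minimal sketch of an inbound NSG rule that uses the HDInsight service tag for the health and management traffic this article covers; the resource group, NSG name, rule name, priority, and port 443 are assumptions to adapt to your deployment.

```azurecli
# Allow inbound traffic from the HDInsight health and management services by using the service tag
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myHdiNsg \
  --name AllowHDInsightManagement \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes HDInsight \
  --destination-port-ranges 443
```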
hdinsight | Subscribe To Hdi Release Notes Repo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/subscribe-to-hdi-release-notes-repo.md | Title: Subscribe to GitHub release notes repo description: Learn how to subscribe to GitHub release notes repo Previously updated : 06/15/2023 Last updated : 06/28/2024 # Subscribe to HDInsight release notes GitHub repo |
iot-edge | Configure Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-device.md | Title: Configure Azure IoT Edge device settings description: This article shows you how to configure Azure IoT Edge device settings and options using the config.toml file. Previously updated : 05/06/2024 Last updated : 06/27/2024 auto_generated_edge_ca_expiry_days = 90 This setting manages autorenewal of the Edge CA certificate. Autorenewal applies when the Edge CA is configured as *quickstart* or when the Edge CA has an issuance `method` set. Edge CA certificates loaded from files generally can't be autorenewed as the Edge runtime doesn't have enough information to renew them. > [!IMPORTANT]-> Renewal of an Edge CA requires all server certificates issued by that CA to be regenerated. This regeneration is done by restarting all modules. The time of Edge CA renewal can't be guaranteed. If random module restarts are unacceptable for your use case, disable autorenewal. +> Renewal of an Edge CA requires all server certificates issued by that CA to be regenerated. This regeneration is done by restarting all modules. The time of Edge CA renewal can't be guaranteed. If random module restarts are unacceptable for your use case, disable autorenewal by not including the [edge_ca.auto_renew] section. ```toml [edge_ca.auto_renew] |
iot-hub-device-update | Device Update Region Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-region-mapping.md | -# Regional failover mapping for Device Update for IoT Hub --In cases where an Azure region is unavailable due to an outage, Device Update for IoT Hub supports business continuity and disaster recovery (BCDR) efforts with regional failover pairings. During an outage, data contained in the update files submitted to the Device Update service may be sent to a secondary Azure region. This failover enables Device Update to continue scanning update files for malware and making the updates available on the service endpoints. --## Failover region mapping --| Region name | Fails over to -| | | -| North Europe | West Europe | -| West Europe | North Europe | -| UK South | North Europe | -| Sweden Central | North Europe | -| East US | West US 2 | -| East US 2 | West US 2 | -| West US 2 | East US | -| West US 3 | East US | -| South Central US | East US | -| East US 2 (EUAP) | West US 2 | -| Australia East | Southeast Asia | -| Southeast Asia | Australia East | +# Device Update for IoT Hub regional mapping for scan and failover ++When you're importing an update into the Device Update for IoT Hub service, that update content may be processed within different Azure regions. The region used for processing depends on the region that your Device Update Instance was created in. ++## Anti-malware scan ++When you're using the Azure portal for importing your update, there's now an option to enable anti-malware scan. If you select the option to enable anti-malware scan, your update is sent to the Azure region that corresponds to the "Default scan region" column table in the **Region mapping for default and failover cases** section. If you don't select the option to enable anti-malware scan, your update is processed in the same region as your Device Update Instance, but it isn't scanned for malware. **Optional anti-malware scan is in Public Preview**. ++If you're using the Azure CLI or directly calling Device Update APIs, your update isn't scanned for malware during the import process. It's processed in the same region as your Device Update Instance. ++## Failover and BCDR ++As an exception to the previous section, in cases where an Azure region is unavailable due to an outage, Device Update for IoT Hub supports business continuity and disaster recovery (BCDR) efforts with regional failover pairings. During an outage, data contained in the update files submitted to the Device Update service may be sent to a secondary Azure region for processing. This failover enables Device Update to continue scanning update files for malware if you select that option. ++## Region mapping for default and failover cases +++| Device Update Instance region|Default scan region|Failover scan region | +| -- | -- | -- | +| North Europe | North Europe | Sweden Central | +|West Europe | North Europe | Sweden Central | +| UK South| North Europe | Sweden Central | +|Sweden Central|Sweden Central| North Europe | +|East US| East US |East US 2 | +|East US 2| East US 2 |East US | +|West US 2|West US 2| East US 2 | +|West US 3| West US 2| East US 2 | +|South Central US|West US 2| East US 2 | +|East US 2 (EUAP)|East US 2| East US| +|Australia East|North Europe| Sweden Central| +|Southeast Asia | West US 2| East US 2 | ## Next steps |
iot-hub | Iot Hub Devguide Messages Read Builtin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md | By default, messages are routed to the built-in service-facing endpoint (**messa If you're using message routing and the [fallback route](iot-hub-devguide-messages-d2c.md#fallback-route) is enabled, a message that doesn't match a query on any route goes to the built-in endpoint. If you disable this fallback route, a message that doesn't match any query is dropped. -This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**. +This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hubs-compatible messaging endpoint **messages/events**. | Property | Description | | - | -- | | **Partition count** | Set this property at creation to define the number of [partitions](../event-hubs/event-hubs-features.md#partitions) for device-to-cloud event ingestion. |-| **Retention time** | This property specifies how long in days messages are retained by IoT Hub. The default is one day, but it can be increased to seven days. | +| **Retention time** | This property specifies how long in days IoT Hub retains messages. The default is one day, but it can be increased to seven days. | -IoT Hub allows data retention in the built-in endpoint for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in endpoint can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they may be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time has passed, messages expire and become inaccessible. You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the [Azure portal](https://portal.azure.com). +IoT Hub allows data retention in the built-in endpoint for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in endpoint can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they might be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time, messages expire and become inaccessible. 
You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the Azure portal. IoT Hub also enables you to manage consumer groups on the built-in endpoint. You can have up to 20 consumer groups for each IoT hub. IoT Hub also enables you to manage consumer groups on the built-in endpoint. You Some product integrations and Event Hubs SDKs are aware of IoT Hub and let you use your IoT hub service connection string to connect to the built-in endpoint. -When you use Event Hubs SDKs or product integrations that are unaware of IoT Hub, you need an Event Hub-compatible endpoint and Event Hub-compatible name. You can retrieve these values from the portal as follows: +When you use Event Hubs SDKs or product integrations that are unaware of IoT Hub, you need an Event Hubs-compatible endpoint and Event Hubs-compatible name. You can retrieve these values from the portal as follows: 1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your IoT hub. You can then choose any shared access policy from the **Shared access policy** d ## SDK samples -The SDKs you can use to connect to the built-in Event Hub-compatible endpoint that IoT Hub exposes include: +The SDKs you can use to connect to the built-in Event Hubs-compatible endpoint that IoT Hub exposes include: | Language | SDK | Example | | -- | | - | | .NET | https://www.nuget.org/packages/Azure.Messaging.EventHubs | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/getting%20started/ReadD2cMessages) |-| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | | +| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | [read-d2c-messages Java](https://github.com/Azure/azure-iot-service-sdk-java/tree/main/service/iot-service-samples/read-d2c-messages) | | Node.js | https://www.npmjs.com/package/@azure/event-hubs | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) |-| Python | https://pypi.org/project/azure-eventhub/ | [read-dec-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) | +| Python | https://pypi.org/project/azure-eventhub/ | [read-d2c-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) | -The product integrations you can use with the built-in Event Hub-compatible endpoint that IoT Hub exposes include: +## Connect to other services and products ++The product integrations you can use with the built-in Event Hubs-compatible endpoint that IoT Hub exposes include: * [Azure Functions](../azure-functions/index.yml) |
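If you prefer not to use the portal steps referenced above, the Event Hubs-compatible endpoint and name are also exposed on the IoT Hub resource itself. A sketch using the Azure CLI, assuming a hub named `myIotHub`; the property path reflects the IoT Hub resource provider's `eventHubEndpoints` property.

```azurecli
# Show the built-in endpoint's Event Hubs-compatible endpoint, name (path), retention, and partition count
az iot hub show --name myIotHub \
  --query "properties.eventHubEndpoints.events.{endpoint:endpoint, name:path, retentionDays:retentionTimeInDays, partitions:partitionCount}" \
  --output table
```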
key-vault | Quick Create Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md | -Get started with the Azure Key Vault certificate client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library +Get started with the Azure Key Vault certificate client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library. Key Vault client library resources: For more information about Key Vault and certificates, see: * [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md). -This quickstart is using `dotnet` and Azure CLI +This quickstart is using `dotnet` and Azure CLI. ## Setup dotnet add package Azure.Identity #### Set environment variables -This application is using key vault name as an environment variable called `KEY_VAULT_NAME`. +The application obtains the key vault name from an environment variable called `KEY_VAULT_NAME`. Windows ```cmd |
lab-services | Account Setup Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/account-setup-guide.md | You might want to create your images in your physical environment and then impor If you decide to use the Shared Image Gallery service, you'll need to create or attach a shared image gallery to your lab account. You can postpone this decision for now, because a shared image gallery can be attached to a lab account at any time. For more information, see:+ - The "Shared image gallery" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#shared-image-gallery) - The "Pricing" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#pricing) When you set up a lab account, you also can peer your lab account with a virtual After you've finished planning, you're ready to set up your lab account. You can apply the same steps to setting up [Azure Lab Services in Teams](./lab-services-within-teams-overview.md). -1. **Create your lab account**. For instructions, see [Create a lab account](./tutorial-setup-lab-account.md#create-a-lab-account). +1. **Create your lab account**. For instructions, see [Create a lab account](how-to-create-lab-accounts.md). For information about naming conventions, see the "Naming" section of [Azure Lab Services - Administrator guide](./administrator-guide-1.md#naming). -1. **Add users to the Lab Creator role**. For instructions, see [Add users to the Lab Creator role](./tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role). +1. **Add users to the Lab Creator role**. For instructions, see [Add a user to the Lab Creator role](how-to-add-lab-creator.md). 1. **Connect to a peer virtual network**. For instructions, see [Connect your lab network with a peer virtual network](./how-to-connect-peer-virtual-network.md). |
lab-services | Administrator Guide 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide-1.md | Last updated 10/20/2020 [!INCLUDE [lab account focused article](./includes/lab-services-labaccount-focused-article.md)] -Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab account for their school. After they've set up a lab account, administrators or educators create the labs that are contained within the account. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them. +Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab account for their school. After they set up a lab account, administrators or educators create the labs that are contained within the account. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them. ![Diagram of a high-level view of Azure resources in a lab account.](./media/administrator-guide/high-level-view.png) -- Labs are hosted within an Azure subscription that's owned by Azure Lab Services.+- Labs are hosted within an Azure subscription managed by Azure Lab Services. - Lab accounts, a shared image gallery, and image versions are hosted within your subscription. - You can have your lab account and the shared image gallery in the same resource group. In this diagram, they are in different resource groups. The relationship between a lab account and its subscription is important because - Billing is reported through the subscription that contains the lab account. - You can grant users in the subscription's Microsoft Entra tenant access to Azure Lab Services. You can add a user as a lab account Owner or Contributor, or as a Lab Creator or lab Owner. -Labs and their virtual machines (VMs) are managed and hosted for you within a subscription that's owned by Azure Lab Services. +Labs and their virtual machines (VMs) are managed and hosted for you within a subscription managed by Azure Lab Services. ## Resource group -A subscription contains one or more resource groups. Resource groups are used to create logical groupings of Azure resources that are used together within the same solution. +A subscription contains one or more resource groups. Resource groups are used to create logical groupings of Azure resources that are used together within the same solution. When you create a lab account, you must configure the resource group that contains the lab account. A resource group is also required when you create a [shared image gallery](#shared-image-gallery). You can place your lab account and shared image gallery in the same resource group or in two separate resource groups. You might want to take this second approach if you plan to share the image gallery across various solutions. -When you create a lab account, you can automatically create and attach a shared image gallery at the same time. This option results in the lab account and the shared image gallery being created in separate resource groups. You'll see this behavior when you follow the steps that are described in the [Configure shared image gallery at the time of lab account creation](how-to-attach-detach-shared-image-gallery-1.md#configure-at-the-time-of-lab-account-creation) tutorial. The image at the beginning of this article uses this configuration. 
+When you create a lab account, you can automatically create and attach a shared image gallery at the same time. This option results in the lab account and the shared image gallery being created in separate resource groups. You see this behavior when you follow the steps that are described in the [Configure shared image gallery at the time of lab account creation](how-to-attach-detach-shared-image-gallery-1.md#configure-at-the-time-of-lab-account-creation) tutorial. The image at the beginning of this article uses this configuration. -We recommend that you invest time up front to plan the structure of your resource groups, because it's *not* possible to change a lab account or shared image gallery resource group once it's created. If you need to change the resource group for these resources, you'll need to delete and re-create your lab account or shared image gallery. +We recommend that you invest time up front to plan the structure of your resource groups. It's *not* possible to change a lab account or shared image gallery resource group after it's created. If you need to change the resource group for these resources, you need to delete and re-create your lab account or shared image gallery. ## Lab account The following list highlights scenarios where more than one lab account might be - **Assign a separate budget to each lab account** - Instead of reporting all lab costs through a single lab account, you might need a more clearly apportioned budget. For example, you can create separate lab accounts for your university's Math department, Computer Science department, and so forth, to distribute the budget across departments. You can then view the cost for each individual lab account by using [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md). + Instead of reporting all lab costs through a single lab account, you might need a more clearly apportioned budget. For example, you can create separate lab accounts for your university's Math department, Computer Science department, and so forth, to distribute the budget across departments. You can then view the cost for each individual lab account by using [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md). - **Isolate pilot labs from active or production labs** The following list highlights scenarios where more than one lab account might be ## Lab -A lab contains VMs that are each assigned to a single student. In general, you can expect to: +A lab contains VMs that are each assigned to a single student. In general, you can expect to: - Have one lab for each class. - Create a new set of labs for each semester, quarter, or other academic system you're using. For classes that need to use the same image, you should use a [shared image gallery](#shared-image-gallery). This way, you can reuse images across labs and academic periods. When you're determining how to structure your labs, consider the following point - **The usage quota is set at the lab level and applies to all users within the lab** - To set different quotas for users, you must create separate labs. However, it's possible to add more hours to specific users after you've set the quota. + To set different quotas for users, you must create separate labs. However, it's possible to add more hours to specific users after you set the quota for the lab. 
- **The startup or shutdown schedule is set at the lab level and applies to all VMs within the lab** Similar to quota setting, if you need to set different schedules for users, you need to create a separate lab for each schedule. -By default, each lab has its own virtual network. If you have virtual network peering enabled, each lab will have its own subnet peered with the specified virtual network. +By default, each lab has its own virtual network. If you have virtual network peering enabled, each lab has its own subnet peered with the specified virtual network. ## Shared image gallery -A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new image definitions and\or versions are created in the gallery. +A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new image definitions and\or versions are created in the gallery. -Educators can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions). +Educators can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions). The shared image gallery service is an optional resource that you might not need immediately if you're starting with only a few labs. However, shared image gallery offers many benefits that are helpful as you scale up to more labs: - **You can save and manage versions of a template VM image** - It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs. + It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs. 
- **You can share and reuse template VM images across labs** The shared image gallery service is an optional resource that you might not need - **You can upload your own custom images from other environments outside of labs** - You can [upload custom images other environments outside of the context of labs](how-to-attach-detach-shared-image-gallery-1.md). For example, you can upload images from your own physical lab environment or from an Azure VM into shared image gallery. Once an image is imported into the gallery, you can then use the images to create labs. + You can [upload custom images other environments outside of the context of labs](how-to-attach-detach-shared-image-gallery-1.md). For example, you can upload images from your own physical lab environment or from an Azure VM into shared image gallery. Once an image is imported into the gallery, you can then use the images to create labs. To logically group shared images, you can do either of the following: - Create multiple shared image galleries. Each lab account can connect to only one shared image gallery, so this option also requires you to create multiple lab accounts.-- Use a single shared image gallery that's shared by multiple lab accounts. In this case, each lab account can enable only images that are applicable to the labs in that account.+- Use a single shared image gallery shared by multiple lab accounts. In this case, each lab account can enable only images that are applicable to the labs in that account. ## Naming -As you get started with Azure Lab Services, we recommend that you establish naming conventions for Azure and Azure Lab Services related resources. Although the naming conventions that you establish will be unique to the needs of your organization, the following table provides general guidelines: +As you get started with Azure Lab Services, we recommend that you establish naming conventions for Azure and Azure Lab Services related resources. Although the naming conventions that you establish are unique to the needs of your organization, the following table provides general guidelines: | Resource type | Role | Suggested pattern | Examples | | - | - | -- | -- | As you get started with Azure Lab Services, we recommend that you establish nami | Lab | Contains one or more student VMs. | {class-name}-{time}-{educator} | CS101-Fall2021, CS101-Fall2021-JohnDoe | | Shared image gallery | Contains one or more VM image versions | {org-name}-sig, {dept-name}-sig | contoso-sig, mathdept-sig | -In the proceeding table, we used some terms and tokens in the suggested name patterns. Let's go over those terms in a little more detail. +In the preceding table, we used some terms and tokens in the suggested name patterns. Let's go over those terms in a little more detail. | Pattern term/token | Definition | Example | | | - | - | The region specifies the datacenter where information about a resource group is ### Lab account -A lab account's location indicates the region that a resource exists in. +A lab account's location indicates the region that a resource exists in. ### Lab The location that a lab exists in varies, depending on the following factors: - **The lab account is peered with a virtual network** - You can [peer a lab account with a virtual network](./how-to-connect-peer-virtual-network.md) when they're in the same region. When a lab account is peered with a virtual network, labs are automatically created in the same region as both the lab account and the virtual network. 
+ You can [peer a lab account with a virtual network](./how-to-connect-peer-virtual-network.md) when they're in the same region. When a lab account is peered with a virtual network, labs are automatically created in the same region as both the lab account and the virtual network. > [!NOTE] > When a lab account is peered with a virtual network, the **Allow lab creator to pick lab location** setting is disabled. For more information, see [Allow lab creator to pick location for the lab](./allow-lab-creator-pick-lab-location.md). - **No virtual network is peered *and* Lab Creators aren't allowed to pick the lab location** - When *no* virtual network is peered with the lab account and [Lab Creators are *not allowed* to pick the lab location](./allow-lab-creator-pick-lab-location.md), labs are automatically created in a region that has available VM capacity. Specifically, Azure Lab Services looks for availability in [regions that are within the same geography as the lab account](https://azure.microsoft.com/global-infrastructure/regions). + When *no* virtual network is peered with the lab account and [Lab Creators are *not allowed* to pick the lab location](./allow-lab-creator-pick-lab-location.md), labs are automatically created in a region that has available VM capacity. Specifically, Azure Lab Services looks for availability in [regions that are within the same geography as the lab account](https://azure.microsoft.com/global-infrastructure/regions). - **No virtual network is peered *and* Lab Creators are allowed to pick the lab location** A general rule is to set a resource's region to one that's closest to its users. When administrators or Lab Creators create a lab, they can choose from various VM sizes, depending on the needs of their classroom. Remember that the size availability depends on the region that your lab account is located in. -In the following table, notice that several of the VM sizes map to more than one VM series. Depending on capacity availability, Lab Services may use any of the VM series that are listed for a VM size. For example, the *Small* VM size maps to using either the [Standard_A2_v2](../virtual-machines/av2-series.md) or the [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) VM series. When you choose *Small* as the VM size for your lab, Lab Services will first attempt to use the *Standard_A2_v2* series. However, when there isn't sufficient capacity available, Lab Services will instead use the *Standard_A2* series. The pricing is determined by the VM size and is the same regardless of which VM series Lab Services uses for that specific size. For more information on pricing for each VM size, read the [Lab Services pricing guide](https://azure.microsoft.com/pricing/details/lab-services/). +In the following table, notice that several of the VM sizes map to more than one VM series. Depending on capacity availability, Lab Services can use any of the VM series that are listed for a VM size. For example, the *Small* VM size maps to using either the [Standard_A2_v2](../virtual-machines/av2-series.md) or the [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) VM series. When you choose *Small* as the VM size for your lab, Lab Services first attempts to use the *Standard_A2_v2* series. However, when there isn't sufficient capacity available, Lab Services uses the *Standard_A2* series. The pricing is determined by the VM size and is the same regardless of which VM series Lab Services uses for that specific size. 
For more information on pricing for each VM size, read the [Lab Services pricing guide](https://azure.microsoft.com/pricing/details/lab-services/). | Size | Minimum vCPUs | Minimum RAM | Series | Suggested use | | - | -- | -- | | - | | Small| 2 vCPUs | 3.5 GB RAM | [Standard_A2_v2](../virtual-machines/av2-series.md), [Standard_A2](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for command line, opening web browser, low-traffic web servers, small to medium databases. | | Medium | 4 vCPUs | 7 GB RAM | [Standard_A4_v2](../virtual-machines/av2-series.md), [Standard_A3](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for relational databases, in-memory caching, and analytics. |-| Medium (nested virtualization) | 4 vCPUs | 16 GBs RAM | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization. +| Medium (nested virtualization) | 4 vCPUs | 16 GBs RAM | [Standard_D4s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization. | | Large | 8 vCPUs | 16 GB RAM | [Standard_A8_v2](../virtual-machines/av2-series.md), [Standard_A7](../virtual-machines/sizes-previous-gen.md#a-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. |-| Large (nested virtualization) | 8 vCPUs | 32 GB RAM | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. | +| Large (nested virtualization) | 8 vCPUs | 32 GB RAM | [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. | | Small GPU (visualization) | 6 vCPUs | 56 GB RAM | [Standard_NV6](../virtual-machines/nv-series.md) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. | | Small GPU (Compute) | 6 vCPUs | 56 GB RAM | [Standard_NC6](../virtual-machines/nc-series.md), [Standard_NC6s_v3](../virtual-machines/ncv3-series.md) |Best suited for computer-intensive applications such as AI and deep learning. | | Medium GPU (visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12](../virtual-machines/nv-series.md), [Standard_NV12s_v3](../virtual-machines/nvv3-series.md), [Standard_NV12s_v2](../virtual-machines/sizes-previous-gen.md#nvv2-series) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. | By using [Azure role-based access control (RBAC)](../role-based-access-control/o - **Lab Creator** - To create labs within a lab account, an educator must be a member of the Lab Creator role. An educator who creates a lab is automatically added as a lab Owner. For more information, see [Add a user to the Lab Creator role](./tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role). + To create labs within a lab account, an educator must be a member of the Lab Creator role. An educator who creates a lab is automatically added as a lab Owner. For more information, see [Add a user to the Lab Creator role](how-to-add-lab-creator.md). 
- Lab **Owner** or **Contributor** When you're assigning roles, it helps to follow these tips: - Ordinarily, only administrators should be members of a lab account Owner or Contributor role. The lab account might have more than one Owner or Contributor. - To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role.-- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. For more information, see [Add Owners to a lab](./how-to-add-user-lab-owner.md).+- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. For more information, see [Add Owners to a lab](./how-to-add-user-lab-owner.md). ## Content filtering -Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering. +Your school might need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering. There are two approaches that schools typically consider for content filtering: - Configure a firewall to filter content at the network level. - Install 3rd party software directly on each computer that performs content filtering. -The first approach isn't currently supported by Lab Services. Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. As a result, you don't have access to the underlying virtual network to do content filtering at the network level. For more information on Lab Services' architecture, read the article [Architecture Fundamentals](./classroom-labs-fundamentals.md). +The first approach isn't currently supported by Lab Services. Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. As a result, you don't have access to the underlying virtual network to do content filtering at the network level. For more information on Lab Services' architecture, read the article [Architecture Fundamentals](./classroom-labs-fundamentals.md). -Instead, we recommend the second approach which is to install 3rd party software on each lab's template VM. There are a few key points to highlight as part of this solution: +Instead, we recommend the second approach, which is to install 3rd party software on each lab's template VM. There are a few key points to highlight as part of this solution: -- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you will need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. 
Otherwise, the auto-shutdown settings will fail to enable for the lab.-- You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. By default, Lab Services creates an admin account that each student uses to sign into their VM. It is possible to add a non-admin account using a specialized image, but there are some known limitations.+- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you'll need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings fail to enable for the lab. +- You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. By default, Lab Services creates an admin account that each student uses to sign into their VM. It is possible to add a non-admin account using a specialized image, but there are some known limitations. If your school needs to do content filtering, contact us via the [Azure Lab Services' forums](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices) for more information. ## Endpoint management -Many endpoint management tools, such as [Microsoft Configuration Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine will have a new, unique machine SID generated when the VM boots from the image. +Many endpoint management tools, such as [Microsoft Configuration Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine has a new, unique machine SID generated when the VM boots from the image. -With Lab Services, even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs. +With Lab Services, even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs. -For example, the Azure Marketplace images are generalized. If you create a lab from the Win 10 marketplace image and publish the template VM, all of the student VMs within a lab will have the same machine SID as the template VM. The machine SIDs can be verified by using a tool such as [PsGetSid](/sysinternals/downloads/psgetsid). +For example, the Azure Marketplace images are generalized. If you create a lab from the Win 10 marketplace image and publish the template VM, all of the student VMs within a lab have the same machine SID as the template VM. The machine SIDs can be verified by using a tool such as [PsGetSid](/sysinternals/downloads/psgetsid). 
-If you plan to use an endpoint management tool or similar software, we recommend that you test it with lab VMs to ensure that it works properly when machine SIDs are the same. +If you plan to use an endpoint management tool or similar software, we recommend that you test it with lab VMs to ensure that it works properly when machine SIDs are the same. ## Pricing To learn about pricing, see [Azure Lab Services pricing](https://azure.microsoft You also need to consider the pricing for the Shared Image Gallery service if you plan to use shared image galleries for storing and managing image versions. -Creating a shared image gallery and attaching it to your lab account is free. No cost is incurred until you save an image version to the gallery. The pricing for using a shared image gallery is ordinarily fairly negligible, but it's important to understand how it's calculated, because it isn't included in the pricing for Azure Lab Services. +Creating a shared image gallery and attaching it to your lab account is free. No cost is incurred until you save an image version to the gallery. The pricing for using a shared image gallery is ordinarily fairly negligible, but it's important to understand how it's calculated, because it isn't included in the pricing for Azure Lab Services. #### Storage charges -To store image versions, a shared image gallery uses standard hard disk drive (HDD) managed disks by default. We recommend using HDD-managed disks when using shared image gallery with Lab Services. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. Lab Services supports image and disk sizes up to 128 GB. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). +To store image versions, a shared image gallery uses standard hard disk drive (HDD) managed disks by default. We recommend using HDD-managed disks when using shared image gallery with Lab Services. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. Lab Services supports image and disk sizes up to 128 GB. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). #### Replication and network egress charges When you save an image version by using a lab template VM, Azure Lab Services fi It's important to note that Azure Lab Services automatically replicates the source image version to all [target regions within the geography](https://azure.microsoft.com/global-infrastructure/regions/) where the lab is located. For example, if your lab is in the US geography, an image version is replicated to each of the eight regions that exist within the US. -A network egress charge occurs when an image version is replicated from the source region to additional target regions. The amount charged is based on the size of the image version when the image's data is initially transferred outbound from the source region. For pricing details, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). +A network egress charge occurs when an image version is replicated from the source region to additional target regions. The amount charged is based on the size of the image version when the image's data is initially transferred outbound from the source region. For pricing details, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). 
Egress charges might be waived for [Education Solutions](https://www.microsoft.com/licensing/licensing-programs/licensing-for-industries?rtc=1&activetab=licensing-for-industries-pivot:primaryr3) customers. To learn more, contact your account manager. Let's look at an example of the cost of saving a template VM image to a shared i The total cost per month is estimated as: -* *Number of images × number of versions × number of replicas × managed disk price = total cost per month* +- *Number of images × number of versions × number of replicas × managed disk price = total cost per month* In this example, the cost is: -* 1 custom image (32 GB) × 2 versions × 8 US regions × $1.54 = $24.64 per month +- 1 custom image (32 GB) × 2 versions × 8 US regions × $1.54 = $24.64 per month > [!NOTE] > The preceding calculation is for example purposes only. It covers storage costs associated with using Shared Image Gallery and does *not* include egress costs. For actual pricing for storage, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). |
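The storage estimate quoted above is easy to adapt. The following is a minimal Python sketch of the same calculation; the image size, version count, replica count, and per-disk price are the illustrative values from the example, not authoritative pricing.

```python
# Minimal sketch of the Shared Image Gallery storage estimate described above.
# All inputs are illustrative placeholders; check the Managed Disks pricing page
# for current prices and confirm the number of replica regions for your geography.

def gallery_storage_cost(images: int, versions: int, replicas: int, disk_price_per_month: float) -> float:
    """Number of images x number of versions x number of replicas x managed disk price."""
    return images * versions * replicas * disk_price_per_month

# Example from the article: 1 custom image (32 GB), 2 versions,
# replicated to 8 US regions, at $1.54 per month for the managed disk.
print(gallery_storage_cost(images=1, versions=2, replicas=8, disk_price_per_month=1.54))  # 24.64
```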
lab-services | How To Add Lab Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-add-lab-creator.md | + + Title: 'How to add a lab creator to a lab account with Azure Lab Services' ++description: Learn how to grant a user access to create labs. +++++ Last updated : 06/27/2024++++# Add a user to the Lab Creator role +++To grant people the permission to create labs, add them to the Lab Creator role. ++Follow these steps to [assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). ++> [!NOTE] +> Azure Lab Services automatically assigns the Lab Creator role to the Azure account you use to create the lab account. ++1. On the **Lab Account** page, select **Access control (IAM)**. ++1. From the **Access control (IAM)** page, select **Add** > **Add role assignment**. ++ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the Access control (I A M) page with Add role assignment menu option highlighted."::: ++1. On the **Role** tab, select the **Lab Creator** role. ++ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot that shows the Add role assignment page with Role tab selected."::: ++1. On the **Members** tab, select the user you want to add to the Lab Creators role. ++1. On the **Review + assign** tab, select **Review + assign** to assign the role. ++## Next steps ++In this article, you granted lab creation permissions to another user. To learn about how to create a lab, see [Manage labs in Azure Lab Services when using lab accounts](how-to-manage-classroom-labs.md). |
lab-services | How To Connect Peer Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-peer-virtual-network.md | Title: Connect to a peer network -description: Learn how to connect your lab network with another network as a peer for lab accounts in Azure Lab Services. For example, connect your on-premises organization/university network with Lab's virtual network in Azure. +description: Learn how to connect your lab network with another network as a peer for lab accounts in Azure Lab Services. For example, connect your on-premises organization/university network with Lab's virtual network in Azure. This article provides information about peering your labs network with another n Virtual network peering enables you to seamlessly connect Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through private IP addresses only. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md). -You may need to connect your lab's network with a peer virtual network in some scenarios including the following ones: +You might need to connect your lab's network with a peer virtual network in some scenarios including the following ones: - The virtual machines in the lab have software that connects to on-premises license servers to acquire license. - The virtual machines in the lab need access to data sets (or any other files) on university's network shares. You may need to connect your lab's network with a peer virtual network in some s Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, and the lab account must all be in the same region. > [!NOTE]-> When creating a Azure Virtual Network that will be peered with a lab account, it's important to understand how the virtual network's region impacts where labs are created. For more information, see the administrator guide's section on [regions/locations](./administrator-guide-1.md#regionslocations). +> When creating a Azure Virtual Network that will be peered with a lab account, it's important to understand how the virtual network's region impacts where labs are created. For more information, see the administrator guide's section on [regions/locations](./administrator-guide-1.md#regionslocations). > [!NOTE]-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). 
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). ## Configure at the time of lab account creation -During the new [lab account creation](tutorial-setup-lab-account.md), you can pick an existing virtual network that shows in the **Peer virtual network** dropdown list on the **Advanced** tab. The list only shows virtual networks in the same region as the lab account. The selected virtual network is connected (peered) to labs created under the lab account. All the virtual machines in labs that are created after the making this change have access to the resources on the peered virtual network. +When [creating a lab account](how-to-create-lab-accounts.md), you can pick an existing virtual network that shows in the **Peer virtual network** dropdown list on the **Advanced** tab. The list only shows virtual networks in the same region as the lab account. The selected virtual network is connected (peered) to labs created under the lab account. All the virtual machines in labs that are created after the making this change have access to the resources on the peered virtual network. ![Screenshot that shows how to create a lab account in the Azure portal, highlighting the peer virtual network setting.](./media/how-to-connect-peer-virtual-network/select-vnet-to-peer.png) ### Address range -There's also an option to provide an address range for virtual machines for the labs. The address range setting applies only if you enable a peer virtual network for the lab. If the address range is provided, all the virtual machines in the labs under the lab account are created in that address range. The address range should be in CIDR notation (for example, 10.20.0.0/20) and shouldn't overlap with any existing address ranges. +There's also an option to provide an address range for virtual machines for the labs. The address range setting applies only if you enable a peer virtual network for the lab. If the address range is provided, all the virtual machines in the labs under the lab account are created in that address range. The address range should be in CIDR notation (for example, 10.20.0.0/20) and shouldn't overlap with any existing address ranges. -When you provide an address range, it's important to think about the number of *labs* that you create. Azure Lab Services assumes a maximum of 512 virtual machines per lab. For example, an IP range with '/23' can create only one lab. A range with a '/21' allows for the creation of four labs. +When you provide an address range, it's important to think about the number of *labs* that you create. Azure Lab Services assumes a maximum of 512 virtual machines per lab. For example, an IP range with '/23' can create only one lab. A range with a '/21' allows for the creation of four labs. -If the address range isn't specified, Azure Lab Services uses the default address range given to it by Azure when creating the virtual network to be peered with your virtual network. The range is often something like 10.x.0.0/16. This large range might lead to IP range overlap, so make sure to either specify an address range in the lab settings or check the address range of your virtual network being peered. 
+If the address range isn't specified, Azure Lab Services uses the default address range given to it by Azure when creating the virtual network to be peered with your virtual network. The range is often something like 10.x.0.0/16. Large IP ranges might lead to IP range overlap. Make sure to either specify an address range in the lab settings or check the address range of your virtual network being peered. > [!NOTE]-> Lab creation can fail if the lab account is peered to a virtual network but has too narrow of an IP address range. You can run out of space in the address range if there are too many labs in the lab account (each lab uses 512 addresses). +> Lab creation can fail if the lab account is peered to a virtual network but has too narrow of an IP address range. You can run out of space in the address range if there are too many labs in the lab account (each lab uses 512 addresses). > > For example, if you have a block of /19, this address range can accommodate 8192 IP addresses and 16 labs (8192/512 = 16 labs). In this case, lab creation fails on the 17th lab creation.-> -> If the lab creation fails, contact your lab account owner/admin and request for the address range to be increased. The admin can increase the address range using steps mentioned in the [Specify an address range for VMs in a lab account](#specify-an-address-range-for-vms-in-the-lab-account) section. +> +> If the lab creation fails, contact your lab account owner/admin and request for the address range to be increased. The admin can increase the address range using steps mentioned in the [Specify an address range for VMs in a lab account](#specify-an-address-range-for-vms-in-the-lab-account) section. ## Configure after the lab account is created When you select a virtual network for the **Peer virtual network** field, the ** > The peered virtual network setting applies only to labs that are created after the change is made, not to the existing labs. ## Specify an address range for VMs in the lab account-The following procedure has steps to specify an address range for VMs in the lab. If you update the range that you previously specified, the modified address range applies only to VMs that are created after the change was made. -Here are some restrictions when specifying the address range that you should keep in mind. +The following procedure has steps to specify an address range for VMs in the lab. If you update the range that you previously specified, the modified address range applies only to VMs that are created after the change was made. ++Here are some restrictions when specifying the address range that you should keep in mind. -- The prefix must be smaller than or equal to 23. +- The prefix must be smaller than or equal to 23. - If a virtual network is peered to the lab account, the provided address range can't overlap with address range from peered virtual network. 1. On the **Lab Account** page, select **Lab settings** on the left menu. 2. For the **Address range** field, specify the address range for VMs that are created in the lab. The address range should be in the classless inter-domain routing (CIDR) notation (example: 10.20.0.0/23). Virtual machines in the lab are created in this address range.-3. Select **Save** on the toolbar. +3. 
Select **Save** on the toolbar ![Screenshot that shows the lab settings page for a lab account in the Azure portal, highlighting the option to configure an address range.](./media/how-to-manage-lab-accounts/labs-configuration-page-address-range.png) See the following articles: - [Attach a compute gallery to a lab](how-to-attach-detach-shared-image-gallery-1.md) - [Add a user as a lab owner](how-to-add-user-lab-owner.md) - [View firewall settings for a lab](how-to-configure-firewall-settings.md)-- [Configure other settings for a lab](how-to-configure-lab-accounts.md)+- [Configure other settings for a lab](how-to-configure-lab-accounts.md) |
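To sanity-check the address-range guidance in this entry (up to 512 addresses reserved per lab, prefix no larger than /23), the short Python sketch below estimates how many labs fit in a given CIDR block. The 512-address assumption comes straight from the text; the helper name and sample ranges are illustrative only.

```python
import ipaddress

ADDRESSES_PER_LAB = 512  # Azure Lab Services assumes up to 512 addresses per lab

def labs_per_range(cidr: str) -> int:
    """Estimate how many labs the given address range can accommodate."""
    network = ipaddress.ip_network(cidr, strict=False)
    return network.num_addresses // ADDRESSES_PER_LAB

# Examples from the article: /23 holds one lab, /21 holds four, /19 holds sixteen.
for block in ("10.20.0.0/23", "10.20.0.0/21", "10.20.0.0/19"):
    print(block, labs_per_range(block))
```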
lab-services | How To Manage Classroom Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-classroom-labs.md | This article describes how to create and delete a lab. It also shows you how to ## Prerequisites -To set up a lab in a lab account, you must be a member of the **Lab Creator** role in the lab account. The account you used to create a lab account is automatically added to this role. A lab owner can add other users to the Lab Creator role by using steps in the following article: [Add a user to the Lab Creator role](tutorial-setup-lab-account.md#add-a-user-to-the-lab-creator-role). +To set up a lab in a lab account, you must be a member of the **Lab Creator** role in the lab account. The account you used to create a lab account is automatically added to this role. A lab owner can add other users to the Lab Creator role by using steps in the following article: [Add a user to the Lab Creator role](how-to-add-lab-creator.md). ## Create a lab To set up a lab in a lab account, you must be a member of the **Lab Creator** ro > Make a note of user name and password. They won't be shown again. 3. Disable **Use same password for all virtual machines** option if you want students to set their own passwords. This step is **optional**. - An educator can choose to use the same password for all the VMs in the lab, or allow students to set passwords for their VMs. By default, this setting is enabled for all Windows and Linux images except for Ubuntu. When you select **Ubuntu** VM, this setting is disabled and students are prompted to set a password when they sign in for the first time. + An educator can choose to use the same password for all the VMs in the lab, or allow students to set passwords for their VMs. By default, this setting is enabled for all Windows and Linux images except for Ubuntu. When you select **Ubuntu** VM, this setting is disabled and students are prompted to set a password when they sign in for the first time. :::image type="content" source="./media/how-to-manage-classroom-labs/virtual-machine-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials page of the New lab wizard."::: To set up a lab in a lab account, you must be a member of the **Lab Creator** ro 8. On the **Template** page, do the following steps: These steps are **optional** for the tutorial. 1. Start the template VM.- 1. Connect to the template VM by selecting **Connect**. If it's a Linux template VM, you choose whether you want to connect using an SSH terminal or a graphical remote desktop. Additional setup is required to use a graphical remote desktop. For more information, see [Enable graphical remote desktop for Linux virtual machines in Azure Lab Services](how-to-enable-remote-desktop-linux.md). + 1. Connect to the template VM by selecting **Connect**. If it's a Linux template VM, you choose whether you want to connect using an SSH terminal or a graphical remote desktop. Extra setup is required to use a graphical remote desktop. For more information, see [Enable graphical remote desktop for Linux virtual machines in Azure Lab Services](how-to-enable-remote-desktop-linux.md). 1. Select **Reset password** to reset the password for the VM. The VM must be running before the reset password button is available. 1. Install and configure software on your template VM.- 1. **Stop** the VM. + 1. **Stop** the VM. 9. On **Template** page, select **Publish** on the toolbar. |
lab-services | Tutorial Setup Lab Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-account.md | - Title: 'Tutorial: Set up a lab account with Azure Lab Services'- -description: Learn how to set up a lab account with Azure Lab Services in the Azure portal. Then, grant a user access to create labs. ----- Previously updated : 03/03/2023----# Tutorial: Set up a lab account with Azure Lab Services ---In Azure Lab Services, a lab account serves as the central resource in which you manage your organization's labs. In your lab account, give permission to others to create labs, and set policies that apply to all labs under the lab account. In this tutorial, learn how to create a lab account by using the Azure portal. --In this tutorial, you do the following actions: --> [!div class="checklist"] -> - Create a lab account -> - Add a user to the Lab Creator role ---## Prerequisites --* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Create a lab account --The following steps illustrate how to use the Azure portal to create a lab account with Azure Lab Services. --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Select **Create a resource** in the upper left-hand corner of the Azure portal. -- :::image type="content" source="./media/tutorial-setup-lab-account/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal home page, highlighting the Create a resource button."::: --1. Search for **lab account**. (**Lab account** can also be found under the **DevOps** category.) --1. On the **Lab account** tile, select **Create** > **Lab account**. -- :::image type="content" source="./media/tutorial-setup-lab-account/select-lab-accounts-service.png" alt-text="Screenshot of how to search for and create a lab account by using the Azure Marketplace."::: --1. On the **Basics** tab of the **Create a lab account** page, provide the following information: -- | Field | Description | - | | -- | - | **Subscription** | Select the Azure subscription that you want to use to create the resource. | - | **Resource group** | Select an existing resource group or select **Create new**, and enter a name for the new resource group. | - | **Name** | Enter a unique lab account name. <br/>For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). | - | **Region** | Select a geographic location to host your lab account. | --1. After you're finished configuring the resource, select **Review + Create**. -- :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-basics-page.png" alt-text="Screenshot that shows the Basics tab to create a new lab account in the Azure portal."::: --1. Review all the configuration settings and select **Create** to start the deployment of the lab account. --1. To view the new resource, select **Go to resource**. -- :::image type="content" source="./media/tutorial-setup-lab-account/go-to-lab-account.png" alt-text="Screenshot that shows the resource deployment completion page in the Azure portal."::: --1. Confirm that you see the lab account **Overview** page. 
-- :::image type="content" source="./media/tutorial-setup-lab-account/lab-account-page.png" alt-text="Screenshot that shows the lab account overview page in the Azure portal."::: --You've now successfully created a lab account by using the Azure portal. To let others create labs in the lab account, you assign them the Lab Creator role. --## Add a user to the Lab Creator role --To set up a lab in a lab account, you must be a member of the Lab Creator role in the lab account. To grant people the permission to create labs, add them to the Lab Creator role. --Follow these steps to [assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). --> [!NOTE] -> Azure Lab Services automatically assigns the Lab Creator role to the Azure account you use to create the lab account. If you plan to use the same user account to create a lab in this tutorial, skip this step. --1. On the **Lab Account** page, select **Access control (IAM)**. --1. From the **Access control (IAM)** page, select **Add** > **Add role assignment**. -- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the Access control (I A M) page with Add role assignment menu option highlighted."::: --1. On the **Role** tab, select the **Lab Creator** role. -- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot that shows the Add roll assignment page with Role tab selected."::: --1. On the **Members** tab, select the user you want to add to the Lab Creators role. --1. On the **Review + assign** tab, select **Review + assign** to assign the role. --## Next steps --In this tutorial, you created a lab account and granted lab creation permissions to another user. To learn about how to create a lab, advance to the next tutorial: --> [!div class="nextstepaction"] -> [Set up a lab](tutorial-setup-lab.md) |
load-balancer | Howto Load Balancer Imds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/howto-load-balancer-imds.md | -| Data | Description | Version introduced | +| **Data** | **Description** | **Version introduced** | ||-|--| | `publicIpAddresses` | The instance-level public or private IP of the specific Virtual Machine instance | 2020-10-01 | | `inboundRules` | List of load balancing rules or inbound NAT rules that the Load Balancer uses to direct traffic to the specific Virtual Machine instance. Frontend IP addresses and the Private IP addresses listed here belong to the Load Balancer. | 2020-10-01 |
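For readers who want to see the metadata described in this table, here is a hedged sketch that queries the Instance Metadata Service load balancer endpoint from inside a VM in the backend pool. The `169.254.169.254` address and `Metadata: true` header follow the documented IMDS convention, and the API version shown is the one the table lists as introducing these fields; the top-level `loadbalancer` key is read defensively in case the response shape differs.

```python
# Sketch only: run this from a VM behind the load balancer. The IMDS address
# (169.254.169.254) is link-local and is not reachable from outside the VM.
import requests

response = requests.get(
    "http://169.254.169.254/metadata/loadbalancer",
    headers={"Metadata": "true"},          # required on every IMDS request
    params={"api-version": "2020-10-01"},  # version that introduced these fields
    timeout=5,
)
response.raise_for_status()
metadata = response.json()

# publicIpAddresses and inboundRules are the fields described in the table above.
print(metadata.get("loadbalancer", metadata))
```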
load-balancer | Load Balancer Multiple Ip Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-cli.md | |
load-balancer | Load Balancer Multiple Ip Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md | -This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses. +This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, and both backend pool IP addresses. ## Steps to load balance on multiple IP configurations Follow the steps below to achieve the scenario outlined in this article: $Subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "mySubnet" -VirtualNetwork $myVnet ``` - You do not need to associate the secondary IP configurations with public IPs for the purpose of this tutorial. Edit the command to remove the public IP association part. + You don't need to associate the secondary IP configurations with public IPs in this tutorial. Edit the command to remove the public IP association part. -6. Complete steps 4 through 6 of this article again for VM2. Be sure to replace the VM name to VM2 when doing this. Note that you do not need to create a virtual network for the second VM. You may or may not create a new subnet based on your use case. +6. Complete steps 4 through 6 of this article again for VM2. Be sure to replace the VM name to VM2 when doing this. You don't need to create a virtual network for the second VM. You can create a new subnet based on your use case. 7. Create two public IP addresses and store them in the appropriate variables as shown: Follow the steps below to achieve the scenario outlined in this article: $nic2 | Set-AzNetworkInterface ``` -13. Finally, you must configure DNS resource records to point to the respective frontend IP address of the Load Balancer. You may host your domains in Azure DNS. For more information about using Azure DNS with Load Balancer, see [Using Azure DNS with other Azure services](../dns/dns-for-azure-services.md). +13. Finally, you must configure DNS resource records to point to the respective frontend IP address of the Load Balancer. You can host your domains in Azure DNS. For more information about using Azure DNS with Load Balancer, see [Using Azure DNS with other Azure services](../dns/dns-for-azure-services.md). ## Next steps - Learn more about how to combine load balancing services in Azure in [Using load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md). |
load-balancer | Load Balancer Multiple Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md | In this section, you create two virtual machines to host the IIS websites. 3. In **Create virtual machine**, enter or select the following information: - | Setting | Value | + | Setting | Value | |--|-| | **Project Details** | | | Subscription | Select your Azure subscription | In this section, you create two virtual machines to host the IIS websites. | Subnet | Select **backend-subnet(10.1.0.0/24)** | | Public IP | Select **None**. | | NIC network security group | Select **Advanced**|- | Configure network security group | Select **Create new**. </br> In **Create network security group**, enter **myNSG** in **Name**. </br> In **Inbound rules**, select **+Add an inbound rule**. </br> In **Service**, select **HTTP**. </br> In **Priority**, enter **100**. </br> In **Name**, enter **myNSGrule** </br> Select **Add** </br> Select **OK** | + | Configure network security group | Select **Create new**.</br> In **Create network security group**, enter **myNSG** in **Name**.</br> In **Inbound rules**, select **+Add an inbound rule**.</br> In **Service**, select **HTTP**.</br> In **Priority**, enter **100**.</br> In **Name**, enter **myNSGrule**.</br> Select **Add**.</br> Select **OK**. | 6. Select **Review + create**. You connect to **myVM1** and **myVM2** with Azure Bastion and configure the seco 6. Select **Allow** for Bastion to use the clipboard. -7. On the server desktop, navigate to Start > Windows Administrative Tools > Windows PowerShell > Windows PowerShell. +7. On the server desktop, navigate to **Start > Windows Administrative Tools > Windows PowerShell > Windows PowerShell**. 8. In the PowerShell window, execute the `route print` command, which returns output similar to the following output for a virtual machine with two attached network interfaces: During the creation of the load balancer, you configure: | Name | Enter **Frontend-contoso**. | | IP version | Select **IPv4**. | | IP type | Select **IP address**. |- | Public IP address | Select **Create new**. </br> Enter **myPublicIP-contoso** for **Name** </br> Select **Zone-redundant** in **Availability zone**. </br> Leave the default of **Microsoft Network** for **Routing preference**. </br> Select **OK**. | + | Public IP address | Select **Create new**.</br> Enter **myPublicIP-contoso** for **Name** </br> Select **Zone-redundant** in **Availability zone**.</br> Leave the default of **Microsoft Network** for **Routing preference**.</br> Select **OK**. | > [!NOTE] > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier). > > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md). >- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md). 
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear.</br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md). 7. Select **Add**. During the creation of the load balancer, you configure: | Name | Enter **Frontend-fabrikam**. | | IP version | Select **IPv4**. | | IP type | Select **IP address**. |- | Public IP address | Select **Create new**. </br> Enter **myPublicIP-fabrikam** for **Name** </br> Select **Zone-redundant** in **Availability zone**. </br> Leave the default of **Microsoft Network** for **Routing preference**. </br> Select **OK**. | + | Public IP address | Select **Create new**.</br> Enter **myPublicIP-fabrikam** for **Name** </br> Select **Zone-redundant** in **Availability zone**.</br> Leave the default of **Microsoft Network** for **Routing preference**.</br> Select **OK**. | 10. Select **Add**. During the creation of the load balancer, you configure: | Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-contoso**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | + | Health probe | Select **Create new**.</br> In **Name**, enter **myHealthProbe-contoso**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. | During the creation of the load balancer, you configure: | Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe-fabrikam**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | + | Health probe | Select **Create new**.</br> In **Name**, enter **myHealthProbe-fabrikam**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. | If you're not going to continue to use this application, delete the virtual mach Advance to the next article to learn how to create a cross-region load balancer: > [!div class="nextstepaction"]-> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md) +> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md) |
load-balancer | Quickstart Load Balancer Standard Internal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md | During the creation of the load balancer, you configure: | Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |- | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | + | Health probe | Select **Create new**.</br> In **Name**, enter **lb-health-probe**.</br> Select **TCP** in **Protocol**.</br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | Enable TCP reset | Select **checkbox**. | |
load-balancer | Quickstart Load Balancer Standard Public Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md | Title: "Quickstart: Create a public load balancer - Azure portal" -description: This quickstart shows how to create a load balancer using the Azure portal. +description: Learn how to create a public load balancer using the Azure portal. Previously updated : 06/06/2023 Last updated : 06/28/2024 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs. During the creation of the load balancer, you configure: 1. Select **Zone-redundant** in **Availability zone**. > [!NOTE]- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md). + > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear.</br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md). 1. Leave the default of **Microsoft Network** for **Routing preference**. -1. Select **OK**. +1. Select **Save**. -1. Select **Add**. +1. Select **Save**. 1. Select **Next: Backend pools** at the bottom of the page. During the creation of the load balancer, you configure: | Protocol | Select **TCP** | | Port | Enter **80** | | Backend port | Enter **80** |- | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | + | Health probe | Select **Create new**.</br> In **Name**, enter **lb-health-probe**.</br> Select **HTTP** in **Protocol**.</br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15** | | Enable TCP reset | Select checkbox | |
machine-learning | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md | Azure portal users can find the latest image available for provisioning the Data Visit the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. +## June 28, 2024 ++Image Version: 24.06.10 ++SDK Version: 1.56.0 ++Issue fixed: Compute Instance 20.04 image build with SDK 1.56.0 ++Major: Image Version: 24.06.10 ++- SDK(azureml-core):1.56.0 +- Python:3.9 +- CUDA: 12.2 +- CUDnn==9.1.1 +- Nvidia Driver: 535.171.04 +- PyTorch: 1.13.1 +- TensorFlow: 2.15.0 +- autokeras==1.0.16 +- keras=2.15.0 +- ray==2.2.0 +- docker version==24.0.9-1 + ## June 17, 2024 [Data Science Virtual Machine - Windows 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2022?tab=Overview) |
machine-learning | Deploy Jais Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/deploy-jais-models.md | Title: How to deploy JAIS models with Azure Machine Learning Studio- -description: Learn how to deploy JAIS models with Azure Machine Learning Studio. + Title: How to deploy JAIS models with Azure Machine Learning studio ++description: Learn how to deploy JAIS models with Azure Machine Learning studio. -# How to deploy JAIS with Azure Machine Learning Studio +# How to deploy JAIS with Azure Machine Learning studio -In this article, you learn how to use Azure Machine Learning Studio to deploy the JAIS model as a service with pay-as you go billing. +In this article, you learn how to use Azure Machine Learning studio to deploy the JAIS model as a service with pay-as you go billing. -The JAIS model is available in Azure Machine Learning Studio with pay-as-you-go token based billing with Models as a Service. +The JAIS model is available in Azure Machine Learning studio with pay-as-you-go token based billing with Models as a Service. You can find the JAIS model in the model catalog by filtering on the JAIS collection. ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.+- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for JAIS is only available with workspaces created in these regions: - > [!IMPORTANT] - > For JAIS models, the pay-as-you-go model deployment offering is only available with workspaces created in East US 2 or Sweden Central region. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../ai-studio/concepts/rbac-ai-studio.md). ### JAIS 30b Chat -JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is finetuned with both Arabic and English prompt-response pairs. The finetuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic. +JAIS 30b Chat is an auto-regressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). 
The model is fine-tuned with both Arabic and English prompt-response pairs. The fine-tuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset as well as translating some open-source English instructions into Arabic. *Context length:* JAIS 30b Chat supports a context length of 8K. Models deployed as a service with pay-as-you-go are protected by [Azure AI Conte - [What is Azure AI Studio?](../ai-studio/what-is-ai-studio.md) - [Azure AI FAQ article](../ai-studio/faq.yml)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
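Once a JAIS serverless endpoint is deployed, one way to call it from Python is through the `azure-ai-inference` client, sketched below. The endpoint URL and key are placeholders, and the sketch assumes the endpoint exposes the Azure AI model inference chat-completions route; verify both against your deployment's details page before relying on it.

```python
# Hedged sketch: assumes the serverless endpoint supports the Azure AI model
# inference chat-completions API. Replace the endpoint URL and key with the
# values shown for your deployment.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-jais-endpoint>.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-endpoint-key>"),            # placeholder
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful bilingual assistant."),
        UserMessage(content="Summarize the benefits of serverless deployment in two sentences."),
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```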
machine-learning | How To Connect Models Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connect-models-serverless.md | Follow these steps to create a connection: # [Python SDK](#tab/python) ```python- client.connections.create(ServerlessConnection( + client.connections.create_or_update(ServerlessConnection( name="meta-llama3-8b-connection", endpoint="https://meta-llama3-8b-qwerty-serverless.inference.ai.azure.com", api_key="1234567890qwertyuiop" Follow these steps to create a connection: ## Related content - [Model Catalog and Collections](concept-model-catalog.md)-- [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md)+- [Deploy models as serverless API endpoints](how-to-deploy-models-serverless.md) |
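After a serverless connection is created with `create_or_update` as shown in this entry, downstream code can look it up by name. The sketch below assumes the `azure-ai-ml` client exposes `connections.get`; the subscription, resource group, and workspace identifiers are placeholders, and the endpoint attribute is read defensively since the returned entity's shape may vary by SDK version.

```python
# Hedged sketch: retrieve the serverless connection created earlier so that
# downstream code can read its settings. Workspace identifiers are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

connection = client.connections.get(name="meta-llama3-8b-connection")
print(connection.name, getattr(connection, "endpoint", None))
```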
machine-learning | How To Create Compute Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md | Choose the tab for the environment you're using for other prerequisites. * To use the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script: - [!INCLUDE [connect ws v2](includes/machine-learning-connect-ws-v2.md)] # [Azure CLI](#tab/azure-cli) -* To use the CLI, install the [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install), [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md). +* If you're working on a compute instance, the CLI is already installed. If working on a different computer, install the [Azure CLI extension for Machine Learning service (v2)](https://aka.ms/sdk-v2-install). ++ # [Studio](#tab/azure-studio) Where the file *create-instance.yml* is: * If you're using an __Azure Virtual Network__, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network. You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup. * If you're using an Azure Machine Learning __managed virtual network__, the compute instance is created inside the managed virtual network. You can also select __No public IP__ to prevent the creation of a public IP address. For more information, see [managed compute with a managed network](./how-to-managed-network-compute.md).- * Allow root access. (preview) 1. Select **Applications** if you want to add custom applications to use on your compute instance, such as RStudio or Posit Workbench. See [Add custom applications such as RStudio or Posit Workbench](#add-custom-applications-such-as-rstudio-or-posit-workbench). 1. Select **Tags** if you want to add additional information to categorize the compute instance. 
from azure.ai.ml.constants import TimeZone from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential -# authenticate -credential = DefaultAzureCredential() --# Get a handle to the workspace -ml_client = MLClient( - credential=credential, - subscription_id="<SUBSCRIPTION_ID>", - resource_group_name="<RESOURCE_GROUP>", - workspace_name="<AML_WORKSPACE_NAME>", -) - ci_minimal_name = "ci-name" ci_start_time = "2023-06-21T11:47:00" #specify your start time in the format yyyy-mm-ddThh:mm:ss from azure.ai.ml import MLClient from azure.identity import ManagedIdentityCredential client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None) credential = ManagedIdentityCredential(client_id=client_id)-ml_client = MLClient(credential, sub_id, rg_name, ws_name) -data = ml_client.data.get(name=data_name, version="1") +ml_client = MLClient(credential, subscription_id, resource_group, workspace) ``` You can also use SDK V1: from azureml.core.authentication import MsiAuthentication from azureml.core import Workspace client_id = os.environ.get("DEFAULT_IDENTITY_CLIENT_ID", None) auth = MsiAuthentication(identity_config={"client_id": client_id})-workspace = Workspace.get("chrjia-eastus", auth=auth, subscription_id="381b38e9-9840-4719-a5a0-61d9585e1e91", resource_group="chrjia-rg", location="East US") +workspace = Workspace.get("chrjia-eastus", auth=auth, subscription_id=subscription_id, resource_group=resource_group, location="East US") ``` # [Azure CLI](#tab/azure-cli) |
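Because this entry trims the SDK v2 workspace-handle snippet from the compute instance article, a minimal, self-contained sketch of creating a compute instance with `azure-ai-ml` may help as a reference point. The instance name, VM size, and workspace identifiers below are placeholders, and the schedule and managed-identity details shown in the diff are intentionally omitted.

```python
# Minimal sketch: create a compute instance with the Azure ML v2 SDK.
# The workspace identifiers, instance name, and VM size are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

ci = ComputeInstance(name="ci-example", size="STANDARD_DS3_v2")
ml_client.compute.begin_create_or_update(ci).result()  # blocks until provisioning finishes
```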
machine-learning | How To Deploy Models Cohere Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-command.md | The previously mentioned Cohere models can be deployed as a serverless API with ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.+- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Cohere Command is only available with workspaces created in these regions: - > [!IMPORTANT] - > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS2 or Sweden Central region. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group. Models deployed as a service with pay-as-you-go are protected by Azure AI conten - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To Deploy Models Cohere Embed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-embed.md | The previously mentioned Cohere models can be deployed as a service with pay-as- ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.+- An Azure Machine Learning workspace. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Cohere Embed is only available with workspaces created in these regions: - > [!IMPORTANT] - > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS2 or Sweden Central region. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group. Models deployed as a service with pay-as-you-go are protected by Azure AI conten - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To Deploy Models Jamba | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-jamba.md | To get started with Jamba Instruct deployed as a serverless API, explore our int ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Jamba Instruct is only available with workspaces created in these regions: - > [!IMPORTANT] - > The pay-as-you-go model deployment offering for for Jamba Instruct is only available in workspaces created in the **East US 2** and **Sweden Central** regions. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: Models deployed as a serverless API are protected by Azure AI content safety. Wi - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](../ai-studio/how-to/costs-plan-manage.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-llama.md | If you need to deploy a different model, [deploy it to managed compute](#deploy- # [Meta Llama 3](#tab/llama-three) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.-- > [!IMPORTANT] - > Pay-as-you-go model deployment offering is only available in workspaces created in **East US 2** and **Sweden Central** regions for Meta Llama 3 models. +- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 3 is only available with workspaces created in these regions: ++ * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + * Sweden Central + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: If you need to deploy a different model, [deploy it to managed compute](#deploy- # [Meta Llama 2](#tab/llama-two) - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.+- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. The serverless API model deployment offering for Meta Llama 2 is only available with workspaces created in these regions: - > [!IMPORTANT] - > Pay-as-you-go model deployment offering is only available in workspaces created in **East US 2** and **West US 3** regions for Meta Llama 2 models. + * East US + * East US 2 + * North Central US + * South Central US + * West US + * West US 3 + + For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. 
To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: Models deployed as a serverless API are protected by Azure AI content safety. Wh - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](../ai-studio/how-to/costs-plan-manage.md)+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To Deploy Models Phi 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-phi-3.md | Certain models in the model catalog can be deployed as a serverless API with pay ### Prerequisites - An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.+- An Azure Machine Learning workspace. If you don't have a workspace, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for Phi-3 is only available with workspaces created in these regions: - > [!IMPORTANT] - > For Phi-3 family models, the serverless API model deployment offering is only available with workspaces created in **East US 2** and **Sweden Central** regions. + * East US 2 + * Sweden Central ++ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md) +- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To Deploy Models Timegen 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-timegen-1.md | Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok - [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)-- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md) +- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) |
machine-learning | How To High Availability Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md | |
machine-learning | How To Manage Compute Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-instance.md | |
machine-learning | How To R Deploy R Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md | These steps assume you have an Azure Container Registry associated with your wor 1. If you see custom environments, nothing more is needed. 1. If you don't see any custom environments, create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment. (You *won't* use this environment for deployment, but you *will* use the container registry that is also created for you.) -Once you have verified that you have at least one custom environment, use the following steps to build a container. +Once you have verified that you have at least one custom environment, start a terminal and set up the CLI: -1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-a-compute-instance), use: - ```azurecli - az login --identity - ``` -- If you're not on the compute instance, omit `--identity` and follow the prompt to open a browser window to authenticate. --1. Make sure you have the most recent versions of the CLI and the `ml` extension: - - ```azurecli - az upgrade - ``` --1. If you have multiple Azure subscriptions, set the active subscription to the one you're using for your workspace. (You can skip this step if you only have access to a single subscription.) Replace `<SUBSCRIPTION-NAME>` with your subscription name. Also remove the brackets `<>`. -- ```azurecli - az account set --subscription "<SUBSCRIPTION-NAME>" - ``` --1. Set the default workspace. If you're doing this from a compute instance, you can use the following command as is. If you're on any other computer, substitute your resource group and workspace name instead. (You can find these values in [Azure Machine Learning studio](how-to-r-train-model.md#submit-the-job).) -- ```azurecli - az configure --defaults group=$CI_RESOURCE_GROUP workspace=$CI_WORKSPACE - ``` +After you've set up the CLI, use the following steps to build a container. 1. Make sure you are in your project directory. |
machine-learning | How To Use Pipelines Prompt Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipelines-prompt-flow.md | -This tutorial walks you through how to create an RAG pipeline. For advanced scenarios, you can build your own custom Azure Machine Learning pipelines from code (typically notebooks) that allows you granular control of the RAG workflow. Azure Machine Learning provides several in-built pipeline components for data chunking, embeddings generation, test data creation, automatic prompt generation, prompt evaluation. These components can be used as per your needs using notebooks. You can even use the Vector Index created in Azure Machine Learning in LangChain. +This article offers examples of how to create a RAG pipeline. For advanced scenarios, you can build your own custom Azure Machine Learning pipelines from code (typically notebooks) that give you granular control of the RAG workflow. Azure Machine Learning provides several built-in pipeline components for data chunking, embeddings generation, test data creation, automatic prompt generation, and prompt evaluation. You can use these components from notebooks as your scenario requires. You can even use the Vector Index created in Azure Machine Learning in LangChain. [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] |
machine-learning | Concept Flows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-flows.md | |
machine-learning | Concept Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-tools.md | |
machine-learning | Concept Variants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-variants.md | |
machine-learning | How To High Availability Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md | |
migrate | Concepts Dependency Visualization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md | Title: Dependency analysis in Azure Migrate Discovery and assessment description: Describes how to use dependency analysis for assessment using Azure Migrate Discovery and assessment. -- -ms. ++ Last updated 12/07/2023 |
migrate | Concepts Migration Webapps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md | Title: Support matrix for web apps migration description: Support matrix for web apps migration--++ Last updated 08/31/2023 |
migrate | Create Manage Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md | Title: Create and manage projects description: Find, create, manage, and delete projects in Azure Migrate.-- -ms. ++ Last updated 05/22/2023 |
migrate | How To Discover Sql Existing Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md | Title: Discover SQL Server instances in an existing Azure Migrate project description: Learn how to discover SQL Server instances in an existing Azure Migrate project. -- -ms. ++ Last updated 09/27/2023 |
migrate | Troubleshoot Webapps Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md | Title: Troubleshoot web apps migration issues description: Troubleshoot web apps migration issues--++ Last updated 02/28/2023 |
migrate | Tutorial Modernize Asp Net Appservice Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-appservice-code.md | Title: Modernize ASP.NET web apps to Azure App Service code description: At-scale migration of ASP.NET web apps to Azure App Service using Azure Migrate--++ Last updated 02/28/2023 |
migrate | Set Discovery Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/set-discovery-scope.md | Title: Set the scope for discovery of servers on VMware vSphere with Azure Migrate description: Describes how to set the discovery scope for servers hosted on VMware vSphere assessment and migration with Azure Migrate.-- -ms. ++ Last updated 12/12/2022 |
operator-nexus | Howto Install Cli Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md | Example output: ```output Name Version -- --monitor-control-service 0.2.0 +monitor-control-service 0.4.1 connectedmachine 0.7.0-connectedk8s 1.6.5 +connectedk8s 1.7.3 k8s-extension 1.4.3 networkcloud 1.1.0-k8s-configuration 1.7.0 -managednetworkfabric 4.2.0 +k8s-configuration 2.0.0 +managednetworkfabric 6.2.0 customlocation 0.1.3-ssh 2.0.2 +ssh 2.0.4 ``` <!-- LINKS - External --> |
postgresql | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md | Azure Database for PostgreSQL - Flexible Server encrypts data in two ways: Although **it's strongly discouraged**, if needed, due to legacy client incompatibility, you have the option to disable TLS/SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set the TLS version by setting the `ssl_max_protocol_version` server parameter. - **Data at rest**: For storage encryption, Azure Database for PostgreSQL - Flexible Server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running. - The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled. + The service uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) with the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled. ## Network security |
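To complement the transport-security guidance above, here's a minimal SQL sketch for confirming that a session is actually encrypted and for inspecting the parameters mentioned. It assumes you're connected from a standard client (such as psql) and that the Azure-specific `require_secure_transport` parameter described in the article is visible through `SHOW`; treat it as an illustrative check, not part of the article's steps.

```sql
-- Verify that the current session is protected by TLS:
-- pg_stat_ssl reports the negotiated protocol version and cipher per backend.
SELECT pid, ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();

-- Inspect the transport-security server parameters referenced above.
SHOW require_secure_transport;
SHOW ssl_max_protocol_version;
```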
postgresql | How To Autovacuum Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md | -This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL flexible server](overview.md) and the feature troubleshooting guides that are available to monitor the database bloat, autovacuum blockers and also information around how far the database is from emergency or wraparound situation. +This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL flexible server](overview.md) and the feature troubleshooting guides that are available to monitor database bloat and autovacuum blockers. It also provides information about how far the database is from an emergency or wraparound situation. ## What is autovacuum -Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes. --PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it isn't removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes which result in slow query performance. --PostgreSQL uses a process called autovacuum to automatically clean-up dead tuples. +Autovacuum is a PostgreSQL background process that automatically cleans up dead tuples and updates statistics. It helps maintain database performance by automatically running two key maintenance tasks: + +- VACUUM - Frees up disk space by removing dead tuples. +- ANALYZE - Collects statistics to help the PostgreSQL Optimizer choose the best execution paths for queries. + +To ensure autovacuum works properly, the autovacuum server parameter should always be set to ON. When enabled, PostgreSQL automatically decides when to run VACUUM or ANALYZE on a table, ensuring the database remains efficient and optimized. ## Autovacuum internals -Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost is based on: +Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost is based on: -- `vacuum_cost_page_hit`: Cost of reading a page that is already in shared buffers and doesn't need a disk read. The default value is set to 1.-- `vacuum_cost_page_miss`: Cost of fetching a page that isn't in shared buffers. The default value is set to 10.-- `vacuum_cost_page_dirty`: Cost of writing to a page when dead tuples are found in it. The default value is set to 20.+| Parameter | Description +| -- | -- | +`vacuum_cost_page_hit` | Cost of reading a page that is already in shared buffers and doesn't need a disk read. The default value is set to 1. +`vacuum_cost_page_miss` | Cost of fetching a page that isn't in shared buffers. The default value is set to 10. +`vacuum_cost_page_dirty` | Cost of writing to a page when dead tuples are found in it. The default value is set to 20. 
-The amount of work autovacuum does depends on two parameters: +The amount of work autovacuum performs depends on two parameters: -- `autovacuum_vacuum_cost_limit` is the amount of work autovacuum does in one go.-- `autovacuum_vacuum_cost_delay` number of milliseconds that autovacuum is asleep after it has reached the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter.+| Parameter | Description +| -- | -- | +`autovacuum_vacuum_cost_limit` | The amount of work autovacuum does in one go. +`autovacuum_vacuum_cost_delay` | Number of milliseconds that autovacuum is asleep after it reaches the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter. -In all currently supported versions of Postgres the default for `autovacuum_vacuum_cost_limit` is 200 (actually, it is set to -1 which makes it equals to the value of the regular `vacuum_cost_limit` which, by default, is 200). +In all currently supported versions of Postgres, the default value for `autovacuum_vacuum_cost_limit` is 200 (actually, it's set to -1, which makes it equal to the value of the regular `vacuum_cost_limit`, which, by default, is 200). As for `autovacuum_vacuum_cost_delay`, in Postgres version 11 it defaults to 20 milliseconds, while in Postgres versions 12 and above it defaults to 2 milliseconds. select schemaname,relname,n_dead_tup,n_live_tup,round(n_dead_tup::float/n_live_t The following columns help determine if autovacuum is catching up to table activity: -- **dead_pct**: percentage of dead tuples when compared to live tuples.-- **last_autovacuum**: The date of the last time the table was autovacuumed.-- **last_autoanalyze**: The date of the last time the table was automatically analyzed.+| Column | Description +| -- | -- | +`dead_pct` | Percentage of dead tuples when compared to live tuples. +`last_autovacuum` | The date of the last time the table was autovacuumed. +`last_autoanalyze` | The date of the last time the table was automatically analyzed. ## When does PostgreSQL trigger autovacuum -An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows changes, while *VACUUM* triggers when 20% of the table plus 50 rows changes. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered earlier than *VACUUM*. +An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows change, while *VACUUM* triggers when 20% of the table plus 50 rows change. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered earlier than *VACUUM*. +For PG versions >= 13, *ANALYZE*, by default, triggers when 20% of the table plus 1,000 rows are inserted. 
The exact equations for each action are: -- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold+- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold or + autovacuum_vacuum_insert_scale_factor * tuples + autovacuum_vacuum_insert_threshold (For PG versions >= 13) - **Autovacuum** = autovacuum_vacuum_scale_factor * tuples + autovacuum_vacuum_threshold -For example, analyze triggers after 60 rows change on a table that contains 100 rows, and vacuum triggers when 70 rows change on the table, using the following equations: +For example, for a table that contains 100 rows, the following equations show when analyze and vacuum trigger: +For Updates/deletes: `Autoanalyze = 0.1 * 100 + 50 = 60` `Autovacuum = 0.2 * 100 + 50 = 70` +Analyze triggers after 60 rows are changed on a table, and Vacuum triggers when 70 rows are changed on a table. ++For Inserts: +`Autoanalyze = 0.2 * 100 + 1000 = 1020` ++Analyze triggers after 1,020 rows are inserted on a table. ++Here's the description of the parameters used in the equations: ++| Parameter | Description +| -- | -- | +| `autovacuum_analyze_scale_factor` | Percentage of inserts/updates/deletes which triggers ANALYZE on the table. +| `autovacuum_analyze_threshold` | Specifies the minimum number of tuples inserted/updated/deleted to ANALYZE a table. +| `autovacuum_vacuum_insert_scale_factor` | Percentage of inserts that triggers ANALYZE on the table. +| `autovacuum_vacuum_insert_threshold` | Specifies the minimum number of tuples inserted to ANALYZE a table. +| `autovacuum_vacuum_scale_factor` | Percentage of updates/deletes which triggers VACUUM on the table. + Use the following query to list the tables in a database and identify the tables that qualify for the autovacuum process: ```sql The autovacuum process estimates the cost of every I/O operation, accumulates a By default, `autovacuum_vacuum_cost_limit` is set to –1, meaning autovacuum cost limit is the same value as the parameter `vacuum_cost_limit`, which defaults to 200. `vacuum_cost_limit` is the cost of a manual vacuum. -If `autovacuum_vacuum_cost_limit` is set to `-1` then autovacuum uses the `vacuum_cost_limit` parameter, but if `autovacuum_vacuum_cost_limit` itself is set to greater than `-1` then `autovacuum_vacuum_cost_limit` parameter is considered. +If `autovacuum_vacuum_cost_limit` is set to `-1`, then autovacuum uses the `vacuum_cost_limit` parameter, but if `autovacuum_vacuum_cost_limit` itself is set to a value greater than `-1`, then the `autovacuum_vacuum_cost_limit` parameter is used. In case the autovacuum isn't keeping up, the following parameters might be changed: -| Parameter | Description | -- | -- |-| `autovacuum_vacuum_scale_factor` | Default: `0.2`, range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. | | `autovacuum_vacuum_cost_limit` | Default: `200`. Cost limit might be increased. CPU and I/O utilization on the database should be monitored before and after making changes. | | `autovacuum_vacuum_cost_delay` | **Postgres Version 11** - Default: `20 ms`. The parameter might be decreased to `2-10 ms`.<br />**Postgres Versions 12 and above** - Default: `2 ms`. 
| > [!NOTE] -> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter +> - The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter. +> - `autovacuum_vacuum_scale_factor` is another parameter that can trigger vacuum on a table based on dead tuple accumulation. Default: `0.2`. Recommended range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. ### Autovacuum constantly running -Continuously running autovacuum might affect CPU and IO utilization on the server. The following might be possible reasons: +Continuously running autovacuum might affect CPU and IO utilization on the server. Here are some of the possible reasons: #### `maintenance_work_mem` If `maintenance_work_mem` is low, it might be increased to up to 2 GB on Azure Autovacuum tries to start a worker on each database every `autovacuum_naptime` seconds. -For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs]. +For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of databases]. It's a good idea to increase `autovacuum_naptime` if there are more databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing the `autovacuum_cost_limit` and decreasing the `autovacuum_cost_delay` parameters and increasing the `autovacuum_max_workers` from the default of 3 to 4 or 5. Overly aggressive `maintenance_work_mem` values could periodically cause out ### Autovacuum is too disruptive -If autovacuum is consuming a lot of resources, the following can be done: +If autovacuum is consuming too many resources, you can take the following actions: #### Autovacuum parameters Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_limit`, `autovacuum_max_workers`. Improperly setting autovacuum parameters might lead to scenarios where autovacuum becomes too disruptive. -If autovacuum is too disruptive, consider the following: +If autovacuum is too disruptive, consider the following actions: - Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if set higher than the default of 200.-- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.+- Reduce the number of `autovacuum_max_workers` if set higher than the default of 3. #### Too many autovacuum workers -Increasing the number of autovacuum workers won't necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended. +Increasing the number of autovacuum workers doesn't necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended. -Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation. 
+Increasing the number of autovacuum workers results in more memory consumption and, depending on the value of `maintenance_work_mem`, could cause performance degradation. Each autovacuum worker process only gets (1/autovacuum_max_workers) of the total `autovacuum_cost_limit`, so having a high number of workers causes each one to go slower. If the number of workers is increased, `autovacuum_vacuum_cost_limit` should also be increased and/or `autovacuum_vacuum_cost_delay` should be decreased to make the vacuum process faster. -However, if we have changed table level `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters then the workers running on those tables are exempted from being considered in the balancing algorithm [autovacuum_cost_limit/autovacuum_max_workers]. +However, if the `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters are set at the table level, then the workers running on those tables are exempted from being considered in the balancing algorithm [autovacuum_cost_limit/autovacuum_max_workers]. ### Autovacuum transaction ID (TXID) wraparound protection -When a database runs into transaction ID wraparound protection, an error message like the following can be observed: +When a database runs into transaction ID wraparound protection, an error message like the following can be observed: ``` Database isn't accepting commands to avoid wraparound data loss in database 'xx' Stop the postmaster and vacuum that database in single-user mode. > [!NOTE] > This error message is a long-standing oversight. Usually, you do not need to switch to single-user mode. Instead, you can run the required VACUUM commands and perform tuning for VACUUM to run fast. While you cannot run any data manipulation language (DML), you can still run VACUUM. -The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that couldn't be removed by autovacuum. The reasons for this might be: +The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that aren't removed by autovacuum. The reasons for this issue might be: #### Heavy workload The workload could cause too many dead tuples in a brief period that makes it di
+When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove the blockers manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value greater than 200. However, changes to these parameters do not apply to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes. ### Table-specific requirements -Autovacuum parameters might be set for individual tables. It's especially important for small and big tables. For example, for a small table that contains only 100 rows, autovacuum triggers VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day. This prevents autovacuum from maintaining other tables on which the percentage of changes aren't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios. +Autovacuum parameters might be set for individual tables. It's especially important for small and large tables. For example, for a small table that contains only 100 rows, autovacuum triggers VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day, preventing autovacuum from maintaining other tables on which the percentage of changes aren't as significant. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios. To set autovacuum setting per table, change the server parameters as the following examples: To set autovacuum setting per table, change the server parameters as the follo ### Insert-only workloads -In versions of PostgreSQL prior to 13, autovacuum won't run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there's new data. The disadvantages of this are: +In versions of PostgreSQL <= 13, autovacuum doesn't run on tables with an insert-only workload, as there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze runs for insert-only workloads since there's new data. The disadvantages of this are: - The visibility map of the tables isn't updated, and thus query performance, especially where there are Index Only Scans, starts to suffer over time. - The database can run into transaction ID wraparound protection.-- Hint bits won't be set.+- Hint bits are not set. #### Solutions -##### Postgres versions prior to 13 +##### Postgres versions <= 13 Using the **pg_cron** extension, a cron job can be set up to schedule a periodic vacuum analyze on the table. The frequency of the cron job depends on the workload. For step-by-step guidance using pg_cron, review [Extensions](./concepts-extensio ##### Postgres 13 and higher versions -Autovacuum will run on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and  `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables. 
+Autovacuum runs on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables. ## Troubleshooting guides -Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL flexible server portal it is possible to monitor bloat at database or individual schema level along with identifying potential blockers to autovacuum process. Two troubleshooting guides are available first one is autovacuum monitoring that can be used to monitor bloat at database or individual schema level. The second troubleshooting guide is autovacuum blockers and wraparound which helps to identify potential autovacuum blockers along with information on how far the databases on the server are from wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. How to set up the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md). +Using the troubleshooting guides available in the Azure Database for PostgreSQL flexible server portal, it's possible to monitor bloat at the database or individual schema level and to identify potential blockers to the autovacuum process. Two troubleshooting guides are available. The first, autovacuum monitoring, can be used to monitor bloat at the database or individual schema level. The second, autovacuum blockers and wraparound, helps identify potential autovacuum blockers and provides information on how far the databases on the server are from a wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md). +++## Azure Advisor Recommendations ++Azure Advisor recommendations are a proactive way of identifying whether a server has a high bloat ratio or is approaching a transaction wraparound scenario. You can also set alerts for the recommendations by using [Create Azure Advisor alerts on new recommendations using the Azure portal](../../advisor/advisor-alerts-portal.md). ++The recommendations are: ++- **High Bloat Ratio**: A high bloat ratio can affect server performance in several ways. One significant issue is that the PostgreSQL Engine Optimizer might struggle to select the best execution plan, leading to degraded query performance. Therefore, a recommendation is triggered when the bloat percentage on a server reaches a certain threshold to avoid such performance issues. ++- **Transaction wraparound**: This scenario is one of the most serious issues a server can encounter. Once your server is in this state, it might stop accepting any more transactions, causing the server to become read-only. Hence, a recommendation is triggered when the server crosses the 1 billion transaction threshold. ## Related content |
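To make the per-table tuning and wraparound monitoring described in the autovacuum guidance above concrete, here's a minimal SQL sketch. The table name `orders` and the threshold values are illustrative assumptions rather than values from the article; adjust them to your own workload before use.

```sql
-- Make autovacuum more aggressive on a hypothetical large, update-heavy table.
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.01,     -- vacuum after roughly 1% of rows change
    autovacuum_vacuum_threshold = 1000,
    autovacuum_analyze_scale_factor = 0.005
);

-- Check how far each database is from transaction ID wraparound;
-- age(datfrozenxid) grows toward ~2 billion as a database approaches the limit.
SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(100.0 * age(datfrozenxid) / 2000000000, 2) AS pct_toward_wraparound
FROM pg_database
ORDER BY xid_age DESC;
```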
role-based-access-control | Role Assignments Eligible Activate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-eligible-activate.md | + + Title: Activate eligible Azure role assignments (Preview) - Azure RBAC +description: Learn how to activate eligible Azure role assignments in Azure role-based access control (Azure RBAC) using the Azure portal. ++++ Last updated : 06/27/2024++++# Activate eligible Azure role assignments (Preview) ++> [!IMPORTANT] +> Azure role assignment integration with Privileged Identity Management is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Eligible Azure role assignments provide just-in-time access to a role for a limited period of time. Microsoft Entra Privileged Identity Management (PIM) role activation has been integrated into the Access control (IAM) page in the Azure portal. If you have been made eligible for an Azure role, you can activate that role using the Azure portal. This capability is being |