Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md | The following table provides links to language support reference articles by sup

| Azure AI Language support | Description |
| | |
-|![Content Moderator icon](medi) (retired) | Detect potentially offensive or unwanted content. |
+|![Content Moderator icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Detect potentially offensive or unwanted content. |
|![Document Intelligence icon](~/reusable-content/ce-skilling/azure/medi) | Turn documents into intelligent data-driven solutions. |
-|![Immersive Reader icon](medi) | Help users read and comprehend text. |
+|![Immersive Reader icon](~/reusable-content/ce-skilling/azure/medi) | Help users read and comprehend text. |
|![Language icon](~/reusable-content/ce-skilling/azure/medi) | Build apps with industry-leading natural language understanding capabilities. |
-|![Language Understanding icon](medi) (retired) | Understand natural language in your apps. |
-|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. |
+|![Language Understanding icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Understand natural language in your apps. |
+|![QnA Maker icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Distill information into easy-to-navigate questions and answers. |
|![Speech icon](~/reusable-content/ce-skilling/azure/medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. |
|![Translator icon](~/reusable-content/ce-skilling/azure/medi) | Translate more than 100 in-use, at-risk, and endangered languages and dialects.|
-|![Video Indexer icon](media/service-icons/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
+|![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
|![Vision icon](~/reusable-content/ce-skilling/azure/medi) | Analyze content in images and videos. |

## Language independent services

These Azure AI services are language agnostic and don't have limitations based o

| Azure AI service | Description |
| | |
-|![Anomaly Detector icon](media/service-icons/anomaly-detector.svg)</br>[Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on. |
-|![Custom Vision icon](media/service-icons/custom-vision.svg)</br>[Custom Vision](./custom-vision-service/index.yml) |Customize image recognition for your business. |
+|![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg)</br>[Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on. |
+|![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg)</br>[Custom Vision](./custom-vision-service/index.yml) |Customize image recognition for your business. |
|![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |
-|![Personalizer icon](media/service-icons/personalizer.svg)</br>[Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for users. |
+|![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg)</br>[Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for users. |

## See also |
ai-services | Multi Service Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md | The multi-service resource enables access to the following Azure AI services wit

| Service | Description |
| | |
-| ![Content Moderator icon](./media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
-| ![Custom Vision icon](./media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
+| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
+| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
| ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions. |
| ![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |
| ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities. |
| ![Speech icon](~/reusable-content/ce-skilling/azure/media/ai-services/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation, and speaker recognition. |
-| ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects.. |
+| ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects. |
| ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos. |

::: zone pivot="azportal" |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | Along with using Elasticsearch databases in Azure OpenAI Studio, you can also us

## Deploy to a copilot (preview), Teams app (preview), or web app

-After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.
+After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI Studio.

:::image type="content" source="../media/use-your-data/deploy-model.png" alt-text="A screenshot showing the model deployment button in Azure OpenAI Studio." lightbox="../media/use-your-data/deploy-model.png":::

This gives you multiple options for deploying your solution.

#### [Copilot (preview)](#tab/copilot)

-You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
+You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI Studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).

> [!NOTE]
> Deploying to a copilot in Copilot Studio (preview) is only available in US regions.

A Teams app lets you bring conversational experience to your users in Teams to i

**Prerequisites**

- The latest version of [Visual Studio Code](https://code.visualstudio.com/) installed.
-- The latest version of [Teams toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app.
-- [Node.js](https://nodejs.org/en/download/) (version 16 or 17) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type).
+- The latest version of [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app.
+- [Node.js](https://nodejs.org/en/download/) (version 16 or 18) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type).
- [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) installed.
- Sign in to your [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) (using this link to get a test account: [Developer program](https://developer.microsoft.com/microsoft-365/dev-program)).
- Enable **custom Teams apps** and turn on **custom app uploading** in your account (instructions [here](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant#enable-custom-teams-apps-and-turn-on-custom-app-uploading))

token_output = TokenEstimator.estimate_tokens(input_text)

## Troubleshooting

-To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure OpenAI studio. Here are some of the common errors and warnings:
+To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure OpenAI Studio. Here are some of the common errors and warnings:

### Failed ingestion jobs |
ai-services | Deployment Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md | Azure OpenAI offers three types of deployments. These provide a varied level of

| **Getting started** | [Model deployment](./create-resource.md) | [Model deployment](./create-resource.md) | [Provisioned onboarding](./provisioned-throughput-onboarding.md) |
| **Cost** | [Global deployment pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
| **What you get** | Easy access to all new models with highest default pay-per-call limits.<br><br> Customers with high volume usage may see higher latency variability | Easy access with [SLA on availability](https://azure.microsoft.com/support/legal/sl#estimate-provisioned-throughput-and-cost) |
-| **What you don’t get** | ❌Data residency guarantees | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
+| **What you don’t get** |❌Data processing guarantee<br> <br> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
| **Per-call Latency** | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time. |
| **Sku Name in code** | `GlobalStandard` | `Standard` | `ProvisionedManaged` |
| **Billing model** | Pay-per-token | Pay-per-token | Monthly Commitments |

Standard deployments are optimized for low to medium volume workloads with high

## Global standard

+> [!IMPORTANT]
+> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
+
Global deployments are available in the same Azure OpenAI resources as non-global offers but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard will provide the highest default quota for new models and eliminates the need to load balance across multiple resources. The deployment type is optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md). |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and

|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
||::|::|
-|Enterprise agreement | 10 M | 60 K |
+|Enterprise agreement | 30 M | 60 K |
|Default | 450 K | 2.7 K |

M = million | K = thousand |
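As a rough illustration of how the TPM and RPM figures in the quota table above interact (with the default tier's 450 K TPM and 2.7 K RPM, the two caps meet at roughly 167 average tokens per request), here is a small hypothetical helper — not part of any Azure SDK — that checks which limit a steady workload would hit first:

```python
def binding_limit(requests_per_min: int, avg_tokens_per_request: int,
                  tpm_limit: int, rpm_limit: int) -> str:
    """Return which quota a steady workload would exhaust first."""
    tokens_per_min = requests_per_min * avg_tokens_per_request
    over_rpm = requests_per_min > rpm_limit
    over_tpm = tokens_per_min > tpm_limit
    if over_rpm and over_tpm:
        return "both"
    if over_rpm:
        return "RPM"
    if over_tpm:
        return "TPM"
    return "within quota"

# Default tier: 450 K TPM and 2.7 K RPM (figures from the table above).
print(binding_limit(2_000, 300, 450_000, 2_700))  # 600 K tokens/min exceeds TPM -> "TPM"
print(binding_limit(2_000, 100, 450_000, 2_700))  # -> "within quota"
```

The same check applies unchanged to the enterprise-agreement tier by passing its limits instead.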
ai-services | Rest Api Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/reference/rest-api-resources.md | Select a service from the table to learn how it can help you meet your developme

| Service documentation | Description | Reference documentation |
| : | : | : |
-| ![Azure AI Search icon](../media/service-icons/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
-| ![Azure OpenAI Service icon](../medi)</br>• [fine-tuning](/rest/api/azureopenai/fine-tuning) |
-| ![Bot service icon](../media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
+| ![Azure AI Search icon](~/reusable-content/ce-skilling/azure/media/ai-services/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
+| ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/medi)</br>• [fine-tuning](/rest/api/azureopenai/fine-tuning) |
+| ![Bot service icon](~/reusable-content/ce-skilling/azure/media/ai-services/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
| ![Content Safety icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg) [Content Safety](../content-safety/index.yml) | An AI service that detects unwanted contents | [Content Safety API](https://westus.dev.cognitive.microsoft.com/docs/services/content-safety-service-2023-10-15-preview/operations/TextBlocklists_AddOrUpdateBlocklistItems) |
-| ![Custom Vision icon](../media/service-icons/custom-vision.svg) [Custom Vision](../custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>• [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>• [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
+| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](../custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>• [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>• [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
| ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](../document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions | [Document Intelligence API](/rest/api/aiservices/document-models?view=rest-aiservices-2023-07-31&preserve-view=true) |
| ![Face icon](~/reusable-content/ce-skilling/azure/medi) |
| ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](../language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | [REST API](/rest/api/language/) |
| ![Speech icon](~/reusable-content/ce-skilling/azure/medi) |
| ![Translator icon](~/reusable-content/ce-skilling/azure/medi)|
-| ![Video Indexer icon](../media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
+| ![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
| ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](../computer-vision/index.yml) | Analyze content in images and videos | [Vision API](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2024-02-01/operations/61d65934cd35050c20f73ab6) |

## Deprecated services

| Service documentation | Description | Reference documentation |
| | | |
-| ![Anomaly Detector icon](../media/service-icons/anomaly-detector.svg) [Anomaly Detector](../Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
-| ![Content Moderator icon](../medi) |
-| ![Language Understanding icon](../media/service-icons/luis.svg) [Language understanding (LUIS)](../luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
-| ![Metrics Advisor icon](../media/service-icons/metrics-advisor.svg) [Metrics Advisor](../metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that detects unwanted contents | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
-| ![Personalizer icon](../media/service-icons/personalizer.svg) [Personalizer](../personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
-| ![QnA Maker icon](../media/service-icons/luis.svg) [QnA maker](../qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |
+| ![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg) [Anomaly Detector](../Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
+| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/medi) |
+| ![Language Understanding icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [Language understanding (LUIS)](../luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
+| ![Metrics Advisor icon](~/reusable-content/ce-skilling/azure/media/ai-services/metrics-advisor.svg) [Metrics Advisor](../metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that detects unwanted contents | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
+| ![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg) [Personalizer](../personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
+| ![QnA Maker icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [QnA maker](../qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |

## Next steps |
ai-services | Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md |

-In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), or [fast transcription](./fast-transcription-create.md) of audio streams into text.
+Azure AI Speech service offers advanced speech to text capabilities. This feature supports both real-time and batch transcription, providing versatile solutions for converting audio streams into text.

-> [!NOTE]
-> To compare pricing of [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), and [fast transcription](./fast-transcription-create.md), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

+## Core Features

-For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).

+The speech to text service offers the following core features:
+- [Real-time](#real-time-speech-to-text) transcription: Instant transcription with intermediate results for live audio inputs.
+- [Fast transcription](#fast-transcription-preview): Fastest synchronous output for situations with predictable latency.
+- [Batch transcription](#batch-transcription-api): Efficient processing for large volumes of prerecorded audio.
+- [Custom speech](#custom-speech): Models with enhanced accuracy for specific domains and conditions.

## Real-time speech to text

-With real-time speech to text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as:
-- Transcriptions, captions, or subtitles for live meetings
-- [Diarization](get-started-stt-diarization.md)
-- [Pronunciation assessment](how-to-pronunciation-assessment.md)
-- Contact center agents assist
-- Dictation
-- Voice agents
+Real-time speech to text transcribes audio as it's recognized from a microphone or file. It's ideal for applications requiring immediate transcription, such as:
+- **Transcriptions, captions, or subtitles for live meetings**: Real-time audio transcription for accessibility and record-keeping.
+- **Diarization**: Identifying and distinguishing between different speakers in the audio.
+- **Pronunciation assessment**: Evaluating and providing feedback on pronunciation accuracy.
+- **Call center agents assist**: Providing real-time transcription to assist customer service representatives.
+- **Dictation**: Transcribing spoken words into written text for documentation purposes.
+- **Voice agents**: Enabling interactive voice response systems to transcribe user queries and commands.

-Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+Real-time speech to text can be accessed via the Speech SDK, Speech CLI, and REST API, allowing integration into various applications and workflows.
+Real-time speech to text is available via the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), and REST APIs such as the [Fast transcription API](fast-transcription-create.md).

## Fast transcription (Preview)

-Fast transcription API is used to transcribe audio files with returning results synchronously and much faster than real-time audio. Use fast transcription in the scenarios that you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
+Fast transcription API is used to transcribe audio files with returning results synchronously and faster than real-time audio. Use fast transcription in the scenarios that you need the transcript of an audio recording as quickly as possible with predictable latency, such as:

-- Quick audio or video transcription, subtitles, and edit.
-- Video translation
+- **Quick audio or video transcription and subtitles**: Quickly get a transcription of an entire video or audio file in one go.
+- **Video translation**: Immediately get new subtitles for a video if you have audio in different languages.

> [!NOTE]
> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview and later.

To get started with fast transcription, see [use the fast transcription API (pre

## Batch transcription API

-[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
-- Transcriptions, captions, or subtitles for prerecorded audio
-- Contact center post-call analytics
-- Diarization
+[Batch transcription](batch-transcription.md) is designed for transcribing large amounts of audio stored in files. This method processes audio asynchronously and is suited for:
+- **Transcriptions, captions, or subtitles for prerecorded audio**: Converting stored audio content into text.
+- **Contact center post-call analytics**: Analyzing recorded calls to extract valuable insights.
+- **Diarization**: Differentiating between speakers in recorded audio.

Batch transcription is available via:
-- [Speech to text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
-- The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command:
+- [Speech to text REST API](rest-speech-to-text.md): Facilitates batch processing with the flexibility of RESTful calls. To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+- [Speech CLI](spx-overview.md): Supports both real-time and batch transcription, making it easy to manage transcription tasks. For Speech CLI help with batch transcriptions, run the following command:
+
  ```azurecli-interactive
  spx help batch transcription
  ```

With [custom speech](./custom-speech-overview.md), you can evaluate and improve

Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works well in most speech recognition scenarios.

-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [custom speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
+Custom speech allows you to tailor the speech recognition model to better suit your application's specific needs. This can be particularly useful for:
+- **Improving recognition of domain-specific vocabulary**: Train the model with text data relevant to your field.
+- **Enhancing accuracy for specific audio conditions**: Use audio data with reference transcriptions to refine the model.
+
+For more information about custom speech, see the [custom speech overview](./custom-speech-overview.md) and the [speech to text REST API](rest-speech-to-text.md) documentation.
+
+For details about customization options per language and locale, see the [language and voice support for the Speech service](./language-support.md?tabs=stt) documentation.
+
+## Usage Examples
+
+Here are some practical examples of how you can utilize Azure AI speech to text:

-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).

+| Use case | Scenario | Solution |
+| | | |
+| **Live meeting transcriptions and captions** | A virtual event platform needs to provide real-time captions for webinars. | Integrate real-time speech to text using the Speech SDK to transcribe spoken content into captions displayed live during the event. |
+| **Customer service enhancement** | A call center wants to assist agents by providing real-time transcriptions of customer calls. | Use real-time speech to text via the Speech CLI to transcribe calls, enabling agents to better understand and respond to customer queries. |
+| **Video subtitling** | A video-hosting platform wants to quickly generate a set of subtitles for a video. | Use fast transcription to quickly get a set of subtitles for the entire video. |
+| **Educational tools** | An e-learning platform aims to provide transcriptions for video lectures. | Apply batch transcription through the speech to text REST API to process prerecorded lecture videos, generating text transcripts for students. |
+| **Healthcare documentation** | A healthcare provider needs to document patient consultations. | Use real-time speech to text for dictation, allowing healthcare professionals to speak their notes and have them transcribed instantly. Use a custom model to enhance recognition of specific medical terms. |
+| **Media and entertainment** | A media company wants to create subtitles for a large archive of videos. | Use batch transcription to process the video files in bulk, generating accurate subtitles for each video. |
+| **Market research** | A market research firm needs to analyze customer feedback from audio recordings. | Employ batch transcription to convert audio feedback into text, enabling easier analysis and insights extraction. |

## Responsible AI

An AI system includes not only the technology, but also the people who use it, t

* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)

-## Next steps
+## Related content

- [Get started with speech to text](get-started-speech-to-text.md)
- [Create a batch transcription](batch-transcription-create.md)
+- For detailed pricing information, visit the [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page. |
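The speech-to-text diff above notes that the fast transcription API is reached through the speech to text REST API version `2024-05-15-preview`. As a minimal sketch of what a caller might assemble, here is a helper that builds a candidate request URL; the region host pattern and `/speechtotext/transcriptions:transcribe` path are assumptions based on that version string, not details confirmed by this digest:

```python
def fast_transcription_url(region: str, api_version: str = "2024-05-15-preview") -> str:
    """Build an (assumed) fast transcription endpoint URL for a Speech resource region."""
    return (
        f"https://{region}.api.cognitive.microsoft.com"
        f"/speechtotext/transcriptions:transcribe?api-version={api_version}"
    )

print(fast_transcription_url("eastus"))
```

An actual request would POST the audio file to this URL with an `Ocp-Apim-Subscription-Key` header; consult the fast transcription article linked in the row above for the authoritative contract.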
ai-services | What Are Ai Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md | -Azure AI services help developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box and prebuilt and customizable APIs and models. Example applications include natural language processing for conversations, search, monitoring, translation, speech, vision, and decision-making. --> [!TIP] -> Try Azure AI services including Azure OpenAI, Content Safety, Speech, Vision, and more in [Azure AI Studio](https://ai.azure.com). For more information, see [What is Azure AI Studio?](../ai-studio/what-is-ai-studio.md). --Most [Azure AI services](../ai-services/index.yml) are available through REST APIs and client library SDKs in popular development languages. For more information, see each service's documentation. ## Available Azure AI services Learn how an Azure AI service can help you enhance applications and optimize yo | Service | Description | | | |-| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on. | -| ![Azure AI Search icon](media/service-icons/search.svg) [Azure AI Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps. | +| ![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on. | +| ![Azure AI Search icon](~/reusable-content/ce-skilling/azure/media/ai-services/search.svg) [Azure AI Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps. | | ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/media/ai-services/azure-openai.svg) [Azure OpenAI](./openai/index.yml) | Perform a wide variety of natural language tasks. 
|-| ![Bot service icon](media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels. | -| ![Content Moderator icon](media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. | +| ![Bot service icon](~/reusable-content/ce-skilling/azure/media/ai-services/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels. | +| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. | | ![Content Safety icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg) [Content Safety](./content-safety/index.yml) | An AI service that detects unwanted content. |-| ![Custom Vision icon](media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. | +| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. | | ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions. | | ![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |-| ![Immersive Reader icon](media/service-icons/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text. | +| ![Immersive Reader icon](~/reusable-content/ce-skilling/azure/media/ai-services/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text. 
| | ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities. |-| ![Language Understanding icon](media/service-icons/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps. | -| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | An AI service that detects unwanted contents. | -| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user. | -| ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers. | +| ![Language Understanding icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps. | +| ![Metrics Advisor icon](~/reusable-content/ce-skilling/azure/media/ai-services/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | Monitor metrics and diagnose issues. | +| ![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user. | +| ![QnA Maker icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers. | | ![Speech icon](~/reusable-content/ce-skilling/azure/media/ai-services/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation, and speaker recognition. 
| | ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects. |-| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos. | +| ![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos. | | ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos. | ## Pricing tiers and billing |
ai-studio | Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md | Title: How to deploy Mistral family of models with Azure AI Studio -description: Learn how to deploy Mistral Large with Azure AI Studio. +description: Learn how to deploy Mistral family of models with Azure AI Studio. -* __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing. -* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription. +* __Premium models__: Mistral Large (2402), Mistral Large (2407), and Mistral Small. +* __Open models__: Mistral Nemo, Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. ++All the premium models and Mistral Nemo (an open model) can be deployed as serverless APIs with pay-as-you-go token-based billing. The other open models can be deployed to managed computes in your own Azure subscription. You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection. You can browse the Mistral family of models in the model catalog by filtering on # [Mistral Large](#tab/mistral-large) -Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities. +Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities. 
Two variants of the Mistral Large model are available: ++- Mistral Large (2402) +- Mistral Large (2407) -Additionally, Mistral Large is: +Additionally, some attributes of _Mistral Large (2402)_ include: * __Specialized in RAG.__ Crucial information isn't lost in the middle of long context windows (up to 32K tokens). * __Strong in coding.__ Code generation, review, and comments. Supports all mainstream coding languages. * __Multi-lingual by design.__ Best-in-class performance in French, German, Spanish, Italian, and English. Dozens of other languages are supported. * __Responsible AI compliant.__ Efficient guardrails baked into the model and an extra safety layer with the `safe_mode` option. +And attributes of _Mistral Large (2407)_ include: ++- **Multi-lingual by design.** Supports dozens of languages, including English, French, German, Spanish, and Italian. +- **Proficient in coding.** Trained on more than 80 coding languages, including Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran. +- **Agent-centric.** Possesses agentic capabilities with native function calling and JSON outputting. +- **Advanced in reasoning.** Demonstrates state-of-the-art mathematical and reasoning capabilities. ++ # [Mistral Small](#tab/mistral-small) Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Mistral Small is: -- **A small model optimized for low latency.** Very efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model, it outperforms Mixtral-8x7B and has lower latency. +- **A small model optimized for low latency.** Efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model; it outperforms Mixtral-8x7B and has lower latency. 
- **Specialized in RAG.** Crucial information isn't lost in the middle of long context windows (up to 32K tokens). - **Strong in coding.** Code generation, review, and comments. Supports all mainstream coding languages. - **Multi-lingual by design.** Best-in-class performance in French, German, Spanish, Italian, and English. Dozens of other languages are supported. - **Responsible AI compliant.** Efficient guardrails baked into the model, and an extra safety layer with the `safe_mode` option. ++# [Mistral Nemo](#tab/mistral-nemo) ++Mistral Nemo is a cutting-edge large language model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category. ++Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs. ++Additionally, Mistral Nemo is: ++- **Jointly developed with Nvidia.** This collaboration has resulted in a powerful 12B model that pushes the boundaries of language understanding and generation. +- **Multilingual proficient.** Mistral Nemo is equipped with a tokenizer called Tekken, which is designed for multilingual applications. It supports over 100 languages, such as English, French, German, and Spanish. Tekken is more efficient than the Llama 3 tokenizer in compressing text for approximately 85% of all languages, with significant improvements in Malayalam, Hindi, Arabic, and prevalent European languages. +- **Agent-centric.** Mistral Nemo possesses top-tier agentic capabilities, including native function calling and JSON outputting. +- **Advanced in reasoning.** Mistral Nemo demonstrates state-of-the-art mathematical and reasoning capabilities within its size category. + ## Deploy Mistral family of models as a serverless API Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. 
This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription. -**Mistral Large** and **Mistral Small** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models. +**Mistral Large (2402)**, **Mistral Large (2407)**, **Mistral Small**, and **Mistral Nemo** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models. ### Prerequisites Certain models in the model catalog can be deployed as a serverless API with pay ### Create a new deployment -The following steps demonstrate the deployment of Mistral Large, but you can use the same steps to deploy Mistral Small by replacing the model name. +The following steps demonstrate the deployment of Mistral Large (2402), but you can use the same steps to deploy Mistral Nemo or any of the premium Mistral models by replacing the model name. To create a deployment: 1. Sign in to [Azure AI Studio](https://ai.azure.com). 1. Select **Model catalog** from the left sidebar.-1. Search for and select **Mistral-large** to open its Details page. +1. Search for and select the Mistral Large (2402) model to open its Details page. :::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-directly-from-catalog.png" alt-text="A screenshot showing how to access the model details page by going through the model catalog." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-directly-from-catalog.png"::: To create a deployment: 1. From the left sidebar of your project, select **Components** > **Deployments**. 1. 
Select **+ Create deployment**.- 1. Search for and select **Mistral-large**. to open the Model's Details page. + 1. Search for and select the Mistral Large (2402) model to open the Model's Details page. :::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-starting-from-project.png" alt-text="A screenshot showing how to access the model details page by going through the Deployments page in your project." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-starting-from-project.png"::: To learn about billing for the Mistral AI model deployed as a serverless API wit ### Consume the Mistral family of models as a service -You can consume Mistral family models by using the chat API. +You can consume Mistral models by using the chat API. 1. From your **Project overview** page, go to the left sidebar and select **Components** > **Deployments**. |
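Once a deployment exists, the chat API is an HTTP endpoint that accepts a JSON body of chat messages. As an illustrative sketch of consuming it programmatically, the following Python snippet assembles the headers and request body; the endpoint URL, path, and bearer-style authorization header shown here are placeholder assumptions, so substitute the actual values from your deployment's details page in Azure AI Studio.

```python
import json

# Placeholder values (assumptions): copy the real endpoint URL and key
# from your deployment's details page in Azure AI Studio.
ENDPOINT = "https://<deployment-name>.<region>.models.ai.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

def build_chat_request(messages, temperature=0.7, max_tokens=256):
    """Assemble headers and a JSON body for a chat completions call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    })
    return headers, body

headers, body = build_chat_request(
    [{"role": "user", "content": "What is a serverless API deployment?"}]
)
print(body)
```

The returned headers and body could then be sent with any HTTP client; with pay-as-you-go billing, you are charged per token on requests like this one.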
ai-studio | Model Catalog Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md | Network isolation | [Configure managed networks for Azure AI Studio hubs.](confi Model | Managed compute | Serverless API (pay-as-you-go) --|--|-- Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat -Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large <br> Mistral-small +Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-Nemo Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual JAIS | Not available | jais-30b-chat-Phi3 family models | Phi-3-small-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct +Phi3 family models | Phi-3-mini-4k-Instruct <br> 
Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct Nixtla | Not available | TimeGEN-1 Other models | Available | Not available Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/p Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3 | West US 3 Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3, | Not available Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available-Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available +Mistral Large (2402) <br> Mistral-Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available +Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> 
Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available-Phi-3-mini-4k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Canada Central, Sweden Central, West US 3 | Not available -Phi-3-mini-128k-instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available +Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available +Phi-3-small-8k-instruct <br> Phi-3-small-128k-Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available +Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | [Microsoft Managed 
Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available <!-- docutune:enable --> |
aks | Container Insights Live Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/container-insights-live-data.md | + + Title: View Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time +description: Learn how to view Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time using Container Insights. ++++ Last updated : 11/01/2023+++# View Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time ++In this article, you learn how to use the *live data* feature in Container Insights to view Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time. This feature provides direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods` to help you troubleshoot issues in real time. ++> [!NOTE] +> AKS uses [Kubernetes cluster-level logging architectures][kubernetes-cluster-architecture]. The container logs are located inside `/var/log/containers` on the node. To access a node, see [Connect to Azure Kubernetes Service (AKS) cluster nodes][node-access]. ++## Before you begin ++For help with setting up the *live data* feature, see [Configure live data in Container Insights][configure-live-data]. This feature directly accesses the Kubernetes API. For more information about the authentication model, see [Kubernetes API][kubernetes-api]. ++## View AKS resource live logs ++> [!NOTE] +> To access logs from a private cluster, you need to be on a machine on the same private network as the cluster. ++1. In the [Azure portal][azure-portal], navigate to your AKS cluster. +2. Under **Kubernetes resources**, select **Workloads**. +3. Select the *Deployment*, *Pod*, *Replica Set*, *Stateful Set*, *Job*, or *Cron Job* that you want to view logs for, and then select **Live Logs**. +4. Select the resource you want to view logs for. 
++ The following example shows the logs for a *Pod* resource: ++ :::image type="content" source="./media/container-insights-live-data/live-data-deployment.png" alt-text="Screenshot that shows the deployment of live logs." lightbox="./media/container-insights-live-data/live-data-deployment.png"::: ++## View live logs ++You can view real time log data as it's generated by the container engine on the *Cluster*, *Nodes*, *Controllers*, or *Containers*. ++1. In the [Azure portal][azure-portal], navigate to your AKS cluster. +2. Under **Monitoring**, select **Insights**. +3. Select the *Cluster*, *Nodes*, *Controllers*, or *Containers* tab, and then select the object you want to view logs for. +4. On the resource **Overview**, select **Live Logs**. ++ > [!NOTE] + > To view the data from your Log Analytics workspace, select **View Logs in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query]. ++ After successful authentication, if data can be retrieved, it begins streaming to the Live Logs tab. You can view log data here in a continuous stream. The following image shows the logs for a *Container* resource: ++ :::image type="content" source="./media/container-insights-live-data/container-live-logs.png" alt-text="Screenshot that shows the container Live Logs view data option." lightbox="./media/container-insights-live-data/container-live-logs.png"::: ++## View live events ++You can view real-time event data as it's generated by the container engine on the *Cluster*, *Nodes*, *Controllers*, or *Containers*. ++1. In the [Azure portal][azure-portal], navigate to your AKS cluster. +2. Under **Monitoring**, select **Insights**. +3. Select the *Cluster*, *Nodes*, *Controllers*, or *Containers* tab, and then select the object you want to view events for. +4. On the resource **Overview** page, select **Live Events**. 
++ > [!NOTE] + > To view the data from your Log Analytics workspace, select **View Events in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query]. ++ After successful authentication, if data can be retrieved, it begins streaming to the Live Events tab. The following image shows the events for a *Container* resource: ++ :::image type="content" source="./media/container-insights-live-data/container-live-events.png" alt-text="Screenshot that shows the container Live Events view data option." lightbox="./media/container-insights-live-data/container-live-events.png"::: ++## View live metrics ++You can view real-time metrics data as it's generated by the container engine on the *Nodes* or *Controllers* by selecting a *Pod* resource. ++1. In the [Azure portal][azure-portal], navigate to your AKS cluster. +2. Under **Monitoring**, select **Insights**. +3. Select the *Nodes* or *Controllers* tab, and then select the *Pod* object you want to view metrics for. +4. On the resource **Overview** page, select **Live Metrics**. ++ > [!NOTE] + > To view the data from your Log Analytics workspace, select **View Events in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query]. ++ After successful authentication, if data can be retrieved, it begins streaming to the Live Metrics tab. The following image shows the metrics for a *Pod* resource: ++ :::image type="content" source="./media/container-insights-live-data/pod-live-metrics.png" alt-text="Screenshot that shows the pod Live Metrics view data option." 
lightbox="./media/container-insights-live-data/pod-live-metrics.png"::: ++## Next steps ++For more information about monitoring on AKS, see the following articles: ++* [Azure Kubernetes Service (AKS) diagnose and solve problems][aks-diagnose-solve-problems] +* [Monitor Kubernetes events for troubleshooting][aks-monitor-events] ++<!-- LINKS --> +[kubernetes-cluster-architecture]: https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures +[node-access]: ./node-access.md +[configure-live-data]: ../azure-monitor/containers/container-insights-livedata-setup.md +[kubernetes-api]: https://kubernetes.io/docs/concepts/overview/kubernetes-api/ +[azure-portal]: https://portal.azure.com/ +[log-query]: ../azure-monitor/containers/container-insights-log-query.md +[aks-diagnose-solve-problems]: ./aks-diagnostics.md +[aks-monitor-events]: ./events.md |
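The Live Events tab surfaces the same data as `kubectl get events`. When troubleshooting outside the portal, it can help to filter that output down to Warning events. The following Python snippet is an illustrative sketch only; the sample payload is hypothetical, standing in for real `kubectl get events -o json` output.

```python
import json

# Hypothetical sample of `kubectl get events -o json` output, trimmed to the
# fields this sketch uses.
sample = json.dumps({
    "items": [
        {"type": "Normal", "reason": "Pulled",
         "message": "Container image pulled", "involvedObject": {"name": "web-0"}},
        {"type": "Warning", "reason": "BackOff",
         "message": "Back-off restarting failed container", "involvedObject": {"name": "web-1"}},
    ]
})

def warning_events(events_json):
    """Return (object name, reason, message) for every Warning event."""
    events = json.loads(events_json)
    return [
        (e["involvedObject"]["name"], e["reason"], e["message"])
        for e in events["items"]
        if e["type"] == "Warning"
    ]

for name, reason, message in warning_events(sample):
    print(f"{name}: {reason} - {message}")
```

In practice you would pipe real output into such a filter, for example `kubectl get events -A -o json`, or simply use `--field-selector type=Warning` directly with kubectl.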
aks | Monitor Control Plane Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md | Title: Monitor Azure Kubernetes Service control plane metrics (preview) + Title: Monitor Azure Kubernetes Service (AKS) control plane metrics (preview) description: Learn how to collect metrics from the Azure Kubernetes Service (AKS) control plane and view the telemetry in Azure Monitor. --++ Last updated 01/31/2024 --#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues +#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues. # Monitor Azure Kubernetes Service (AKS) control plane metrics (preview) -The Azure Kubernetes Service (AKS) [control plane](concepts-clusters-workloads.md#control-plane) health is critical for the performance and reliability of the cluster. Control plane metrics (preview) provide more visibility into its availability and performance, allowing you to maximize overall observability and maintain operational excellence. These metrics are fully compatible with Prometheus and Grafana, and can be customized to only store what you consider necessary. With these new metrics, you can collect all metrics from API server, ETCD, Scheduler, Autoscaler, and controller manager. --This article helps you understand this new feature, how to implement it, and how to observe the telemetry collected. +This article shows you how to use the control plane metrics (preview) feature in Azure Kubernetes Service (AKS) to collect metrics from the control plane and view the telemetry in Azure Monitor. The control plane metrics feature is fully compatible with Prometheus and Grafana and provides more visibility into the availability and performance of the control plane components, such as the API server, ETCD, Scheduler, Autoscaler, and controller manager. 
You can use these metrics to maximize overall observability and maintain operational excellence for your AKS cluster. ## Prerequisites and limitations -- Only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview].+- Control plane metrics (preview) only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview]. - [Private link](../azure-monitor/logs/private-link-security.md) isn't supported.-- Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported.-- The cluster must use [managed identity authentication](use-managed-identity.md).+- You can only customize the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps). All other customizations aren't supported. +- The AKS cluster must use [managed identity authentication](use-managed-identity.md). ### Install or update the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](~/reusable-content/ce-skilling/azure/includes/aks/includes/preview/preview-callout.md)] -Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command. +- Install or update the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command. -```azurecli-interactive -az extension add --name aks-preview -``` + ```azurecli-interactive + # Install the aks-preview extension + az extension add --name aks-preview + + # Update the aks-preview extension + az extension update --name aks-preview + ``` -If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command. 
+### Register the `AzureMonitorMetricsControlPlanePreview` feature flag -```azurecli-interactive -az extension update --name aks-preview -``` +1. Register the `AzureMonitorMetricsControlPlanePreview` feature flag using the [`az feature register`][az-feature-register] command. -### Register the 'AzureMonitorMetricsControlPlanePreview' feature flag --Register the `AzureMonitorMetricsControlPlanePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: + ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" + ``` -```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" -``` + It takes a few minutes for the status to show *Registered*. -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: +2. Verify the registration status using the [`az feature show`][az-feature-show] command. -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" -``` + ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" + ``` -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: +3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. 
-```azurecli-interactive -az provider register --namespace "Microsoft.ContainerService" -``` + ```azurecli-interactive + az provider register --namespace "Microsoft.ContainerService" + ``` ## Enable control plane metrics on your AKS cluster -You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on during cluster creation or for an existing cluster. To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for Kubernetes clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster. +You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on when creating a new cluster or updating an existing cluster. -If your cluster already has the Prometheus addon deployed, then you can simply run an `az aks update` to ensure the cluster updates to start collecting control plane metrics. +## Enable control plane metrics on a new AKS cluster -```azurecli -az aks update --name <cluster-name> --resource-group <resource-group> -``` +To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for AKS clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster. ++## Enable control plane metrics on an existing AKS cluster ++- If your cluster already has the Prometheus add-on, update the cluster to ensure it starts collecting control plane metrics using the [`az aks update`][az-aks-update] command. ++ ```azurecli-interactive + az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP + ``` > [!NOTE]-> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed prometheus add-on ensures control plane metrics are collected. 
After enabling metric collection, it can take several minutes for the data to appear in the workspace. +> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed Prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace. -## Querying control plane metrics +## Query control plane metrics -Control plane metrics are stored in an Azure monitor workspace in the cluster's region. They can be queried directly from the workspace or through the Azure Managed Grafana instance connected to the workspace. To find the Azure Monitor workspace associated with the cluster, from the left-hand pane of your selected AKS cluster, navigate to the **Monitoring** section and select **Insights**. On the Container Insights page for the cluster, select **Monitor Settings**. +Control plane metrics are stored in an Azure Monitor workspace in the cluster's region. You can query the metrics directly from the workspace or through the Azure managed Grafana instance connected to the workspace. +View the control plane metrics in the Azure Monitor workspace using the following steps: -If you're using Azure Managed Grafana to visualize the data, you can import the following dashboards. AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time. +1. In the [Azure portal][azure-portal], navigate to your AKS cluster. +2. Under **Monitoring**, select **Insights**. -* [API server][grafana-dashboard-template-api-server] -* [ETCD][grafana-dashboard-template-etcd] + :::image type="content" source="media/monitor-control-plane-metrics/insights-azmon.png" alt-text="Screenshot of Azure Monitor workspace." 
lightbox="media/monitor-control-plane-metrics/insights-azmon.png"::: ++> [!NOTE] +> AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time. If you're using Azure managed Grafana to visualize the data, you can import the following dashboards: +> +> - [API server][grafana-dashboard-template-api-server] +> - [ETCD][grafana-dashboard-template-etcd] ## Customize control plane metrics -By default, AKs includes a pre-configured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. This list can be customized through the [ama-settings-configmap][ama-metrics-settings-configmap]. The list of `minimal-ingestion` profile metrics are available [here][list-of-default-metrics-aks-control-plane]. +AKS includes a preconfigured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. You can customize this list through the [`ama-settings-configmap`][ama-metrics-settings-configmap]. -The following lists the default targets: +The default targets include the following: ```yaml controlplane-apiserver = true controlplane-kube-controller-manager = false controlplane-etcd = true ``` -The various options are similar to Azure Managed Prometheus listed [here][prometheus-metrics-scrape-configuration-minimal]. +All ConfigMaps should be applied to the `kube-system` namespace for any cluster. -All ConfigMaps should be applied to `kube-system` namespace for any cluster. +For more information about `minimal-ingestion` profile metrics, see [Minimal ingestion profile for control plane metrics in managed Prometheus][list-of-default-metrics-aks-control-plane]. -### Ingest only minimal metrics for the default targets +### Ingest only minimal metrics from default targets -This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. 
Only metrics listed later in this article are ingested for each of the default targets, which in this case is `controlplane-apiserver` and `controlplane-etcd`. +When setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`, only the minimal set of metrics are ingested for each of the default targets: `controlplane-apiserver` and `controlplane-etcd`. ### Ingest all metrics from all targets -Perform the following steps to collect all metrics from all targets on the cluster. +Collect all metrics from all targets on the cluster using the following steps: 1. Download the ConfigMap file [ama-metrics-settings-configmap.yaml][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.--1. Set `minimalingestionprofile = false` and verify the targets under `default-scrape-settings-enabled` that you want to scrape, are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`. --1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command. +2. Set `minimalingestionprofile = false`. +3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`. +4. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command. ```bash kubectl apply -f configmap-controlplane.yaml Perform the following steps to collect all metrics from all targets on the clust ### Ingest a few other metrics in addition to minimal metrics -`Minimal ingestion profile` is a setting that helps reduce ingestion volume of metrics, as only metrics used by default dashboards, default recording rules & default alerts are collected. 
Perform the following steps to customize this behavior. +The `minimal ingestion profile` setting helps reduce the ingestion volume of metrics, as it only collects metrics used by default dashboards, default recording rules, and default alerts. To customize this setting, use the following steps: 1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.--1. Set `minimalingestionprofile = true` and verify the targets under `default-scrape-settings-enabled` that you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`. --1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example, +2. Set `minimalingestionprofile = true`. +3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`. +4. Under `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example: ```yaml controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests" ``` -- Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.+5. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command. ```bash kubectl apply -f configmap-controlplane.yaml Perform the following steps to collect all metrics from all targets on the clust ### Ingest only specific metrics from some targets 1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.--1.
Set `minimalingestionprofile = false` and verify the targets under `default-scrape-settings-enabled` that you want to scrape are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`,`controlplane-kube-controller-manager`, and `controlplane-etcd`. --1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example, +2. Set `minimalingestionprofile = false`. +3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`,`controlplane-kube-controller-manager`, and `controlplane-etcd`. +4. Under `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example: ```yaml controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests" ``` -- Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.+5. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command. ```bash kubectl apply -f configmap-controlplane.yaml Perform the following steps to collect all metrics from all targets on the clust ## Troubleshoot control plane metrics issues -Make sure to check that the feature flag `AzureMonitorMetricsControlPlanePreview` is enabled and the `ama-metrics` pods are running. +Make sure the feature flag `AzureMonitorMetricsControlPlanePreview` is enabled and the `ama-metrics` pods are running. > [!NOTE]-> The [troubleshooting methods][prometheus-troubleshooting] for Azure managed service Prometheus won't translate directly here as the components scraping the control plane aren't present in the managed prometheus add-on. 
+> The [troubleshooting methods][prometheus-troubleshooting] for Azure managed service for Prometheus don't directly translate here, as the components scraping the control plane aren't present in the managed Prometheus add-on. -## ConfigMap formatting or errors +### ConfigMap formatting -Make sure to double check the formatting of the ConfigMap, and if the fields are correctly populated with the intended values. Specifically the `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`. +Make sure you're using proper formatting in the ConfigMap and that the fields, specifically `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`, are correctly populated with their intended values. -### Isolate control plane from data plane issue +### Isolate control plane from data plane Start by setting some of the [node related metrics][node-metrics] to `true` and verify the metrics are being forwarded to the workspace. This helps determine if the issue is specific to scraping control plane metrics. ### Events ingested -Once you applied the changes, you can open metrics explorer from the **Azure Monitor overview** page, or from the **Monitoring** section the selected cluster. In the Azure portal, select **Metrics**. Check for an increase or decrease in the number of events ingested per minute. It should help you determine if the specific metric is missing or all metrics are missing. +Once you've applied the changes, you can open metrics explorer from the **Azure Monitor overview** page or from the **Monitoring** section of the selected cluster and check for an increase or decrease in the number of events ingested per minute. It should help you determine if a specific metric is missing or if all metrics are missing.
-### Specific metric is not exposed +### Specific metric isn't exposed -There were cases where the metrics are documented, but not exposed from the target and wasn't forwarded to the Azure Monitor workspace. In this case, it's necessary to verify other metrics are being forwarded to the workspace. +There have been cases where metrics are documented, but aren't exposed from the target and aren't forwarded to the Azure Monitor workspace. In this case, it's necessary to verify other metrics are being forwarded to the workspace. ### No access to the Azure Monitor workspace -When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics are not being collected and forwarded. Make sure that you create a new workspace while enabling the add-on or while creating the cluster. +When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics aren't being collected and forwarded. Make sure that you create a new workspace while enabling the add-on or while creating the cluster. ## Disable control plane metrics on your AKS cluster -You can disable control plane metrics at any time, by either disabling the feature flag, disabling managed Prometheus, or by deleting the AKS cluster. --## Preview flag enabled after Managed Prometheus setup -If the preview flag(`AzureMonitorMetricsControlPlanePreview`) was enabled on an existing Managed Prometheus cluster, it will require forcing an update for the cluster to emit control plane metrics +You can disable control plane metrics at any time by disabling the managed Prometheus add-on and unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag. -You can run an az aks update to ensure the cluster updates to start collecting control plane metrics. +1. 
Remove the metrics add-on that scrapes Prometheus metrics using the [`az aks update`][az-aks-update] command. -```azurecli -az aks update -n <cluster-name> -g <resource-group> -``` --> [!NOTE] -> This action doesn't remove any existing data stored in your Azure Monitor workspace. + ```azurecli-interactive + az aks update --disable-azure-monitor-metrics --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP + ``` -Run the following command to remove the metrics add-on that scrapes Prometheus metrics. +2. Disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [`az feature unregister`][az-feature-unregister] command. -```azurecli-interactive -az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> -``` + ```azurecli-interactive + az feature unregister --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" + ``` -Run the following command to disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [az feature unregister][az-feature-unregister] command. +## FAQ -```azurecli-interactive -az feature unregister "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" -``` +### Can I scrape control plane metrics with self-hosted Prometheus? -## FAQs -* Can these metrics be scraped with self hosted prometheus? - * The control plane metrics currently cannot be scraped with self hosted prometheus. Self hosted prometheus will be able to scrape the single instance depending on the load balancer. These metrics are notaccurate as there are often multiple replicas of the control plane metrics which will only be visible through Managed Prometheus +No, you currently can't scrape control plane metrics with self-hosted Prometheus. Self-hosted Prometheus can only scrape whichever single instance the load balancer routes to.
The metrics aren't reliable, as there are often multiple replicas of the control plane, and the metrics are only visible through managed Prometheus. -* Why is the user agent not available through the control plane metrics? - * [Control plane metrics in Kubernetes](https://kubernetes.io/docs/reference/instrumentation/metrics/) do not have the user agent. The user agent is only available through Control Plane logs available through [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) +### Why is the user agent not available through the control plane metrics? +[Control plane metrics in Kubernetes](https://kubernetes.io/docs/reference/instrumentation/metrics/) don't have the user agent. The user agent is only available through the control plane logs available in the [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md). ## Next steps After evaluating this preview feature, [share your feedback][share-feedback]. We're interested in hearing what you think. -- Learn more about the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane].+To learn more about AKS control plane metrics, see the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane]. <!-- EXTERNAL LINKS --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply After evaluating this preview feature, [share your feedback][share-feedback].
We [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [enable-monitoring-kubernetes-cluster]: ../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana-[prometheus-metrics-scrape-configuration-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md#scenarios [prometheus-troubleshooting]: ../azure-monitor/containers/prometheus-metrics-troubleshoot.md [node-metrics]: ../azure-monitor/containers/prometheus-metrics-scrape-default.md [list-of-default-metrics-aks-control-plane]: control-plane-metrics-default-list.md [az-feature-unregister]: /cli/azure/feature#az-feature-unregister-[release-tracker]: https://releases.aks.azure.com/#tabversion -+[azure-portal]: https://portal.azure.com +[az-aks-update]: /cli/azure/aks#az-aks-update |
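The `default-targets-metrics-keep-list` values shown earlier in this article are `|`-separated regex alternations over metric names. A small sketch of composing one from a metric list (the metric names are the two from the article's example; the composition itself is plain string joining):

```bash
# Build a keep-list regex for controlplane-apiserver from individual metric names.
metrics=(
  apiserver_admission_webhook_admission_duration_seconds
  apiserver_longrunning_requests
)
keep_list=$(IFS='|'; echo "${metrics[*]}")
echo "controlplane-apiserver = \"$keep_list\""
```

The resulting string is what you would paste as the target's value under `default-targets-metrics-keep-list` in the ConfigMap.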
aks | Use Vertical Pod Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-vertical-pod-autoscaler.md | + + Title: Use the Vertical Pod Autoscaler in Azure Kubernetes Service (AKS) +description: Learn how to deploy, upgrade, or disable the Vertical Pod Autoscaler on your Azure Kubernetes Service (AKS) cluster. ++ Last updated : 02/22/2024++++++# Use the Vertical Pod Autoscaler in Azure Kubernetes Service (AKS) ++This article shows you how to use the Vertical Pod Autoscaler (VPA) on your Azure Kubernetes Service (AKS) cluster. The VPA automatically adjusts the CPU and memory requests for your pods to match the usage patterns of your workloads. This feature helps to optimize the performance of your applications and reduce the cost of running your workloads in AKS. ++For more information, see the [Vertical Pod Autoscaler overview](./vertical-pod-autoscaler.md). ++## Before you begin ++* If you have an existing AKS cluster, make sure it's running Kubernetes version 1.24 or higher. +* You need the Azure CLI version 2.52.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* If enabling VPA on an existing cluster, make sure `kubectl` is installed and configured to connect to your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ++ ```azurecli-interactive + az aks get-credentials --name <cluster-name> --resource-group <resource-group-name> + ``` ++## Deploy the Vertical Pod Autoscaler on a new cluster ++* Create a new AKS cluster with the VPA enabled using the [`az aks create`][az-aks-create] command with the `--enable-vpa` flag. ++ ```azurecli-interactive + az aks create --name <cluster-name> --resource-group <resource-group-name> --enable-vpa --generate-ssh-keys + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. 
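After enabling VPA, you can confirm the cluster profile reflects it. A hedged sketch — the `workloadAutoScalerProfile.verticalPodAutoscaler.enabled` query path is an assumption to verify against your own `az aks show` output, and `QUERY_CMD` defaults to a stub so the snippet runs standalone:

```bash
# Check whether VPA is enabled on the cluster.
# In practice, set QUERY_CMD to (field path is an assumption; verify it):
#   az aks show --name <cluster-name> --resource-group <resource-group-name> \
#     --query workloadAutoScalerProfile.verticalPodAutoscaler.enabled --output tsv
QUERY_CMD=${QUERY_CMD:-'echo true'}   # stub result for illustration
enabled=$(eval "$QUERY_CMD")
if [ "$enabled" = "true" ]; then
  echo "VPA is enabled"
else
  echo "VPA is not enabled"
fi
```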
++## Update an existing cluster to use the Vertical Pod Autoscaler ++* Update an existing cluster to use the VPA using the [`az aks update`][az-aks-update] command with the `--enable-vpa` flag. ++ ```azurecli-interactive + az aks update --name <cluster-name> --resource-group <resource-group-name> --enable-vpa + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. ++## Disable the Vertical Pod Autoscaler on an existing cluster ++* Disable the VPA on an existing cluster using the [`az aks update`][az-aks-update] command with the `--disable-vpa` flag. ++ ```azurecli-interactive + az aks update --name <cluster-name> --resource-group <resource-group-name> --disable-vpa + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. ++## Test Vertical Pod Autoscaler installation ++In the following example, we create a deployment with two pods, each running a single container that requests 100 millicore and tries to utilize slightly above 500 millicores. We also create a VPA config pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, updates the pods to request 500 millicores. ++1. 
Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository: ++ ```yml + apiVersion: "autoscaling.k8s.io/v1" + kind: VerticalPodAutoscaler + metadata: + name: hamster-vpa + spec: + targetRef: + apiVersion: "apps/v1" + kind: Deployment + name: hamster + resourcePolicy: + containerPolicies: + - containerName: '*' + minAllowed: + cpu: 100m + memory: 50Mi + maxAllowed: + cpu: 1 + memory: 500Mi + controlledResources: ["cpu", "memory"] + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: hamster + spec: + selector: + matchLabels: + app: hamster + replicas: 2 + template: + metadata: + labels: + app: hamster + spec: + securityContext: + runAsNonRoot: true + runAsUser: 65534 + containers: + - name: hamster + image: registry.k8s.io/ubuntu-slim:0.1 + resources: + requests: + cpu: 100m + memory: 50Mi + command: ["/bin/sh"] + args: + - "-c" + - "while true; do timeout 0.5s yes >; sleep 0.5s; done" + ``` ++2. Deploy the `hamster.yaml` Vertical Pod Autoscaler example using the [`kubectl apply`][kubectl-apply] command. ++ ```bash + kubectl apply -f hamster.yaml + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. ++3. View the running pods using the [`kubectl get`][kubectl-get] command. ++ ```bash + kubectl get pods -l app=hamster + ``` ++ Your output should look similar to the following example output: ++ ```output + hamster-78f9dcdd4c-hf7gk 1/1 Running 0 24s + hamster-78f9dcdd4c-j9mc7 1/1 Running 0 24s + ``` ++4. View the CPU and Memory reservations on one of the pods using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of the pod IDs returned in your output from the previous step. 
++ ```bash + kubectl describe pod hamster-<example-pod> + ``` ++ Your output should look similar to the following example output: ++ ```output + hamster: + Container ID: containerd:// + Image: k8s.gcr.io/ubuntu-slim:0.1 + Image ID: sha256: + Port: <none> + Host Port: <none> + Command: + /bin/sh + Args: + -c + while true; do timeout 0.5s yes >; sleep 0.5s; done + State: Running + Started: Wed, 28 Sep 2022 15:06:14 -0400 + Ready: True + Restart Count: 0 + Requests: + cpu: 100m + memory: 50Mi + Environment: <none> + ``` ++ The pod has 100 millicpu and 50 mebibytes of Memory reserved in this example. For this sample application, the pod needs less than 100 millicpu to run, so there's no CPU capacity available. The pod also reserves less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and Memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values. ++5. Monitor the pods using the [`kubectl get`][kubectl-get] command. ++ ```bash + kubectl get --watch pods -l app=hamster + ``` ++6. When the new hamster pod starts, you can view the updated CPU and Memory reservations using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of the pod IDs returned in your output from the previous step. ++ ```bash + kubectl describe pod hamster-<example-pod> + ``` ++ Your output should look similar to the following example output: ++ ```output + State: Running + Started: Wed, 28 Sep 2022 15:09:51 -0400 + Ready: True + Restart Count: 0 + Requests: + cpu: 587m + memory: 262144k + Environment: <none> + ``` ++ In the previous output, you can see that the CPU reservation increased to 587 millicpu, which is over five times the original value. The Memory increased to 262,144 Kilobytes, which is around 250 mebibytes, or five times the original value.
This pod was under-resourced, and the Vertical Pod Autoscaler corrected the estimate with a much more appropriate value. ++7. View updated recommendations from VPA using the [`kubectl describe`][kubectl-describe] command to describe the hamster-vpa resource information. ++ ```bash + kubectl describe vpa/hamster-vpa + ``` ++ Your output should look similar to the following example output: ++ ```output + State: Running + Started: Wed, 28 Sep 2022 15:09:51 -0400 + Ready: True + Restart Count: 0 + Requests: + cpu: 587m + memory: 262144k + Environment: <none> + ``` ++## Set Vertical Pod Autoscaler requests ++The `VerticalPodAutoscaler` object automatically sets resource requests on pods with an `updateMode` of `Auto`. You can set a different value depending on your requirements and testing. In this example, we create and test a deployment manifest with two pods, each running a container that requests 100 milliCPU and 50 MiB of Memory, and sets the `updateMode` to `Recreate`. ++1. Create a file named `azure-autodeploy.yaml` and copy in the following manifest: ++ ```yml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: vpa-auto-deployment + spec: + replicas: 2 + selector: + matchLabels: + app: vpa-auto-deployment + template: + metadata: + labels: + app: vpa-auto-deployment + spec: + containers: + - name: mycontainer + image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine + resources: + requests: + cpu: 100m + memory: 50Mi + command: ["/bin/sh"] + args: ["-c", "while true; do timeout 0.5s yes >; sleep 0.5s; done"] + ``` ++2. Create the pod using the [`kubectl create`][kubectl-create] command. ++ ```bash + kubectl create -f azure-autodeploy.yaml + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. ++3. View the running pods using the [`kubectl get`][kubectl-get] command. 
++ ```bash + kubectl get pods + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME READY STATUS RESTARTS AGE + vpa-auto-deployment-54465fb978-kchc5 1/1 Running 0 52s + vpa-auto-deployment-54465fb978-nhtmj 1/1 Running 0 52s + ``` ++4. Create a file named `azure-vpa-auto.yaml` and copy in the following manifest: ++ ```yml + apiVersion: autoscaling.k8s.io/v1 + kind: VerticalPodAutoscaler + metadata: + name: vpa-auto + spec: + targetRef: + apiVersion: "apps/v1" + kind: Deployment + name: vpa-auto-deployment + updatePolicy: + updateMode: "Recreate" + ``` ++ The `targetRef.name` value specifies that any pod controlled by a deployment named `vpa-auto-deployment` belongs to `VerticalPodAutoscaler`. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and Memory requests, and then create a new pod. ++5. Apply the manifest to the cluster using the [`kubectl apply`][kubectl-apply] command. ++ ```bash + kubectl create -f azure-vpa-auto.yaml + ``` ++6. Wait a few minutes and then view the running pods using the [`kubectl get`][kubectl-get] command. ++ ```bash + kubectl get pods + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME READY STATUS RESTARTS AGE + vpa-auto-deployment-54465fb978-qbhc4 1/1 Running 0 2m49s + vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s + ``` ++7. Get detailed information about one of your running pods using the [`kubectl get`][kubectl-get] command. Make sure you replace `<pod-name>` with the name of one of your pods from your previous output. 
++ ```bash + kubectl get pod <pod-name> --output yaml + ``` ++ Your output should look similar to the following example output, which shows that VPA controller increased the Memory request to 262144k and the CPU request to 25 milliCPU: ++ ```output + apiVersion: v1 + kind: Pod + metadata: + annotations: + vpaObservedContainers: mycontainer + vpaUpdates: 'Pod resources updated by vpa-auto: container 0: cpu request, memory + request' + creationTimestamp: "2022-09-29T16:44:37Z" + generateName: vpa-auto-deployment-54465fb978- + labels: + app: vpa-auto-deployment ++ spec: + containers: + - args: + - -c + - while true; do timeout 0.5s yes >; sleep 0.5s; done + command: + - /bin/sh + image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine + imagePullPolicy: IfNotPresent + name: mycontainer + resources: + requests: + cpu: 25m + memory: 262144k + ``` ++8. Get detailed information about the Vertical Pod Autoscaler and its recommendations for CPU and Memory using the [`kubectl get`][kubectl-get] command. ++ ```bash + kubectl get vpa vpa-auto --output yaml + ``` ++ Your output should look similar to the following example output: ++ ```output + recommendation: + containerRecommendations: + - containerName: mycontainer + lowerBound: + cpu: 25m + memory: 262144k + target: + cpu: 25m + memory: 262144k + uncappedTarget: + cpu: 25m + memory: 262144k + upperBound: + cpu: 230m + memory: 262144k + ``` ++ In this example, the results in the `target` attribute specify that it doesn't need to change the CPU or the Memory target for the container to run optimally. However, results can vary depending on the application and its resource utilization. ++ The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute. 
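The eviction rule described above — replace a pod whose request falls outside `[lowerBound, upperBound]` and recreate it at `target` — reduces to two comparisons. A sketch in shell arithmetic, using the millicore values from the example recommendation above:

```bash
# Would the vpa-updater replace this pod? Evict when the current request is
# below lowerBound or above upperBound, then recreate at the target value.
request_m=100   # current CPU request (millicores)
lower_m=25      # lowerBound from the VPA recommendation
upper_m=230     # upperBound from the VPA recommendation
target_m=25     # target from the VPA recommendation
if [ "$request_m" -lt "$lower_m" ] || [ "$request_m" -gt "$upper_m" ]; then
  echo "evict and recreate with cpu=${target_m}m"
else
  echo "keep pod: request within bounds"
fi
```

With these values the request (100m) sits inside the 25m-230m band, so the pod is left alone.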
## Extra Recommender for Vertical Pod Autoscaler

The Recommender provides recommendations for resource usage based on real-time resource consumption. AKS deploys a Recommender when a cluster enables VPA. You can deploy a customized Recommender or an extra Recommender with the same image as the default one. The benefit of a customized Recommender is that you can customize your recommendation logic. With an extra Recommender, you can partition VPAs to use different Recommenders.

In the following example, we create an extra Recommender, apply it to an existing AKS cluster, and configure the VPA object to use the extra Recommender.

1. Create a file named `extra_recommender.yaml` and copy in the following manifest:

    ```yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: extra-recommender
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: extra-recommender
      template:
        metadata:
          labels:
            app: extra-recommender
        spec:
          serviceAccountName: vpa-recommender
          securityContext:
            runAsNonRoot: true
            runAsUser: 65534
          containers:
          - name: recommender
            image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0
            imagePullPolicy: Always
            args:
            - --recommender-name=extra-recommender
            resources:
              limits:
                cpu: 200m
                memory: 1000Mi
              requests:
                cpu: 50m
                memory: 500Mi
            ports:
            - name: prometheus
              containerPort: 8942
    ```

2. Deploy the `extra-recommender.yaml` Vertical Pod Autoscaler example using the [`kubectl apply`][kubectl-apply] command.

    ```bash
    kubectl apply -f extra-recommender.yaml
    ```

    The command completes and creates the extra Recommender deployment in the `kube-system` namespace.

3.
Create a file named `hamster-extra-recommender.yaml` and copy in the following manifest:

    ```yml
    apiVersion: "autoscaling.k8s.io/v1"
    kind: VerticalPodAutoscaler
    metadata:
      name: hamster-vpa
    spec:
      recommenders:
      - name: 'extra-recommender'
      targetRef:
        apiVersion: "apps/v1"
        kind: Deployment
        name: hamster
      updatePolicy:
        updateMode: "Auto"
      resourcePolicy:
        containerPolicies:
        - containerName: '*'
          minAllowed:
            cpu: 100m
            memory: 50Mi
          maxAllowed:
            cpu: 1
            memory: 500Mi
          controlledResources: ["cpu", "memory"]
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hamster
    spec:
      selector:
        matchLabels:
          app: hamster
      replicas: 2
      template:
        metadata:
          labels:
            app: hamster
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 65534 # nobody
          containers:
          - name: hamster
            image: k8s.gcr.io/ubuntu-slim:0.1
            resources:
              requests:
                cpu: 100m
                memory: 50Mi
            command: ["/bin/sh"]
            args:
            - "-c"
            - "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"
    ```

    If `memory` isn't specified in `controlledResources`, the Recommender doesn't respond to OOM events, so this example sets both CPU and Memory in `controlledResources`. `controlledValues` allows you to choose whether to update the container's resource requests using the `RequestsOnly` option, or both resource requests and limits using the `RequestsAndLimits` option. The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, requests are computed based on actual usage, and limits are calculated based on the current pod's request-to-limit ratio.

    For example, if you start with a pod that requests 2 CPUs and has a limit of 4 CPUs, VPA always sets the limit to be twice as much as the requests. The same principle applies to Memory. When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits.
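    If you want VPA to adjust only the requests and leave the limits as deployed, you can set `controlledValues` explicitly in the container policy. This is a sketch, assuming the `hamster-vpa` object above; the `RequestsOnly` setting shown is an illustrative alternative, not part of this article's example:

    ```yml
    resourcePolicy:
      containerPolicies:
      - containerName: '*'
        # Only the containers' resource requests are updated;
        # the limits defined in the deployment stay untouched.
        controlledValues: RequestsOnly
        controlledResources: ["cpu", "memory"]
    ```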
    You can simplify the VPA object by using `Auto` mode and computing recommendations for both CPU and Memory.

4. Deploy the `hamster-extra-recommender.yaml` example using the [`kubectl apply`][kubectl-apply] command.

    ```bash
    kubectl apply -f hamster-extra-recommender.yaml
    ```

5. Monitor your pods using the [`kubectl get`][kubectl-get] command.

    ```bash
    kubectl get --watch pods -l app=hamster
    ```

6. When the new hamster pod starts, view the updated CPU and Memory reservations using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of your pod IDs.

    ```bash
    kubectl describe pod hamster-<example-pod>
    ```

    Your output should look similar to the following example output:

    ```output
    State:          Running
      Started:      Wed, 28 Sep 2022 15:09:51 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:          587m
      memory:       262144k
    Environment:    <none>
    ```

7. View updated recommendations from VPA using the [`kubectl describe`][kubectl-describe] command.

    ```bash
    kubectl describe vpa/hamster-vpa
    ```

    Your output should look similar to the following example output:

    ```output
    State:          Running
      Started:      Wed, 28 Sep 2022 15:09:51 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:          587m
      memory:       262144k
    Environment:    <none>
    Spec:
      recommenders:
        Name: extra-recommender
    ```

## Troubleshoot the Vertical Pod Autoscaler

If you encounter issues with the Vertical Pod Autoscaler, you can troubleshoot the system components and custom resource definition to identify the problem.

1. Verify that all system components are running using the following command:

    ```bash
    kubectl --namespace=kube-system get pods | grep vpa
    ```

    Your output should list *three pods*: recommender, updater, and admission-controller, all with a status of `Running`.

2.
For each of the pods returned in your previous output, verify that the system components are logging any errors using the following command:

    ```bash
    kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
    ```

3. Verify that the custom resource definition was created using the following command:

    ```bash
    kubectl get customresourcedefinition | grep verticalpodautoscalers
    ```

## Next steps

To learn more about the VPA object, see the [Vertical Pod Autoscaler API reference](./vertical-pod-autoscaler-api-reference.md).

<!-- EXTERNAL LINKS -->
[kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe

<!-- INTERNAL LINKS -->
[install-azure-cli]: /cli/azure/install-azure-cli
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials |
aks | Vertical Pod Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md | Title: Vertical pod autoscaling in Azure Kubernetes Service (AKS)
description: Learn about vertical pod autoscaling in Azure Kubernetes Service (AKS) using the Vertical Pod Autoscaler (VPA).
Last updated 09/28/2023

# Vertical pod autoscaling in Azure Kubernetes Service (AKS)

This article provides an overview of using the Vertical Pod Autoscaler (VPA) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version.

When configured, the VPA automatically sets resource requests and limits on containers per workload based on past usage. The VPA frees up CPU and Memory for other pods and helps ensure effective utilization of your AKS clusters. The Vertical Pod Autoscaler provides recommendations for resource usage over time.
To manage sudden increases in resource usage, use the [Horizontal Pod Autoscaler][horizontal-pod-autoscaling], which scales the number of pod replicas as needed.

## Benefits

The Vertical Pod Autoscaler offers the following benefits:

* Analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down based on resource use over time.
* A pod with a scaling mode set to *auto* or *recreate* is evicted if it needs to change its resource requests.
* You can set CPU and memory constraints for individual containers by specifying a resource policy.
* Ensures nodes have correct resources for pod scheduling.
* Offers configurable logging of any adjustments made to processor or memory resources.
* Improves cluster resource utilization and frees up CPU and memory for other pods.

## Limitations and considerations

Consider the following limitations and considerations when using the Vertical Pod Autoscaler:
* VPA supports a maximum of 1,000 pods associated with `VerticalPodAutoscaler` objects per cluster.
* VPA might recommend more resources than available in the cluster, which prevents the pod from being assigned to a node and run due to insufficient resources. You can overcome this limitation by setting the *LimitRange* to the maximum available resources per namespace, which ensures pods don't ask for more resources than specified.
You can also set maximum allowed resource recommendations per pod in a `VerticalPodAutoscaler` object. The VPA can't completely overcome an insufficient node resource issue. The limit range is fixed, but the node resource usage changes dynamically.
* We don't recommend using VPA with the [Horizontal Pod Autoscaler (HPA)][horizontal-pod-autoscaler-overview], which scales based on the same CPU and memory usage metrics.
* The VPA Recommender only stores up to *eight days* of historical data.
* VPA doesn't support JVM-based workloads due to limited visibility into actual memory usage of the workload.
* VPA doesn't support running your own implementation of VPA alongside it. Having an extra or customized Recommender is supported.
* AKS Windows containers aren't supported.

## VPA overview

The VPA object consists of three components:
* **Recommender**: The Recommender monitors current and past resource consumption, including metric history, Out of Memory (OOM) events, and VPA deployment specs, and uses the information it gathers to provide recommended values for container CPU and Memory requests/limits.
* **Updater**: The Updater monitors managed pods to ensure that their resource requests are set correctly. If not, it removes those pods so that their controllers can recreate them with the updated requests.
* **VPA Admission Controller**: The VPA Admission Controller sets the correct resource requests on new pods either created or recreated by their controller based on the Updater's activity.

### VPA admission controller

The VPA Admission Controller is a binary that registers itself as a *Mutating Admission Webhook*.
When a new pod is created, the VPA Admission Controller gets a request from the API server and evaluates if there's a matching VPA configuration, or finds a corresponding one and uses the current recommendation to set resource requests in the pod.

A standalone job, `overlay-vpa-cert-webhook-check`, runs outside of the VPA Admission Controller. The `overlay-vpa-cert-webhook-check` job creates and renews the certificates and registers the VPA Admission Controller as a `MutatingWebhookConfiguration`.

For high availability, AKS supports two admission controller replicas.

### VPA object operation modes
A Vertical Pod Autoscaler resource is inserted for each controller that you want to have automatically computed resource requirements, most commonly a *deployment*.

There are four modes in which the VPA operates:

* `Auto`: VPA assigns resource requests during pod creation and updates existing pods using the preferred update mechanism. `Auto`, which is equivalent to `Recreate`, is the default mode. Once restart-free, or *in-place*, updates of pod requests are available, `Auto` mode can use them as its preferred update mechanism. With the `Auto` mode, VPA evicts a pod if it needs to change its resource requests. It might cause the pods to be restarted all at once, which can cause application inconsistencies. You can limit restarts and maintain consistency in this situation using a [PodDisruptionBudget][pod-disruption-budget].
* `Recreate`: VPA assigns resource requests during pod creation and updates existing pods by evicting them when the requested resources differ significantly from the new recommendations (respecting the PodDisruptionBudget, if defined). You should only use this mode if you need to ensure that the pods are restarted whenever the resource request changes. Otherwise, we recommend using `Auto` mode, which takes advantage of restart-free updates once available.
* `Initial`: VPA only assigns resource requests during pod creation. It doesn't update existing pods. This mode is useful for testing and understanding the VPA behavior without affecting the running pods.
* `Off`: VPA doesn't automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.

## Deployment pattern for application development

If you're unfamiliar with VPA, we recommend the following deployment pattern during application development to identify its unique resource utilization characteristics, test VPA to verify it's functioning properly, and test alongside other Kubernetes components to optimize resource utilization of the cluster:

1. Set `UpdateMode = "Off"` in your production cluster and run VPA in recommendation mode so you can test and gain familiarity with VPA. `UpdateMode = "Off"` can avoid introducing a misconfiguration that can cause an outage.
2. Establish observability first by collecting actual resource utilization telemetry over a given period of time, which helps you understand the behavior and any signs of issues from container and pod resources influenced by the workloads running on them.
3. Get familiar with the monitoring data to understand the performance characteristics.
Based on this insight, set the desired requests/limits accordingly and apply them in the next deployment or upgrade.
4. Set the `updateMode` value to `Auto`, `Recreate`, or `Initial` depending on your requirements.

## Deploy, upgrade, or disable VPA on a cluster

In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on your cluster.

1. To enable VPA on a new cluster, use the `--enable-vpa` parameter with the [az aks create][az-aks-create] command.

    ```azurecli-interactive
    az aks create \
      --name myAKSCluster \
      --resource-group myResourceGroup \
      --enable-vpa \
      --generate-ssh-keys
    ```

    After a few minutes, the command completes and returns JSON-formatted information about the cluster.

2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` parameter with the [az aks update][az-aks-update] command.

    ```azurecli-interactive
    az aks update --name myAKSCluster --resource-group myResourceGroup --enable-vpa
    ```

    After a few minutes, the command completes and returns JSON-formatted information about the cluster.

3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` parameter with the [az aks update][az-aks-update] command.

    ```azurecli-interactive
    az aks update --name myAKSCluster --resource-group myResourceGroup --disable-vpa
    ```

    After a few minutes, the command completes and returns JSON-formatted information about the cluster.

4. To verify that the Vertical Pod Autoscaler pods have been created successfully, use the [kubectl get][kubectl-get] command.

    ```bash
    kubectl get pods -n kube-system
    ```

    The output of the command includes the following results specific to the VPA pods. The pods should show a *Running* status.
    ```output
    NAME                                        READY   STATUS    RESTARTS   AGE
    vpa-admission-controller-7867874bc5-vjfxk   1/1     Running   0          41m
    vpa-recommender-5fd94767fb-ggjr2            1/1     Running   0          41m
    vpa-updater-56f9bfc96f-jgq2g                1/1     Running   0          41m
    ```

## Test your Vertical Pod Autoscaler installation

The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. A VPA configuration is also created, pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, updates them with a higher CPU request.

1. Create a file named `hamster.yaml` and copy in the manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository.

1. Deploy the `hamster.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:

    ```bash
    kubectl apply -f hamster.yaml
    ```

    The command completes and creates the hamster deployment and its VPA object.

1. Run the following [kubectl get][kubectl-get] command to get the pods from the hamster example application:

    ```bash
    kubectl get pods -l app=hamster
    ```

    The example output resembles the following:

    ```output
    hamster-78f9dcdd4c-hf7gk   1/1     Running   0          24s
    hamster-78f9dcdd4c-j9mc7   1/1     Running   0          24s
    ```

1. Use the [kubectl describe][kubectl-describe] command on one of the pods to view its CPU and memory reservation. Replace `exampleID` with one of the pod IDs returned in your output from the previous step.
    ```bash
    kubectl describe pod hamster-exampleID
    ```

    The example output is a snippet of the information about the cluster:

    ```output
    hamster:
        Container ID:  containerd://
        Image:         k8s.gcr.io/ubuntu-slim:0.1
        Image ID:      sha256:
        Port:          <none>
        Host Port:     <none>
        Command:
          /bin/sh
        Args:
          -c
          while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
        State:          Running
          Started:      Wed, 28 Sep 2022 15:06:14 -0400
        Ready:          True
        Restart Count:  0
        Requests:
          cpu:        100m
          memory:     50Mi
        Environment:  <none>
    ```

    The pod has 100 milliCPU and 50 mebibytes of memory reserved in this example. For this sample application, the pod needs more than 100 milliCPU to run, so there isn't enough CPU capacity available. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.

1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.

    ```bash
    kubectl get --watch pods -l app=hamster
    ```

1. When a new hamster pod is started, describe the pod using the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.

    ```bash
    kubectl describe pod hamster-<exampleID>
    ```

    The example output is a snippet of the information describing the pod:

    ```output
    State:          Running
      Started:      Wed, 28 Sep 2022 15:09:51 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:          587m
      memory:       262144k
    Environment:    <none>
    ```

    In the previous output, you can see that the CPU reservation increased to 587 milliCPU, which is over five times the original value. The memory increased to 262,144 kilobytes, which is around 250 mebibytes, or five times the original value.
This pod was under-resourced, and the Vertical Pod Autoscaler corrected the estimate with a much more appropriate value.

1. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.

    ```bash
    kubectl describe vpa/hamster-vpa
    ```

    The example output is a snippet of the information about the resource utilization:

    ```output
    State:          Running
      Started:      Wed, 28 Sep 2022 15:09:51 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:          587m
      memory:       262144k
    Environment:    <none>
    ```

## Set Pod Autoscaler requests

Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on pods when `updateMode` is set to `Auto`. You can set a different value depending on your requirements and testing. In this example, `updateMode` is set to `Recreate`.

1. Enable VPA for your cluster by running the following command. Replace the cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.

    ```azurecli-interactive
    az aks update --name myAKSCluster --resource-group myResourceGroup --enable-vpa
    ```

2. Create a file named `azure-autodeploy.yaml` and copy in the following manifest:

    ```yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vpa-auto-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: vpa-auto-deployment
      template:
        metadata:
          labels:
            app: vpa-auto-deployment
        spec:
          containers:
          - name: mycontainer
            image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
            resources:
              requests:
                cpu: 100m
                memory: 50Mi
            command: ["/bin/sh"]
            args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
    ```

    This manifest describes a deployment that has two pods. Each pod has one container that requests 100 milliCPU and 50 MiB of memory.

3.
Create the pod with the [kubectl create][kubectl-create] command, as shown in the following example:

    ```bash
    kubectl create -f azure-autodeploy.yaml
    ```

    After a few minutes, the command completes and the deployment's pods are running.

4. Run the following [kubectl get][kubectl-get] command to get the pods:

    ```bash
    kubectl get pods
    ```

    The output resembles the following example showing the name and status of the pods:

    ```output
    NAME                                   READY   STATUS    RESTARTS   AGE
    vpa-auto-deployment-54465fb978-kchc5   1/1     Running   0          52s
    vpa-auto-deployment-54465fb978-nhtmj   1/1     Running   0          52s
    ```

5. Create a file named `azure-vpa-auto.yaml` and copy in the following manifest that describes a `VerticalPodAutoscaler`:

    ```yml
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: vpa-auto
    spec:
      targetRef:
        apiVersion: "apps/v1"
        kind: Deployment
        name: vpa-auto-deployment
      updatePolicy:
        updateMode: "Recreate"
    ```

    The `targetRef.name` value specifies that any pod that's controlled by a deployment named `vpa-auto-deployment` belongs to `VerticalPodAutoscaler`. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and memory requests, and then create a new pod.

6. Apply the manifest to the cluster using the [kubectl apply][kubectl-apply] command:

    ```bash
    kubectl apply -f azure-vpa-auto.yaml
    ```

7. Wait a few minutes, and view the running pods again by running the following [kubectl get][kubectl-get] command:

    ```bash
    kubectl get pods
    ```

    The output resembles the following example showing that the pod names have changed and the status of the pods:

    ```output
    NAME                                   READY   STATUS    RESTARTS   AGE
    vpa-auto-deployment-54465fb978-qbhc4   1/1     Running   0          2m49s
    vpa-auto-deployment-54465fb978-vbj68   1/1     Running   0          109s
    ```

8.
Get detailed information about one of your running pods by using the [kubectl get][kubectl-get] command. Replace `podName` with the name of one of your pods that you retrieved in the previous step.

    ```bash
    kubectl get pod podName --output yaml
    ```

    The output resembles the following example, showing that the Vertical Pod Autoscaler controller has increased the memory request to 262144k and the CPU request to 25 milliCPU:

    ```output
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        vpaObservedContainers: mycontainer
        vpaUpdates: 'Pod resources updated by vpa-auto: container 0: cpu request, memory
          request'
      creationTimestamp: "2022-09-29T16:44:37Z"
      generateName: vpa-auto-deployment-54465fb978-
      labels:
        app: vpa-auto-deployment

    spec:
      containers:
      - args:
        - -c
        - while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
        command:
        - /bin/sh
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
        imagePullPolicy: IfNotPresent
        name: mycontainer
        resources:
          requests:
            cpu: 25m
            memory: 262144k
    ```

9. To get detailed information about the Vertical Pod Autoscaler and its recommendations for CPU and memory, use the [kubectl get][kubectl-get] command:

    ```bash
    kubectl get vpa vpa-auto --output yaml
    ```

    The output resembles the following example:

    ```output
    recommendation:
      containerRecommendations:
      - containerName: mycontainer
        lowerBound:
          cpu: 25m
          memory: 262144k
        target:
          cpu: 25m
          memory: 262144k
        uncappedTarget:
          cpu: 25m
          memory: 262144k
        upperBound:
          cpu: 230m
          memory: 262144k
    ```

    The results show that the `target` attribute specifies that, for the container to run optimally, it doesn't need to change the CPU or the memory target. Your results may vary, and the target CPU and memory recommendations might be higher.

    The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod.
If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute. --## Extra Recommender for Vertical Pod Autoscaler --In the VPA, one of the core components is the Recommender, which provides recommendations for resource usage based on real-time resource consumption. AKS deploys a recommender when you enable VPA on a cluster. You can deploy a customized recommender or an extra recommender with the same image as the default one. The benefit of a customized recommender is that you can customize your recommendation logic. With an extra recommender, you can partition VPA objects across multiple recommenders when there are many of them. --The following example is an extra recommender that you apply to your existing AKS cluster. You then configure the VPA object to use the extra recommender. --1. Create a file named `extra-recommender.yaml` and copy in the following manifest: -- ```yml - apiVersion: apps/v1 - kind: Deployment - metadata: - name: extra-recommender - namespace: kube-system - spec: - replicas: 1 - selector: - matchLabels: - app: extra-recommender - template: - metadata: - labels: - app: extra-recommender - spec: - serviceAccountName: vpa-recommender - securityContext: - runAsNonRoot: true - runAsUser: 65534 # nobody - containers: - - name: recommender - image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0 - imagePullPolicy: Always - args: - - --recommender-name=extra-recommender - resources: - limits: - cpu: 200m - memory: 1000Mi - requests: - cpu: 50m - memory: 500Mi - ports: - - name: prometheus - containerPort: 8942 - ``` --2. Deploy the `extra-recommender.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. 
-- ```bash - kubectl apply -f extra-recommender.yaml - ``` -- After a few minutes, the command completes and confirms that the recommender deployment was created. --3. Create a file named `hamster_extra_recommender.yaml` and copy in the following manifest: -- ```yml - apiVersion: "autoscaling.k8s.io/v1" - kind: VerticalPodAutoscaler - metadata: - name: hamster-vpa - spec: - recommenders: - - name: 'extra-recommender' - targetRef: - apiVersion: "apps/v1" - kind: Deployment - name: hamster - updatePolicy: - updateMode: "Auto" - resourcePolicy: - containerPolicies: - - containerName: '*' - minAllowed: - cpu: 100m - memory: 50Mi - maxAllowed: - cpu: 1 - memory: 500Mi - controlledResources: ["cpu", "memory"] - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: hamster - spec: - selector: - matchLabels: - app: hamster - replicas: 2 - template: - metadata: - labels: - app: hamster - spec: - securityContext: - runAsNonRoot: true - runAsUser: 65534 # nobody - containers: - - name: hamster - image: k8s.gcr.io/ubuntu-slim:0.1 - resources: - requests: - cpu: 100m - memory: 50Mi - command: ["/bin/sh"] - args: - - "-c" - - "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done" - ``` -- If `memory` is not specified in `controlledResources`, the Recommender doesn't respond to OOM events. In this case, you're setting only CPU in `controlledValues`. `controlledValues` allows you to choose whether to update the container's resource requests only (the `RequestsOnly` option) or both resource requests and limits (the `RequestsAndLimits` option). The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, **requests** are computed based on actual usage, and **limits** are calculated based on the current pod's request and limit ratio. -- For example, if you start with a pod that requests 2 CPUs with a limit of 4 CPUs, VPA always sets the limit to be twice as much as the requests. The same principle applies to memory. 
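The proportional limit scaling described above can be sketched as follows. This is an illustration of the documented `RequestsAndLimits` behavior, not VPA source code; the helper function is hypothetical.

```python
def scaled_limit(original_request: float,
                 original_limit: float,
                 new_request: float) -> float:
    """In RequestsAndLimits mode, VPA preserves the original
    limit-to-request ratio when it computes a new limit for
    the recommended request."""
    return new_request * (original_limit / original_request)

# A pod that requests 2 CPUs with a 4-CPU limit keeps its 2x ratio,
# so a new 3-CPU request gets a 6-CPU limit:
print(scaled_limit(2.0, 4.0, 3.0))  # → 6.0
```

The same arithmetic applies to memory quantities; only the ratio from the original pod spec matters, not the absolute values.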
When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits. --You can simplify the VPA object by using `Auto` mode and computing recommendations for both CPU and memory. --4. Deploy the `hamster_extra_recommender.yaml` example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. -- ```bash - kubectl apply -f hamster_extra_recommender.yaml - ``` --5. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command. -- ```bash - kubectl get --watch pods -l app=hamster - ``` --6. When a new hamster pod is started, describe the pod by running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations. -- ```bash - kubectl describe pod hamster-<exampleID> - ``` -- The example output is a snippet of the information describing the pod: -- ```output - State: Running - Started: Wed, 28 Sep 2022 15:09:51 -0400 - Ready: True - Restart Count: 0 - Requests: - cpu: 587m - memory: 262144k - Environment: <none> - ``` --7. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information. -- ```bash - kubectl describe vpa/hamster-vpa - ``` -- The example output is a snippet of the information about the resource utilization: -- ```output - State: Running - Started: Wed, 28 Sep 2022 15:09:51 -0400 - Ready: True - Restart Count: 0 - Requests: - cpu: 587m - memory: 262144k - Environment: <none> - Spec: - recommenders: - Name: extra-recommender - ``` --## Troubleshooting --To diagnose problems with a VPA installation, perform the following steps. --1. 
Check whether all system components are running by using the following command: -- ```bash - kubectl --namespace=kube-system get pods | grep vpa - ``` --The output should list three pods (recommender, updater, and admission-controller), all with a status of `Running`. --2. Confirm whether the system components log any errors. For each of the pods returned by the previous command, run the following command: -- ```bash - kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}' - ``` --3. Confirm that the custom resource definition was created by running the following command: -- ```bash - kubectl get customresourcedefinition | grep verticalpodautoscalers - ``` - ## Next steps -This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. --* You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks]. --* See the Vertical Pod Autoscaler [API reference] to learn more about the definitions for related VPA objects. +To learn how to set up the Vertical Pod Autoscaler on your AKS cluster, see [Use the Vertical Pod Autoscaler in AKS](./use-vertical-pod-autoscaler.md). 
<!-- EXTERNAL LINKS -->-[kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml -[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply -[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create -[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get -[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe -[github-autoscaler-repo-v011]: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.11/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go [pod-disruption-budget]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ <!-- INTERNAL LINKS -->-[get-started-with-aks]: /azure/architecture/reference-architectures/containers/aks-start-here -[install-azure-cli]: /cli/azure/install-azure-cli -[az-aks-create]: /cli/azure/aks#az-aks-create -[az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade [horizontal-pod-autoscaling]: concepts-scale.md#horizontal-pod-autoscaler-[scale-applications-in-aks]: tutorial-kubernetes-scale.md -[az-provider-register]: /cli/azure/provider#az-provider-register -[az-feature-register]: /cli/azure/feature#az-feature-register -[az-feature-show]: /cli/azure/feature#az-feature-show [horizontal-pod-autoscaler-overview]: concepts-scale.md#horizontal-pod-autoscaler- |
api-management | Developer Portal Enable Usage Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-enable-usage-logs.md | To configure a diagnostic setting for developer portal usage logs: 1. **Category groups**: Optionally make a selection for your scenario. 1. Under **Categories**: Select **Logs related to Developer Portal usage**. Optionally select other categories as needed. 1. Under **Destination details**, select one or more options and specify details for the destination. For example, archive logs to a storage account or stream them to an event hub. [Learn more](../azure-monitor/essentials/diagnostic-settings.md)- > [!NOTE] - > Currently, the **Send to Log Analytics workspace** destination isn't supported for developer portal usage logs. - 1. Select **Save**. ## View diagnostic log data |
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | description: Learn how to migrate your App Service Environment to App Service En Previously updated : 7/18/2024 Last updated : 7/24/2024 zone_pivot_groups: app-service-cli-portal The in-place migration feature doesn't support the following scenarios. See the - App Service Environment v1 in a [Classic virtual network](/previous-versions/azure/virtual-network/create-virtual-network-classic) - ELB App Service Environment v2 with IP SSL addresses - ELB App Service Environment v1 with IP SSL addresses+- App Service Environment with a name that doesn't meet the character limits. The entire name, including the domain suffix, must be 64 characters or fewer. For example: *my-ase-name.appserviceenvironment.net* for ILB and *my-ase-name.p.azurewebsites.net* for ELB must be 64 characters or fewer. If you don't meet the character limit, you must migrate manually. The character limits specifically for the App Service Environment name are as follows: + - ILB App Service Environment name character limit: 36 characters + - ELB App Service Environment name character limit: 42 characters The App Service platform reviews your App Service Environment to confirm in-place migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the in-place migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates. |
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | description: Learn how to migrate your App Service Environment v2 to App Service Previously updated : 7/23/2024 Last updated : 7/24/2024 # Migration to App Service Environment v3 using the side-by-side migration feature The side-by-side migration feature doesn't support the following scenarios. See - If you have an App Service Environment v1, you can migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). - ELB App Service Environment v2 with IP SSL addresses - [Zone pinned](zone-redundancy.md) App Service Environment v2+- App Service Environment with a name that doesn't meet the character limits. The entire name, including the domain suffix, must be 64 characters or fewer. For example: *my-ase-name.appserviceenvironment.net* for ILB and *my-ase-name.p.azurewebsites.net* for ELB must be 64 characters or fewer. If you don't meet the character limit, you must migrate manually. The character limits specifically for the App Service Environment name are as follows: + - ILB App Service Environment name character limit: 36 characters + - ELB App Service Environment name character limit: 42 characters The App Service platform reviews your App Service Environment to confirm side-by-side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side-by-side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates. |
application-gateway | Cli Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/cli-samples.md | - Title: Azure CLI examples for Azure Application Gateway -description: This article has links to Azure CLI examples so you can quickly deploy Azure Application Gateway configured in various ways. ---- Previously updated : 11/16/2019-----# Azure CLI examples for Azure Application Gateway --The following table includes links to Azure CLI script examples for Azure Application Gateway. --| Example | Description | -|-- | -- | -| [Manage web traffic](./scripts/create-vmss-cli.md) | Creates an application gateway and all related resources. | -| [Restrict web traffic](./scripts/create-vmss-waf-cli.md) | Creates an application gateway that restricts traffic using OWASP rules.| |
application-gateway | Resource Manager Template Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/resource-manager-template-samples.md | - Title: Azure Resource Manager templates- -description: This article has links to Azure Resource Manager template examples so you can quickly deploy Azure Application Gateway configured in various ways. ----- Previously updated : 11/16/2019----# Azure Resource Manager templates for Azure Application Gateway --The following table includes links to Azure Resource Manager templates for Azure Application Gateway. --| Example | Description | -|-- | -- | -| [Application Gateway v2 with Web Application Firewall](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/) | Creates an Application Gateway v2 with Web Application Firewall v2.| |
application-gateway | Create Vmss Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md | - Title: Azure CLI Script Sample - Manage web traffic | Microsoft Docs -description: Azure CLI Script Sample - Manage web traffic with an application gateway and a virtual machine scale set. ---- Previously updated : 01/29/2018-----# Manage web traffic using the Azure CLI --This script creates an application gateway that uses a virtual machine scale set for backend servers. The application gateway can then be configured to manage web traffic. After running the script, you can test the application gateway using its public IP address. ----## Sample script --[!code-azurecli-interactive[main](../../../cli_scripts/application-gateway/create-vmss/create-vmss.sh "Create application gateway")] --## Clean up deployment --Run the following command to remove the resource group, application gateway, and all related resources. --```azurecli-interactive -az group delete --name myResourceGroupAG --yes -``` --## Script explanation --This script uses the following commands to create the deployment. Each item in the table links to command specific documentation. --| Command | Notes | -||| -| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. | -| [az network vnet create](/cli/azure/network/vnet) | Creates a virtual network. | -| [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) | Creates a subnet in a virtual network. | -| [az network public-ip create](/cli/azure/network/public-ip) | Creates the public IP address for the application gateway. | -| [az network application-gateway create](/cli/azure/network/application-gateway) | Create an application gateway. | -| [az vmss create](/cli/azure/vmss) | Creates a virtual machine scale set. 
| -| [az network public-ip show](/cli/azure/network/public-ip) | Gets the public IP address of the application gateway. | --## Next steps --For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview). --Additional application gateway CLI script samples can be found in the [Azure Windows VM documentation](../cli-samples.md). |
application-gateway | Create Vmss Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md | - Title: Azure PowerShell Script Sample - Manage web traffic | Microsoft Docs -description: Azure PowerShell Script Sample - Manage web traffic with an application gateway and a virtual machine scale set. ---- Previously updated : 01/29/2018-----# Manage web traffic with Azure PowerShell --This script creates an application gateway that uses a virtual machine scale set for backend servers. The application gateway can then be configured to manage web traffic. After running the script, you can test the application gateway using its public IP address. ----## Sample script --[!code-powershell[main](../../../powershell_scripts/application-gateway/create-vmss/create-vmss.ps1 "Create application gateway")] --## Clean up deployment --Run the following command to remove the resource group, application gateway, and all related resources. --```powershell -Remove-AzResourceGroup -Name myResourceGroupAG -``` --## Script explanation --This script uses the following commands to create the deployment. Each item in the table links to command specific documentation. --| Command | Notes | -||| -| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. | -| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. | -| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using the subnet configurations. | -| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. 
| -| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. | -| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. | -| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. | -| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. | -| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. | -| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. | -| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. | -| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. | -| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. | -| [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) | Create a storage profile for the scale set. | -| [Set-AzVmssOsProfile](/powershell/module/az.compute/set-azvmssosprofile) | Define the operating system for the scale set. | -| [Add-AzVmssNetworkInterfaceConfiguration](/powershell/module/az.compute/add-azvmssnetworkinterfaceconfiguration) | Define the network interface for the scale set. 
| -| [New-AzVmss](/powershell/module/az.compute/new-azvm) | Create a virtual machine scale set. | -| [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress) | Gets the public IP address of an application gateway. | -|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. | --## Next steps --For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). --Additional application gateway PowerShell script samples can be found in the [Azure Application Gateway documentation](../powershell-samples.md). |
application-gateway | Waf Custom Rules Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/waf-custom-rules-powershell.md | - Title: Azure PowerShell Script Sample - Create WAF custom rules -description: Azure PowerShell Script Sample - Create Web Application Firewall custom rules --- Previously updated : 6/7/2019-----# Create Web Application Firewall (WAF) custom rules with Azure PowerShell --This script creates an Application Gateway Web Application Firewall that uses custom rules. The custom rule blocks traffic if the request header contains User-Agent *evilbot*. --## Prerequisites --### Azure PowerShell module --If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later. --1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). -2. To create a connection with Azure, run `Connect-AzAccount`. ---## Sample script --[!code-powershell[main](../../../powershell_scripts/application-gateway/waf-rules/waf-custom-rules.ps1 "Custom WAF rules")] --## Clean up deployment --Run the following command to remove the resource group, application gateway, and all related resources. --```powershell -Remove-AzResourceGroup -Name CustomRulesTest -``` --## Script explanation --This script uses the following commands to create the deployment. Each item in the table links to command specific documentation. --| Command | Notes | -||| -| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. | -| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. | -| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using the subnet configurations. 
| -| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. | -| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. | -| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. | -| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. | -| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. | -| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. | -| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. | -| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. | -| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. | -| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. | -|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. 
| -|[New-AzApplicationGatewayAutoscaleConfiguration](/powershell/module/az.network/New-AzApplicationGatewayAutoscaleConfiguration)|Creates an autoscale configuration for the Application Gateway.| -|[New-AzApplicationGatewayFirewallMatchVariable](/powershell/module/az.network/New-AzApplicationGatewayFirewallMatchVariable)|Creates a match variable for a firewall condition.| -|[New-AzApplicationGatewayFirewallCondition](/powershell/module/az.network/New-AzApplicationGatewayFirewallCondition)|Creates a match condition for a custom rule.| -|[New-AzApplicationGatewayFirewallCustomRule](/powershell/module/az.network/New-AzApplicationGatewayFirewallCustomRule)|Creates a new custom rule for the application gateway firewall policy.| -|[New-AzApplicationGatewayFirewallPolicy](/powershell/module/az.network/New-AzApplicationGatewayFirewallPolicy)|Creates an application gateway firewall policy.| -|[New-AzApplicationGatewayWebApplicationFirewallConfiguration](/powershell/module/az.network/New-AzApplicationGatewayWebApplicationFirewallConfiguration)|Creates a WAF configuration for an application gateway.| --## Next steps --- For more information about WAF custom rules, see [Custom rules for Web Application Firewall](../../web-application-firewall/ag/custom-waf-rules-overview.md)-- For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).-- Additional application gateway PowerShell script samples can be found in the [Azure Application Gateway documentation](../powershell-samples.md). |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md | Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 04/18/2024- Last updated : 07/23/2024+ description: "Learn about the latest releases of Arc-enabled Kubernetes." # What's new with Azure Arc-enabled Kubernetes -Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases of the [Azure Arc-enabled Kubernetes agents](conceptual-agent-overview.md). +Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about recent releases of the [Azure Arc-enabled Kubernetes agents](conceptual-agent-overview.md). When any of the Arc-enabled Kubernetes agents are updated, all of the agents in the `azure-arc` namespace are incremented with a new version number, so that the version numbers are consistent across agents. When a new version is released, all of the agents are upgraded together to the newest version (whether or not there are functionality changes in a given agent), unless you have [disabled automatic upgrades](agent-upgrade.md) for the cluster. We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2). 
+## Version 1.18.x (July 2024) ++- Fixed `logCollector` pod restarts +- Updated to Microsoft Go v1.22.5 +- Other bug fixes ++## Version 1.17.x (June 2024) ++- Upgraded to use [Microsoft Go 1.22 to be FIPS compliant](https://github.com/microsoft/go/blob/microsoft/main/eng/doc/fips/README.md#tls-with-fips-compliant-settings) ++## Version 1.16.x (May 2024) ++- Migrated to use Microsoft Go w/ OpenSSL and fixed some vulnerabilities + ## Version 1.15.3 (March 2024) - Various enhancements and bug fixes |
azure-arc | Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md | For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual up Before upgrading an Arc resource bridge, the following prerequisites must be met: +- The appliance VM must be on a General Availability version (1.0.15 or higher). If not, the Arc resource bridge VM needs to be redeployed. If you are using Arc-enabled VMware/AVS, then you have the option to [perform disaster recovery](../vmware-vsphere/recover-from-resource-bridge-deletion.md). If you are using Arc-enabled SCVMM, then follow this [disaster recovery guide](../system-center-virtual-machine-manager/disaster-recovery.md). + - The appliance VM must be online, healthy with a Status of "Running". You can check the Azure resource of your Arc resource bridge to verify. - The [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be up-to-date. To test that the credentials within the Arc resource bridge VM are valid, perform an operation on an Arc-enabled VM from Azure or [update the credentials](/azure/azure-arc/resource-bridge/maintenance) to be certain. |
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | Title: Managing the Azure Connected Machine agent -description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 05/04/2023+description: This article describes the different management tasks that you'll typically perform during the lifecycle of the Azure Connected Machine agent. Last updated : 07/24/2024 -- - ignite-2023 # Managing and maintaining the Connected Machine agent Microsoft recommends using the most recent version of the Azure Connected Machin ### [Windows](#tab/windows) -Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than 6 months old, check out the [release notes archive](agent-release-notes-archive.md). +Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than six months old, check out the [release notes archive](agent-release-notes-archive.md). ### [Linux - apt](#tab/linux-apt) Links to the current and previous releases of the Windows agents are available b ## Upgrade the agent -The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of the machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal. 
+The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that aren't using the latest version of the machine agent and recommends that you upgrade to the latest version. It notifies you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal. -The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent will not require you to restart your server. +The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent doesn't require you to restart your server. The following table describes the methods supported to perform the agent upgrade: For Windows Servers that belong to a domain and connect to the Internet to check 1. Select **OK**. -The next time computers in your selected scope refresh their policy, they will start to check for updates in both Windows Update and Microsoft Update. +The next time computers in your selected scope refresh their policy, they'll start to check for updates in both Windows Update and Microsoft Update. For organizations that use Microsoft Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. 
Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration: Once the updates are being synchronized, you can optionally add the Azure Connec 1. Run **AzureConnectedMachineAgent.msi** to start the Setup Wizard. -If the Setup Wizard discovers a previous version of the agent, it will upgrade it automatically. When the upgrade completes, the Setup Wizard closes automatically. +If the Setup Wizard discovers a previous version of the agent, it upgrades it automatically. When the upgrade completes, the Setup Wizard closes automatically. #### To upgrade from the command line The Azure Connected Machine agent doesn't automatically upgrade itself when a ne ## Renaming an Azure Arc-enabled server resource -When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name. +When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name isn't recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name. For Azure Arc-enabled servers, before you rename the machine, it's necessary to remove the VM extensions before proceeding: For Azure Arc-enabled servers, before you rename the machine, it's necessary to 3. 
Use the **azcmagent** tool with the [Disconnect](azcmagent-disconnect.md) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)). - Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you do not need to remove the agent as part of this process. + Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you don't need to remove the agent as part of this process. 4. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](azcmagent-connect.md) parameter to complete this step. The agent will default to using the computer's current hostname, but you can choose your own resource name by passing the `--resource-name` parameter to the connect command. For guidance on how to identify and remove any extensions on your Azure Arc-enab ### Step 2: Disconnect the server from Azure Arc -Disconnecting the agent deletes the corresponding Azure resource for the server and clears the local state of the agent. 
To disconnect the agent, run the `azcmagent disconnect` command as an administrator on the server. You'll be prompted to sign in with an Azure account that has permission to delete the resource in your subscription. If the resource has already been deleted in Azure, pass an additional flag to clean up the local state: `azcmagent disconnect --force-local-only`. ### Step 3a: Uninstall the Windows agent -Both of the following methods remove the agent, but they do not remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine. +Both of the following methods remove the agent, but they don't remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine. #### Uninstall from Control Panel You do not need to restart any services when reconfiguring the proxy settings wi Starting with agent version 1.15, you can also specify services that should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Microsoft Entra ID and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network. -The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server. The location parameter refers to the Azure region of the Arc Server(s). +The proxy bypass feature doesn't require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that shouldn't use the proxy server. The location parameter refers to the Azure region of the Arc Server(s). When set to `ArcData`, the proxy bypass value only bypasses the traffic of the Azure extension for SQL Server, not the Arc agent. If you're already using environment variables to configure the proxy server for 1. 
Remove the unused environment variables by following the steps for [Windows](#windows-environment-variables) or [Linux](#linux-environment-variables). +## Alerting for Azure Arc-enabled server disconnection ++The Connected Machine agent [sends a regular heartbeat message](overview.md#agent-status) to the service every five minutes. If an Arc-enabled server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it's offline, the network connection has been blocked, or the agent isn't running. Develop a plan for how you'll respond and investigate these incidents, including setting up [Resource Health alerts](../../service-health/resource-health-alert-monitor-guide.md) to get notified when such incidents occur. ++ ## Next steps * Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md). |
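The heartbeat thresholds quoted in the Manage Agent row above lend themselves to a small sketch. This is illustrative Python, not part of the Connected Machine agent or Resource Health alerts; only the five-minute heartbeat interval and the 15-minute disconnection threshold come from the article.

```python
from datetime import datetime, timedelta, timezone

# From the article: heartbeat every five minutes; a server silent for longer
# than 15 minutes may be offline, blocked by the network, or have a stopped agent.
HEARTBEAT_INTERVAL = timedelta(minutes=5)
OFFLINE_THRESHOLD = timedelta(minutes=15)

def is_disconnected(last_heartbeat: datetime, now: datetime) -> bool:
    """True when the server has been silent longer than the 15-minute threshold."""
    return (now - last_heartbeat) > OFFLINE_THRESHOLD

now = datetime(2024, 7, 24, 12, 0, tzinfo=timezone.utc)
print(is_disconnected(now - HEARTBEAT_INTERVAL, now))     # recent heartbeat -> False
print(is_disconnected(now - timedelta(minutes=20), now))  # 20 minutes silent -> True
```

In practice, the article's recommendation stands: use Resource Health alerts rather than rolling your own poller; this sketch only makes the threshold arithmetic concrete.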
azure-arc | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md | To recover from Arc resource bridge VM deletion, you need to deploy a new resour >[!Note] > DHCP-based Arc Resource Bridge deployment is no longer supported.<br><br> If you had deployed Arc Resource Bridge earlier using DHCP, you must clean up your deployment by removing your resources from Azure and do a [fresh onboarding](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md).+> +## Prerequisites ++1. The disaster recovery script must be run from the same folder where the config (.yaml) files are present. The config files are present on the machine used to run the script to deploy Arc resource bridge. ++1. The machine being used to run the script must have bidirectional connectivity to the Arc resource bridge VM on port 6443 (Kubernetes API server) and 22 (SSH), and outbound connectivity to the Arc resource bridge VM on port 443 (HTTPS). + ### Recover Arc resource bridge from a Windows machine If the recovery steps mentioned above are unsuccessful in restoring Arc resource - Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
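The disaster-recovery prerequisites above call out specific connectivity: bidirectional port 6443 (Kubernetes API server) and 22 (SSH), plus outbound 443 (HTTPS) to the Arc resource bridge VM. A hedged Python sketch of a pre-flight check follows; the bridge address is a placeholder, and a plain TCP connect only approximates the real bidirectional requirement.

```python
import socket

# Ports from the prerequisites section; purposes per the article.
REQUIRED_PORTS = {6443: "Kubernetes API server", 22: "SSH", 443: "HTTPS"}

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True only if the port accepted the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

bridge_ip = "192.0.2.10"  # placeholder Arc resource bridge VM address
for port, purpose in REQUIRED_PORTS.items():
    status = "reachable" if port_reachable(bridge_ip, port, 0.2) else "unreachable"
    print(f"{purpose} ({port}): {status}")
```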
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | The script execution will take up to half an hour and you'll be prompted for var Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM. +>[!IMPORTANT] +>After the successful installation of Azure Arc Resource Bridge, it's recommended to retain a copy of the resource bridge config (.yaml) files in a secure place that facilitates easy retrieval. These files are needed later to run commands to perform management operations (e.g. [az arcappliance upgrade](/cli/azure/arcappliance/upgrade#az-arcappliance-upgrade-vmware)) on the resource bridge. You can find the three config files (.yaml files) in the same folder where you ran the onboarding script. ++ ### Retry command - Windows If the appliance creation fails for any reason, you need to retry it. Run the command with ```-Force``` to clean up and onboard again. |
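The note above says three resource bridge config (.yaml) files must be retained for later management operations. A small pre-flight sketch of that check follows; the article doesn't name the files, so the demo filenames here are hypothetical and the "at least three" heuristic is an assumption.

```python
import tempfile
from pathlib import Path

def has_resource_bridge_configs(folder: str) -> bool:
    """Check that the three resource bridge config (.yaml) files are present
    before attempting a management operation such as an upgrade."""
    return len(list(Path(folder).glob("*.yaml"))) >= 3

# Demo with a stand-in folder; in practice, point this at the folder where the
# onboarding script was run. Filenames below are illustrative only.
demo = tempfile.mkdtemp()
for name in ("appliance.yaml", "resource.yaml", "infra.yaml"):
    (Path(demo) / name).write_text("placeholder")
print(has_resource_bridge_configs(demo))  # True
```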
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we ## Memory usage and concurrency ::: zone pivot="programming-language-csharp" -When you bind to an [output type](#usage) that doesn't support steaming, such as `string`, or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). +When you bind to an [output type](#usage) that doesn't support streaming, such as `string` or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-python,programming-language-powershell,programming-language-java" At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. |
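The streaming guidance above is language-agnostic. Here is a generic Python illustration — not the Functions blob binding itself — of why chunked reads bound memory while producing the same result as loading the whole blob at once.

```python
import hashlib
import io

def hash_streaming(stream, chunk_size=64 * 1024):
    """Process the blob in fixed-size chunks so only one chunk is resident at a time."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        digest.update(chunk)
    return digest.hexdigest()

blob = b"x" * (1024 * 1024)  # stand-in for blob content
# Same result as hashing the fully loaded blob, without holding it all at once.
assert hash_streaming(io.BytesIO(blob)) == hashlib.sha256(blob).hexdigest()
```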
azure-functions | Functions Infrastructure As Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md | This example section creates a Standard general purpose v2 storage account: ### [Bicep](#tab/bicep) ```bicep-resource storageAccountName 'Microsoft.Storage/storageAccounts@2023-05-01' = { +resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = { name: storageAccountName location: location kind: 'StorageV2' |
azure-monitor | Azure Monitor Agent Troubleshoot Linux Vm Rsyslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md | If you're sending a high log volume through rsyslog and your system is set up to 1. `sudo systemctl restart rsyslog` -### Azure Monitor Agent for Linux event buffer is filling a disk --If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket). For **Summary**, enter **Azure Monitor Agent Event Buffer is filling disk**. For **Problem type**, enter **I need help configuring data collection from a VM**. - |
azure-monitor | Data Collection Log Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-text.md | Adhere to the following recommendations to ensure that you don't experience data ## Incoming stream The incoming stream of data includes the columns in the following table. - | Column | Type | Description | +| Column | Type | Description | |:|:|:| | `TimeGenerated` | datetime | The time the record was generated. This value will be automatically populated with the time the record is added to the Log Analytics workspace. You can override this value using a transformation to set `TimeGenerated` to another value. | | `RawData` | string | The entire log entry in a single column. You can use a transformation if you want to break down this data into multiple columns before sending to the table. | | `FilePath` | string | If you add this column to the incoming stream in the DCR, it will be populated with the path to the log file. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. |-| `Computer` | string | If you add this column to the incoming stream in the DCR, it will be populated with the name of the computer. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. | ## Custom table $tableParams = @' { "name": "FilePath", "type": "String"- }, - { - "name": "Computer", - "type": "String" } ] } Use the following ARM template to create or modify a DCR for collecting text log { "name": "FilePath", "type": "string"- }, - { - "name": "Computer", - "type": "string" } ] } |
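The incoming-stream columns in the row above can be captured as a stream declaration fragment. This is a sketch only: the stream name `Custom-Text-stream` is a placeholder, `FilePath` must be declared manually per the article, and the `Computer` column is intentionally absent, matching its removal from the table and templates.

```python
import json

# Mirrors the incoming-stream table: TimeGenerated and RawData are always
# present; FilePath is added manually. No Computer column in this version.
stream_declaration = {
    "Custom-Text-stream": {  # placeholder stream name
        "columns": [
            {"name": "TimeGenerated", "type": "datetime"},
            {"name": "RawData", "type": "string"},
            {"name": "FilePath", "type": "string"},
        ]
    }
}
print(json.dumps(stream_declaration, indent=2))
```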
azure-monitor | Alerts Create Rule Cli Powershell Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md | You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit > [!NOTE] > When you create a metric alert on a single resource, the syntax uses the `TargetResourceId`. When you create a metric alert on multiple resources, the syntax contains the `TargetResourceScope`, `TargetResourceType`, and `TargetResourceRegion`. - To create a log search alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.-- To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.+- To create an activity log alert rule using PowerShell, use the [New-AzActivityLogAlert](/powershell/module/az.monitor/new-azactivitylogalert) cmdlet. ## Create a new alert rule using an ARM template |
azure-monitor | Proactive Application Security Detection Pack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-application-security-detection-pack.md | -This feature requires no special setup, other than [configuring your app to send telemetry](../app/usage-overview.md). +This feature requires no special setup, other than [configuring your app to send telemetry](../app/usage.md). ## When would I get this type of smart detection notification? There are three types of security issues that are detected: |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | telemetry.trackEvent({name: "WinGame"}); ### Custom events in Log Analytics -The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage-overview.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md). +The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md). If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackEvent()`, the sampling process transmitted only one of them. To get a correct count of custom events, use code such as `customEvents | summarize sum(itemCount)`. The function is asynchronous for the [server telemetry channel](https://www.nuge ## Authenticated users -In a web app, users are [identified by cookies](./usage-segmentation.md#the-users-sessions-and-events-segmentation-tool) by default. A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies. +In a web app, users are [identified by cookies](./usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) by default. A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies. If users sign in to your app, you can get a more accurate count by setting the authenticated user ID in the browser code: |
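The `itemCount` arithmetic described in the row above is easy to demonstrate. An illustrative Python sketch with made-up sampled rows, mirroring what `customEvents | summarize sum(itemCount)` computes versus a bare row count:

```python
# With sampling, each stored row's itemCount records how many original events
# it represents; counting rows therefore undercounts. Sample data is made up.
rows = [
    {"name": "WinGame", "itemCount": 10},
    {"name": "WinGame", "itemCount": 10},
    {"name": "WinGame", "itemCount": 1},
]

stored_rows = len(rows)                            # what a bare row count reports
actual_events = sum(r["itemCount"] for r in rows)  # summarize sum(itemCount)
print(stored_rows, actual_events)  # 3 21
```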
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Azure Monitor Application Insights, a feature of [Azure Monitor](..\overview.md) Application Insights provides many experiences to enhance the performance, reliability, and quality of your applications. ### Investigate-- [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance.-- [Application map](app-map.md): A visual overview of application architecture and components' interactions.-- [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.-- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.-- [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints.-- [Failures view](failures-and-performance-views.md?tabs=failures-view): Identify and analyze failures in your application to minimize downtime.-- [Performance view](failures-and-performance-views.md?tabs=performance-view): Review application performance metrics and potential bottlenecks.++* [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance. +* [Application map](app-map.md): A visual overview of application architecture and components' interactions. +* [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance. +* [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance. +* [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints. 
+* [Failures view](failures-and-performance-views.md?tabs=failures-view): Identify and analyze failures in your application to minimize downtime. +* [Performance view](failures-and-performance-views.md?tabs=performance-view): Review application performance metrics and potential bottlenecks. ### Monitoring-- [Alerts](../alerts/alerts-overview.md): Monitor a wide range of aspects of your application and trigger various actions.-- [Metrics](../essentials/metrics-getting-started.md): Dive deep into metrics data to understand usage patterns and trends.-- [Diagnostic settings](../essentials/diagnostic-settings.md): Configure streaming export of platform logs and metrics to the destination of your choice. -- [Logs](../logs/log-analytics-overview.md): Retrieve, consolidate, and analyze all data collected into Azure Monitoring Logs.-- [Workbooks](../visualize/workbooks-overview.md): Create interactive reports and dashboards that visualize application monitoring data.++* [Alerts](../alerts/alerts-overview.md): Monitor a wide range of aspects of your application and trigger various actions. +* [Metrics](../essentials/metrics-getting-started.md): Dive deep into metrics data to understand usage patterns and trends. +* [Diagnostic settings](../essentials/diagnostic-settings.md): Configure streaming export of platform logs and metrics to the destination of your choice. +* [Logs](../logs/log-analytics-overview.md): Retrieve, consolidate, and analyze all data collected into Azure Monitoring Logs. +* [Workbooks](../visualize/workbooks-overview.md): Create interactive reports and dashboards that visualize application monitoring data. 
### Usage-- [Users, sessions, and events](usage-segmentation.md): Determine when, where, and how users interact with your web app.-- [Funnels](usage-funnels.md): Analyze conversion rates to identify where users progress or drop off in the funnel.-- [Flows](usage-flows.md): Visualize user paths on your site to identify high engagement areas and exit points.-- [Cohorts](usage-cohorts.md): Group users by shared characteristics to simplify trend identification, segmentation, and performance troubleshooting.++* [Users, sessions, and events](usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives): Determine when, where, and how users interact with your web app. +* [Funnels](usage.md#funnelsdiscover-how-customers-use-your-application): Analyze conversion rates to identify where users progress or drop off in the funnel. +* [Flows](usage.md#user-flowsanalyze-user-navigation-patterns): Visualize user paths on your site to identify high engagement areas and exit points. +* [Cohorts](usage.md#cohortsanalyze-a-specific-set-of-users-sessions-events-or-operations): Group users by shared characteristics to simplify trend identification, segmentation, and performance troubleshooting. ### Code analysis-- [Profiler](../profiler/profiler-overview.md): Capture, identify, and view performance traces for your application.-- [Code optimizations](../insights/code-optimizations.md): Harness AI to create better and more efficient applications.-- [Snapshot debugger](../snapshot-debugger/snapshot-debugger.md): Automatically collect debug snapshots when exceptions occur in .NET application++* [Profiler](../profiler/profiler-overview.md): Capture, identify, and view performance traces for your application. +* [Code optimizations](../insights/code-optimizations.md): Harness AI to create better and more efficient applications. 
+* [Snapshot debugger](../snapshot-debugger/snapshot-debugger.md): Automatically collect debug snapshots when exceptions occur in .NET applications. ## Logic model Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we - [Live metrics](live-stream.md) - [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search) - [Availability overview](availability-overview.md)-- [Users, sessions, and events](usage-segmentation.md)+- [Users, sessions, and events](usage.md) |
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | HttpContext.Features.Get<RequestTelemetry>().Properties["myProp"] = someData ## Enable client-side telemetry for web applications -The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) using JavaScript (Web) SDK Loader Script injection by configuration. +The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage.md) using JavaScript (Web) SDK Loader Script injection by configuration. 1. In `_ViewImports.cshtml`, add injection: Our [Service Updates](https://azure.microsoft.com/updates/?service=application-i ## Next steps -* [Explore user flows](./usage-flows.md) to understand how users move through your app. +* [Explore user flows](./usage.md#user-flowsanalyze-user-navigation-patterns) to understand how users move through your app. * [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * Use [availability tests](./availability-overview.md) to check your app constantly from around the world. |
azure-monitor | Ip Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md | Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Previously updated : 06/23/2023-- Last updated : 07/24/2024+ # Geolocation and IP address handling -This article explains how geolocation lookup and IP address handling work in Application Insights, along with how to modify the default behavior. +This article explains how geolocation lookup and IP address handling work in [Application Insights](app-insights-overview.md#application-insights-overview). ## Default behavior -By default, IP addresses are temporarily collected but not stored in Application Insights. This process follows some basic steps. +By default, IP addresses are temporarily collected but not stored. -When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field. --To remove geolocation data, see the following articles: --* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md) -* [Use a custom initializer](../app/api-filtering-sampling.md) +When telemetry is sent to Azure, the IP address is used in a geolocation lookup. The result is used to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field. The telemetry types are: * **Browser telemetry**: Application Insights collects the sender's IP address. 
The ingestion endpoint calculates the IP address.-* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields. +* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address when the `X-Forwarded-For` header isn't set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields. -This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. Whenever possible, we recommend avoiding the collection of personal data. +This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. -> [!NOTE] -> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations. -> -> To learn more about handling personal data in Application Insights, see [Guidance for personal data](../logs/personal-data-mgmt.md). +When IP addresses aren't collected, city and other geolocation attributes also aren't collected. ++## Storage of IP address data -When IP addresses aren't collected, city and other geolocation attributes populated by our pipeline by using the IP address also aren't collected. You can mask IP collection at the source. There are two ways to do it. You can: +> [!WARNING] +> The default and our recommendation is to not collect IP addresses. If you override this behavior, verify the collection doesn't break any compliance requirements or local regulations. +> +> To learn more about handling personal data, see [Guidance for personal data](../logs/personal-data-mgmt.md). -* Remove the client IP initializer. 
For more information, see [Configuration with Applications Insights Configuration](configuration-with-applicationinsights-config.md). -* Provide your own custom initializer. For more information, see an [API filtering example](api-filtering-sampling.md). +To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. -## Storage of IP address data +Options to set this property include: -To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates (ARM templates) or by calling the REST API. +- [ARM template](#arm-template) +- [Portal](#portal) +- [REST API](#rest-api) +- [PowerShell](#powershell) ### ARM template If you need to modify the behavior for only a single Application Insights resour 1. After the deployment is complete, new telemetry data will be recorded. - If you select and edit the template again, you'll see only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands: + If you select and edit the template again, only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands: ```powershell # Replace `Fabrikam-dev` with the appropriate resource and resource group name. If you need to modify the behavior for only a single Application Insights resour $AppInsights.Properties ``` - A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property won't exist. + A list of properties is returned as a result. 
One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property doesn't exist. ### REST API The following [REST API](/rest/api/azure/) payload makes the same modifications: -``` +```json PATCH https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/microsoft.insights/components/<resource-name>?api-version=2018-05-01-preview HTTP/1.1 Host: management.azure.com Authorization: AUTH_TOKEN Content-Length: 54 ### PowerShell -The PoweShell 'Update-AzApplicationInsights' cmdlet can disable IP masking with the `DisableIPMasking` parameter. +The PowerShell `Update-AzApplicationInsights` cmdlet can disable IP masking with the `DisableIPMasking` parameter. ```powershell Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -DisableIPMasking:$true ``` -For more information on the 'Update-AzApplicationInsights' cmdlet, see [Update-AzApplicationInsights](/powershell/module/az.applicationinsights/update-azapplicationinsights) --## Telemetry initializer --If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. The code for this class is the same across .NET versions. 
--```csharp -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.DataContracts; -using Microsoft.ApplicationInsights.Extensibility; --namespace MyWebApp -{ - public class CloneIPAddress : ITelemetryInitializer - { - public void Initialize(ITelemetry telemetry) - { - ISupportProperties propTelemetry = telemetry as ISupportProperties; -- if (propTelemetry !=null && !propTelemetry.Properties.ContainsKey("client-ip")) - { - string clientIPValue = telemetry.Context.Location.Ip; - propTelemetry.Properties.Add("client-ip", clientIPValue); - } - } - } -} -``` --> [!NOTE] -> If you can't access `ISupportProperties`, make sure you're running the latest stable release of the Application Insights SDK. `ISupportProperties` is intended for high cardinality values. `GlobalProperties` is more appropriate for low cardinality values like region name and environment name. ---# [.NET 6.0+](#tab/framework) --```csharp - using Microsoft.ApplicationInsights.Extensibility; - using CustomInitializer.Telemetry; --builder.services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); -``` --# [.NET 5.0](#tab/dotnet5) --```csharp - using Microsoft.ApplicationInsights.Extensibility; - using CustomInitializer.Telemetry; -- public void ConfigureServices(IServiceCollection services) -{ - services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); -} -``` --# [ASP.NET Framework](#tab/dotnet6) --```csharp -using Microsoft.ApplicationInsights.Extensibility; --namespace MyWebApp -{ - public class MvcApplication : System.Web.HttpApplication - { - protected void Application_Start() - { - //Enable your telemetry initializer: - TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloneIPAddress()); - } - } -} --``` ----# [Node.js](#tab/nodejs) --### Node.js --```javascript -appInsights.defaultClient.addTelemetryProcessor((envelope) => { - const baseData = envelope.data.baseData; - if (appInsights.Contracts.domainSupportsProperties(baseData)) { - const ipAddress = 
envelope.tags[appInsights.defaultClient.context.keys.locationIp]; - if (ipAddress) { - baseData.properties["client-ip"] = ipAddress; - } - } -}); -``` -# [Client-side JavaScript](#tab/javascript) --### Client-side JavaScript --Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure. --If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default. --To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic. --```javascript -appInsights.addTelemetryInitializer((item) => { - const ipAddress = item.tags && item.tags["ai.location.ip"]; - if (ipAddress) { - item.baseData.properties = { - ...item.baseData.properties, - "client-ip": ipAddress - }; - } -}); --``` --If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client. 
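As a general illustration of the proxy caveat above (not part of the Application Insights SDK), a proxy that appends the standard `X-Forwarded-For` header leaves the original client address as the first list entry, which server-side code could recover like this:

```javascript
// Illustrative sketch only: recover the original client IP from an
// X-Forwarded-For value such as "203.0.113.7, 10.0.0.1", where the
// first entry is the client and later entries are intermediate proxies.
function clientIpFromForwardedFor(headerValue) {
  if (!headerValue) return null;
  const first = headerValue.split(",")[0].trim();
  return first.length > 0 ? first : null;
}
```

Keep in mind that `X-Forwarded-For` can be spoofed by clients, so treat the recovered value as informational rather than authoritative.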
----### View the results of your telemetry initializer --If you send new traffic to your site and wait a few minutes, you can then run a query to confirm that the collection is working: --```kusto -requests -| where timestamp > ago(1h) -| project appName, operation_Name, url, resultCode, client_IP, customDimensions.["client-ip"] -``` --Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out. --If you're testing from localhost, and the value for `customDimensions_client-ip` is `::1`, this value is expected behavior. The `::1` value represents the loopback address in IPv6. It's equivalent to `127.0.0.1` in IPv4. --## Frequently asked questions --This section provides answers to common questions. --### How is city, country/region, and other geolocation data calculated? --We look up the IP address (IPv4 or IPv6) of the web client: - -* Browser telemetry: We collect the sender's IP address. -* Server telemetry: The Application Insights module collects the client IP address. It's not collected if `X-Forwarded-For` is set. -* To learn more about how IP address and geolocation data is collected in Application Insights, see [Geolocation and IP address handling](./ip-collection.md). - -You can configure `ClientIpHeaderTelemetryInitializer` to take the IP address from a different header. In some systems, for example, it's moved by a proxy, load balancer, or CDN to `X-Originating-IP`. [Learn more](https://apmtips.com/posts/2016-07-05-client-ip-address/). - -You can [use Power BI](../logs/log-powerbi.md) to display your request telemetry on a map if you've [migrated to a workspace-based resource](./convert-classic-resource.md). 
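In the classic ASP.NET SDK, the `ClientIpHeaderTelemetryInitializer` mentioned in the FAQ is configured through ApplicationInsights.config. A sketch of what such an entry might look like, assuming the `X-Originating-IP` header and the `Microsoft.AI.Web` package (verify the exact type name against your installed SDK version):

```xml
<!-- Sketch only: confirm the type name against your SDK version. -->
<TelemetryInitializers>
  <Add Type="Microsoft.ApplicationInsights.Web.ClientIpHeaderTelemetryInitializer, Microsoft.AI.Web">
    <!-- Read the client IP from X-Originating-IP instead of the default headers. -->
    <HeaderNames>
      <Add>X-Originating-IP</Add>
    </HeaderNames>
  </Add>
</TelemetryInitializers>
```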
+For more information on the `Update-AzApplicationInsights` cmdlet, see [Update-AzApplicationInsights](/powershell/module/az.applicationinsights/update-azapplicationinsights). ## Next steps -* Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Application Insights. -* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) works in Application Insights. This article is an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`. The article goes into greater depth on the mechanics of the built-in telemetry initializer. +* Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Azure Monitor. +* Learn how to [set the user IP](opentelemetry-add-modify.md#set-the-user-ip) using OpenTelemetry. |
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | appInsights.loadAppInsights(); If you want to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). -If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing). +If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage.md#confirm-that-data-is-flowing). ## Use the plug-in See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap ## Next steps -- [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).-- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics.-- See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in.-- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.-- See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query. -- Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.+* [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).
+* See the [documentation on utilizing HEART workbook](usage.md#heartfive-dimensions-of-customer-experience) for expanded product analytics. +* See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in. +* Use [Events Analysis in the Usage experience](usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) to analyze top clicks and slice by available dimensions. +* See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query. +* Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data. |
azure-monitor | Javascript Sdk Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md | In this scenario, a 502 or 503 response might be returned to a client because of ## Next steps -* [Track usage](usage-overview.md) +* [Track usage](usage.md) * [Custom events and metrics](api-custom-events-metrics.md)-* [Build-measure-learn](usage-overview.md) * [Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy) <!-- Remote URLs --> |
azure-monitor | Javascript Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md | Yes, the Application Insights JavaScript SDK is open source. To view the source ## Next steps -* [Explore Application Insights usage experiences](usage-overview.md) +* [Explore Application Insights usage experiences](usage.md) * [Track page views](api-custom-events-metrics.md#page-views) * [Track custom events and metrics](api-custom-events-metrics.md) * [Insert a JavaScript telemetry initializer](api-filtering-sampling.md#javascript-telemetry-initializers) |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | If you open live metrics, the SDKs switch to a higher frequency mode and send ne ## Next steps -* [Monitor usage with Application Insights](./usage-overview.md) +* [Monitor usage with Application Insights](./usage.md) * [Use Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) * [Profiler](./profiler.md) * [Snapshot Debugger](./snapshot-debugger.md) |
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | Use the add [custom property example](#add-a-custom-property-to-a-span), but rep ```C# // Add the client IP address to the activity as a tag. // only applicable in case of activity.Kind == Server-activity.SetTag("http.client_ip", "<IP Address>"); +activity.SetTag("client.address", "<IP Address>"); ``` ##### [Java](#tab/java) |
azure-monitor | Overview Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md | Application Insights Logs provides a rich query language that you can use to ana ## Next steps -- [Funnels](./usage-funnels.md)-- [Retention](./usage-retention.md)-- [User flows](./usage-flows.md)-- In the tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study.+* [Funnels](./usage.md#funnelsdiscover-how-customers-use-your-application) +* [Retention](./usage.md#user-retention-analysis) +* [User flows](./usage.md#user-flowsanalyze-user-navigation-patterns) +* In the tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study. > [!div class="nextstepaction"] > [Deep diagnostics](../app/devops.md) |
azure-monitor | Usage Cohorts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md | - Title: Application Insights usage cohorts | Microsoft Docs -description: Analyze different sets or users, sessions, events, or operations that have something in common. - Previously updated : 07/01/2024---# Application Insights cohorts --A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in. --## Cohorts vs. basic filters --You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them. --You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future. -> [!NOTE] -> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools. --## Example: Engaged users --Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users. --1. Select **Create a Cohort**. -1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. -1. Select **Engaged Users -- by Days Used**. -- There are three parameters for this cohort: - * **Activities**: Where you choose which events and page views count as usage. - * **Period**: The definition of a month. - * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged. --1. 
Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days. -- Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days. --1. Select **Save**. -- > [!TIP] - > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort. --1. Select **Back to Gallery**. --### What can you do by using this cohort? --Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**. ---Important points to notice: --* You can't create this set through normal filters. The date logic is more advanced. -* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days. --These filters support more sophisticated questions that are impossible to express through the query builder. An example is _people who were engaged in the past 28 days. How did those same people behave over the past 60 days?_ --## Example: Events cohort --You can also make cohorts of events. In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature. --1. Select **Create a Cohort**. -1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. -1. Select **Events Picker**. -1. In the **Activities** dropdown box, select the events you want to be in the cohort. -1. Save the cohort and give it a name. --## Example: Active users where you modify a query --The previous two cohorts were defined by using dropdown boxes. 
You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom. --1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**. -- :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png"::: -- There are three sections: -- * **Markdown text**: Where you describe the cohort in more detail for other members on your team. - * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples. - * **Query**: Where you define the cohort by using an analytics query. -- In the query section, you [write an analytics query](/azure/kusto/query). The query selects the certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query. This data appears as a preview underneath the query in a table, so you can make sure your query is returning results. -- > [!NOTE] - > If you don't see the query, resize the section to make it taller and reveal the query. --1. Copy and paste the following text into the query editor: -- ```KQL - union customEvents, pageViews - | where client_CountryOrRegion == "United Kingdom" - ``` --1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users. --1. Save and name the cohort. --## Frequently asked question --### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results? --Cohorts and filters are different. 
Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom`: --* The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. If you split by country or region, you likely see many countries and regions. -* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom. --## Learn more --* [Analytics query language](../logs/log-analytics-tutorial.md?toc=%2fazure%2fazure-monitor%2ftoc.json) -* [Users, sessions, events](usage-segmentation.md) -* [User flows](usage-flows.md) -* [Usage overview](usage-overview.md) |
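Putting the implicit clause together with the United Kingdom template above, the effective cohort query is roughly the following sketch (the Cohorts tool appends the `summarize` step for you):

```KQL
union customEvents, pageViews
| where client_CountryOrRegion == "United Kingdom"
| summarize by user_Id
```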
azure-monitor | Usage Flows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md | - Title: Application Insights User Flows analyzes navigation flows -description: Analyze how users move between the pages and features of your web app. - Previously updated : 12/15/2023----# Analyze user navigation patterns with User Flows in Application Insights ---The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like: --* How do users move away from a page on your site? -* What do users select on a page on your site? -* Where are the places that users churn most from your site? -* Are there places where users repeat the same action over and over? --The User Flows tool starts from an initial custom event, exception, dependency, page view or request that you specify. From this initial event, User Flows shows the events that happened before and after user sessions. Lines of varying thickness show how many times users followed each path. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site. --> [!NOTE] -> Your Application Insights resource must contain page views or custom events to use the User Flows tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). -> --## Choose an initial event ---To begin answering questions with the User Flows tool, choose an initial custom event, exception, dependency, page view or request to serve as the starting point for the visualization: --1. Select the link in the **What do users do after?** title or select **Edit**. -1. Select a custom event, exception, dependency, page view or request from the **Initial event** dropdown list. -1. Select **Create graph**. 
--The **Step 1** column of the visualization shows what users did most frequently after the initial event. The items are ordered from top to bottom and from most to least frequent. The **Step 2** and subsequent columns show what users did next. The information creates a picture of all the ways that users moved through your site. --By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from your site. You can increase the time range and change the balance of performance and accuracy for random sampling on the **Edit** menu. --If some of the page views, custom events, and exceptions aren't relevant to you, select **X** on the nodes you want to hide. After you've selected the nodes you want to hide, select **Create graph**. To see all the nodes you've hidden, select **Edit** and look at the **Excluded events** section. --If page views or custom events are missing that you expect to see in the visualization: --* Check the **Excluded events** section on the **Edit** menu. -* Use the plus buttons on **Others** nodes to include less-frequent events in the visualization. -* If the page view or custom event you expect is sent infrequently by users, increase the time range of the visualization on the **Edit** menu. -* Make sure the custom event, exception, dependency, page view or request you expect is set up to be collected by the Application Insights SDK in the source code of your site. Learn more about [collecting custom events](./api-custom-events-metrics.md). --If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdown lists above the visualization. --## After users visit a page or feature, where do they go and what do they select? ---If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page. --Open your site in a window next to the User Flows visualization. 
Compare your expectations of how users interact with the page to the list of events in the **Step 1** column. Often, a UI element on the page that seems insignificant to your team can be among the most used on the page. It can be a great starting point for design improvements to your site. --If your initial event is a custom event, the first column shows what users did after they performed that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations. --If your selected initial event is **Added Item to Shopping Cart**, for example, look to see if **Go to Checkout** and **Completed Purchase** appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design. --## Where are the places that users churn most from your site? --Watch for **Session Ended** nodes that appear high up in a column in the visualization, especially early in a flow. This positioning means many users probably churned from your site after they followed the preceding path of pages and UI interactions. --Sometimes churn is expected. For example, it's expected after a user makes a purchase on an e-commerce site. But usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved. --Keep in mind that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users might have interacted with your site in those ways after the User Flows tool says the session ended. --## Are there places where users repeat the same action over and over? --Look for a page view or custom event that's repeated by many users across subsequent steps in the visualization. This activity usually means that users are performing repetitive actions on your site. 
If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk edit functionality if you find users performing repetitive actions on each row of a table element. --## Frequently asked questions --This section provides answers to common questions. --### Does the initial event represent the first time the event appears in a session or any time it appears in a session? --The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the **Step 1** column only shows how users behave after the *first* instance of an initial event, not all instances. --### Some of the nodes in my visualization have a level that's too high. How can I get more detailed nodes? --Use the **Split by** options on the **Edit** menu: --1. Select the event you want to break down on the **Event** menu. -1. Select a dimension on the **Dimension** menu. For example, if you have an event called **Button Clicked**, try a custom property called **Button Name**. --## Next steps --* [Usage overview](usage-overview.md) -* [Users, sessions, and events](usage-segmentation.md) -* [Retention](usage-retention.md) -* [Adding custom events to your app](./api-custom-events-metrics.md) |
azure-monitor | Usage Funnels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md | - Title: Application Insights funnels -description: Learn how you can use funnels to discover how customers are interacting with your application. - Previously updated : 01/31/2024----# Discover how customers are using your application with Application Insights funnels --Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates. --## Create your funnel -Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket. --To create a funnel: --1. On the **Funnels** tab, select **Edit**. -1. Choose your **Top Step**. -- :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png"::: --1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step. -1. Then choose your **Second Step** and so on. -- > [!NOTE] - > Funnels are limited to a maximum of six steps. --1. Select the **View** tab to see your funnel results. -- :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png"::: --1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels. 
--### Funnels features --Funnels have the following features: --- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane that explains how to turn off sampling.-- Select a step to see more details on the right.-- The historical conversion graph shows the conversion rates over the last 90 days.-- Understand your users better by accessing the users tool. You can use filters in each step.--## Next steps -- * [Usage overview](usage-overview.md) - * [Users, sessions, and events](usage-segmentation.md) - * [Retention](usage-retention.md) - * [Workbooks](../visualize/workbooks-overview.md) - * [Add user context](./usage-overview.md) - * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md) |
azure-monitor | Usage Heart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md | - Title: HEART analytics workbook -description: Product teams can use the HEART workbook to measure success across five user-centric dimensions to deliver better software. - Previously updated : 07/01/2024----# Analyze product usage with HEART -This article describes how to enable and use the Heart Workbook on Azure Monitor. The HEART workbook is based on the HEART measurement framework, which was originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software. --## Overview -HEART is an acronym that stands for happiness, engagement, adoption, retention, and task success. It helps product teams deliver better software by focusing on five dimensions of customer experience: --- **Happiness**: Measure of user attitude-- **Engagement**: Level of active user involvement-- **Adoption**: Target audience penetration-- **Retention**: Rate at which users return-- **Task success**: Productivity empowerment--These dimensions are measured independently, but they interact with each other. ---- Adoption, engagement, and retention form a user activity funnel. Only a portion of users who adopt the tool come back to use it.-- Task success is the driver that progresses users down the funnel and moves them from adoption to retention.-- Happiness is an outcome of the other dimensions and not a stand-alone measurement. 
Users who have progressed down the funnel and are showing a higher level of activity are ideally happier.--## Get started -### Prerequisites -- | Source | Attribute | Description | - |--|-|--| - | customEvents | session_Id | Unique session identifier | - | customEvents | appName | Unique Application Insights app identifier | - | customEvents | itemType | Category of customEvents record | - | customEvents | timestamp | Datetime of event | - | customEvents | operation_Id | Correlate telemetry events | - | customEvents | user_Id | Unique user identifier | - | customEvents ¹ | parentId | Name of feature | - | customEvents ¹ | pageName | Name of page | - | customEvents ¹ | actionType | Category of Click Analytics record | - | pageViews | user_AuthenticatedId | Unique authenticated user identifier | - | pageViews | session_Id | Unique session identifier | - | pageViews | appName | Unique Application Insights app identifier | - | pageViews | timestamp | Datetime of event | - | pageViews | operation_Id | Correlate telemetry events | - | pageViews | user_Id | Unique user identifier | --- If you're setting up the authenticated user context, instrument the below attributes:--| Source | Attribute | Description | -|--|-|--| -| customEvents | user_AuthenticatedId | Unique authenticated user identifier | --**Footnotes** --¹: To emit these attributes, use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm. -->[!TIP] -> To understand how to effectively use the Click Analytics plug-in, see [Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](javascript-feature-extensions.md#use-the-plug-in). --### Open the workbook -You can find the workbook in the gallery under **Public Templates**. The workbook appears in the section **Product Analytics using the Click Analytics Plugin**. ---There are seven workbooks. ---You only have to interact with the main workbook, **HEART Analytics - All Sections**.
This workbook contains the other six workbooks as tabs. You can also access the individual workbooks related to each tab through the gallery. --### Confirm that data is flowing --To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. --> [!IMPORTANT] -> Unless you [set the authenticated user context](./javascript-feature-extensions.md#optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data. ---If data isn't flowing as expected, this tab shows the specific attributes with issues. ---## Workbook structure -The workbook shows metric trends for the HEART dimensions split over seven tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them. --The tabs are: --- **Summary**: Summarizes usage funnel metrics for a high-level view of visits, interactions, and repeat usage.-- **Adoption**: Helps you understand the penetration among the target audience, acquisition velocity, and total user base.-- **Engagement**: Shows frequency, depth, and breadth of usage.-- **Retention**: Shows repeat usage.-- **Task success**: Enables understanding of user flows and their time distributions.-- **Happiness**: We recommend using a survey tool to measure customer satisfaction score (CSAT) over a five-point scale. On this tab, we've provided the likelihood of happiness via usage and performance metrics.-- **Feature metrics**: Enables understanding of HEART metrics at feature granularity.--> [!WARNING] -> The HEART workbook is currently built on logs, so its metrics are effectively [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics is negatively affected by sampling and filtering. --## How HEART dimensions are defined and measured --### Happiness --Happiness is a user-reported dimension that measures how users feel about the product offered to them. 
--A common approach to measure happiness is to ask users a CSAT question like "How satisfied are you with this product?" Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals. --Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources). --### Engagement --Engagement is a measure of user activity. Specifically, it measures intentional user actions, such as clicks. Active usage can be broken down into three subdimensions: --- **Activity frequency**: Measures how often a user interacts with the product. For example, users typically interact daily, weekly, or monthly.-- **Activity breadth**: Measures the number of features users interact with over a specific time period. For example, users interacted with a total of five features in June 2021.-- **Activity depth**: Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch.--Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, which makes it an important metric to track. But for a product like a paycheck portal, measurement might make more sense at a monthly or weekly level. -->[!IMPORTANT] ->A user who performs an intentional action, such as clicking a button or typing an input, is counted as an active user. For this reason, engagement metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. 
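As an illustration of measuring activity frequency from raw telemetry, the following Log Analytics (KQL) sketch counts daily active users from Click Analytics events. This is a hedged example, not part of the workbook itself: it assumes the plug-in is emitting `customEvents` records that carry an `actionType` custom dimension, as listed in the prerequisites table.

```kusto
// Daily active users, counting only users who performed an intentional
// action (any customEvents record with a Click Analytics actionType)
customEvents
| where timestamp > ago(30d)
| where isnotempty(tostring(customDimensions.actionType))
| summarize dailyActiveUsers = dcount(user_Id) by bin(timestamp, 1d)
| render timechart
```

Narrowing the time window or grouping by `bin(timestamp, 7d)` gives the weekly and monthly variants of the same frequency measure.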
--### Adoption --Adoption enables understanding of penetration among the relevant users, who you're gaining as your user base, and how you're gaining them. Adoption metrics are useful for measuring: --- Newly released products.-- Newly updated products.-- Marketing campaigns.--### Retention --A retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics. --| Metric | Definition | Question answered | -|-|-|-| -| Retained users | Count of active users who were also active the previous period | How many users are staying engaged with the product? | -| Retention | Proportion of active users from the previous period who are also active this period | What percent of users are staying engaged with the product? | -->[!IMPORTANT] ->Because active users must have at least one telemetry event with an action type, retention metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. --### Task success --Task success tracks whether users can do a task efficiently and effectively by using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include: --- Adding items to a cart and then completing a purchase.-- Searching a keyword and then selecting a result.-- Starting a new account and then completing account registration.--A successful task meets three requirements: -- **Expected task flow**: The intended task flow of the feature was completed by the user and aligns with the expected task flow.-- **High performance**: The intended functionality of the feature was accomplished in a reasonable amount of time.-- **High reliability**: The intended functionality of the feature was accomplished without failure.--A task is considered unsuccessful if any of the preceding requirements isn't met. 
-->[!IMPORTANT] ->Task success metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. --Set up a custom task by using the following parameters. --| Parameter | Description | -|-|-| -| First step | The feature that starts the task. In the cart/purchase example, **Adding items to a cart** is the first step. | -| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint are considered a failure. Not all tasks necessarily have a time constraint. For such tasks, select **No Time Expectation**. | -| Last step | The feature that completes the task. In the cart/purchase example, **Purchasing items from the cart** is the last step. | --## Frequently asked questions --### How do I view the data at different grains (daily, monthly, or weekly)? -You can select the **Date Grain** filter to change the grain. The filter is available across all the dimension tabs. ---### How do I access insights from my application that aren't available on the HEART workbooks? --You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, under the **Monitoring** section, select **Logs** and query the `customEvents` table. Some of the Click Analytics attributes are contained within the `customDimensions` field. A sample query is shown here. ---To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../logs/data-platform-logs.md). --### Can I edit visuals in the workbook? --Yes. When you select the public template of the workbook: --1. Select **Edit** and make your changes. -- :::image type="content" source="media/usage-overview/workbook-edit-faq.png" alt-text="Screenshot that shows the Edit button in the upper-left corner of the workbook template."::: --1. After you make your changes, select **Done Editing**, and then select the **Save** icon. 
-- :::image type="content" source="media/usage-overview/workbook-save-faq.png" alt-text="Screenshot that shows the Save icon at the top of the workbook template that becomes available after you make edits."::: --1. To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab. -- A copy of your customized workbook appears there. You can make any further changes you want in this copy. -- :::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot that shows the Workbooks tab next to the Public Templates tab, where the edited copy of the workbook is located."::: --For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md). --## Next steps -- Check out the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection plug-in.-- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.-- Find click data under the content field within the `customDimensions` attribute in the `CustomEvents` table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for more guidance.-- Learn more about the [Google HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf). |
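To complement the FAQ above on querying the `customEvents` table directly, here is a hedged KQL sketch that unpacks the Click Analytics attributes from the `customDimensions` field. The `pageName`, `parentId`, and `actionType` dimension names follow the prerequisites table earlier in the article; adjust them if your instrumentation differs.

```kusto
// Unpack Click Analytics attributes from customDimensions for ad hoc analysis
customEvents
| where timestamp > ago(7d)
| extend pageName = tostring(customDimensions.pageName),
         featureName = tostring(customDimensions.parentId),
         actionType = tostring(customDimensions.actionType)
| where isnotempty(actionType)
| summarize clicks = count(), users = dcount(user_Id) by pageName, featureName
| order by clicks desc
```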
azure-monitor | Usage Impact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-impact.md | - Title: Application Insights usage impact - Azure Monitor -description: Analyze how different properties potentially affect conversion rates for parts of your apps. - Previously updated : 07/01/2024---# Impact analysis with Application Insights --Impact analyzes how load times and other properties influence conversion rates for various parts of your app. To put it more precisely, it discovers how any dimension of a page view, custom event, or request affects the usage of a different page view or custom event. --## Still not sure what Impact does? --One way to think of Impact is as the ultimate tool for settling arguments with someone on your team about how slowness in some aspect of your site is affecting whether users stick around. Users might tolerate some slowness, but Impact gives you insight into how best to balance optimization and performance to maximize user conversion. --Analyzing performance is only a subset of Impact's capabilities. Impact supports custom events and dimensions, so you can easily answer questions like "How does user browser choice correlate with different rates of conversion?" --> [!NOTE] -> Your Application Insights resource must contain page views or custom events to use the Impact analysis workbook. Learn how to [set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also, because you're analyzing correlation, sample size matters. --## Impact analysis workbook --To use the Impact analysis workbook, in your Application Insights resource, go to **Usage** > **More** and select **User Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**. ---### Use the workbook ---1. From the **Selected event** dropdown list, select an event. -1. 
From the **analyze how its** dropdown list, select a metric. -1. From the **Impacting event** dropdown list, select an event. -1. To add a filter, use the **Add selected event filters** tab or the **Add impacting event filters** tab. --## Is page load time affecting how many people convert on my page? --To begin answering questions with the Impact workbook, choose an initial page view, custom event, or request. --1. From the **Selected event** dropdown list, select an event. -1. Leave the **analyze how its** dropdown list on the default selection of **Duration**. (In this context, **Duration** is an alias for **Page Load Time**.) -1. From the **Impacting event** dropdown list, select a custom event. This event should correspond to a UI element on the page view you selected in step 1. -- :::image type="content" source="./media/usage-impact/impact.png" alt-text="Screenshot that shows an example with the selected event as Home Page analyzed by duration." lightbox="./media/usage-impact/impact.png"::: --## What if I'm tracking page views or load times in custom ways? --Impact supports both standard and custom properties and measurements. Use whatever you want. Instead of duration, use filters on the primary and secondary events to get more specific. --## Do users from different countries or regions convert at different rates? --1. From the **Selected event** dropdown list, select an event. -1. From the **analyze how its** dropdown list, select **Country or region**. -1. From the **Impacting event** dropdown list, select a custom event that corresponds to a UI element on the page view you chose in step 1. -- :::image type="content" source="./media/usage-impact/regions.png" alt-text="Screenshot that shows an example with the selected event as GET analyzed by country and region." lightbox="./media/usage-impact/regions.png"::: --## How does the Impact analysis workbook calculate these conversion rates? 
--Under the hood, the Impact analysis workbook relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). Results are computed between -1 and 1. The coefficient -1 represents a negative linear correlation and 1 represents a positive linear correlation. --The basic breakdown of how Impact analysis works is listed here: --* Let _A_ = the main page view, custom event, or request you select in the **Selected event** dropdown list. -* Let _B_ = the secondary page view or custom event you select in the **impacts the usage of** dropdown list. --Impact looks at a sample of all the sessions from users in the selected time range. For each session, it looks for each occurrence of _A_. --Sessions are then broken into two different kinds of _subsessions_ based on one of two conditions: --- A converted subsession consists of a session ending with a _B_ event and encompasses all _A_ events that occur prior to _B_.-- An unconverted subsession occurs when all *A*s occur without a terminal _B_.--How Impact is ultimately calculated varies based on whether we're analyzing by metric or by dimension. For metrics, all *A*s in a subsession are averaged. For dimensions, the value of each _A_ contributes _1/N_ to the value assigned to _B_, where _N_ is the number of *A*s in the subsession. --## Next steps --- To learn more about workbooks, see the [Workbooks overview](../visualize/workbooks-overview.md).-- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).-- If you already send custom events or page views, explore the Usage tools to learn how users use your service:- - [Funnels](usage-funnels.md) - - [Retention](usage-retention.md) - - [User flows](usage-flows.md) - - [Workbooks](../visualize/workbooks-overview.md) - - [Add user context](./usage-overview.md) |
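The subsession logic above can be approximated with a hedged KQL sketch that compares average page load time for sessions that did and didn't convert. This is a simplified comparison, not the Pearson coefficient the workbook computes, and "Home Page" and "Purchase" are placeholder event names; substitute your own page view and custom event.

```kusto
// Compare average page load time for converting vs. non-converting sessions.
// "Home Page" and "Purchase" are placeholder names for this sketch.
let purchases = customEvents
    | where timestamp > ago(7d) and name == "Purchase"
    | distinct session_Id
    | extend converted = true;
pageViews
| where timestamp > ago(7d) and name == "Home Page"
| join kind=leftouter purchases on session_Id
| extend converted = coalesce(converted, false)
| summarize avgLoadMs = avg(duration), sessions = dcount(session_Id) by converted
```

A large gap between the two `avgLoadMs` values suggests load time is worth investigating with the workbook's proper correlation analysis.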
azure-monitor | Usage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md | - Title: Usage analysis with Application Insights | Azure Monitor -description: Understand your users and what they do with your app. - Previously updated : 09/12/2023----# Usage analysis with Application Insights --Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data-driven decisions about your next development cycles. --## Send telemetry from your app --The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis. --1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app. -- * If you don't want to install server code, [create an Application Insights resource](./create-workspace-resource.md). --1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md). - - [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)] -- To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md). --1. **Mobile app code:** Use the App Center SDK to collect events from your app. 
Then send copies of these events to Application Insights for analysis by [following this guide](https://github.com/Microsoft/appcenter). --1. **Get telemetry:** Run your project in debug mode for a few minutes. Then look for results in the **Overview** pane in Application Insights. -- Publish your app to monitor your app's performance and find out what your users are doing with your app. --## Explore usage demographics and statistics --Find out when people use your app and what pages they're most interested in. You can also find out where your users are located and what browsers and operating systems they use. --The **Users** and **Sessions** reports filter your data by pages or custom events. The reports segment the data by properties such as location, environment, and page. You can also add your own filters. ---Insights on the right point out interesting patterns in the set of data. --* The **Users** report counts the number of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they're counted more than once. -* The **Sessions** report tabulates the number of user sessions that access your site. A session represents a period of activity initiated by a user and concludes with a period of inactivity exceeding half an hour. --For more information about the Users, Sessions, and Events tools, see [Users, sessions, and events analysis in Application Insights](usage-segmentation.md). --## Retention: How many users come back? --Retention helps you understand how often your users return to use your app, based on cohorts of users that performed some business action during a certain time bucket. 
You can: --- Understand what specific features cause users to come back more than others.-- Form hypotheses based on real user data.-- Determine whether retention is a problem in your product.---You can use the retention controls on top to define specific events and time ranges to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a specific time period. This level of detail allows you to understand what your users are doing and what might affect returning users on a more detailed granularity. --For more information about the Retention workbook, see [User retention analysis for web applications with Application Insights](usage-retention.md). --## Custom business events --To understand user interactions in your app, insert code lines to log custom events. These events track various user actions, like button selections, or important business events, such as purchases or game victories. --You can also use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) to collect custom events. --In some cases, page views can represent useful events, but this isn't true in general. A user can open a product page without buying the product. --With specific business events, you can chart your users' progress through your site. You can find out their preferences for different options and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog. --Events can be logged from the client side of the app: --```JavaScript - appInsights.trackEvent({name: "incrementCount"}); -``` --Or events can be logged from the server side: --```csharp - var tc = new Microsoft.ApplicationInsights.TelemetryClient(); - tc.TrackEvent("CreatedAccount", new Dictionary<string,string> {{"AccountType", account.Type}}, null); - ... 
 - tc.TrackEvent("AddedItemToCart", new Dictionary<string,string> {{"Item", item.Name}}, null); - ... - tc.TrackEvent("CompletedPurchase"); -``` --You can attach property values to these events so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user. --Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties). --### Slice and dice events --In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties. ---Whenever you're in any usage experience, select the **Open the last run query** icon to take you back to the underlying query. ---You can then modify the underlying query to get the kind of information you're looking for. --Here's an example of an underlying query about page views. Go ahead and paste it directly into the query editor to test it out. --```kusto -// average pageView duration by name -let timeGrain=5m; -let dataset=pageViews -// additional filters can be applied here -| where timestamp > ago(1d) -| where client_Type == "Browser" ; -// calculate average pageView duration for all pageViews -dataset -| summarize avg(duration) by bin(timestamp, timeGrain) -| extend pageView='Overall' -// render result in a chart -| render timechart -``` - -## Design the telemetry with the app --When you design each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start. --## A/B testing --If you're unsure which feature variant is more successful, release both and let different users access each variant. Measure the success of each variant, and then transition to a unified version. 
--In this technique, you attach unique property values to all the telemetry sent by each version of your app. You can do it by defining properties in the active TelemetryContext. These default properties get included in every telemetry message sent by the application. This includes both custom messages and standard telemetry. --In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions. --To do this step, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer): --```csharp - // Telemetry initializer class - public class MyTelemetryInitializer : ITelemetryInitializer - { - // In this example, to differentiate versions, we use the value specified in the AssemblyInfo.cs - // for ASP.NET apps, or in your project file (.csproj) for the ASP.NET Core apps. Make sure that - // you set a different assembly version when you deploy your application for A/B testing. - static readonly string _version = - System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString(); - - public void Initialize(ITelemetry item) - { - item.Context.Component.Version = _version; - } - } -``` --# [.NET 6.0+](#tab/aspnetcore) --For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, add a new telemetry initializer to the Dependency Injection service collection in the `Program.cs` class. --```csharp -using Microsoft.ApplicationInsights.Extensibility; --builder.Services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>(); -``` --# [.NET Framework 4.8](#tab/aspnet-framework) --In the web app initializer, such as `Global.asax.cs`: --```csharp -- protected void Application_Start() - { - // ... 
- TelemetryConfiguration.Active.TelemetryInitializers - .Add(new MyTelemetryInitializer()); - } -``` ----## Next steps -- - [Users, sessions, and events](usage-segmentation.md) - - [Funnels](usage-funnels.md) - - [Retention](usage-retention.md) - - [User Flows](usage-flows.md) - - [Workbooks](../visualize/workbooks-overview.md) |
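As a companion to the A/B testing section, this hedged KQL sketch compares variants in Log Analytics by splitting custom events on `application_Version`, the field populated by the telemetry initializer shown earlier. Event names and the time window are illustrative.

```kusto
// Compare event volume and user reach per deployed app version (A/B sketch)
customEvents
| where timestamp > ago(7d)
| summarize occurrences = count(), users = dcount(user_Id)
    by application_Version, name
| order by application_Version asc, occurrences desc
```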
azure-monitor | Usage Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-retention.md | - Title: Analyze web app user retention with Application Insights -description: This article shows you how to determine how many users return to your app. - Previously updated : 06/23/2023----# User retention analysis for web applications with Application Insights --The retention feature in [Application Insights](./app-insights-overview.md) helps you analyze how many users return to your app, and how often they perform particular tasks or achieve goals. For example, if you run a game site, you could compare the number of users who return to the site after losing a game with the number who return after winning. This knowledge can help you improve your user experience and your business strategy. --## Get started --If you don't yet see data in the retention tool in the Application Insights portal, [learn how to get started with the usage tools](usage-overview.md). --## The Retention workbook --To use the Retention workbook, in your Application Insights resource, go to **Usage** > **Retention** > **Retention Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Retention Analysis**. ---### Use the workbook ---Workbook capabilities: --- By default, retention shows all users who did anything and then came back and did anything else over a defined period. You can select different combinations of events to narrow the focus on specific user activities.-- To add one or more filters on properties, select **Add Filters**. For example, you can focus on users in a particular country or region.-- The **Overall Retention** chart shows a summary of user retention across the selected time period.-- The grid shows the number of users retained. Each row represents a cohort of users who performed any event in the time period shown. 
Each cell in the row shows how many of that cohort returned at least once in a later period. Some users might return in more than one period.-- The insights cards show the top five initiating events and the top five returned events. This information gives users a better understanding of their retention report.-- :::image type="content" source="./media/usage-retention/retention-2.png" alt-text="Screenshot that shows the Retention workbook showing the User returned after # of weeks chart." lightbox="./media/usage-retention/retention-2.png"::: --## Use business events to track retention --You should measure events that represent significant business activities to get the most useful retention analysis. --For more information and example code, see [Custom business events](usage-overview.md#custom-business-events). --To learn more, see [writing custom events](./api-custom-events-metrics.md#trackevent). --## Next steps --- To learn more about workbooks, see the [workbooks overview](../visualize/workbooks-overview.md).-- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).-- If you already send custom events or page views, explore the Usage tools to learn how users use your service:- - [Users, sessions, events](usage-segmentation.md) - - [Funnels](usage-funnels.md) - - [User flows](usage-flows.md) - - [Workbooks](../visualize/workbooks-overview.md) - - [Add user context](./usage-overview.md) |
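The retained-user count and retention proportion defined in the table above can be approximated directly in Log Analytics. A hedged week-over-week sketch, assuming any `customEvents` record counts as activity:

```kusto
// Week-over-week retention sketch: share of last week's active users
// who were also active this week
let lastWeek = customEvents
    | where timestamp between (ago(14d) .. ago(7d))
    | distinct user_Id;
let priorCount = toscalar(lastWeek | count);
customEvents
| where timestamp > ago(7d)
| distinct user_Id
| join kind=inner lastWeek on user_Id
| summarize retainedUsers = count()
| extend retention = todouble(retainedUsers) / priorCount
```

Restricting both halves to a specific business event (for example, a purchase) turns this into the event-scoped retention the workbook reports.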
azure-monitor | Usage Segmentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md | - Title: User, session, and event analysis in Application Insights -description: Demographic analysis of users of your web app. - Previously updated : 07/01/2024----# User, session, and event analysis in Application Insights --Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using [Application Insights](./app-insights-overview.md). ---## Get started --If you don't yet see data in the **Users**, **Sessions**, or **Events** panes in the Application Insights portal, [learn how to get started with the Usage tools](usage-overview.md). --## The Users, Sessions, and Events segmentation tool --Three of the **Usage** panes use the same tool to slice and dice telemetry from your web app from three perspectives. By filtering and splitting the data, you can uncover insights about the relative use of different pages and features. --* **Users tool**: How many people used your app and its features? Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user. -* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use. -* **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md). -- A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. 
You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension. --> [!NOTE] -> For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id). --Clicking **View More Insights** displays the following information: -- Application Performance: Sessions, Events, and a Performance evaluation related to users' perception of responsiveness.-- Properties: Charts containing up to six user properties such as browser version, country or region, and operating system.-- Meet Your Users: View timelines of user activity.--## Query for certain users --Explore different groups of users by adjusting the query options at the top of the Users tool: --- **During**: Choose a time range.-- **Show**: Choose a cohort of users to analyze.-- **Who used**: Choose custom events, requests, and page views.-- **Events**: Choose multiple events, requests, and page views that will show users who did at least one, not necessarily all, of the selected options.-- **By value x-axis**: Choose how to categorize the data, either by time range or by another property, such as browser or city.-- **Split By**: Choose a property to use to split or segment the data.-- **Add Filters**: Limit the query to certain users, sessions, or events based on their properties, such as browser or city.--## Meet your users --The **Meet your users** section shows information about five sample users matched by the current query. Exploring the behaviors of individuals and in aggregate can provide insights about how people use your app. 
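The slicing the Users tool performs can also be reproduced in Log Analytics. A hedged sketch counting distinct users by browser and country or region over the last week:

```kusto
// Distinct users by browser and country or region over the last 7 days
pageViews
| where timestamp > ago(7d)
| summarize users = dcount(user_Id) by client_Browser, client_CountryOrRegion
| top 10 by users
```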
--## Next steps --- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).-- If you already send custom events or page views, explore the **Usage** tools to learn how users use your service.- - [Funnels](usage-funnels.md) - - [Retention](usage-retention.md) - - [User flows](usage-flows.md) - - [Workbooks](../visualize/workbooks-overview.md) - - [Add user context](./usage-overview.md) |
azure-monitor | Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage.md | + + Title: Usage analysis with Application Insights | Azure Monitor +description: Understand your users and what they do with your application. + Last updated : 07/16/2024++++# Usage analysis with Application Insights ++Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? ++[Application Insights](./app-insights-overview.md) is a powerful tool for monitoring the performance and usage of your applications. It provides insights into how users interact with your app, identifies areas for improvement, and helps you understand the impact of changes. With this knowledge, you can make data-driven decisions about your next development cycles. ++This article covers the following areas: ++* [Users, Sessions & Events](#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) - Track and analyze user interaction with your application, session trends, and specific events to gain insights into user behavior and app performance. +* [Funnels](#funnelsdiscover-how-customers-use-your-application) - Understand how users progress through a series of steps in your application and where they might be dropping off. +* [User Flows](#user-flowsanalyze-user-navigation-patterns) - Visualize user paths to identify the most common routes and pinpoint areas where users are most engaged or may encounter issues. +* [Cohorts](#cohortsanalyze-a-specific-set-of-users-sessions-events-or-operations) - Group users or events by common characteristics to analyze behavior patterns, feature usage, and the impact of changes over time. 
+* [Impact Analysis](#impact-analysisdiscover-how-different-properties-influence-conversion-rates) - Analyze how application performance metrics, like load times, influence user experience and behavior, to help you prioritize improvements. +* [HEART](#heartfive-dimensions-of-customer-experience) - Use the HEART framework to measure and understand user Happiness, Engagement, Adoption, Retention, and Task success. ++## Send telemetry from your application ++To optimize your experience, consider integrating Application Insights into both your app server code and your webpages. This dual implementation enables telemetry collection from both the client and server components of your application. ++1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app. ++ If you don't want to install server code, [create an Application Insights resource](./create-workspace-resource.md). ++1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md). ++ [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)] ++ For more advanced website monitoring configurations, see the [JavaScript SDK reference article](./javascript.md). ++1. **Mobile app code:** Use the App Center SDK to collect events from your app. Then send copies of these events to Application Insights for analysis by [following this guide](https://github.com/Microsoft/appcenter). ++1. **Get telemetry:** Run your project in debug mode for a few minutes. Then look for results in the **Overview** pane in Application Insights. ++ Publish your app to monitor its performance and find out what your users are doing with it. 
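For the webpage step, a minimal npm-based setup with the JavaScript SDK looks roughly like the following sketch. This is a configuration fragment, not a complete app: the connection string is a placeholder, so substitute the value from your own Application Insights resource, and see the linked SDK articles for the full set of configuration options.

```javascript
// Minimal setup sketch for the Application Insights JavaScript SDK (npm).
// The connection string below is a placeholder, not a real value.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000'
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView(); // report the initial page view
```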
++## Users, Sessions, and Events - Analyze telemetry from three perspectives ++Three of the **Usage** panes use the same tool to slice and dice telemetry from your web app from three perspectives. By filtering and splitting the data, you can uncover insights about the relative use of different pages and features. ++* **Users tool**: How many people used your app and its features? Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user. ++* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use. ++* **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md). ++ A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension. ++> [!NOTE] +> For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id). ++Clicking **View More Insights** displays the following information: ++* **Application Performance:** Sessions, Events, and a Performance evaluation related to users' perception of responsiveness. +* **Properties:** Charts containing up to six user properties such as browser version, country or region, and operating system. +* **Meet Your Users:** View timelines of user activity. 
++### Explore usage demographics and statistics ++Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using Application Insights. +++* The **Users** report counts the number of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they're counted more than once. ++* The **Sessions** report tabulates the number of user sessions that access your site. A session begins when a user initiates activity and ends after more than half an hour of inactivity. ++#### Query for certain users ++Explore different groups of users by adjusting the query options at the top of the Users pane: ++| Option | Description | +|--|-| +| During | Choose a time range. | +| Show | Choose a cohort of users to analyze. | +| Who used | Choose custom events, requests, and page views. | +| Events | Choose multiple events, requests, and page views that will show users who did at least one, not necessarily all, of the selected options. | +| By value x-axis | Choose how to categorize the data, either by time range or by another property, such as browser or city. | +| Split By | Choose a property to use to split or segment the data. | +| Add Filters | Limit the query to certain users, sessions, or events based on their properties, such as browser or city. | ++#### Meet your users ++The **Meet your users** section shows information about five sample users matched by the current query. Exploring the behaviors of users individually and in aggregate can provide insights about how people use your app. 
++### User retention analysis ++The Application Insights retention feature provides valuable insights into user engagement by tracking the frequency and patterns of users returning to your app and their interactions with specific features. It enables you to compare user behaviors, such as the difference in return rates between users who win or lose a game, offering actionable data to enhance user experience and inform business strategies. ++By analyzing cohorts of users based on their actions within a given timeframe, you can identify which features drive repeat usage. This knowledge can help you: ++* Understand what specific features cause users to come back more than others. +* Determine whether retention is a problem in your product. +* Form hypotheses based on real user data to help you improve the user experience and your business strategy. +++You can use the retention controls at the top to define specific events and time ranges to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a specific time period. This level of detail allows you to understand what your users are doing and what might affect returning users, at a finer granularity. ++For more information about the Retention workbook, see the section below. ++#### The retention workbook ++To use the retention workbook in Application Insights, navigate to the **Workbooks** pane, select **Public Templates** at the top, and locate the **User Retention Analysis** workbook listed under the **Usage** category. +++**Workbook capabilities:** ++* By default, retention shows all users who did anything and then came back and did anything else over a defined period. You can select different combinations of events to narrow the focus on specific user activities. ++* To add one or more filters on properties, select **Add Filters**. 
For example, you can focus on users in a particular country or region. ++* The **Overall Retention** chart shows a summary of user retention across the selected time period. ++* The grid shows the number of users retained. Each row represents a cohort of users who performed any event in the time period shown. Each cell in the row shows how many of that cohort returned at least once in a later period. Some users might return in more than one period. ++* The insights cards show the top five initiating events and the top five returned events. This information gives users a better understanding of their retention report. ++ :::image type="content" source="./media/usage-retention/retention-2.png" alt-text="Screenshot that shows the Retention workbook showing the User returned after number of weeks chart." lightbox="./media/usage-retention/retention-2.png"::: ++#### Use business events to track retention ++You should measure events that represent significant business activities to get the most useful retention analysis. ++For more information and example code, see the section below. ++### Track user interactions with custom events ++To understand user interactions in your app, insert a few lines of code to log custom events. These events track various user actions, like button selections, or important business events, such as purchases or game victories. ++You can also use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) to collect custom events. ++> [!TIP] +> When you design each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start. ++In some cases, page views can represent useful events, but this isn't true in general. A user can open a product page without buying the product. ++With specific business events, you can chart your users' progress through your site. 
You can find out their preferences for different options and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog. ++Events can be logged from the client side of the app: ++```javascript +appInsights.trackEvent({name: "incrementCount"}); +``` ++Or events can be logged from the server side: ++```csharp +var tc = new Microsoft.ApplicationInsights.TelemetryClient(); +tc.TrackEvent("CreatedAccount", new Dictionary<string,string> {{"AccountType", account.Type}}, null); +... +tc.TrackEvent("AddedItemToCart", new Dictionary<string,string> {{"Item", item.Name}}, null); +... +tc.TrackEvent("CompletedPurchase"); +``` ++You can attach property values to these events so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user. ++Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties). ++#### Slice and dice events ++In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties. +++Whenever you're in any usage experience, select the **Open the last run query** icon to take you back to the underlying query. +++You can then modify the underlying query to get the kind of information you're looking for. ++Here's an example of an underlying query about page views. Go ahead and paste it directly into the query editor to test it out. 
++```kusto +// average pageView duration by name +let timeGrain=5m; +let dataset=pageViews +// additional filters can be applied here +| where timestamp > ago(1d) +| where client_Type == "Browser" ; +// calculate average pageView duration for all pageViews +dataset +| summarize avg(duration) by bin(timestamp, timeGrain) +| extend pageView='Overall' +// render result in a chart +| render timechart +``` ++### Determine feature success with A/B testing ++If you're unsure which feature variant is more successful, release both and let different users access each variant. Measure the success of each variant, and then transition to a unified version. ++In this technique, you attach unique property values to all the telemetry sent by each version of your app. You can do it by defining properties in the active TelemetryContext. These default properties get included in every telemetry message sent by the application. It includes both custom messages and standard telemetry. ++In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions. ++To do this step, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer): ++```csharp +// Telemetry initializer class +public class MyTelemetryInitializer : ITelemetryInitializer +{ + // In this example, to differentiate versions, we use the value specified in the AssemblyInfo.cs + // for ASP.NET apps, or in your project file (.csproj) for the ASP.NET Core apps. Make sure that + // you set a different assembly version when you deploy your application for A/B testing. 
+ static readonly string _version = + System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString(); + + public void Initialize(ITelemetry item) + { + item.Context.Component.Version = _version; + } +} +``` ++#### [.NET Core](#tab/aspnetcore) ++For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, add a new telemetry initializer to the Dependency Injection service collection in the `Program.cs` class: ++```csharp +using Microsoft.ApplicationInsights.Extensibility; ++builder.Services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>(); +``` ++#### [.NET Framework 4.8](#tab/aspnet-framework) ++In the web app initializer, such as `Global.asax.cs`: ++```csharp +protected void Application_Start() +{ + // ... + TelemetryConfiguration.Active.TelemetryInitializers + .Add(new MyTelemetryInitializer()); +} +``` ++++## Funnels - Discover how customers use your application ++Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates. ++**Funnel features:** ++* If your app is sampled, you'll see a banner. Selecting it opens a context pane that explains how to turn off sampling. +* Select a step to see more details on the right. +* The historical conversion graph shows the conversion rates over the last 90 days. +* Understand your users better by accessing the users tool. You can use filters in each step. ++### Create a funnel ++#### Prerequisites ++Before you create a funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket. 
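Conceptually, the conversion rates a funnel reports reduce to simple ratios between step counts. The following sketch uses hypothetical user counts for the three example steps named above; the Funnels tool computes these figures for you from real telemetry.

```javascript
// Sketch of funnel conversion math: given the number of distinct users who
// completed each step, report step-over-step and overall conversion rates.
// The step names and counts here are hypothetical examples.
function funnelConversion(steps) {
  return steps.map((step, i) => ({
    name: step.name,
    users: step.users,
    fromPrevious: i === 0 ? 1 : step.users / steps[i - 1].users,
    fromTop: step.users / steps[0].users
  }));
}

const rates = funnelConversion([
  { name: 'Viewed home page', users: 1000 },
  { name: 'Viewed customer profile', users: 400 },
  { name: 'Created ticket', users: 100 }
]);
```

With these hypothetical counts, 40% of home-page viewers reach the profile step, and 10% of everyone who started the funnel completes it.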
++#### Get started ++To create a funnel: ++1. On the **Funnels** tab, select **Edit**. ++1. Choose your **Top Step**. ++ :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png"::: ++1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step. ++1. Then choose your **Second Step** and so on. ++ > [!NOTE] + > Funnels are limited to a maximum of six steps. ++1. Select the **View** tab to see your funnel results. ++ :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png"::: ++1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels. ++## User Flows - Analyze user navigation patterns +++The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like: ++* How do users move away from a page on your site? +* What do users select on a page on your site? +* Where are the places that users churn most from your site? +* Are there places where users repeat the same action over and over? ++The User Flows tool starts from an initial custom event, exception, dependency, page view or request that you specify. From this initial event, User Flows shows the events that happened before and after user sessions. Lines of varying thickness show how many times users followed each path. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site. 
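Conceptually, the visualization described above is a weighted graph: for each session, every pair of consecutive events contributes one unit to the edge between them. The following sketch (with hypothetical event names) shows the idea; it is not the tool's actual implementation.

```javascript
// Sketch: count transitions between consecutive events in each session to
// build the weighted edges a User Flows visualization draws.
// The session data here is a hypothetical example.
function countTransitions(sessions) {
  const edges = new Map();
  for (const events of sessions) {
    for (let i = 0; i < events.length - 1; i++) {
      const key = `${events[i]} -> ${events[i + 1]}`;
      edges.set(key, (edges.get(key) || 0) + 1);
    }
  }
  return edges;
}

const edges = countTransitions([
  ['Home', 'Product', 'Checkout'],
  ['Home', 'Product', 'Session Ended'],
  ['Home', 'Search']
]);
```

Here the `Home -> Product` edge has weight 2, so it would be drawn thicker than the other paths.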
++> [!NOTE] +> Your Application Insights resource must contain page views or custom events to use the User Flows tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). ++### Choose an initial event +++To begin answering questions with the User Flows tool, choose an initial custom event, exception, dependency, page view or request to serve as the starting point for the visualization: ++1. Select the link in the **What do users do after?** title or select **Edit**. +1. Select a custom event, exception, dependency, page view or request from the **Initial event** dropdown list. +1. Select **Create graph**. ++The **Step 1** column of the visualization shows what users did most frequently after the initial event. The items are ordered from top to bottom and from most to least frequent. The **Step 2** and subsequent columns show what users did next. The information creates a picture of all the ways that users moved through your site. ++By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from your site. You can increase the time range and change the balance of performance and accuracy for random sampling on the **Edit** menu. ++If some of the page views, custom events, and exceptions aren't relevant to you, select **X** on the nodes you want to hide. After you've selected the nodes you want to hide, select **Create graph**. To see all the nodes you've hidden, select **Edit** and look at the **Excluded events** section. ++If page views or custom events you expect to see in the visualization are missing: ++* Check the **Excluded events** section on the **Edit** menu. +* Use the plus buttons on **Others** nodes to include less-frequent events in the visualization. +* If the page view or custom event you expect is sent infrequently by users, increase the time range of the visualization on the **Edit** menu. 
+* Make sure the custom event, exception, dependency, page view or request you expect is set up to be collected by the Application Insights SDK in the source code of your site. ++If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdown lists above the visualization. ++### After users visit a page or feature, where do they go and what do they select? +++If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page. ++Open your site in a window next to the User Flows visualization. Compare your expectations of how users interact with the page to the list of events in the **Step 1** column. Often, a UI element on the page that seems insignificant to your team can be among the most used on the page. It can be a great starting point for design improvements to your site. ++If your initial event is a custom event, the first column shows what users did after they performed that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations. ++If your selected initial event is **Added Item to Shopping Cart**, for example, look to see if **Go to Checkout** and **Completed Purchase** appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design. ++### Where are the places that users churn most from your site? ++Watch for **Session Ended** nodes that appear high up in a column in the visualization, especially early in a flow. This positioning means many users probably churned from your site after they followed the preceding path of pages and UI interactions. ++Sometimes churn is expected. For example, it's expected after a user makes a purchase on an e-commerce site. 
But usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved. ++Keep in mind that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users might have interacted with your site in those ways after the User Flows tool says the session ended. ++### Are there places where users repeat the same action over and over? ++Look for a page view or custom event that's repeated by many users across subsequent steps in the visualization. This activity usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk edit functionality if you find users performing repetitive actions on each row of a table element. ++## Cohorts - Analyze a specific set of users, sessions, events, or operations ++A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in. ++### Cohorts vs basic filters ++You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them. ++You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future. ++> [!NOTE] +> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools. 
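Conceptually, a cohort is a saved, reusable predicate over telemetry. As a rough sketch (real cohorts are defined with analytics queries, not client-side code), an engaged-users cohort like the one in the next example reduces to logic of this shape:

```javascript
// Sketch of an "engaged users" cohort: user IDs seen on at least
// `minDays` distinct days within the analyzed period. Real Application
// Insights cohorts are defined by analytics queries; this mirrors the idea.
function engagedUsers(events, minDays = 5) {
  const daysByUser = new Map();
  for (const { userId, timestampMs } of events) {
    const day = Math.floor(timestampMs / 86400000); // bucket into days
    if (!daysByUser.has(userId)) daysByUser.set(userId, new Set());
    daysByUser.get(userId).add(day);
  }
  return [...daysByUser]
    .filter(([, days]) => days.size >= minDays)
    .map(([userId]) => userId);
}
```

A user with events on five separate days qualifies; a user who is very active on a single day does not, which is exactly the distinction a day-count cohort is meant to capture.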
++### Example: Engaged users ++Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users. ++1. Select **Create a Cohort**. ++1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. ++1. Select **Engaged Users -- by Days Used**. ++ There are three parameters for this cohort: + * **Activities**: Where you choose which events and page views count as usage. + * **Period**: The definition of a month. + * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged. ++1. Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days. ++ Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days. ++1. Select **Save**. ++ > [!TIP] + > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort. ++1. Select **Back to Gallery**. ++#### What can you do by using this cohort? ++Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**. +++Important points to notice: ++* You can't create this set through normal filters. The date logic is more advanced. +* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days. ++These filters support more sophisticated questions that are impossible to express through the query builder. An example is *people who were engaged in the past 28 days. How did those same people behave over the past 60 days?* ++### Example: Events cohort ++You can also make cohorts of events. 
In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers *active usage* or a set related to a certain new feature. ++1. Select **Create a Cohort**. +1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. +1. Select **Events Picker**. +1. In the **Activities** dropdown box, select the events you want to be in the cohort. +1. Save the cohort and give it a name. ++### Example: Active users where you modify a query ++The previous two cohorts were defined by using dropdown boxes. You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom. ++1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**. ++ :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png"::: ++ There are three sections: ++ * **Markdown text**: Where you describe the cohort in more detail for other members of your team. + * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples. + * **Query**: Where you define the cohort by using an analytics query. ++ In the query section, you [write an analytics query](/azure/kusto/query). The query selects the set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query. This data appears as a preview underneath the query in a table, so you can make sure your query is returning results. ++ > [!NOTE] + > If you don't see the query, resize the section to make it taller and reveal the query. ++1. 
Copy and paste the following text into the query editor: ++ ```KQL + union customEvents, pageViews + | where client_CountryOrRegion == "United Kingdom" + ``` ++1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users. ++1. Save and name the cohort. ++## Impact Analysis - Discover how different properties influence conversion rates ++Impact Analysis discovers how any dimension of a page view, custom event, or request affects the usage of a different page view or custom event. ++One way to think of Impact is as the ultimate tool for settling arguments with someone on your team about how slowness in some aspect of your site is affecting whether users stick around. Users might tolerate some slowness, but Impact gives you insight into how best to balance optimization and performance to maximize user conversion. ++Analyzing performance is only a subset of Impact's capabilities. Impact supports custom events and dimensions, so you can easily answer questions like: How does user browser choice correlate with different conversion rates? ++> [!NOTE] +> Your Application Insights resource must contain page views or custom events to use the Impact analysis workbook. Learn how to [set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also, because you're analyzing correlation, sample size matters. ++### Impact analysis workbook ++To use the Impact analysis workbook, in your Application Insights resource, go to **Usage** > **More** and select **User Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**. +++#### Use the workbook +++1. From the **Selected event** dropdown list, select an event. +1. From the **analyze how its** dropdown list, select a metric. +1. From the **Impacting event** dropdown list, select an event. +1. 
To add a filter, use the **Add selected event filters** tab or the **Add impacting event filters** tab. ++### Is page load time affecting how many people convert on my page? ++To begin answering questions with the Impact workbook, choose an initial page view, custom event, or request. ++1. From the **Selected event** dropdown list, select an event. ++1. Leave the **analyze how its** dropdown list on the default selection of **Duration**. (In this context, **Duration** is an alias for **Page Load Time**.) ++1. From the **Impacting event** dropdown list, select a custom event. This event should correspond to a UI element on the page view you selected in step 1. ++ :::image type="content" source="./media/usage-impact/impact.png" alt-text="Screenshot that shows an example with the selected event as Home Page analyzed by duration." lightbox="./media/usage-impact/impact.png"::: ++### What if I'm tracking page views or load times in custom ways? ++Impact supports both standard and custom properties and measurements. Use whatever you want. Instead of duration, use filters on the primary and secondary events to get more specific. ++### Do users from different countries or regions convert at different rates? ++1. From the **Selected event** dropdown list, select an event. ++1. From the **analyze how its** dropdown list, select **Country or region**. ++1. From the **Impacting event** dropdown list, select a custom event that corresponds to a UI element on the page view you chose in step 1. ++ :::image type="content" source="./media/usage-impact/regions.png" alt-text="Screenshot that shows an example with the selected event as GET analyzed by country and region." lightbox="./media/usage-impact/regions.png"::: ++### How does the Impact analysis workbook calculate these conversion rates? ++Under the hood, the Impact analysis workbook relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). 
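As a reference sketch, the coefficient over two equal-length samples can be computed as follows. This is the standard textbook formula, not the workbook's internal code:

```javascript
// Pearson correlation coefficient between two equal-length samples.
// Reference sketch only; the workbook applies this to session data.
function pearson(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - meanX;
    const dy = ys[i] - meanY;
    cov += dx * dy;
    varX += dx * dx;
    varY += dy * dy;
  }
  return cov / Math.sqrt(varX * varY);
}
```

Perfectly correlated samples give 1, and perfectly anti-correlated samples give -1.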
Results are computed between -1 and 1. The coefficient -1 represents a negative linear correlation and 1 represents a positive linear correlation. ++The basic breakdown of how Impact analysis works is listed here: ++* Let *A* = the main page view, custom event, or request you select in the **Selected event** dropdown list. +* Let *B* = the secondary page view or custom event you select in the **impacts the usage of** dropdown list. ++Impact looks at a sample of all the sessions from users in the selected time range. For each session, it looks for each occurrence of *A*. ++Sessions are then broken into two different kinds of *subsessions* based on one of two conditions: ++* A converted subsession consists of a session ending with a *B* event and encompasses all *A* events that occur prior to *B*. +* An unconverted subsession occurs when all *A*s occur without a terminal *B*. ++How Impact is ultimately calculated varies based on whether we're analyzing by metric or by dimension. For metrics, all *A*s in a subsession are averaged. For dimensions, the value of each *A* contributes *1/N* to the value assigned to *B*, where *N* is the number of *A*s in the subsession. ++## HEART - Five dimensions of customer experience ++This article describes how to enable and use the Heart Workbook on Azure Monitor. The HEART workbook is based on the HEART measurement framework, which was originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software. ++### Overview ++HEART is an acronym that stands for happiness, engagement, adoption, retention, and task success. 
It helps product teams deliver better software by focusing on five dimensions of customer experience: ++* **Happiness**: Measure of user attitude +* **Engagement**: Level of active user involvement +* **Adoption**: Target audience penetration +* **Retention**: Rate at which users return +* **Task success**: Productivity empowerment ++These dimensions are measured independently, but they interact with each other. +++* Adoption, engagement, and retention form a user activity funnel. Only a portion of users who adopt the tool come back to use it. ++* Task success is the driver that progresses users down the funnel and moves them from adoption to retention. ++* Happiness is an outcome of the other dimensions and not a stand-alone measurement. Users who have progressed down the funnel and are showing a higher level of activity are ideally happier. ++### Get started ++#### Prerequisites ++* **Azure subscription**: [Create an Azure subscription for free](https://azure.microsoft.com/free/) ++* **Application Insights resource**: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource) ++* **Click Analytics**: Set up the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md). ++* **Specific attributes**: Instrument the following attributes to calculate HEART metrics. 
++ | Source | Attribute | Description | + |-|-|--| + | customEvents | session_Id | Unique session identifier | + | customEvents | appName | Unique Application Insights app identifier | + | customEvents | itemType | Category of customEvents record | + | customEvents | timestamp | Datetime of event | + | customEvents | operation_Id | Correlate telemetry events | + | customEvents | user_Id | Unique user identifier | + | customEvents ¹ | parentId | Name of feature | + | customEvents ¹ | pageName | Name of page | + | customEvents ¹ | actionType | Category of Click Analytics record | + | pageViews | user_AuthenticatedId | Unique authenticated user identifier | + | pageViews | session_Id | Unique session identifier | + | pageViews | appName | Unique Application Insights app identifier | + | pageViews | timestamp | Datetime of event | + | pageViews | operation_Id | Correlate telemetry events | + | pageViews | user_Id | Unique user identifier | ++* If you're setting up the authenticated user context, instrument the following attributes: ++| Source | Attribute | Description | +|--|-|--| +| customEvents | user_AuthenticatedId | Unique authenticated user identifier | ++**Footnotes** ++¹: To emit these attributes, use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm. ++>[!TIP] +> To understand how to effectively use the Click Analytics plug-in, see [Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](javascript-feature-extensions.md#use-the-plug-in). ++#### Open the workbook ++You can find the workbook in the gallery under **Public Templates**. The workbook appears in the section **Product Analytics using the Click Analytics Plugin**. +++There are seven workbooks. +++You only have to interact with the main workbook, **HEART Analytics - All Sections**. This workbook contains the other six workbooks as tabs. You can also access the individual workbooks related to each tab through the gallery. 
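Before relying on the workbook, you can spot-check that the required attributes from the preceding table are populated with a Logs query along these lines. This is a sketch, not part of the workbook; it assumes the Click Analytics attributes are emitted into the `customDimensions` field:

```KQL
// Sketch: count recent events missing the attributes HEART needs.
customEvents
| where timestamp > ago(1d)
| extend actionType = tostring(customDimensions.actionType)
| summarize total = count(),
            missingSession = countif(isempty(session_Id)),
            missingUser = countif(isempty(user_Id)),
            missingActionType = countif(isempty(actionType))
```

If the `missing*` counts are high, revisit the instrumentation steps above before interpreting the workbook's metrics.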
++#### Confirm that data is flowing ++To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. ++> [!IMPORTANT] +> Unless you [set the authenticated user context](./javascript-feature-extensions.md#optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data. +++If data isn't flowing as expected, this tab shows the specific attributes with issues. +++### Workbook structure ++The workbook shows metric trends for the HEART dimensions split over seven tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them. ++The tabs are: ++* **Summary**: Summarizes usage funnel metrics for a high-level view of visits, interactions, and repeat usage. +* **Adoption**: Helps you understand the penetration among the target audience, acquisition velocity, and total user base. +* **Engagement**: Shows frequency, depth, and breadth of usage. +* **Retention**: Shows repeat usage. +* **Task success**: Enables understanding of user flows and their time distributions. +* **Happiness**: We recommend using a survey tool to measure customer satisfaction score (CSAT) over a five-point scale. On this tab, we've provided the likelihood of happiness via usage and performance metrics. +* **Feature metrics**: Enables understanding of HEART metrics at feature granularity. ++> [!WARNING] +> The HEART workbook is currently built on logs, so its metrics are effectively [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics is negatively affected by sampling and filtering. ++### How HEART dimensions are defined and measured ++#### Happiness ++Happiness is a user-reported dimension that measures how users feel about the product offered to them. ++A common approach to measure happiness is to ask users a CSAT question like How satisfied are you with this product? 
Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals. ++Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources). ++#### Engagement ++Engagement is a measure of user activity. Specifically, it measures intentional user actions, such as clicks. Active usage can be broken down into three subdimensions: ++* **Activity frequency**: Measures how often a user interacts with the product. For example, users typically interact daily, weekly, or monthly. ++* **Activity breadth**: Measures the number of features users interact with over a specific time period. For example, users interacted with a total of five features in June 2021. ++* **Activity depth**: Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch. ++Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, which makes daily usage an important metric to track. But for a product like a paycheck portal, measurement might make more sense at a monthly or weekly level. ++>[!IMPORTANT] +>A user who performs an intentional action, such as clicking a button or typing an input, is counted as an active user. For this reason, engagement metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. 
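The three engagement subdimensions can be illustrated with a small sketch. The event records and field names here are hypothetical, not the workbook's actual schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical telemetry: (user_id, day, feature)
events = [
    ("u1", date(2021, 6, 1), "search"),
    ("u1", date(2021, 6, 1), "export"),
    ("u1", date(2021, 6, 3), "search"),
    ("u2", date(2021, 6, 2), "search"),
]

def engagement(events, user_id):
    """Return (frequency, breadth, depth) for one user."""
    mine = [(d, f) for u, d, f in events if u == user_id]
    frequency = len({d for d, _ in mine})   # distinct active days
    breadth = len({f for _, f in mine})     # distinct features overall
    per_day = defaultdict(set)              # features touched per active day
    for d, f in mine:
        per_day[d].add(f)
    depth = sum(len(s) for s in per_day.values()) / len(per_day)
    return frequency, breadth, depth

print(engagement(events, "u1"))  # (2, 2, 1.5)
```

In a real pipeline, these aggregations run over the telemetry tables rather than an in-memory list, but the grouping logic is the same.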
++#### Adoption ++Adoption enables understanding of penetration among the relevant users, who you're gaining as your user base, and how you're gaining them. Adoption metrics are useful for measuring: ++* Newly released products. +* Newly updated products. +* Marketing campaigns. ++#### Retention ++A retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics. ++| Metric | Definition | Question answered | +|-|-|-| +| Retained users | Count of active users who were also active the previous period | How many users are staying engaged with the product? | +| Retention | Proportion of active users from the previous period who are also active this period | What percent of users are staying engaged with the product? | ++>[!IMPORTANT] +>Because active users must have at least one telemetry event with an action type, retention metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. ++#### Task success ++Task success tracks whether users can do a task efficiently and effectively by using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include: ++* Adding items to a cart and then completing a purchase. +* Searching a keyword and then selecting a result. +* Starting a new account and then completing account registration. ++A successful task meets three requirements: ++* **Expected task flow**: The intended task flow of the feature was completed by the user and aligns with the expected task flow. +* **High performance**: The intended functionality of the feature was accomplished in a reasonable amount of time. +* **High reliability**: The intended functionality of the feature was accomplished without failure. ++A task is considered unsuccessful if any of the preceding requirements isn't met. 
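The three requirements can be sketched as a simple predicate. This is an illustration only; the parameter names are hypothetical, and a real pipeline would derive these values from telemetry:

```python
from typing import Optional

def task_succeeded(followed_expected_flow: bool,
                   duration_seconds: float,
                   failed: bool,
                   expected_duration_seconds: Optional[float] = None) -> bool:
    """A task succeeds only if all three requirements hold."""
    if not followed_expected_flow:   # expected task flow
        return False
    if failed:                       # high reliability
        return False
    if (expected_duration_seconds is not None
            and duration_seconds > expected_duration_seconds):
        return False                 # high performance (time window)
    return True                      # no time constraint counts as on time

print(task_succeeded(True, 30, False, expected_duration_seconds=60))  # True
print(task_succeeded(True, 90, False, expected_duration_seconds=60))  # False
```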
++>[!IMPORTANT] +>Task success metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application. ++Set up a custom task by using the following parameters. ++| Parameter | Description | +||| +| First step | The feature that starts the task. In the cart/purchase example, **Adding items to a cart** is the first step. | +| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint are considered a failure. Not all tasks necessarily have a time constraint. For such tasks, select **No Time Expectation**. | +| Last step | The feature that completes the task. In the cart/purchase example, **Purchasing items from the cart** is the last step. | ++## Frequently asked questions ++### Does the initial event represent the first time the event appears in a session or any time it appears in a session? ++The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the **Step 1** column only shows how users behave after the *first* instance of an initial event, not all instances. ++### Some of the nodes in my visualization have a level that's too high. How can I get more detailed nodes? ++Use the **Split by** options on the **Edit** menu: ++1. Select the event you want to break down on the **Event** menu. ++1. Select a dimension on the **Dimension** menu. For example, if you have an event called **Button Clicked**, try a custom property called **Button Name**. ++### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results? ++Cohorts and filters are different. 
Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom`: ++* The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. If you split by country or region, you likely see many countries and regions. ++* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom. ++### How do I view the data at different grains (daily, monthly, or weekly)? ++You can select the **Date Grain** filter to change the grain. The filter is available across all the dimension tabs. +++### How do I access insights from my application that aren't available on the HEART workbooks? ++You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, under the **Monitoring** section, select **Logs** and query the `customEvents` table. Some of the Click Analytics attributes are contained within the `customDimensions` field. A sample query is shown here. +++To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../logs/data-platform-logs.md). ++### Can I edit visuals in the workbook? ++Yes. When you select the public template of the workbook: ++1. Select **Edit** and make your changes. ++ :::image type="content" source="media/usage-overview/workbook-edit-faq.png" alt-text="Screenshot that shows the Edit button in the upper-left corner of the workbook template."::: ++1. After you make your changes, select **Done Editing**, and then select the **Save** icon. ++ :::image type="content" source="media/usage-overview/workbook-save-faq.png" alt-text="Screenshot that shows the Save icon at the top of the workbook template that becomes available after you make edits."::: ++1. 
To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab. A copy of your customized workbook appears there. You can make any further changes you want in this copy. ++ :::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot that shows the Workbooks tab next to the Public Templates tab, where the edited copy of the workbook is located."::: ++For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md). ++## Next steps ++* Check out the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection plug-in. +* Learn more about the [Google HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf). +* To learn more about workbooks, see the [Workbooks overview](../visualize/workbooks-overview.md). |
azure-monitor | Azure Monitor Workspace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md | Data stored in the Azure Monitor Workspace is handled in accordance with all sta When you create a new Azure Monitor workspace, you provide a region which sets the location in which the data is stored. Currently Azure Monitor Workspace is available in the below regions. -|Geo|Regions|Geo|Regions|Geo|Regions|Geo|Regions| -||||||||| -|Africa|South Africa North|Asia Pacific|East Asia, Southeast Asia|Australia|Australia Central, Australia East, Australia Southeast|Brazil|Brazil South, Brazil Southeast| -|Canada|Canada Central, Canada East|Europe|North Europe, West Europe|France|France Central, France South|Germany|Germany West Central| -|India|Central India, South India|Israel|Israel Central|Italy|Italy North|Japan|Japan East, Japan West| -|Korea|Korea Central, Korea South|Norway|Norway East, Norway West|Spain|Spain Central|Sweden|Sweden South, Sweden Central| -|Switzerland|Switzerland North, Switzerland West|UAE|UAE North|UK|UK South, UK West|US|Central US, East US, East US 2, South Central US, West Central US, West US, West US 2, West US 3| -|US Government|USGov Virginia, USGov Texas||||||| +|Geo|Regions| +||| +|Africa|South Africa North| +|Asia Pacific|East Asia, Southeast Asia| +|Australia|Australia Central, Australia East, Australia Southeast| +|Brazil|Brazil South, Brazil Southeast| +|Canada|Canada Central, Canada East| +|Europe|North Europe, West Europe| +|France|France Central, France South| +|Germany|Germany West Central| +|India|Central India, South India| +|Israel|Israel Central| +|Italy|Italy North| +|Japan|Japan East, Japan West| +|Korea|Korea Central, Korea South| +|Norway|Norway East, Norway West| +|Spain|Spain Central| +|Sweden|Sweden South, Sweden Central| +|Switzerland|Switzerland North, Switzerland West| +|UAE|UAE North| +|UK|UK South, UK West| +|US|Central US, East US, 
East US 2, South Central US, West Central US, West US, West US 2, West US 3| +|US Government|USGov Virginia, USGov Texas| + If you have clusters in regions where Azure Monitor Workspace is not yet available, you can select another region in the same geography. |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | This article lists significant changes to Azure Monitor documentation. |Containers|[Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](containers/prometheus-metrics-scrape-configuration.md)|[Azure Monitor Managed Prometheus] Docs for pod annotation scraping through configmap| |Essentials|[Custom metrics in Azure Monitor (preview)](essentials/metrics-custom-overview.md)|Article refreshed and updated| |General|[Disable monitoring of your Kubernetes cluster](containers/kubernetes-monitoring-disable.md)|New article to consolidate process for all container configurations and for both Prometheus and Container insights.|-|Logs|[ Best practices for Azure Monitor Logs](best-practices-logs.md)|Dedicated clusters are now available in all commitment tiers, with a minimum daily ingestion of 100 GB.| +|Logs|[Best practices for Azure Monitor Logs](best-practices-logs.md)|Dedicated clusters are now available in all commitment tiers, with a minimum daily ingestion of 100 GB.| |Logs|[Enhance data and service resilience in Azure Monitor Logs with availability zones](logs/availability-zones.md)|Availability zones are now supported in the Israel Central, Poland Central, and Italy North regions.| |Virtual-Machines|[Dependency Agent](vm/vminsights-dependency-agent-maintenance.md)|VM Insights Dependency Agent now supports RHEL 8.6 Linux.| |Visualizations|[Composite bar renderer](visualize/workbooks-composite-bar.md)|We've edited the Workbooks content to make some features and functionality easier to find based on customer feedback. 
We've also removed legacy content.| General|[What's new in Azure Monitor documentation](whats-new.md)| Subscribe to Application-Insights|[Filter and preprocess telemetry in the Application Insights SDK](app/api-filtering-sampling.md)|An Azure Monitor Telemetry Data Types Reference has been added for quick reference.| Application-Insights|[Add and modify OpenTelemetry](app/opentelemetry-add-modify.md)|We've simplified the OpenTelemetry onboarding process by moving instructions to add and modify telemetry in this new document.| Application-Insights|[Application Map: Triage distributed applications](app/app-map.md)|Application Map Intelligent View has reached general availability. Enjoy this powerful tool that harnesses machine learning to aid in service health investigations.|-Application-Insights|[Usage analysis with Application Insights](app/usage-overview.md)|Code samples have been updated for the latest versions of .NET.| +Application-Insights|[Usage analysis with Application Insights](app/usage.md)|Code samples have been updated for the latest versions of .NET.| Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|All JavaScript SDK documentation has been updated and simplified, including documentation for feature and framework extensions.| Autoscale|[Use autoscale actions to send email and webhook alert notifications in Azure Monitor](autoscale/autoscale-webhook-email.md)|Article updated and refreshed| Containers|[Query logs from Container insights](containers/container-insights-log-query.md#container-logs)|New section: Container logs, with sample queries| Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Log ale Alerts|[Monitor Azure AD B2C with Azure Monitor](/azure/active-directory-b2c/azure-monitor)|Articles on action groups have been updated.| Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Alert rules that use action groups 
support custom properties to add custom information to the alert notification payload.| Application-Insights|[Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](app/javascript-feature-extensions.md)|Most of our JavaScript SDK documentation has been updated and overhauled.|-Application-Insights|[Analyze product usage with HEART](app/usage-heart.md)|Updated and overhauled HEART framework documentation.| +Application-Insights|[Analyze product usage with HEART](app/usage.md#heartfive-dimensions-of-customer-experience)|Updated and overhauled HEART framework documentation.| Application-Insights|[Dependency tracking in Application Insights](app/asp-net-dependencies.md)|All new documentation supports the Azure Monitor OpenTelemetry Distro public preview release announced on May 10, 2023. [Public Preview: Azure Monitor OpenTelemetry Distro for ASP.NET Core, JavaScript (Node.js), Python](https://azure.microsoft.com/updates/public-preview-azure-monitor-opentelemetry-distro-for-aspnet-core-javascript-nodejs-python)| Application-Insights|[Application Monitoring for Azure App Service and Java](app/azure-web-apps-java.md)|Added CATALINA_OPTS for Tomcat.| Essentials|[Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)](essentials/prometheus-remote-write-azure-ad-pod-identity.md)|New article: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity| |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Number of snapshots per volume | 255 | No | | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No |-| Maximum size of a single capacity pool | 2,048 TiB | Yes | +| Maximum size of a single capacity pool | 2,048 TiB | No | | Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No | |
azure-netapp-files | Azure Netapp Files Understand Storage Hierarchy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md | When you use a manual QoS capacity pool with, for example, an SAP HANA system, a - A volume's capacity consumption counts against its pool's provisioned capacity. - A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes. -- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 and 1 PiB.+- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB. ## Large volumes |
azure-resource-manager | Bicep Core Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md | Title: Bicep warnings and error codes description: Lists the warnings and error codes. Previously updated : 07/23/2024 Last updated : 07/24/2024 -# Bicep warning and error codes +# Bicep core diagnostics -If you need more information about a particular warning or error code, select the **Feedback** button in the upper right corner of the page and specify the code. +If you need more information about a particular diagnostic code, select the **Feedback** button in the upper right corner of the page and specify the code. | Code | Description | ||-| |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | If your move requires setting up new dependent resources, you'll experience an i Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource. +> [!NOTE] +> You can't move Azure resources to another resource group or another subscription if there's a read-only lock, whether in the source or in the destination. + ## Changed resource ID When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path. |
azure-signalr | Signalr Howto Configure Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-configure-application-firewall.md | + + Title: SignalR Application Firewall (Preview) +description: An introduction to why and how to set up Application Firewall for Azure SignalR service ++++ Last updated : 07/10/2024+++# Application Firewall for Azure SignalR Service ++The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do: ++1. It does not replace authentication. The firewall operates behind the client connection authentication layer. +2. It is not related to network layer access control. ++## What Does the Application Firewall Do? ++The Application Firewall consists of various rule lists. Currently, there is a rule list called *Client Connection Count Rules*. Future updates will support more rule lists to control aspects like connection lifetime and message throughput. ++This guide is divided into three parts: +1. Introduction to different application firewall rules. +2. Instructions on configuring the rules using the Portal or Bicep on the SignalR service side. +3. Steps to configure the token on the server side. ++## Prerequisites ++* An Azure SignalR Service in [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/). ++## Client Connection Count Rules +Client Connection Count Rules restrict concurrent client connections. When a client attempts to establish a new connection, the rules are checked **sequentially**. If any rule is violated, the connection is rejected with a status code 429. ++ #### ThrottleByUserIdRule + This rule limits the concurrent connections of a user. 
For example, if a user opens multiple browser tabs or logs in using different devices, you can use this rule to restrict the number of concurrent connections for that user. ++ > [!NOTE] + > * The **UserId** must exist in the access token for this rule to work. Refer to [Configure access token](#configure-access-token). ++ + #### ThrottleByJwtSignatureRule + This rule limits the concurrent connections of the same token to prevent malicious users from reusing tokens to establish infinite connections, which can exhaust the connection quota. ++ > [!NOTE] + > * It's not guaranteed by default that tokens generated by the SDK are different each time. Although each token contains a timestamp, the timestamp might be the same if many tokens are generated within a few seconds. To avoid identical tokens, insert a random claim into the token claims. Refer to [Configure access token](#configure-access-token). +++ #### ThrottleByJwtCustomClaimRule ++ For more advanced scenarios, connections can be grouped according to a custom claim. Connections with the same claim are aggregated for the check. For example, you could add a **ThrottleByJwtCustomClaimRule** to allow 5 concurrent connections with the custom claim name *freeUser*. ++ > [!NOTE] + > * The rule applies to all claims with a certain claim name. The connection count aggregation is on the same claim (including claim name and claim value). The *ThrottleByUserIdRule* is a special case of this rule, applying to all connections with the userIdentity claim. + ++> [!WARNING] +> * **Avoid using too aggressive a maxCount**. Client connections may close without completing the TCP closing handshake. SignalR service can't detect those "half-closed" connections immediately. The connection is considered active until heartbeat failure. Therefore, aggressive throttling strategies might unexpectedly throttle clients. A smoother approach is to **leave some buffer** for the connection count, for example: double the *maxCount*. 
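The sequential evaluation described above can be sketched like this. It's a schematic illustration of the documented behavior (skip a rule when the claim is missing, reject with 429 on the first violation), not the service's actual implementation; the rule and claim names are assumptions:

```python
def check_rules(rules, connection_claims, current_counts):
    """rules: list of (claim_name, max_count), checked in order.
    current_counts: concurrent connections per (claim_name, claim_value)."""
    for claim_name, max_count in rules:
        value = connection_claims.get(claim_name)
        if value is None:
            continue  # rule is skipped when the connection lacks the claim
        if current_counts.get((claim_name, value), 0) + 1 > max_count:
            return 429  # reject the new connection
    return 200  # all rules passed; accept the connection

rules = [("userId", 5), ("freeUser", 10)]
counts = {("userId", "alice"): 5}
print(check_rules(rules, {"userId": "alice"}, counts))  # 429
print(check_rules(rules, {"userId": "bob"}, counts))    # 200
```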
++++## Set up Application Firewall ++# [Portal](#tab/Portal) +To use Application Firewall, navigate to the SignalR **Application Firewall** blade on the Azure portal and click **Add** to add a rule. ++![Screenshot of adding application firewall rules for Azure SignalR on Portal.](./media/signalr-howto-config-application-firewall/signalr-add-application-firewall-rule.png "Add rule") ++# [Bicep](#tab/Bicep) ++Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep: ++```bicep +@description('The name for your SignalR service') +param resourceName string = 'contoso' ++resource signalr 'Microsoft.SignalRService/signalr@2024-04-01-preview' = { + name: resourceName + properties: { + applicationFirewall:{ + clientConnectionCountRules:[ + // Add or remove rules as needed + { + // This rule will be skipped if no userId is set + type: 'ThrottleByUserIdRule' + maxCount: 5 + } + { + type: 'ThrottleByJwtSignatureRule' + maxCount: 10 + } + { + // This rule will be skipped if no freeUser claim is set + type: 'ThrottleByJwtCustomClaimRule' + maxCount: 10 + claimName: 'freeUser' + } + { + // This rule will be skipped if no paidUser claim is set + type: 'ThrottleByJwtCustomClaimRule' + maxCount: 100 + claimName: 'paidUser' + } + ] + } + } +} ++``` ++Deploy the Bicep file using Azure CLI + ```azurecli + az deployment group create --resource-group MyResourceGroup --template-file main.bicep + ``` ++- ++++## Configure access token +The application firewall rules only take effect when the access token contains the corresponding claim. A rule is **skipped** if the connection does not have the corresponding claim. ++Below is an example to add userId or custom claim in the access token in **Default Mode**: ++```cs +services.AddSignalR().AddAzureSignalR(options => + { + // Add necessary claims according to your rules. 
+ options.ClaimsProvider = context => new[] + { + // Add UserId: Used in ThrottleByUserIdRule + new Claim(ClaimTypes.NameIdentifier, context.Request.Query["username"]), ++ // Add unique claim: Ensures uniqueness when using ThrottleByJwtSignatureRule. + // The claim name is not important. You can change it as you like. + new Claim("uniqueToken", Guid.NewGuid().ToString()), + + // Custom claim: Used in ThrottleByJwtCustomClaimRule + new Claim("<Custom Claim Name>", "<Custom Claim Value>"), + // Custom claim example + new Claim("freeUser", context.Request.Query["username"]), + }; + }); +``` +The logic for **Serverless Mode** is similar. ++For more details, refer to [Client negotiation](signalr-concept-client-negotiation.md#what-can-you-do-during-negotiation). +++++ |
azure-vmware | Request Host Quota Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md | You need an Azure account in an Azure subscription that adheres to one of the fo - **Service:** All services > Azure VMware Solution - **Resource:** General question - **Summary:** Need capacity- - **Problem type:** Deployment - - **Problem subtype:** AVS Quota request + - **Problem type:** AVS Quota request ++ > [!NOTE] + > If the *Problem Type* isn't visible in the short list offered, select **None of the Above**. *AVS Quota requests* will be in the offered list of *Problem Types*. 1. In the **Description** of the support ticket, on the **Details** tab, provide information for: |
azure-web-pubsub | Howto Configure Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-configure-application-firewall.md | + + Title: Web PubSub Application Firewall (Preview) +description: Learn why and how to set up the Application Firewall for the Azure Web PubSub service ++++ Last updated : 07/10/2024+++# Application Firewall for Azure Web PubSub Service ++The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do: ++1. It does not replace authentication. The firewall operates behind the client connection authentication layer. +2. It is not related to network layer access control. ++## What Does the Application Firewall Do? ++The Application Firewall consists of various rule lists. Currently, there is a rule list called *Client Connection Count Rules*. Future updates will support more rule lists to control aspects like connection lifetime and message throughput. ++This guide is divided into three parts: +1. Introduction to different application firewall rules. +2. Instructions on configuring the rules using the Portal or Bicep on the Web PubSub service side. +3. Steps to configure the token on the server side. ++## Prerequisites +* A Web PubSub resource in [premium tier](https://azure.microsoft.com/pricing/details/web-pubsub/). ++## Client Connection Count Rules +Client Connection Count Rules restrict concurrent client connections. When a client attempts to establish a new connection, the rules are checked **sequentially**. If any rule is violated, the connection is rejected with status code 429. ++ #### ThrottleByUserIdRule + This rule limits the concurrent connections of a user.
For example, if a user opens multiple browser tabs or logs in using different devices, you can use this rule to restrict the number of concurrent connections for that user. ++ > [!NOTE] + > * The UserId must exist in the access token for this rule to work. Refer to [Configure access token](#configure-access-token). ++ + #### ThrottleByJwtSignatureRule + This rule limits the concurrent connections of the same token to prevent malicious users from reusing tokens to establish infinite connections, which can exhaust the connection quota. ++ > [!NOTE] + > * It's not guaranteed by default that tokens generated by the SDK are different each time. Though each token contains a timestamp, this timestamp might be the same if many tokens are generated within a few seconds. To avoid identical tokens, insert a random claim into the token claims. Refer to [Configure access token](#configure-access-token). +++> [!WARNING] +> * **Avoid setting *maxCount* too aggressively**. Client connections might close without completing the TCP handshake. Web PubSub service can't detect those "half-closed" connections immediately; such a connection is counted as active until its heartbeat fails. Therefore, aggressive throttling strategies might unexpectedly throttle clients. A smoother approach is to **leave some buffer** for the connection count, for example, by doubling the *maxCount*. ++++## Set up Application Firewall ++# [Portal](#tab/Portal) +To use Application Firewall, navigate to the Web PubSub **Application Firewall** blade on the Azure portal and select **Add** to add a rule.
++![Screenshot of adding application firewall rules for Azure Web PubSub on Portal.](./media/howto-config-application-firewall/add-application-firewall-rule.png "Add rule") ++# [Bicep](#tab/Bicep) ++Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep: ++```bicep +@description('The name for your Web PubSub service') +param resourceName string = 'contoso' ++resource webpubsub 'Microsoft.SignalRService/webpubsub@2024-04-01-preview' = { + name: resourceName + properties: { + applicationFirewall:{ + clientConnectionCountRules:[ + // Add or remove rules as needed + { + // This rule will be skipped if no userId is set + type: 'ThrottleByUserIdRule' + maxCount: 5 + } + { + type: 'ThrottleByJwtSignatureRule' + maxCount: 10 + } + ] + } + } +} ++``` ++Deploy the Bicep file using Azure CLI: + ```azurecli + az deployment group create --resource-group MyResourceGroup --template-file main.bicep + ``` ++- ++++## Configure access token ++The application firewall rules only take effect when the access token contains the corresponding claim. A rule is **skipped** if the connection does not have the corresponding claim. *userId* and *roles* are currently supported claims in the SDK. ++The following example adds a userId and inserts a unique placeholder in the access token: ++```cs +// The GUID role won't have any effect, but it ensures this token's uniqueness when using the ThrottleByJwtSignatureRule. +var url = service.GetClientAccessUri(userId: "user1", roles: new string[] { "webpubsub.joinLeaveGroup.group1", Guid.NewGuid().ToString() }); +``` ++For more details, refer to [Client negotiation](howto-generate-client-access-url.md#generate-from-service-sdk). +++++ |
backup | Azure Kubernetes Service Cluster Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md | Azure Backup now allows you to back up AKS clusters (cluster resources and persi - You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster. +- If you're restoring a backup stored in the Vault Tier, you need to provide a storage account as input to serve as a staging location. Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from the vault to the staging storage account across tenants. Ensure that the staging storage account for the restore has the **AllowCrossTenantReplication** property set to **true**. + For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md). ## Restore the AKS clusters To restore the backed-up AKS cluster, follow these steps: :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/start-kubernetes-cluster-restore.png" alt-text="Screenshot shows how to start the restore process."::: -2. On the next page, click **Select backup instance**, and then select the *instance* that you want to restore. +2. On the next page, select **Select backup instance**, and then select the *instance* that you want to restore. If the instance is available in both *Primary* and *Secondary Region*, select the *region to restore* too, and then select **Continue**.
To restore the backed-up AKS cluster, follow these steps: :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/select-resources-to-restore-page.png" alt-text="Screenshot shows the Select Resources to restore page."::: -6. If you seleted a recovery point for restore from *Vault-standard datastore*, then provide a *snapshot resource group* and *storage account* as the staging location. +6. If you selected a recovery point for restore from *Vault-standard datastore*, then provide a *snapshot resource group* and *storage account* as the staging location. :::image type="content" source="./media/azure-kubernetes-service-cluster-restore/restore-parameters.png" alt-text="Screenshot shows the parameters to add for restore from Vault-standard storage."::: |
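The **AllowCrossTenantReplication** prerequisite for the staging storage account can be checked and set from the command line before starting the restore. A minimal Azure CLI sketch; the storage account and resource group names are placeholders:

```azurecli
# Check the current cross-tenant replication setting (names are placeholders)
az storage account show \
    --name <staging-storage-account> \
    --resource-group <resource-group> \
    --query allowCrossTenantReplication

# Allow cross-tenant replication so the restore can copy backup data across tenants
az storage account update \
    --name <staging-storage-account> \
    --resource-group <resource-group> \
    --allow-cross-tenant-replication true
```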
backup | Backup Azure Backup Vault Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-vault-troubleshoot.md | + + Title: Troubleshoot Azure Backup Vault +description: Symptoms, causes, and resolutions of the Azure Backup Vault related operations. + Last updated : 07/18/2024++++++# Troubleshoot Azure Backup Vault related operations ++This article provides troubleshooting steps that help you resolve Azure Backup Vault management errors. ++## Common user errors ++#### Error code: UserErrorSystemIdentityNotEnabledWithVault ++**Possible Cause:** Backup Vault is created with System Identity enabled by default. This error appears when the System Identity of the Backup Vault is in a disabled state, causing backup related operations to fail. ++**Resolution:** To resolve this error, enable the System Identity of the Backup Vault and reassign all the necessary roles to it. Otherwise, use a User Identity in its place with all the required roles assigned, and update the Managed Identity for all the Backup Instances that use the now-disabled System Identity. ++#### Error code: UserErrorUserIdentityNotFoundOrNotAssociatedWithVault ++**Possible Cause:** Backup Instances can be created with a User Identity having all the required roles assigned to it. In addition, a User Identity can also be used for operations like encryption using a Customer Managed Key. This error appears when the particular User Identity is deleted or not attached to the Backup Vault. ++**Resolution:** To resolve this error, assign the same or an alternate User Identity to the Backup Vault and update the Backup Instance to use the new identity in the latter case. Otherwise, enable the System Identity of the Backup Vault, update the Backup Instance, and assign all the necessary roles to it. ++## Next steps ++- [About Azure Backup Vault](create-manage-backup-vault.md) |
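The resolution steps above can be scripted. A hedged Azure CLI sketch, assuming the `dataprotection` extension is installed and that `backup-vault update` accepts the same `--type` identity parameter as `backup-vault create`; all resource names and the scope are placeholders:

```azurecli
# Re-enable the vault's system-assigned identity (assumes update supports --type)
az dataprotection backup-vault update \
    --resource-group <resource-group> \
    --vault-name <backup-vault> \
    --type SystemAssigned

# Look up the identity's principal ID and reassign a required role on the protected resource
principalId=$(az dataprotection backup-vault show \
    --resource-group <resource-group> \
    --vault-name <backup-vault> \
    --query identity.principalId --output tsv)

az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "Storage Account Backup Contributor" \
    --scope <protected-resource-id>
```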
backup | Backup Azure Dataprotection Use Rest Api Backup Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-backup-blobs.md | Title: Back up blobs in a storage account using Azure Data Protection REST API. description: In this article, learn how to configure, initiate, and manage backup operations of blobs using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 7c244b94-d736-40a8-b94d-c72077080bbe The following is the request body to configure backup for all blobs within a sto } } ```-To configure backup with vaulted backup (preview) enabled, refer the below request body. ++To configure backup with vaulted backup enabled, refer to the following request body. ```json {backupInstanceDataSourceType is Microsoft.Storage/storageAccounts/blobServices The [request body](#prepare-the-request-to-configure-blob-backup) that you prepa } } ```-#### Example request body for vaulted backup (preview) ++#### Example request body for vaulted backup ```json { |
backup | Backup Azure Dataprotection Use Rest Api Create Update Blob Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-create-update-blob-policy.md | Title: Create Azure Backup policies for blobs using data protection REST API description: In this article, you'll learn how to create and manage backup policies for blobs using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 472d6a4f-7914-454b-b8e4-062e8b556de3 The policy says: } ``` -To configure a backup policy with the vaulted backup (preview), use the following JSON script: +To configure a backup policy with the vaulted backup, use the following JSON script: ```json { |
backup | Backup Azure Dataprotection Use Rest Api Restore Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-restore-blobs.md | Title: Restore blobs in a storage account using Azure Data Protection REST API description: In this article, learn how to restore blobs of a storage account using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 9b8d21e6-3e23-4345-bb2b-e21040996afd To illustrate the restoration steps in this article, we'll refer to blobs in a s ## Prepare for Azure Blobs restore -You can now do the restore operation for *operational backup* and *vaulted backup (preview)* for Azure Blobs. +You can now do the restore operation for *operational backup* and *vaulted backup* for Azure Blobs. **Choose a backup tier**: The key points to remember in this scenario are: } ``` -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) [!INCLUDE [blob-vaulted-backup-restore-restapi.md](../../includes/blob-vaulted-backup-restore-restapi.md)] |
backup | Backup Azure Mysql Flexible Server Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mysql-flexible-server-restore.md | This article describes how to restore the Azure Database for MySQL - Flexible Se Learn more about the [supported scenarios, considerations, and limitations](backup-azure-mysql-flexible-server-support-matrix.md). +## Prerequisites ++Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one storage account to another across tenants. Ensure that the target storage account for the restore has the **AllowCrossTenantReplication** property set to **true**. + ## Restore MySQL - Flexible Server database To restore the database, follow these steps: To restore the database, follow these steps: ## Next steps -- [Back up the Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server.md)+- [Back up the Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server.md) |
backup | Backup Azure Troubleshoot Blob Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-blob-backup.md | Title: Troubleshoot Blob backup and restore issues description: In this article, learn about symptoms, causes, and resolutions of Azure Backup failures related to Blob backup and restore. Previously updated : 11/22/2023 Last updated : 07/24/2024 |
backup | Backup Blobs Storage Account Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-arm-template.md | Title: Quickstart - Back up blobs in a storage account via ARM template using Az description: Learn how to back up blobs in a storage account with an ARM template. Previously updated : 05/30/2024 Last updated : 07/24/2024 -# Quickstart: Back up a storage account with Blob data using an ARM template (preview) +# Quickstart: Back up a storage account with Blob data using an ARM template This quickstart describes how to back up a storage account with Azure Blob data with a vaulted backup policy using an ARM template. |
backup | Backup Blobs Storage Account Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-bicep.md | Title: Quickstart - Back up blobs in a storage account description: Learn how to back up blobs in a storage account with a Bicep template. Previously updated : 05/30/2024 Last updated : 07/24/2024 -# Quickstart: Back up a storage account with Blob data using Azure Backup via a Bicep template (preview) +# Quickstart: Back up a storage account with Blob data using Azure Backup via a Bicep template This quickstart describes how to back up a storage account with Azure Blob data with a vaulted backup policy using a Bicep template. |
backup | Backup Blobs Storage Account Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-cli.md | Title: Back up Azure Blobs using Azure CLI description: Learn how to back up Azure Blobs using Azure CLI. Previously updated : 05/30/2024 Last updated : 07/24/2024 After creating a vault, let's create a Backup policy to protect Azure Blobs in a ## Create a backup policy -You can create a backup policy for *operational backup* and *vaulted backup (preview)* for Azure Blobs using Azure CLI. +You can create a backup policy for *operational backup* and *vaulted backup* for Azure Blobs using Azure CLI. **Choose a backup tier**: az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVau } ``` -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) [!INCLUDE [blob-backup-create-policy-cli.md](../../includes/blob-backup-create-policy-cli.md)] |
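The vaulted-policy creation that the include file covers can be sketched end to end. A hedged Azure CLI example reusing the article's `testBkpVaultRG`/`TestBkpVault` names, under the assumption that the installed `dataprotection` extension matches the documented parameter names:

```azurecli
# Fetch the default Azure Blobs policy template as a starting point
az dataprotection backup-policy get-default-policy-template \
    --datasource-type AzureBlob > policy.json

# Edit policy.json (schedule and retention rules), then create the policy
az dataprotection backup-policy create \
    --resource-group testBkpVaultRG \
    --vault-name TestBkpVault \
    --name blobBkpPolicy \
    --policy policy.json
```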
backup | Backup Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md | Title: Back up Azure blobs within a storage account using Azure PowerShell description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell. Previously updated : 05/30/2024 Last updated : 07/24/2024 blobBkpPolicy Microsoft.DataProtection/backupVaults/backupPolicies $blobBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "blobBkpPolicy" ```-# [Vaulted Backup (preview)](#tab/vaulted-backup) +# [Vaulted Backup](#tab/vaulted-backup) [!INCLUDE [blob-vaulted-backup-create-policy-ps.md](../../includes/blob-vaulted-backup-create-policy-ps.md)] blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/ba > [!IMPORTANT] > Once a storage account is configured for blobs backup, a few capabilities are affected, such as change feed and delete lock. [Learn more](blob-backup-configure-manage.md#effects-on-backed-up-storage-accounts). -# [Vaulted Backup (preview)](#tab/vaulted-backup) +# [Vaulted Backup](#tab/vaulted-backup) [!INCLUDE [blob-vaulted-backup-prepare-request-ps.md](../../includes/blob-vaulted-backup-prepare-request-ps.md)] |
backup | Blob Backup Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md | Title: Configure and manage backup for Azure Blobs using Azure Backup description: Learn how to configure and manage operational and vaulted backups for Azure Blobs. Previously updated : 05/02/2023 Last updated : 07/24/2024 Azure Backup allows you to configure operational and vaulted backups to protect # [Operational backup](#tab/operational-backup) -- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. This solution doesn't maintain an additional copy of data in the vault. This solution allows you to retain your data for restore for up to 360 days. Long retention durations may, however, lead to longer time taken during the restore operation.-- The solution can be used to perform restores to the source storage account only and may result in data being overwritten.-- If you delete a container from the storage account by calling the *Delete Container operation*, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers.+- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. This solution doesn't maintain an additional copy of data in the vault. This solution allows you to retain your data for restore for up to 360 days. Long retention durations can, however, increase the time taken by the restore operation. +- The solution can be used to perform restores to the source storage account only and can result in data being overwritten.
+- If you delete a container from the storage account by calling the *Delete Container operation*, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers. - Ensure that the **Microsoft.DataProtection** provider is registered for your subscription. For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md). To assign the required role for storage accounts that you need to protect, follo >[!NOTE] >You can also assign the roles to the vault at the Subscription or Resource Group levels according to your convenience. -1. In the storage account that needs to be protected, go to the **Access Control (IAM)** tab on the left navigation pane. +1. In the storage account that needs to be protected, go to the **Access Control (IAM)** tab on the left navigation blade. 1. Select **Add role assignments** to assign the required role. ![Add role assignments](./media/blob-backup-configure-manage/add-role-assignments.png) -1. In the Add role assignment pane: +1. In the Add role assignment blade: 1. Under **Role**, choose **Storage Account Backup Contributor**. 1. Under **Assign access to**, choose **User, group or service principal**. To assign the required role for storage accounts that you need to protect, follo >[!NOTE] >The role assignment might take up to 30 minutes to take effect. -## Create a backup policy -A backup policy defines the schedule and frequency of the recovery points creation, and its retention duration in the Backup vault. You can use a single backup policy for your vaulted backup, operational backup, or both. You can use the same backup policy to configure backup for multiple storage accounts to a vault. 
-To create a backup policy, follow these steps: -1. Go to **Backup center**, and then select **+ Policy**. This takes you to the create policy experience. -- :::image type="content" source="./media/blob-backup-configure-manage/add-policy-inline.png" alt-text="Screenshot shows how to initiate adding backup policy for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/add-policy-expanded.png"::: --2. Select the *data source type* as **Azure Blobs (Azure Storage)**, and then select **Continue**. -- :::image type="content" source="./media/blob-backup-configure-manage/datasource-type-selection-for-vaulted-blob-backup.png" alt-text="Screenshot shows how to select datasource type for vaulted blob backup."::: --3. On the **Basics** tab, enter a name for the policy and select the vault you want this policy to be associated with. -- :::image type="content" source="./media/blob-backup-configure-manage/add-vaulted-backup-policy-name.png" alt-text="Screenshot shows how to add vaulted blob backup policy name."::: -- You can view the details of the selected vault in this tab, and then select **continue**. - -4. On the **Schedule + retention** tab, enter the *backup details* of the data store, schedule, and retention for these data stores, as applicable. -- 1. To use the backup policy for vaulted backups, operational backups, or both, select the corresponding checkboxes. - 1. For each data store you selected, add or edit the schedule and retention settings: - - **Vaulted backups**: Choose the frequency of backups between *daily* and *weekly*, specify the schedule when the backup recovery points need to be created, and then edit the default retention rule (selecting **Edit**) or add new rules to specify the retention of recovery points using a *grandparent-parent-child* notation. - - **Operational backups**: These are continuous and don't require a schedule. Edit the default rule for operational backups to specify the required retention. 
-- :::image type="content" source="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-inline.png" alt-text="Screenshot shows how to configure vaulted blob backup schedule and retention." lightbox="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-expanded.png"::: --5. Go to **Review and create**. -6. Once the review is complete, select **Create**. --## Configure backups --You can configure backup for one or more storage accounts in an Azure region if you want them to back up to the same vault using a single backup policy. --To configure backup for storage accounts, follow these steps: --1. Go to **Backup center** > **Overview**, and then select **+ Backup**. -- :::image type="content" source="./media/blob-backup-configure-manage/start-vaulted-backup.png" alt-text="Screenshot shows how to initiate vaulted blob backup."::: --2. On the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the **Datasource type**. -- :::image type="content" source="./media/blob-backup-configure-manage/choose-datasource-for-vaulted-backup.png" alt-text="Screenshot shows how to initiate configuring vaulted blob backup."::: --3. On the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the **Datasource type**, and then select the *Backup vault* that you want to associate with your storage accounts. -- You can view details of the selected vault on this tab, and then select **Next**. -- :::image type="content" source="./media/blob-backup-configure-manage/select-datasource-type-for-vaulted-backup.png" alt-text="Screenshot shows how to select datasource type to initiate vaulted blob backup."::: - -4. Select the *backup policy* that you want to use for retention. -- You can view the details of the selected policy. You can also create a new backup policy, if needed. Once done, select **Next**. 
-- :::image type="content" source="./media/blob-backup-configure-manage/select-policy-for-vaulted-backup.png" alt-text="Screenshot shows how to select policy for vaulted blob backup."::: --5. On the **Datasources** tab, select the *storage accounts* you want to back up. -- :::image type="content" source="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png" alt-text="Screenshot shows how to select storage account for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png"::: -- You can select multiple storage accounts in the region to back up using the selected policy. Search or filter the storage accounts, if required. - - If you've chosen the vaulted backup policy in step 4, you can also select specific containers to backup. Click "Change" under the "Selected containers" column. In the context blade, choose "browse containers to backup" and unselect the ones you don't want to backup. --6. When you select the storage accounts and containers to protect, Azure Backup performs the following validations to ensure all prerequisites are met. The **Backup readiness** column shows if the Backup vault has enough permissions to configure backups for each storage account. -- 1. Validates that the Backup vault has the required permissions to configure backup (the vault has the **Storage account backup contributor** role on all the selected storage accounts. If validation shows errors, then the selected storage accounts don't have **Storage account backup contributor** role. You can assign the required role, based on your current permissions. The error message helps you understand if you have the required permissions, and take the appropriate action: -- - **Role assignment not done**: This indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault. 
-- Select the roles, and then select **Assign missing roles** on the toolbar to automatically assign the required role to the Backup vault, and trigger an autorevalidation. -- The role propagation may take some time (up to 10 minutes) causing the revalidation to fail. In this scenario, you need to wait for a few minutes and select **Revalidate** to retry validation. -- - **Insufficient permissions for role assignment**: This indicates that the vault doesn't have the required role to configure backups, and you (the user) don't have enough permissions to assign the required role. To make the role assignment easier, Azure Backup allows you to download the role assignment template, which you can share with users with permissions to assign roles for storage accounts. -- To do this, select the storage accounts, and then select **Download role assignment template** to download the template. Once the role assignments are complete, select **Revalidate** to validate the permissions again, and then configure backup. -- :::image type="content" source="./media/blob-backup-configure-manage/vaulted-backup-role-assignment-success.png" alt-text="Screenshot shows that the role assignment is successful."::: -- >[!Note] - >The template contains details for selected storage accounts only. So, if there are multiple users that need to assign roles for different storage accounts, you can select and download different templates accordingly. -- 1. In case of vaulted backups, validates that the number of containers to be backed up is less than *100*. By default, all containers are selected; however, you can exclude containers that shouldn't be backed up. If your storage account has *>100* containers, you must exclude containers to reduce the count to *100 or below*. -- >[!Note] - >In case of vaulted backups, the storage accounts to be backed up must contain at least *1 container*. 
If the selected storage account doesn't contain any containers or if no containers are selected, you may get an error while configuring backups. --7. Once validation succeeds, open the **Review and configure** tab. --8. Review the details on the **Review + configure** tab and select **Next** to initiate the *configure backup* operation. --You'll receive notifications about the status of configuring protection and its completion. ### Using Data protection settings of the storage account to configure backup You can configure backup for blobs in a storage account directly from the 'Data Protection' settings of the storage account. -1. Go to the storage account for which you want to configure backup for blobs, and then go to **Data Protection** in left pane (under **Data management**). +1. Go to the storage account for which you want to configure backup for blobs, and then go to **Data Protection** in the left blade (under **Data management**). 1. In the available data protection options, the first one allows you to enable operational backup using Azure Backup. You can configure backup for blobs in a storage account directly from the 'Dat ![Enable operational backup with Azure Backup](./media/blob-backup-configure-manage/enable-operational-backup-with-azure-backup.png) - 1. On selecting **Manage identity**, brings you to the Identity pane of the storage account. + 1. Selecting **Manage identity** brings you to the Identity blade of the storage account. 1. Select **Add role assignment** to initiate the role assignment. You can configure backup for blobs in a storage account directly from the 'Dat ![Finish role assignment](./media/blob-backup-configure-manage/finish-role-assignment.png) - 1.
Select the cancel icon (**x**) on the top right corner to return to the **Data protection** pane of the storage account.<br><br>Once back, continue configuring backup. ## Effects on backed-up storage accounts # [Vaulted backup](#tab/vaulted-backup) -- In storage accounts (for which you've configured vaulted backups), the object replication rules get created under the **Object replication** item in the left pane.+- In storage accounts (for which you've configured vaulted backups), the object replication rules get created under the **Object replication** item in the left pane. - Object replication requires versioning and change-feed capabilities. So, the Azure Backup service enables these features on the source storage account. # [Operational backup](#tab/operational-backup) Once backup is configured, changes taking place on block blobs in the storage ac ## Manage backups -You can use Backup Center as your single pane of glass for managing all your backups. Regarding backup for Azure Blobs, you can use Backup Center to do the following: +You can use Backup Center as your single pane of glass for managing all your backups. Regarding backup for Azure Blobs, you can use Backup Center to do the following: - As we've seen above, you can use it for creating Backup vaults and policies. You can also view all vaults and policies under the selected subscriptions. - Backup Center gives you an easy way to monitor the state of protection of protected storage accounts as well as storage accounts for which backup isn't currently configured. To stop backup for a storage account, follow these steps: ![Stop operational backup](./media/blob-backup-configure-manage/stop-operational-backup.png) -After stopping backup, you may disable other storage data protection capabilities (enabled for configuring backups) from the data protection pane of the storage account. 
+After stopping backup, you can disable other storage data protection capabilities (enabled for configuring backups) from the data protection pane of the storage account. ## Next steps |
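The vault role assignment described in the article above can also be scripted. The following is a hedged Azure CLI sketch, not the article's own procedure; it assumes a system-assigned managed identity on the Backup vault, and the resource group, vault, and storage account names are placeholders:

```shell
# Placeholders: replace the resource group, vault, and storage account names.
RG=testBkpVaultRG
VAULT=TestBkpVault
SA_ID=$(az storage account show -g "$RG" -n mystorageaccount --query id -o tsv)

# Object ID of the Backup vault's system-assigned managed identity.
PRINCIPAL_ID=$(az dataprotection backup-vault show -g "$RG" --vault-name "$VAULT" \
  --query identity.principalId -o tsv)

# Grant the role that vaulted blob backup requires on the storage account.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Account Backup Contributor" \
  --scope "$SA_ID"
```

After the role assignment propagates (which can take up to 10 minutes), the portal's **Revalidate** step should succeed.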
backup | Blob Backup Configure Quick | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-quick.md | + + Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure Backup +description: In this quickstart, learn how to configure vaulted backup for Azure Blobs. + Last updated : 07/24/2024++++++# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup ++This quickstart describes how to create a backup policy and configure vaulted backup for Azure Blobs from the Azure portal. ++++## Prerequisites ++Before you configure blob vaulted backup, ensure that: ++- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-vault). +- You assign permissions to the Backup vault on the storage account. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#grant-permissions-to-the-backup-vault-on-storage-accounts). +- You create a backup policy for Azure Blobs vaulted backup. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-policy). ++## Before you start ++Things to remember before you start configuring blob vaulted backup: ++- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains it according to the retention configured in the backup policy. You can retain data for a maximum of *10 years*. +- Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any container names conflict, the restore operation fails. +- Storage accounts to be backed up need to have *cross-tenant replication* enabled. 
To verify that this setting is enabled, go to the **storage account** > **Object replication** > **Advanced settings**. ++For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md). +++++## Next step ++[Restore Azure Blobs using Azure Backup](blob-restore.md) |
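The cross-tenant replication prerequisite above can also be checked from the command line instead of the portal. This is a hedged Azure CLI sketch; the storage account and resource group names are placeholders:

```shell
# Check whether cross-tenant replication is enabled on the source account.
az storage account show \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --query allowCrossTenantReplication \
  --output tsv

# If the previous command prints "false" (or nothing), enable the setting
# before you configure vaulted backup.
az storage account update \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --allow-cross-tenant-replication true
```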
backup | Blob Backup Configure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-tutorial.md | + + Title: Tutorial - Configure vaulted backup for Azure Blobs using Azure Backup +description: In this tutorial, learn how to configure vaulted backup for Azure Blobs. + Last updated : 07/24/2024++++++# Tutorial: Configure vaulted backup for Azure Blobs using Azure Backup ++This tutorial describes how to create a backup policy and configure vaulted backup for Azure Blobs from the Azure portal. +++## Prerequisites ++Before you configure blob vaulted backup, ensure that: ++- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-vault). +- You assign permissions to the Backup vault on the storage account. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#grant-permissions-to-the-backup-vault-on-storage-accounts). ++## Before you start ++Things to remember before you start configuring blob vaulted backup: ++- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains it according to the retention configured in the backup policy. You can retain data for a maximum of *10 years*. +- Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any container names conflict, the restore operation fails. +- Storage accounts to be backed up need to have *cross-tenant replication* enabled. To verify that this setting is enabled, go to the **storage account** > **Object replication** > **Advanced settings**. 
++For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md). ++++++## Next step ++[Restore Azure Blobs using Azure Backup](blob-restore.md). |
backup | Blob Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md | Title: Overview of Azure Blobs backup description: Learn about Azure Blobs backup.- Previously updated : 03/21/2024+ Last updated : 07/24/2024 This article gives you an understanding about configuring the following types of - **Continuous backups**: You can configure operational backup, a managed local data protection solution, to protect your block blobs from accidental deletion or corruption. The data is stored locally within the source storage account and not transferred to the backup vault. You don't need to define any schedule for backups. All changes are retained, and you can restore them from the state at a selected point in time. -- **Periodic backups (preview)**: You can configure vaulted backup, a managed offsite data protection solution, to get protection against any accidental or malicious deletion of blobs or storage account. The backup data using vaulted backups is copied and stored in the Backup vault as per the schedule and frequency you define via the backup policy and retained as per the retention configured in the policy.+- **Periodic backups**: You can configure vaulted backup, a managed offsite data protection solution, to get protection against any accidental or malicious deletion of blobs or storage account. The backup data using vaulted backups is copied and stored in the Backup vault as per the schedule and frequency you define via the backup policy and retained as per the retention configured in the policy. You can choose to configure vaulted backups, operational backups, or both on your storage accounts using a single backup policy. The integration with [Backup center](backup-center-overview.md) enables you to govern, monitor, operate, and analyze backups at scale. 
Operational backup uses blob platform capabilities to protect your data and allo For information about the limitations of the current solution, see the [support matrix](blob-backup-support-matrix.md). -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) -Vaulted backup (preview) uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container. +Vaulted backup uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container. When you configure protection, Azure Backup allocates a destination storage account (the Backup vault's storage account managed by Azure Backup) and enables an object replication policy at the container level on both the destination and source storage accounts. When a backup job is triggered, the Azure Backup service creates a recovery point marker on the source storage account and polls the destination account for the recovery point marker replication. Once the recovery point marker is present on the destination, a recovery point is created. To allow Backup to enable these properties on the storage accounts to be protect >[!NOTE] >Operational backup supports operations on block blobs only and operations on containers can't be restored. If you delete a container from the storage account by calling the **Delete Container** operation, that container can't be restored with a restore operation. 
It's recommended that you enable soft delete to enhance data protection and recovery. -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) Vaulted backup is configured at the storage account level. However, you can exclude containers that don't need backup. If your storage account has *>100* containers, you must exclude containers to reduce the count to *100* or below. For vaulted backups, the schedule and retention are managed via backup policy. You can set the frequency as *daily* or *weekly*, and specify when the backup recovery points need to be created. You can also configure different retention values for backups taken every day, week, month, or year. The retention rules are evaluated in a predetermined order of priority. The *yearly* rule takes priority over the *monthly* and *weekly* rules. Default retention settings are applied if other rules don't qualify. You can enable operational backup and vaulted backup (or both) of blobs on a sto Once you have enabled backup on a storage account, a Backup Instance is created corresponding to the storage account in the Backup vault. You can perform any Backup-related operations for a storage account like initiating restores, monitoring, stopping protection, and so on, through its corresponding Backup Instance. -Both operational and vaulted backups integrate directly with Backup Center to help you manage the protection of all your storage accounts centrally, along with all other Backup supported workloads. 
Backup Center is your single pane of glass for all your Backup requirements like monitoring jobs and the state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to backing up and restoring data. You won't incur any management charges or instance fee when using operational ba - Retention of data because of [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](../storage/blobs/storage-blob-change-feed.md), and [Blob versioning](../storage/blobs/versioning-overview.md). -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) -You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](../storage/blobs/object-replication-overview.md#billing), on the backed-up source account. +You'll incur backup storage charges and instance fees, as well as the source-side cost ([associated with Object replication](../storage/blobs/object-replication-overview.md#billing)) on the backed-up source account. |
backup | Blob Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md | Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 04/01/2024 Last updated : 07/24/2024 Operational backup for blobs is available in all public cloud regions, except Fr # [Vaulted backup](#tab/vaulted-backup) -Vaulted backup (preview) for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North and Malaysia South. +Vaulted backup for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North, and Malaysia South. Operational backup of blobs uses blob point-in-time restore, blob versioning, so - You can back up storage accounts with *up to 100 containers*. You can also select a subset of containers to back up (up to 100 containers). - If your storage account contains more than 100 containers, you need to select *up to 100 containers* to back up. - To back up any new containers that get created after backup configuration for the storage account, modify the protection of the storage account. These containers aren't backed up automatically.-- The storage accounts to be backed up must contain *a minimum of 1 container*. If the storage account doesn't contain any containers or if no containers are selected, an error may appear when you configure backup.+- The storage accounts to be backed up must contain *a minimum of one container*. If the storage account doesn't contain any containers or if no containers are selected, an error may appear when you configure backup. - Currently, you can perform only *one backup* per day (that includes scheduled and on-demand backups). 
Backup fails if you attempt to perform more than one backup operation a day. - If you stop protection (vaulted backup) on a storage account, it doesn't delete the object replication policy created on the storage account. In these scenarios, you need to manually delete the *OR policies*. - Cool and archived blobs are currently not supported. |
backup | Blob Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md | Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Previously updated : 03/06/2024 Last updated : 07/24/2024 |
backup | Quick Blob Vaulted Backup Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-blob-vaulted-backup-cli.md | + + Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure CLI +description: In this Quickstart, learn how to configure vaulted backup for Azure Blobs using Azure CLI. +ms.devlang: azurecli + Last updated : 07/24/2024++++++# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup via Azure CLI ++This quickstart describes how to configure vaulted backup for Azure Blobs using Azure CLI. +++## Prerequisites ++Before you configure blob vaulted backup, ensure that: ++- You review the [support matrix](../backup/blob-backup-support-matrix.md) to learn about the Azure Blob region availability, supported scenarios, and limitations. +- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](../backup/backup-blobs-storage-account-ps.md#create-a-backup-vault). ++## Create a backup policy +++## Configure backup +++## Prepare the request to configure blob backup +++## Next step ++[Restore Azure Blobs using Azure CLI](/azure/backup/restore-blobs-storage-account-cli). +++ |
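The policy and backup steps named in the quickstart above typically start from the default policy template. The following is a hedged sketch, not the article's own include content; it assumes the Azure CLI `dataprotection` extension is installed, and the vault, resource group, and policy names are placeholders:

```shell
# Fetch the default vaulted-backup policy template for Azure Blobs.
az dataprotection backup-policy get-default-policy-template \
  --datasource-type AzureBlob > policy.json

# Create the policy in the Backup vault from the (optionally edited) template.
az dataprotection backup-policy create \
  --resource-group testBkpVaultRG \
  --vault-name TestBkpVault \
  --backup-policy-name blobPolicy \
  --policy policy.json
```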
backup | Quick Blob Vaulted Backup Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-blob-vaulted-backup-powershell.md | + + Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure PowerShell +description: In this Quickstart, learn how to configure vaulted backup for Azure Blobs using Azure PowerShell. +ms.devlang: azurepowershell + Last updated : 07/24/2024++++++# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup via Azure PowerShell ++This quickstart describes how to configure vaulted backup for Azure Blobs using Azure PowerShell. +++## Prerequisites ++Before you configure blob vaulted backup, ensure that: ++- You install the Azure PowerShell version **Az 5.9.0**. +- You review the [support matrix](../backup/blob-backup-support-matrix.md) to learn about the Azure Blob region availability, supported scenarios, and limitations. +- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](../backup/backup-blobs-storage-account-ps.md#create-a-backup-vault). ++## Create a backup policy ++++## Configure backup +++## Prepare the request to configure blob backup +++## Next step ++[Restore Azure blobs using Azure PowerShell](/azure/backup/restore-blobs-storage-account-ps). |
backup | Restore Azure Database Postgresql Flex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql-flex.md | This article explains how to restore an Azure PostgreSQL flexible server backed up ## Prerequisites -Before you restore from Azure Database for PostgreSQL Flexible server backups, ensure that you have the required [permissions for the restore operation](backup-azure-database-postgresql-flex-overview.md#permissions-for-backup). +1. Before you restore from Azure Database for PostgreSQL Flexible server backups, ensure that you have the required [permissions for the restore operation](backup-azure-database-postgresql-flex-overview.md#permissions-for-backup). ++2. Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one storage account to another across tenants. Ensure that the target storage account for the restore has the **AllowCrossTenantReplication** property set to **true**. ## Restore Azure PostgreSQL-Flexible database Follow these steps: 1. Submit the Restore operation and track the triggered job under **Backup jobs**. :::image type="content" source="./media/restore-azure-database-postgresql-flex/validate.png" alt-text="Screenshot showing the validate process page.":::++1. Once the job is finished, the backed-up data is restored into the storage account. The following files are recovered in your storage account after the restore: ++ - The first file is a marker or timestamp file that records when the backup was taken. It can't be restored, but if you open it in a text editor, it shows the UTC time at which the backup was taken. + + - The second file, **_database_**, is an individual database backup (for example, for a database called tempdata2) taken by using pg_dump. Each database has a separate file with the format **{backup_name}_database_{db_name}.sql** + + - The third file, **_roles**. 
Contains the roles backed up by using pg_dumpall. + + - The fourth file, **_schemas**. Contains the schemas backed up by using pg_dumpall. + + - The fifth file, **_tablespaces**. Contains the tablespaces backed up by using pg_dumpall. ++1. After the restore to the target storage account completes, you can use the pg_restore utility to restore an Azure Database for PostgreSQL flexible server database from the target. Use the following command to connect to an existing PostgreSQL flexible server and an existing database: ++ `pg_restore -h <hostname> -U <username> -d <db name> -Fd -j <NUM> -C <dump directory>` ++ * `-Fd`: The directory format. + * `-j`: The number of jobs. + * `-C`: Begin the output with a command to create the database itself and then reconnect to it. ++ Here's an example of how this syntax might appear: ++ `pg_restore -h <hostname> -U <username> -j <Num of parallel jobs> -Fd -C -d <databasename> sampledb_dir_format` ++ If you have more than one database to restore, rerun the earlier command for each database. ++ Also, by using multiple concurrent jobs **-j**, you can reduce the time it takes to restore a large database on a multi-vCore target server. The number of jobs can be equal to or less than the number of vCPUs that are allocated for the target server. ## Next steps -[Support matrix for PostgreSQL-Flex database backup by using Azure Backup](backup-azure-database-postgresql-flex-support-matrix.md). +[Support matrix for PostgreSQL-Flex database backup by using Azure Backup](backup-azure-database-postgresql-flex-support-matrix.md). |
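If several databases were backed up, the per-database pg_restore command described above can be generated in a loop. This is a dry-run sketch; the host, user, backup name, dump directory layout, and database list are hypothetical placeholders that you would replace with values recovered from the backup's `_database_` files:

```shell
HOST=myserver.postgres.database.azure.com   # placeholder server name
USER=myadmin                                # placeholder admin user
BACKUP=backup1                              # placeholder backup name

# Print one pg_restore command per database; remove 'echo' to execute.
for db in tempdata2 salesdb; do
  echo pg_restore -h "$HOST" -U "$USER" -j 4 -Fd -C -d "$db" "${BACKUP}_database_${db}"
done
```

Keep the `-j` value at or below the number of vCPUs allocated to the target server, as the article notes.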
backup | Restore Azure Database Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md | Title: Restore Azure Database for PostgreSQL description: Learn about how to restore Azure Database for PostgreSQL backups. Previously updated : 02/01/2024 Last updated : 07/24/2024 |
backup | Restore Blobs Storage Account Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-cli.md | Title: Restore Azure Blobs via Azure CLI description: Learn how to restore Azure Blobs to any point-in-time using Azure CLI. Previously updated : 05/30/2024 Last updated : 07/24/2024 -You can restore Azure Blobs to point-in-time using *operational backups* and *vaulted backups (preview)* for Azure Blobs via Azure CLI. Here, let's use an existing Backup vault `TestBkpVault`, under the resource group `testBkpVaultRG` in the examples. +You can restore Azure Blobs to point-in-time using *operational backups* and *vaulted backups* for Azure Blobs via Azure CLI. Here, let's use an existing Backup vault `TestBkpVault`, under the resource group `testBkpVaultRG` in the examples. > [!IMPORTANT] > Before you restore Azure Blobs using Azure Backup, see [important points](blob-restore.md#before-you-start). ## Fetch details to restore a blob backup -To restore a blob backup, you need to *fetch the valid time range for *operational backup* and *fetch the list of recovery points* for *vaulted backup (preview)*. +To restore a blob backup, you need to *fetch the valid time range* for *operational backup* and *fetch the list of recovery points* for *vaulted backup*. **Choose a backup tier**: az dataprotection restorable-time-range find --start-time 2021-05-30T00:00:00 -- } ``` -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) -To fetch the list of recovery points available to restore vaulted backup (preview), use the `az dataprotection recovery-point list` command. +To fetch the list of recovery points available to restore vaulted backup, use the `az dataprotection recovery-point list` command. To fetch the name of the backup instance corresponding to your backed-up storage account, use the `az dataprotection backup-instance list` command. 
az dataprotection backup-instance restore initialize-for-item-recovery --datasou az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41 > restore.json ``` -# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) -Prepare the request body for the following restore scenarios supported by Azure Blobs vaulted backup (preview). +Prepare the request body for the following restore scenarios supported by Azure Blobs vaulted backup. ### Restore all containers |
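The two lookups described above (finding the backup instance name, then listing its recovery points) can be sketched as follows. The vault and resource group names, and the backup instance name, are placeholders taken from this article's examples:

```shell
# Find the backup instance name for the backed-up storage account.
az dataprotection backup-instance list \
  --resource-group testBkpVaultRG \
  --vault-name TestBkpVault \
  --query "[].name" --output tsv

# List recovery points for that instance (instance name is a placeholder).
az dataprotection recovery-point list \
  --resource-group testBkpVaultRG \
  --vault-name TestBkpVault \
  --backup-instance-name CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036
```

You can then pass a recovery point ID from the second command to the restore-request preparation step.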
backup | Restore Blobs Storage Account Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md | Title: Restore Azure Blobs using Azure PowerShell description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell. Previously updated : 07/01/2024 Last updated : 07/24/2024 # Restore Azure Blobs using Azure PowerShell -This article describes how to use the PowerShell to perform restores for Azure Blob from [operational](blob-backup-overview.md?tabs=operational-backup) or [vaulted](blob-backup-overview.md?tabs=vaulted-backup) backups. With operational backups, you can restore all block blobs in storage accounts with operational backup configured or a subset of blob content to any point-in-time within the retention range. With vaulted backups (preview), you can perform restores using a recovery point created, based on your backup schedule. +This article describes how to use PowerShell to perform restores for Azure Blobs from [operational](blob-backup-overview.md?tabs=operational-backup) or [vaulted](blob-backup-overview.md?tabs=vaulted-backup) backups. With operational backups, you can restore all block blobs in storage accounts with operational backup configured, or a subset of blob content, to any point-in-time within the retention range. With vaulted backups, you can perform restores using a recovery point created based on your backup schedule. > [!IMPORTANT] > Support for Azure blobs is available from version **Az 5.9.0**. You can restore a subset of blobs using a prefix match. 
You can specify up to 10 ```azurepowershell-interactive $restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -PointInTime (Get-Date -Date "2021-04-23T02:47:02.9500000Z") -BackupInstance $AllInstances[2] -ItemLevelRecovery -FromPrefixPattern "containerabc/aaa","containerabc/ccc" -ToPrefixPattern "containerabc/bbb","containerabc/ddd" ```-# [Vaulted backup (preview)](#tab/vaulted-backup) +# [Vaulted backup](#tab/vaulted-backup) [!INCLUDE [blob-vaulted-backup-restore-ps.md](../../includes/blob-vaulted-backup-restore-ps.md)] |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup + Title: What's new in the Azure Backup service description: Learn about the new features in the Azure Backup service. Previously updated : 07/02/2024 Last updated : 07/24/2024 - ignite-2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary - July 2024+ - [Azure Blob vaulted backup is now generally available](#azure-blob-vaulted-backup-is-now-generally-available) - [Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available](#backup-and-restore-of-virtual-machines-with-private-endpoint-enabled-disks-is-now-generally-available) - May 2024 - [Migration of Azure VM backups from standard to enhanced policy (preview)](#migration-of-azure-vm-backups-from-standard-to-enhanced-policy-preview) You can learn more about the new releases by bookmarking this page or by [subscr - February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) ++## Azure Blob vaulted backup is now generally available ++Azure Backup now enables you to perform a vaulted backup of block blob data in *general-purpose v2 storage accounts* to protect data against ransomware attacks or source data loss due to a malicious or rogue admin. You can define the backup schedule to create recovery points and the retention settings that determine how long backups will be retained in the vault. You can configure and manage the vaulted and operational backups using a single backup policy. ++Under vaulted backups, the data is copied and stored in the Backup vault. So, you get an offsite copy of data that can be retained for up to *10 years*. If any data loss happens on the source account, you can trigger a restore to an alternate account and get access to your data. 
The vaulted backups can be managed at scale via the Backup center, and monitored via the rich alerting and reporting capabilities offered by the Azure Backup service. ++If you're currently using operational backups, we recommend that you switch to vaulted backups for complete protection against different data loss scenarios. ++For more information, see [Azure Blob backup overview](blob-backup-overview.md?tabs=vaulted-backup). + ## Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available Azure Backup now allows you to back up Azure Virtual Machines that use disks with private endpoints (disk access). This support is extended for Virtual Machines that are backed up using Enhanced backup policies, along with the existing support for those that were backed up using Standard backup policies. While initiating the restore operation, you can specify the network access settings required for the restored disks. You can choose to keep the network configuration of the restored disks the same as that of the source disks, specify the access from specific networks only, or allow public access from all networks. |
cloud-services-extended-support | Available Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md | This article describes the available virtual machine sizes for Cloud Services (e ## Configure sizes for Cloud Services (extended support) -You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity and the local file system size. +You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity, and the local file system size. For example, setting the web role instance size to `Standard_D2`: To change the size of an existing role, change the virtual machine size in the s ## Get a list of available sizes -To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters: +To retrieve a list of available sizes, see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters: ```powershell # Update the location |
cloud-services-extended-support | Certificates And Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/certificates-and-key-vault.md | Key Vault is used to store certificates that are associated to Cloud Services (e ## Upload a certificate to Key Vault -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault. If you do not have a Key Vault set up, you can opt to create one in this same window. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault. If you don't have a Key Vault set up, you can opt to create one in this same window. 2. Select **Access Configuration** :::image type="content" source="media/certs-and-key-vault-1.png" alt-text="Image shows selecting access policies from the key vault blade."::: -3. Ensure the access configuration include the following property: +3. Ensure the access configuration includes the following property: - **Enable access to Azure Virtual Machines for deployment** :::image type="content" source="media/certs-and-key-vault-2.png" alt-text="Image shows access policies window in the Azure portal."::: |
cloud-services-extended-support | Cloud Services Model And Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/cloud-services-model-and-package.md | -A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and how it's configured; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**. +A cloud service is created from three components: the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and its configuration. We collectively call these files the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and, among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**. -Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you cannot alter the definition. +Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you can't alter the definition. ## What would you like to know more about? * I want to know more about the [ServiceDefinition.csdef](#csdef) and [ServiceConfig.cscfg](#cscfg) files. 
The **ServiceDefinition.csdef** file specifies the settings that are used by Azu </ServiceDefinition> ``` -You can refer to the [Service Definition Schema](schema-csdef-file.md)) for a better understanding of the XML schema used here, however, here is a quick explanation of some of the elements: +You can refer to the [Service Definition Schema](schema-csdef-file.md) for a better understanding of the XML schema used here; however, here's a quick explanation of some of the elements: **Sites** Contains the definitions for websites or web applications that are hosted in IIS7. Contains tasks that are run when the role starts. The tasks are defined in a .cm ## ServiceConfiguration.cscfg The configuration of the settings for your cloud service is determined by the values in the **ServiceConfiguration.cscfg** file. You specify the number of instances that you want to deploy for each role in this file. The values for the configuration settings that you defined in the service definition file are added to the service configuration file. The thumbprints for any management certificates that are associated with the cloud service are also added to the file. The [Azure Service Configuration Schema (.cscfg File)](schema-cscfg-file.md) provides the allowable format for a service configuration file. -The service configuration file is not packaged with the application, but is uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles: +The service configuration file isn't packaged with the application. It uploads to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. 
The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles: ```xml <?xml version="1.0"?> The service configuration file is not packaged with the application, but is uplo </ServiceConfiguration> ``` -You can refer to the [Service Configuration Schema](schema-cscfg-file.md) for better understanding the XML schema used here, however, here is a quick explanation of the elements: +You can refer to the [Service Configuration Schema](schema-cscfg-file.md) to better understand the XML schema used here; however, here's a quick explanation of the elements: **Instances** -Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, it is recommended that you deploy more than one instance of your web-facing roles. By deploying more than one instance, you are adhering to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service. +Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, we recommend you deploy more than one instance of your web-facing roles. By deploying more than one instance, you adhere to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service. **ConfigurationSettings** Configures the settings for the running instances for a role. The name of the `<Setting>` elements must match the setting definitions in the service definition file. 
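The **Instances** and **ConfigurationSettings** elements described above appear per role in the .cscfg — a minimal sketch, with illustrative role and setting names; a count of two or more keeps Internet-facing roles within the SLA guidance:

```xml
<Role name="ContosoWeb">
  <ConfigurationSettings>
    <!-- Name must match a setting declared in the .csdef -->
    <Setting name="SettingName" value="SettingValue" />
  </ConfigurationSettings>
  <!-- Two or more instances are needed for the 99.95% connectivity SLA -->
  <Instances count="2" />
</Role>
```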
Configures the certificates that are used by the service. The previous code exam ## Defining ports for role instances Azure allows only one entry point to a web role, meaning that all traffic occurs through one IP address. You can configure your websites to share a port by configuring the host header to direct the request to the correct location. You can also configure your applications to listen to well-known ports on the IP address. -The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header that is called ΓÇ£mail.mysite.cloudapp.netΓÇ¥. +The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header called `mail.mysite.cloudapp.net`. ```xml <WebRole> The following sample shows the configuration for a web role with a website and w ## Changing the configuration of a role-You can update the configuration of your cloud service while it is running in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. 
The following changes can be made to the configuration of a service: * **Changing the values of configuration settings** When a configuration setting changes, a role instance can choose to apply the change while the instance is online, or to recycle the instance gracefully and apply the change while the instance is offline. * **Changing the service topology of role instances** - Topology changes do not affect running instances, except where an instance is being removed. All remaining instances generally do not need to be recycled; however, you can choose to recycle role instances in response to a topology change. + Topology changes don't affect running instances, except where an instance is being removed. All remaining instances generally don't need to be recycled; however, you can choose to recycle role instances in response to a topology change. * **Changing the certificate thumbprint** - You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate and bring it back online after the change is complete. + You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate. Azure brings it back online after the change completes. ### Handling configuration changes with Service Runtime Events The Azure Runtime Library includes the Microsoft.WindowsAzure.ServiceRuntime namespace, which provides classes for interacting with the Azure environment from a role. The RoleEnvironment class defines the following events that are raised before and after a configuration change: Where the variables are defined as follows: | | | | \[DirectoryName\] |The subdirectory under the root project directory that contains the .csdef file of the Azure project. 
| | \[ServiceDefinition\] |The name of the service definition file. By default, this file is named ServiceDefinition.csdef. |-| \[OutputFileName\] |The name for the generated package file. Typically, this is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. | +| \[OutputFileName\] |The name for the generated package file. Typically, this variable is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. | | \[RoleName\] |The name of the role as defined in the service definition file. | | \[RoleBinariesDirectory] |The location of the binary files for the role. | | \[VirtualPath\] |The physical directories for each virtual path defined in the Sites section of the service definition. | |
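Substituting the variables in the table above, a CSPack invocation might look like the following sketch (the directory, role, and output names are illustrative, not from the original article):

```
cspack ContosoApp\ServiceDefinition.csdef /role:ContosoWeb;ContosoWeb\bin /sites:ContosoWeb;Web;ContosoWeb\Web /out:ContosoApp.cspkg
```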
cloud-services-extended-support | Configure Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/configure-scaling.md | -Conditions can be configured to enable Cloud Services (extended support) deployments to scale in and out. These conditions can be based on CPU usage, disk load and network load. +Conditions can be configured to enable Cloud Services (extended support) deployments to scale in and out. These conditions can be based on CPU usage, disk load, and network load. Consider the following information when configuring scaling of your Cloud Service deployments: - Scaling impacts core usage. Larger role instances consume more cores and you can only scale within the core limit of your subscription. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Consider the following information when configuring scaling of your Cloud Servic :::image type="content" source="media/enable-scaling-1.png" alt-text="Image shows selecting the Remote Desktop option in the Azure portal"::: -4. A page will display a list of all the roles in which scaling can be configured. Select the role you want to configure. +4. A page displays a list of all the roles in which scaling can be configured. Select the role you want to configure. 5. Select the type of scale you want to configure- - **Manual scale** will set the absolute count of instances. + - **Manual scale** sets the absolute count of instances. 1. Select **Manual scale**. 2. Input the number of instances you want to scale up or down to. 3. Select **Save**. :::image type="content" source="media/enable-scaling-2.png" alt-text="Image shows setting up manual scaling in the Azure portal"::: - 4. The scaling operation will begin immediately. + 4. The scaling operation begins immediately. 
- - **Custom Autoscale** will allow you to set rules that govern how much or how little to scale. + - **Custom Autoscale** allows you to set rules that govern how much or how little to scale. 1. Select **Custom autoscale** 2. Choose to scale based on a metric or instance count. Consider the following information when configuring scaling of your Cloud Servic :::image type="content" source="media/enable-scaling-4.png" alt-text="Image shows setting up custom autoscale rules in the Azure portal"::: 4. Select **Save**.- 5. The scaling operations will begin as soon as a rule is triggered. + 5. The scaling operations begin as soon as a rule is triggered. 6. You can view or adjust existing scaling rules applied to your deployments by selecting the **Scale** tab. |
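The custom autoscale rules configured in the portal above can also be expressed in an ARM template via Azure Monitor autoscale — a sketch of a CPU-based scale-out rule, with illustrative names, thresholds, and a `cloudServiceId` parameter that are assumptions, not from the original article:

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2015-04-01",
  "name": "ContosoAutoscale",
  "location": "[resourceGroup().location]",
  "properties": {
    "targetResourceUri": "[parameters('cloudServiceId')]",
    "enabled": true,
    "profiles": [
      {
        "name": "cpu-profile",
        "capacity": { "minimum": "2", "maximum": "6", "default": "2" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[parameters('cloudServiceId')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```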
cloud-services-extended-support | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md | To deploy Cloud Services (extended support) by using the portal: - If you have IP input endpoints defined in your definition (.csdef) file, create a public IP address for your cloud service. - Cloud Services (extended support) supports only a Basic SKU public IP address. - If your configuration (.cscfg) file contains a reserved IP address, set the allocation type for the public IP address to **Static**.- - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address that's associated with the cloud service. - - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it's deployed. + - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address associated with the cloud service. + - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it deploys. - **Key vault**: Select a key vault. - A key vault is required when you specify one or more certificates in your configuration (.cscfg) file. When you select a key vault, we attempt to find the selected certificates that are defined in your configuration (.cscfg) file based on the certificate thumbprints. If any certificates are missing from your key vault, you can upload them now, and then select **Refresh**. |
cloud-services-extended-support | Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-powershell.md | Complete the following steps as prerequisites to creating your deployment by usi ## Deploy Cloud Services (extended support) -Use any of the following PowerShell cmdlet options to deploy Cloud Services (extended support): +To deploy Cloud Services (extended support), use any of the following PowerShell cmdlet options: - Quick-create a deployment by using a [storage account](#quick-create-a-deployment-by-using-a-storage-account) - This parameter set inputs the package (.cspkg or .zip) file, the configuration (.cscfg) file, and the definition (.csdef) file for the deployment as inputs with the storage account.- - The Cloud Services (extended support) role profile, network profile, and OS profile are created by the cmdlet with minimal input. + - The cmdlet creates the Cloud Services (extended support) role profile, network profile, and OS profile with minimal input. - To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. - Quick-create a deployment by using a [shared access signature URI](#quick-create-a-deployment-by-using-an-sas-uri) - This parameter set inputs the shared access signature (SAS) URI of the package (.cspkg or .zip) file with the local paths to the configuration (.cscfg) file and definition (.csdef) file. No storage account input is required.- - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input. + - The cmdlet creates the cloud service role profile, network profile, and OS profile with minimal input. - To input a certificate, you must specify a key vault name. 
The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. - Create a deployment by using a [role profile, OS profile, network profile, and extension profile with shared access signature URIs](#create-a-deployment-by-using-profile-objects-and-sas-uris) New-AzCloudService Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy ``` -1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. This is the certificate that you created in the preceding step. +1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles, which is the certificate that you created in the preceding step. ```azurepowershell-interactive $keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault New-AzCloudService $osProfile = @{secret = @($secretGroup)} ``` -1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration that's defined in the deployment configuration (.cscfg) file and definition (.csdef) file. +1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration defined in the deployment configuration (.cscfg) file and definition (.csdef) file. ```azurepowershell-interactive $frontendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2 |
cloud-services-extended-support | Deploy Prerequisite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md | The subscription that contains networking resources must have the [Network Contr ## Key vault creation -Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificate that's stored as secrets in the key vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). +Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificate stored as secrets in the key vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md). ## Related content |
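When you create the vault with PowerShell, the deployment access described above can be enabled at creation time — a sketch, assuming illustrative resource names; the `-EnabledForDeployment` switch corresponds to the **Azure Virtual Machines for deployment** access policy:

```powershell
# Create the vault in the same region and subscription as the cloud service
New-AzKeyVault -Name "ContosKeyVault" `
  -ResourceGroupName "ContosOrg" `
  -Location "East US" `
  -EnabledForDeployment
```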
cloud-services-extended-support | Deploy Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-sdk.md | -This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole) and the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager. +This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole). It also covers how to use the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services based on Azure Resource Manager. ## Prerequisites To deploy Cloud Services (extended support) by using the SDK: resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup); ``` -1. Create a storage account and container where you'll store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique. +1. Create a storage account and container where you store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique. ```csharp string storageAccountName = "ContosoSAS" To deploy Cloud Services (extended support) by using the SDK: 1. Create a role profile object. 
A role profile defines role-specific properties for a SKU, such as name, capacity, and tier. - This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role that's defined in the configuration (.cscfg) file and definition (.csdef) file. + This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role defined in the configuration (.cscfg) file and definition (.csdef) file. ```csharp CloudServiceRoleProfile cloudServiceRoleProfile = new CloudServiceRoleProfile() |
cloud-services-extended-support | Deploy Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md | To deploy Cloud Services (extended support) by using a template: ] ``` -1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you are deploying virtual networks or public IP addresses in your template. +1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you deploy virtual networks or public IP addresses in your template. ```json { To deploy Cloud Services (extended support) by using a template: } ``` -1. Deploy the template and parameter file (to define parameters in the template file) to create the Cloud Services (extended support) deployment. You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support). +1. To create the Cloud Services (extended support) deployment, deploy the template and parameter file (to define parameters in the template file). You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support). ```powershell New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file" |
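For the `dependsOn` references mentioned in the step above, the fragment inside the Cloud Services (extended support) object might look like this sketch (the parameter names are illustrative):

```json
"dependsOn": [
  "[resourceId('Microsoft.Network/virtualNetworks', parameters('vnetName'))]",
  "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
]
```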
cloud-services-extended-support | Enable Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-alerts.md | This article explains how to enable alerts on existing Cloud Service (extended s 4. Select the **New Alert** icon. :::image type="content" source="media/enable-alerts-2.png" alt-text="Image shows selecting the add new alert option."::: -5. Input the desired conditions and required actions based on what metrics you are interested in tracking. You can define the rules based on individual metrics or the activity log. +5. Input the desired conditions and required actions based on what metrics you want to track. You can define the rules based on individual metrics or the activity log. :::image type="content" source="media/enable-alerts-3.png" alt-text="Image shows where to add conditions to alerts."::: This article explains how to enable alerts on existing Cloud Service (extended s :::image type="content" source="media/enable-alerts-5.png" alt-text="Image shows configuring action group logic."::: -6. When you have finished setting up alerts, save the changes and based on the metrics configured you will begin to see the **Alerts** blade populate over time. +6. When you finish setting up alerts, save the changes. Based on the metrics you configured, the **Alerts** blade begins to populate over time. ## Next steps - Review [frequently asked questions](faq.yml) for Cloud Services (extended support). |
cloud-services-extended-support | Enable Key Vault Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md | -# Apply the Key Vault VM extension to Azure Cloud Services (extended support) +# Apply the Key Vault Virtual Machine (VM) extension to Azure Cloud Services (extended support) This article provides basic information about the Azure Key Vault VM extension for Windows and shows you how to enable it in Azure Cloud Services. The Key Vault VM extension provides automatic refresh of certificates stored in The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured key vault at a predefined polling interval and install them for the service to use. ## How can I use the Key Vault VM extension?-The following procedure will show you how to install the Key Vault VM extension on Azure Cloud Services by first creating a bootstrap certificate in your vault to get a token from Microsoft Entra ID. That token will help in the authentication of the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates will be pulled down automatically at regular polling intervals. +The following procedure shows you how to install the Key Vault VM extension on Azure Cloud Services by first creating a bootstrap certificate in your vault to get a token from Microsoft Entra ID. That token helps authenticate the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates are pulled down automatically at regular polling intervals. 
> [!NOTE] > The Key Vault VM extension downloads all the certificates in the Windows certificate store to the location provided by the `certificateStoreLocation` property in the VM extension settings. Currently, the Key Vault VM extension grants access to the private key of the certificate only to the local system admin account. |
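The `certificateStoreLocation` property mentioned in the note above sits in the extension's settings — a sketch of what the Key Vault VM extension settings might look like (the vault name, certificate URL, and polling interval are illustrative assumptions):

```json
{
  "secretsManagementSettings": {
    "pollingIntervalInS": "3600",
    "certificateStoreName": "MY",
    "certificateStoreLocation": "LocalMachine",
    "observedCertificates": [
      "https://contoskeyvault.vault.azure.net/secrets/ContosCert"
    ]
  }
}
```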
cloud-services-extended-support | Enable Rdp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-rdp.md | -The Azure portal uses the remote desktop extension to enable remote desktop even after the application is deployed. The remote desktop settings for your Cloud Service allows you to enable remote desktop, update the local administrator account, select the certificates used in authentication and set the expiration date for those certificates. +The Azure portal uses the remote desktop extension to enable remote desktop even after the application is deployed. The remote desktop settings for your Cloud Service allow you to enable remote desktop, update the local administrator account, select the certificates used in authentication, and set the expiration date for those certificates. ## Apply Remote Desktop extension 1. Navigate to the Cloud Service you want to enable remote desktop for and select **"Remote Desktop"** in the left navigation pane. The Azure portal uses the remote desktop extension to enable remote desktop even 2. Select **Add**. 3. Choose the roles to enable remote desktop for.-4. Fill in the required fields for user name, password and expiration. +4. Fill in the required fields for user name, password, and expiration. > [!NOTE] > The password for remote desktop must be between 8 and 123 characters long and must satisfy at least three of the following password complexity requirements: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed :::image type="content" source="media/remote-desktop-2.png" alt-text="Image shows inputting the information required to connect to remote desktop."::: -5. When finished, select **Save**. It will take a few moments before your role instances are ready to receive connections. +5. When finished, select **Save**. 
It takes a few moments before your role instances are ready to receive connections. ## Connect to role instances with Remote Desktop enabled Once remote desktop is enabled on the roles, you can initiate a connection directly from the Azure portal. -1. Click on **Roles and Instances** to open the instance settings. +1. Select **Roles and Instances** to open the instance settings. :::image type="content" source="media/remote-desktop-3.png" alt-text="Image shows selecting the roles and instances option in the configuration blade."::: 2. Select a role instance that has remote desktop configured.-3. Click **Connect** to download an remote desktop connection file. +3. Select **Connect** to download a remote desktop connection file. :::image type="content" source="media/remote-desktop-4.png" alt-text="Image shows selecting the worker role instance in the Azure portal."::: 4. Open the file to connect to the role instance. ## Update Remote Desktop Extension using PowerShell-Follow the below steps to update your cloud service to the latest module with an RDP extension +Follow these steps to update your cloud service to the latest module with a Remote Desktop Protocol (RDP) extension: 1. Update Az.CloudService module to the [latest version](https://www.powershellgallery.com/packages/Az.CloudService/0.5.0) |
cloud-services-extended-support | Enable Wad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-wad.md | Title: Apply the Windows Azure diagnostics extension in Cloud Services (extended support) -description: Apply the Windows Azure diagnostics extension for Cloud Services (extended support) + Title: Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support) +description: Apply the Microsoft Azure diagnostics extension for Cloud Services (extended support) Previously updated : 10/13/2020 Last updated : 07/24/2024 -# Apply the Windows Azure diagnostics extension in Cloud Services (extended support) -You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect additional points of data. For more information, see [Extensions Overview](extensions.md) +# Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support) +You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect more points of data. 
For more information, see [Extensions Overview](extensions.md) -Windows Azure Diagnostics extension can be enabled for Cloud Services (extended support) through [PowerShell](deploy-powershell.md) or [ARM template](deploy-template.md) +Microsoft Azure Diagnostics extension can be enabled for Cloud Services (extended support) through [PowerShell](deploy-powershell.md) or [ARM template](deploy-template.md) -## Apply Windows Azure Diagnostics extension using PowerShell +## Apply Microsoft Azure Diagnostics extension using PowerShell ```powershell # Create WAD extension object Download the public configuration file schema definition by executing the follow ```powershell (Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PublicWadConfig.xsd' ```-Here is an example of the public configuration XML file +Here's an example of the public configuration XML file ``` <?xml version="1.0" encoding="utf-8"?> <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> Download the private configuration file schema definition by executing the follo ```powershell (Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PrivateConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PrivateWadConfig.xsd' ```-Here is an example of the private configuration XML file +Here's an example of the private configuration XML file ``` <?xml version="1.0" encoding="utf-8"?> Here is an example of the private configuration XML file </PrivateConfig> ``` -## Apply Windows Azure Diagnostics extension using ARM template +## Apply Microsoft Azure Diagnostics extension using ARM template ```json "extensionProfile": { "extensions": [ |
cloud-services-extended-support | Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/extensions.md | Extensions are small applications that provide post-deployment configuration and ## Key Vault Extension -The Key Vault VM extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and upon detecting a change, retrieves, and installs the corresponding certificates. It also allows cross region/cross subscription reference of certificates for Cloud Service (extended support). +The Key Vault Virtual Machine (VM) extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults and, upon detecting a change, retrieves and installs the corresponding certificates. It also allows cross-region/cross-subscription reference of certificates for Cloud Service (extended support). For more information, see [Configure key vault extension for Cloud Service (extended support)](./enable-key-vault-virtual-machine.md) ## Remote Desktop extension -Remote Desktop enables you to access the desktop of a role running in Azure. You can use a remote desktop connection to troubleshoot and diagnose problems with your application while it is running. +Remote Desktop enables you to access the desktop of a role running in Azure. You can use a remote desktop connection to troubleshoot and diagnose problems with your application while it's running. You can enable a remote desktop connection in your role during development by including the remote desktop modules in your service definition or through the remote desktop extension. 
For more information, see [Configure remote desktop from the Azure portal](enable-rdp.md) -## Windows Azure Diagnostics extension +## Microsoft Azure Diagnostics extension -You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect additional points of data. +You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect more points of data. -With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data is not stored in your storage account and has no additional cost associated with it. +With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data isn't stored in your storage account and has no additional cost associated with it. -With advanced monitoring, additional metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured by role; you can use different storage accounts for different roles. +With advanced monitoring, more metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured per role; you can use different storage accounts for different roles. 
-For more information, see [Apply the Windows Azure diagnostics extension in Cloud Services (extended support)](enable-wad.md) +For more information, see [Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support)](enable-wad.md) ## Anti Malware Extension-An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Note that Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, hence these cmdlets can be used for configuring Antimalware. +An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older, which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, so these cmdlets can be used to configure Antimalware. For more information, see [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support) -To know more about Azure Antimalware, please visit [here](../security/fundamentals/antimalware.md) +To learn more about Azure Antimalware, see [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](../security/fundamentals/antimalware.md) |
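As a rough sketch of the PowerShell path mentioned above, enabling Antimalware with its default configuration can look like the following; this assumes the classic Azure Service Management PowerShell module, and the service name is a placeholder (check the linked antimalware samples for the exact cmdlet parameters):

```powershell
# Sketch only: assumes the classic Azure (Service Management) module and an authenticated session.
# 'ContosoCS' is a placeholder service name; calling the cmdlet with only -ServiceName
# enables Microsoft Antimalware with its default configuration.
Set-AzureServiceAntimalwareExtension -ServiceName 'ContosoCS'

# Inspect the current antimalware configuration for the service
Get-AzureServiceAntimalwareConfig -ServiceName 'ContosoCS'
```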
cloud-services-extended-support | Feature Support Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md | -This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, please visit the documentation [here](../virtual-machine-scale-sets/overview.md) +This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information, see the [Virtual Machine Scale Sets documentation](../virtual-machine-scale-sets/overview.md) ## Basic setup This article provides a feature analysis of Cloud Services (extended support) an ||||| |Virtual machine type|Basic Azure PaaS VM (Microsoft.compute/cloudServices)|Standard Azure IaaS VM (Microsoft.compute/virtualmachines)|Scale Set specific VMs (Microsoft.compute/virtualmachinescalesets/virtualmachines)| |Maximum Instance Count (with FD guarantees)|1100|1000|3000 (1000 per Availability Zone)|-|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported|All SKUs| +|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) aren't supported|All SKUs| |Full control over VM, NICs, Disks|Limited control over NICs and VM via CS-ES APIs. 
No support for Disks|Yes|Limited control with virtual machine scale sets VM API| |RBAC Permissions Required|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write| |Accelerated networking|No|Yes|Yes| |Spot instances and pricing|No|Yes, you can have both Spot and Regular priority instances|Yes, instances must either be all Spot or all Regular|-|Mix operating systems|Extremely limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system| +|Mix operating systems|Limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system| |Disk Types|No Disk Support|Managed disks only, all storage types|Managed and unmanaged disks, All Storage Types |Disk Server Side Encryption with Customer Managed Keys|No|Yes| | |Write Accelerator|No|No|Yes| This article provides a feature analysis of Cloud Services (extended support) an | Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) | ||||| |Availability SLA|[SLA](https://azure.microsoft.com/support/legal/sla/cloud-services/v1_5/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|-|Availability Zones|No|Specify instances land across 1, 2 or 3 availability zones|Specify instances land across 1, 2 or 3 availability zones| +|Availability Zones|No|Specify instances land across 1, 2, or 3 availability zones|Specify instances land across 1, 2, or 3 availability zones| |Assign VM to a Specific Availability Zone|No|Yes|No|-|Fault Domain – Max Spreading (Azure will maximally spread instances)|Yes|Yes|Yes| -|Fault Domain – Fixed Spreading|5 update domains|2-3 FDs (depending on regional maximum FD Count); 
1 for zonal deployments|2, 3 5 FDs 1, 5 for zonal deployments| +|Fault Domain – Max Spreading (Azure maximally spreads instances)|Yes|Yes|Yes| +|Fault Domain – Fixed Spreading|Five update domains|2-3 FDs (depending on regional maximum FD Count); 1 for zonal deployments|2, 3, or 5 FDs; 1 or 5 for zonal deployments| |Assign VM to a Specific Fault Domain|No|Yes|No|-|Update Domains|Yes|Depreciated (platform maintenance performed FD by FD)|5 update domains| +|Update Domains|Yes|Deprecated (platform maintenance performed FD by FD)|Five update domains| |Perform Maintenance|No|Trigger maintenance on each instance using VM API|Yes| |VM Deallocation|No|Yes|Yes| This article provides a feature analysis of Cloud Services (extended support) an |Infiniband Networking|No|No|Yes, single placement group only| |Azure Load Balancer Basic SKU|Yes|No|Yes| |Network Port Forwarding|Yes (NAT Pool for role instance input endpoints)|Yes (NAT Rules for individual instances)|Yes (NAT Pool)|-|Edge Sites|No|Yes|Yes| +|Edge Sites|No|Yes|Yes| |IPv6 Support|No|Yes|Yes| |Internal Load Balancer|No |Yes|Yes| |
cloud-services-extended-support | Generate Template Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/generate-template-portal.md | This article explains how to download the ARM template and parameter file from t ## Get ARM template via portal - 1. Go to the Azure portal and [create a new cloud service](deploy-portal.md). Add your cloud service configuration, package and definition files. + 1. Go to the Azure portal and [create a new cloud service](deploy-portal.md). Add your cloud service configuration, package, and definition files. :::image type="content" source="media/deploy-portal-4.png" alt-text="Image shows the upload section of the basics tab during creation."::: - 2. Once all fields have been completed, move to the Review and Create tab to validate your deployment configuration and click on **Download template for automation** your Cloud Service (extended support). + 2. Once you complete all fields, move to the Review and Create tab to validate your deployment configuration and select **Download template for automation** for your Cloud Service (extended support). :::image type="content" source="media/download-template-portal-1.png" alt-text="Image shows downloading the template under cloud service (extended support) on the Azure portal."::: 3. Download your template and parameter files. |
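Once downloaded, the template and parameter files can be deployed with Azure PowerShell; a minimal sketch, assuming the Az module and placeholder file and resource group names:

```powershell
# Sketch only: 'ContosoRG' and the file paths are placeholders.
# Deploys the template downloaded from the portal into an existing resource group.
New-AzResourceGroupDeployment -ResourceGroupName 'ContosoRG' `
    -TemplateFile '.\template.json' `
    -TemplateParameterFile '.\parameters.json'
```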
cloud-services-extended-support | In Place Migration Common Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-common-errors.md | The following issues are known and being addressed. | Known issues | Mitigation | |||-| Role Instances restarting UD by UD after successful commit. | Restart operation follows the same method as monthly guest OS rollouts. Do not commit migration of cloud services with single role instance or impacted by restart.| -| Azure portal cannot read migration state after browser refresh. | Rerun validate and prepare operation to get back to the original migration state. | +| Role Instances restarting UD by UD after successful commit. | Restart operation follows the same method as monthly guest OS rollouts. Don't commit migration of cloud services with a single role instance or ones impacted by restart.| +| Azure portal can't read migration state after browser refresh. | Rerun validate and prepare operation to get back to the original migration state. | | Certificate displayed as secret resource in key vault. | After migration, reupload the certificate as a certificate resource to simplify update operation on Cloud Services (extended support). | | Deployment labels not getting saved as tags as part of migration. | Manually create the tags after migration to maintain this information.-| Resource Group name is in all caps. | Non-impacting. Solution not yet available. | -| Name of the lock on Cloud Services (extended support) lock is incorrect. | Non-impacting. Solution not yet available. | -| IP address name is incorrect on Cloud Services (extended support) portal blade. | Non-impacting. Solution not yet available. | -| Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Non-impacting. Solution not yet available. | -| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable isn't allowed. 
| Do not link a new cloud service as swappable to a prepared cloud service. | -| Error messages need to be updated. | Non-impacting. | +| Resource Group name is in all caps. | Nonimpacting. Solution not yet available. | +| Name of the lock on Cloud Services (extended support) lock is incorrect. | Nonimpacting. Solution not yet available. | +| IP address name is incorrect on Cloud Services (extended support) portal blade. | Nonimpacting. Solution not yet available. | +| Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Nonimpacting. Solution not yet available. | +| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable isn't allowed. | Don't link a new cloud service as swappable to a prepared cloud service. | +| Error messages need to be updated. | Nonimpacting. | ## Common migration errors Common migration errors and mitigation steps. | Error message | Details | |||-| The resource type could not be found in the namespace `Microsoft.Compute` for api version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#setup-access-for-migration) for CloudServices feature flag to access public preview. | +| The resource type couldn't be found in the namespace `Microsoft.Compute` for API version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#set-up-access-for-migration) for CloudServices feature flag to access public preview. | | The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. | | The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. 
| -| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | +| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment isn't located in a virtual network. For more information, see [the Migration of deployments not in a virtual network section of Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network). | | Migration of deployment deployment-name in cloud service cloud-service-name isn't supported because it is in region region-name. Allowed regions: [list of available regions]. | Region isn't yet supported for migration. | -| The Deployment deployment-name in cloud service cloud-service-name cannot be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. | -| The deployment deployment-name in cloud service cloud-service-name cannot be migrated because the deployment requires at least one feature that not registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. Missing feature(s): [list of missing features]. | Contact support to get the feature flags registered. | -| The deployment cannot be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. Delete the other deployment in the cloud service to proceed with the migration of this deployment. 
| Refer to the [unsupported scenario](in-place-migration-technical-details.md#unsupported-configurations--migration-scenarios) list for more details. | -| Deployment deployment-name in HostedService cloud-service-name is in intermediate state: state. Migration not allowed. | Deployment is either being created, deleted or updated. Wait for the operation to complete and retry. | +| The Deployment deployment-name in cloud service cloud-service-name can't be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. | +| The deployment deployment-name in cloud service cloud-service-name can't be migrated because the deployment requires at least one feature that isn't registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. | Contact support to get the feature flags registered. | +| The deployment can't be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. To proceed with the migration of this deployment, delete the other deployment in the cloud service. | For more information, see the [unsupported scenario list](in-place-migration-technical-details.md#unsupported-configurations--migration-scenarios). | +| Deployment deployment-name in HostedService cloud-service-name is in intermediate state: state. Migration not allowed. | Deployment is either being created, deleted, or updated. Wait for the operation to complete and retry. | | The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) but no reserved IP name. To resolve this issue, update reserved IP name or contact the Microsoft Azure service desk. | Update cloud service deployment. 
| The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) reserved-ip-name but no endpoint on the reserved IP. To resolve this issue, add at least one endpoint to the reserved IP. | Add endpoint to reserved IP. | -| Migration of Deployment {0} in HostedService {1} is in the process of being committed and cannot be changed until it completes successfully. | Wait or retry operation. | -| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait or retry operation. | +| Migration of Deployment {0} in HostedService {1} is in the process of being committed and can't be changed until it completes successfully. | Wait or retry operation. | +| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and can't be changed until it completes successfully. | Wait or retry operation. | | One or more VMs in Deployment {0} in HostedService {1} is undergoing an update operation. It can't be migrated until the previous operation completes successfully. Retry after some time. | Wait for operation to complete. | -| Migration isn't supported for Deployment {0} in HostedService {1} because it uses following features not yet supported for migration: Non-vnet deployment.| Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | -| The virtual network name cannot be null or empty. | Provide virtual network name in the REST request body | -| The Subnet Name cannot be null or empty. | Provide subnet name in the REST request body. | +| Migration isn't supported for Deployment {0} in HostedService {1} because it uses the following features not yet supported for migration: non-virtual network deployment.| Deployment isn't located in a virtual network. 
For more information, see [the Migration of deployments not in a virtual network section of Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network). | +| The virtual network name can't be null or empty. | Provide the virtual network name in the REST request body. | +| The Subnet Name can't be null or empty. | Provide the subnet name in the REST request body. | | DestinationVirtualNetwork must be set to one of the following values: Default, New, or Existing. | Provide DestinationVirtualNetwork property in the REST request body. | -| Default VNet destination option not implemented. | "Default" value isn't supported for DestinationVirtualNetwork property in the REST request body. | -| The deployment {0} cannot be migrated because the CSPKG isn't available. | Upgrade the deployment and try again. | -| The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. | -| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. | -| Deployment {0} in HostedService {1} has not been prepared for Migration. | Run prepare on the cloud service before running the commit operation. | +| Default virtual network destination option not implemented. | "Default" value isn't supported for DestinationVirtualNetwork property in the REST request body. | +| The deployment {0} can't be migrated because the CSPKG isn't available. | Upgrade the deployment and try again. 
| +| The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. | +| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and can't be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. | +| Deployment {0} in HostedService {1} hasn't been prepared for Migration. | Run prepare on the cloud service before running the commit operation. | | UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). | -| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. 
| Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) | -|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration could not be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Please abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name'in the service definition file does not match the name 'service-name-in-config-file' in the service configuration file|match the service names in both .csdef and .cscfg file| +| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota is raised. | To request a quota increase, follow the appropriate channels: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) | +|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration couldn't be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name' in the service definition file doesn't match the name 'service-name-in-config-file' in the service configuration file.|Match the service names in both the .csdef and .cscfg files.| |NetworkingInternalOperationError when deploying Cloud Service (extended support) resource| The issue may occur if the service name is the same as the role name. The recommended remediation is to use different names for the service and roles.| ## Next steps |
cloud-services-extended-support | In Place Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md | This document provides an overview for migrating Cloud Services (classic) to Clo [Cloud Services (extended support)](overview.md) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some Azure Resource Manager capabilities such as role-based access control (RBAC), tags, policy, and supports deployment templates, private link. Both deployment models (extended support and classic) are available with [similar pricing structures](https://azure.microsoft.com/pricing/details/cloud-services/). -Cloud Services (extended support) supports two paths for customers to migrate from Azure Service Manager to Azure Resource +Cloud Services (extended support) supports two paths for customers to migrate from Azure Service Manager to Azure Resource Manager: redeploy and in-place migration. -The below table highlights comparison between these two options. +The following table highlights a comparison between these two options. | Redeploy | In-place migration |
| Platform deletes the Cloud Services (classic) resources after migration. | -| This is a lift and shift migration which offers more flexibility but requires additional time to migrate. | This is an automated migration which offers quick migration but less flexibility. | +| This migration is a lift and shift scenario, which offers more flexibility but requires more time to migrate. | This scenario is an automated migration that offers quick migration but less flexibility. | -When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support) you may want to investigate additional Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/overview-managed-cluster.md). These services will continue to feature additional capabilities, while Cloud Services (extended support) will primarily maintain feature parity with Cloud Services (classic.) +When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support), you may want to investigate other Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/overview-managed-cluster.md). These services continue to feature other capabilities, while Cloud Services (extended support) maintains feature parity with Cloud Services (classic). -Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application is not evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. 
Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements. +Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application isn't evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements. ## Redeploy Overview Redeploying your services with [Cloud Services (extended support)](overview.md) - There are no changes to the design, architecture, or components of web and worker roles. - No changes are required to runtime code as the data plane is the same as cloud services. - Azure GuestOS releases and associated updates are aligned with Cloud Services (classic). -- Underlying update process with respect to update domains, how upgrade proceeds, rollback, and allowed service changes during an update will not change.+- Underlying update process with respect to update domains, how upgrade proceeds, rollback, and allowed service changes during an update remains unchanged. A new Cloud Service (extended support) can be deployed directly in Azure Resource Manager using the following client tools: The platform supported migration provides following key benefits: - Enables seamless platform orchestrated migration with no downtime for most scenarios. Learn more about [supported scenarios](in-place-migration-technical-details.md). - Migrates existing cloud services in three simple steps: validate, prepare, commit (or abort). Learn more about how the [migration tool works](in-place-migration-overview.md#migration-steps).-- Provides the ability to test migrated deployments after successful preparation. 
Commit and finalize the migration while abort rolls back the migration.+- Offers testing for migrated deployments after successful preparation. Commit and finalize the migration while abort rolls back the migration. The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](../virtual-machines/migration-classic-resource-manager-overview.md). -## Setup access for migration +## Set up access for migration To perform this migration, you must be added as a coadministrator for the subscription and register the providers needed. 1. Sign in to the Azure portal. 2. On the Hub menu, select Subscription. If you don't see it, select All services.-3. Find the appropriate subscription entry, and then look at the MY ROLE field. For a coadministrator, the value should be Account admin. If you're not able to add a co-administrator, contact a service administrator or co-administrator for the subscription to get yourself added. +3. Find the appropriate subscription entry, and then look at the MY ROLE field. For a coadministrator, the value should be Account admin. If you're not able to add a coadministrator, contact a service administrator or coadministrator for the subscription to get yourself added. -4.
Register your subscription for Microsoft.ClassicInfrastructureMigrate namespace using [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell), or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli) ```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate The list of supported scenarios differs between Cloud Services (classic) and Vir Customers can migrate their Cloud Services (classic) deployments using the same four operations used to migrate Virtual Machines (classic). -1. **Validate Migration** - Validates that the migration will not be prevented by common unsupported scenarios. -2. **Prepare Migration** - Duplicates the resource metadata in Azure Resource Manager. All resources are locked for create/update/delete operations to ensure resource metadata is in sync across Azure Service Manager and Azure Resource Manager. All read operations will work using both Cloud Services (classic) and Cloud Services (extended support) APIs. +1. **Validate Migration** - Validates that common unsupported scenarios won't prevent migration. +2. **Prepare Migration** - Duplicates the resource metadata in Azure Resource Manager. All resources are locked for create/update/delete operations to ensure resource metadata is in sync across Azure Service Manager and Azure Resource Manager. All read operations work using both Cloud Services (classic) and Cloud Services (extended support) APIs. 3. **Abort Migration** - Removes resource metadata from Azure Resource Manager. Unlocks all resources for create/update/delete operations.-4.
**Commit Migration** - Removes resource metadata from Azure Service Manager. Unlocks the resource for create/update/delete operations. Abort is no longer allowed after a commit has been attempted. >[!NOTE] > Prepare, Abort, and Commit are idempotent; if an operation fails, a retry should fix the issue. For more information, see [Overview of Platform-supported migration of IaaS reso - Network Traffic Rules ## Supported configurations / migration scenarios-These are top scenarios involving combinations of resources, features, and Cloud Services. This list is not exhaustive. +The following table lists top scenarios involving combinations of resources, features, and Cloud Services. The list isn't exhaustive. | Service | Configuration | Comments | |||| | [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md) | Virtual networks that contain Microsoft Entra Domain Services. | A virtual network containing both a Cloud Service deployment and Microsoft Entra Domain Services is supported. The customer first needs to separately migrate Microsoft Entra Domain Services and then migrate the virtual network left only with the Cloud Service deployment. |
| +| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It isn't recommended to migrate staging slot as this process can result in issues with retaining service FQDN. To migrate staging slot, first promote staging deployment to production and then migrate to Azure Resource Manager. | +| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customer can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. | |Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All xml extensions are supported for migration -| Virtual Network | Virtual network containing multiple Cloud Services. | Virtual network contain multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it will be migrated together to Azure Resource Manager. | -| Virtual Network | Migration of virtual networks created via Portal (Requires using "Group Resource-group-name VNet-Name" in .cscfg file) | As part of migration, the virtual network name in cscfg will be changed to use Azure Resource Manager ID of the virtual network. (subscription/subscription-id/resource-group/resource-group-name/resource/vnet-name) <br><br>To manage the deployment after migration, update the local copy of .cscfg file to start using Azure Resource Manager ID instead of virtual network name. <br><br>A .cscfg file that uses the old naming scheme will not pass validation. +| Virtual Network | Virtual network containing multiple Cloud Services.
| A virtual network containing multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it migrate together to Azure Resource Manager. | +| Virtual Network | Migration of virtual networks created via Portal (Requires using "Group Resource-group-name VNet-Name" in .cscfg file) | As part of migration, the virtual network name in cscfg changes to use Azure Resource Manager ID of the virtual network. (subscription/subscription-id/resource-group/resource-group-name/resource/vnet-name) <br><br>To manage the deployment after migration, update the local copy of .cscfg file to start using Azure Resource Manager ID instead of virtual network name. <br><br>A .cscfg file that uses the old naming scheme fails validation. | Virtual Network | Migration of deployment with roles in different subnet. | A cloud service with different roles in different subnets is supported for migration. | ## Next steps |
cloud-services-extended-support | In Place Migration Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-portal.md | To perform this migration, you must be added as a coadministrator for the subscr 2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**. 3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. For a coadministrator, the value should be *Account admin*. -If you're not able to add a co-administrator, contact a service administrator or [co-administrator](../role-based-access-control/classic-administrators.md) for the subscription to get yourself added. +If you're not able to add a coadministrator, contact a service administrator or [coadministrator](../role-based-access-control/classic-administrators.md) for the subscription to get yourself added. **Sign up for Migration resource provider** 1. Register with the migration resource provider `Microsoft.ClassicInfrastructureMigrate` and preview feature `Cloud Services` under Microsoft.Compute namespace using the [Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1). -1. Wait five minutes for the registration to complete then check the status of the approval. +1. Wait five minutes for the registration to complete, then check the status of the approval. ## Migrate your Cloud Service resources If you're not able to add a co-administrator, contact a service administrator or :::image type="content" source="media/in-place-migration-portal-1.png" alt-text="Image shows the Migrate to ARM blade in the Azure portal."::: - If validate fails, a list of unsupported scenarios will be displayed and need to be fixed before migration can continue. + If validation fails, a list of unsupported scenarios displays. They need to be fixed before migration can continue. 
:::image type="content" source="media/in-place-migration-portal-3.png" alt-text="Image shows validation error in the Azure portal."::: 5. Prepare for the migration. - If the prepare is successful, the migration is ready for commit. + If the preparation is successful, the migration is ready for commit. :::image type="content" source="media/in-place-migration-portal-4.png" alt-text="Image shows validation passing in the Azure portal."::: - If the prepare fails, review the error, address any issues, and retry the prepare. + If the preparation fails, review the error, address any issues, and retry the preparation. :::image type="content" source="media/in-place-migration-portal-5.png" alt-text="Image shows validation failure error."::: If you're not able to add a co-administrator, contact a service administrator or >[!IMPORTANT] > Once you commit to the migration, there is no option to roll back. - Type in "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations". + Type in "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations. ## Next steps -Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment. +Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation, and other attributes of your new Cloud Services (extended support) deployment. |
cloud-services-extended-support | In Place Migration Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-powershell.md | -## 1) Plan for migration -Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) prior to beginning any migration steps. +## Plan for migration +Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) before beginning any migration steps. -## 2) Install the latest version of PowerShell +## Install the latest version of PowerShell There are two main options to install Azure PowerShell: [PowerShell Gallery](https://www.powershellgallery.com/profiles/azure-sdk/) or [Web Platform Installer (WebPI)](https://aka.ms/webpi-azps). WebPI receives monthly updates. PowerShell Gallery receives updates on a continuous basis. This article is based on Azure PowerShell version 2.1.0. For installation instructions, see [How to install and configure Azure PowerShell](/powershell/azure/servicemanagement/install-azure-ps?preserve-view=true&view=azuresmps-4.0.0). -## 3) Ensure Admin permissions +## Ensure Admin permissions To perform this migration, you must be added as a coadministrator for the subscription in the [Azure portal](https://portal.azure.com). 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**. 3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. 
For a coadministrator, the value should be *Account admin*. -If you're not able to add a co-administrator, contact a service administrator or co-administrator for the subscription to get yourself added. +If you're not able to add a coadministrator, contact a service administrator or coadministrator for the subscription to get yourself added. -## 4) Register the classic provider and CloudService feature +## Register the classic provider and CloudService feature First, start a PowerShell prompt. For migration, set up your environment for both classic and Resource Manager. Sign in to your account for the Resource Manager model. Check the status of the classic provider approval by using the following command: Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate ``` -Check the status of registration using the following: +Check the status of registration using the following command: + ```powershell Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute ``` Select-AzureSubscription -SubscriptionName "My Azure Subscription" ``` -## 5) Migrate your Cloud Services +## Migrate your Cloud Services Before starting the migration, understand how the [migration steps](./in-place-migration-overview.md#migration-steps) work and what each step does. -* [Migrate a Cloud Service not in a virtual network](#51-option-1migrate-a-cloud-service-not-in-a-virtual-network) -* [Migrate a Cloud Service in a virtual network](#51-option-2migrate-a-cloud-service-in-a-virtual-network) +* [Migrate a Cloud Service not in a virtual network](#option-1migrate-a-cloud-service-not-in-a-virtual-network) +* [Migrate a Cloud Service in a virtual network](#option-2migrate-a-cloud-service-in-a-virtual-network) > [!NOTE] > All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation.
The platform then tries the action again. -### 5.1) Option 1 - Migrate a Cloud Service not in a virtual network +### Option 1 - Migrate a Cloud Service not in a virtual network Get the list of cloud services by using the following command. Then pick the cloud service that you want to migrate. ```powershell If you're ready to complete the migration, commit the migration Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName ``` -### 5.1) Option 2 - Migrate a Cloud Service in a virtual network +### Option 2 - Migrate a Cloud Service in a virtual network To migrate a Cloud Service in a virtual network, you migrate the virtual network. The Cloud Service automatically migrates with the virtual network. If the prepared configuration looks good, you can move forward and commit the re Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName ``` - ## Next steps -Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment. +Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation, and other attributes of your new Cloud Services (extended support) deployment. |
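Putting the virtual network path together, the flow mirrors the validate, prepare, and commit operations described earlier. A minimal sketch, assuming the classic Azure PowerShell module is installed and you're signed in to both deployment models; the virtual network name `myVnet` is a placeholder:

```powershell
# Classic virtual network that contains the Cloud Service (placeholder name).
$vnetName = "myVnet"

# Validate: reports unsupported scenarios without changing anything.
$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName
$validate.ValidationMessages

# Prepare: duplicates metadata in Azure Resource Manager and locks the
# resources; the deployment keeps serving traffic while you test it.
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName

# After testing, either roll back:
#   Move-AzureVirtualNetwork -Abort -VirtualNetworkName $vnetName
# or finalize (abort is no longer possible after commit):
#   Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName
```

Because the Cloud Service migrates automatically with its virtual network, no separate `Move-AzureService` call is needed on this path.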
cloud-services-extended-support | In Place Migration Technical Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md | This article discusses the technical details regarding the migration tool as per ### Extensions and plugin migration -- All enabled and supported extensions will be migrated. +- All enabled and supported extensions are migrated. - Disabled extensions won't be migrated. -- Plugins are a legacy concept and should be removed before migration. They're supported for migration and but after migration, if extension needs to be enabled, plugin needs to be removed first before installing the extension. Remote desktop plugins and extensions are most impacted by this. +- Plugins are a legacy concept and should be removed before migration. They're supported for migration, but after migration, if extension needs to be enabled, the plugin needs to be removed before installing the extension. This limitation affects remote desktop plugins and extensions the most. ### Certificate migration - In Cloud Services (extended support), certificates are stored in a Key Vault. As part of migration, we create a Key Vault for the customers having the Cloud Service name and transfer all certificates from Azure Service Manager to Key Vault. This article discusses the technical details regarding the migration tool as per ### Service Configuration and Service Definition files - The .cscfg and .csdef files need to be updated for Cloud Services (extended support) with minor changes. -- The names of resources like virtual network and VM SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration)+- The names of resources like virtual network and virtual machine (VM) SKU are different. 
See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration) - Customers can retrieve their new deployments through [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) and [REST API](/rest/api/compute/cloudservices/get). ### Cloud Service and deployments-- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployment are no longer grouped into a cloud service using slots.+- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots. - If you have two slots in your Cloud Service (classic), you need to delete one slot (staging) and use the migration tool to move the other (production) slot to Azure Resource Manager. - The public IP address on the Cloud Service deployment remains the same after migration to Azure Resource Manager and is exposed as a Basic SKU IP (dynamic or static) resource. - The DNS name and domain (cloudapp.net) for the migrated cloud service remains the same. This article discusses the technical details regarding the migration tool as per ### Migration of deployments not in a virtual network - In late 2018, Azure started automatically creating new deployments (without customer specified virtual network) into a platform created "default" virtual network. These default virtual networks are hidden from customers. -- As part of the migration, this default virtual network will be exposed to customers once in Azure Resource Manager. To manage or update the deployment in Azure Resource Manager, customers need to add this virtual network information in the NetworkConfiguration section of the .cscfg file. +- As part of the migration, this default virtual network is exposed to customers once in Azure Resource Manager.
To manage or update the deployment in Azure Resource Manager, customers need to add this virtual network information in the NetworkConfiguration section of the .cscfg file. - The default virtual network, when migrated to Azure Resource Manager, is placed in the same resource group as the Cloud Service. - Cloud Services created before this time (before end of 2018) won't be in any virtual network and can't be migrated using the tool. Consider redeploying these Cloud Services directly in Azure Resource Manager. Another approach is to migrate by creating a new staging deployment and performing a VIP swap. Check more details [here](./non-vnet-migration.md)-- To check if a deployment is eligible to migrate, run the validate API on the deployment. The result of Validate API will contain error message explicitly mentioning if this deployment is eligible to migrate. +- To check if a deployment is eligible to migrate, run the validate API on the deployment. The result of the Validate API contains an error message explicitly stating whether this deployment is eligible to migrate. ### Load Balancer - For a Cloud Service using a public endpoint, a platform-created load balancer associated with the Cloud Service is exposed inside the customer's subscription in Azure Resource Manager. The load balancer is a read-only resource, and updates are restricted only through the Service Configuration (.cscfg) and Service Definition (.csdef) files. ### Key Vault-- As part of migration, Azure automatically creates a new Key Vault and migrates all the certificates to it. The tool does not allow you to use an existing Key Vault. +- As part of migration, Azure automatically creates a new Key Vault and migrates all the certificates to it. The tool doesn't allow you to use an existing Key Vault. - Cloud Services (extended support) requires a Key Vault located in the same region and subscription. This Key Vault is automatically created as part of the migration.
## Resources and features not available for migration-These are top scenarios involving combinations of resources, features and Cloud Services. This list isn't exhaustive. +This list contains the top scenarios involving combinations of resources, features, and Cloud Services. This list isn't exhaustive. | Resource | Next steps / work-around | ||| These are top scenarios involving combinations of resources, features and Cloud | Alerts | Migration goes through but alerts are dropped. [Recreate the rules](./enable-alerts.md) after migration on Cloud Services (extended support). | | VPN Gateway | Remove the VPN Gateway before beginning migration and then recreate the VPN Gateway once migration is complete. | | Express Route Gateway (in the same subscription as Virtual Network only) | Remove the Express Route Gateway before beginning migration and then recreate the Gateway once migration is complete. | -| Quota | Quota is not migrated. [Request new quota](../azure-resource-manager/templates/error-resource-quota.md#solution) on Azure Resource Manager prior to migration for the validation to be successful. | +| Quota | Quota isn't migrated. [Request new quota](../azure-resource-manager/templates/error-resource-quota.md#solution) on Azure Resource Manager prior to migration for the validation to be successful. | | Affinity Groups | Not supported. Remove any affinity groups before migration. | | Virtual networks using [virtual network peering](../virtual-network/virtual-network-peering-overview.md)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager and re-create peering. This can cause downtime depending on the architecture. 
| | Virtual networks that contain App Service environments | Not supported | These are top scenarios involving combinations of resources, features and Cloud | Configuration / Scenario | Next steps / work-around | |||-| Migration of some older deployments not in a virtual network | Some Cloud Service deployments not in a virtual network aren't supported for migration. <br><br> 1. Use the validate API to check if the deployment is eligible to migrate. <br> 2. If eligible, the deployments will be moved to Azure Resource Manager under a virtual network with prefix of "DefaultRdfeVnet" | +| Migration of some older deployments not in a virtual network | Some Cloud Service deployments not in a virtual network aren't supported for migration. <br><br> 1. Use the validate API to check if the deployment is eligible to migrate. <br> 2. If eligible, the deployments move to Azure Resource Manager under a virtual network with prefix of "DefaultRdfeVnet" | | Migration of deployments containing both production and staging slot deployment using dynamic IP addresses | Migration of a two slot Cloud Service requires deletion of the staging slot. Once the staging slot is deleted, migrate the production slot as an independent Cloud Service (extended support) in Azure Resource Manager. Then redeploy the staging environment as a new Cloud Service (extended support) and make it swappable with the first one. | | Migration of deployments containing both production and staging slot deployment using Reserved IP addresses | Not supported. | | Migration of production and staging deployment in different virtual network|Migration of a two slot cloud service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent cloud service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with swappable property enabled.
Deployment files of the old staging slot deployment can be reused to create this new swappable deployment. | | Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. | -| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).| +| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration then goes through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).| -| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. | +| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This causes downtime. | | Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes.
For more information, see [Available VM sizes](available-sizes.md)|-| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. | +| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This causes downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This causes downtime. | | Cloud Service in a virtual network but doesn't have an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime) | ## Translation of resources and naming convention post migration As part of migration, the resource names are changed, and few Cloud Services fea | Cloud Services (classic) <br><br> Resource name | Cloud Services (classic) <br><br> Syntax| Cloud Services (extended support) <br><br> Resource name| Cloud Services (extended support) <br><br> Syntax | ||||| | Cloud Service | `cloudservicename` | Not associated| Not associated |-| Deployment (portal created) <br><br> Deployment (non-portal created) | `deploymentname` | Cloud Services (extended support) | `cloudservicename` | +| Deployment (portal created) <br><br> Deployment (nonportal created) | `deploymentname` | Cloud Services (extended support) | `cloudservicename` | | Virtual Network | `vnetname` <br><br> `Group resourcegroupname vnetname` <br><br> Not associated | Virtual Network (not portal created) <br><br> Virtual Network (portal created) 
<br><br> Virtual Networks (Default) | `vnetname` <br><br> `group-resourcegroupname-vnetname` <br><br> `VNet-cloudservicename`| | Not associated | Not associated | Key Vault | `KV-cloudservicename` | | Not associated | Not associated | Resource Group for Cloud Service Deployments | `cloudservicename-migrated` | | Not associated | Not associated | Resource Group for Virtual Network | `vnetname-migrated` <br><br> `group-resourcegroupname-vnetname-migrated`| | Not associated | Not associated | Public IP (Dynamic) | `cloudservicenameContractContract` | -| Reserved IP Name | `reservedipname` | Reserved IP (non-portal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` | +| Reserved IP Name | `reservedipname` | Reserved IP (nonportal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` | | Not associated| Not associated | Load Balancer | `LB-cloudservicename`| As part of migration, the resource names are changed, and few Cloud Services fea - Contact support to help migrate or roll back the deployment from the backend. ### Migration failed in an operation. -- If validate failed, it is because the deployment or virtual network contains an unsupported scenario/feature/resource. Use the list of unsupported scenarios to find the work-around in the documents. +- If validation failed, it is because the deployment or virtual network contains an unsupported scenario/feature/resource. Use the list of unsupported scenarios to find the work-around in the documents. - Prepare operation first does validation including some expensive validations (not covered in validate). Prepare failure could be due to an unsupported scenario. Find the scenario and the work-around in the public documents. Abort needs to be called to go back to the original state and unlock the deployment for updates and delete operations. - If abort failed, retry the operation. 
If retries fail, then contact support.-- If commit failed, retry the operation. If retry fail, then contact support. Even in commit failure, there should be no data plane issue to your deployment. Your deployment should be able to handle customer traffic without any issue. +- If the commit failed, retry the operation. If retries fail, then contact support. Even if the commit fails, there should be no data plane issue with your deployment. Your deployment should be able to handle customer traffic without any issue. ### Portal refreshed after Prepare. Experience restarted and Commit or Abort not visible anymore. - Portal stores the migration information locally and therefore after refresh, it will start from validate phase even if the Cloud Service is in the prepare phase. As part of migration, the resource names are changed, and few Cloud Services fea - Customers can use PowerShell or REST API to abort or commit. ### How much time can the operations take?<br>-Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs. +Validate is designed to be quick. Prepare is the longest running operation, and its duration depends on the total number of role instances being migrated. Abort and commit can also take time, but less than prepare. All operations time out after 24 hours. ## Next steps For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page. |
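The Validate, Prepare, Commit, and Abort operations described above can be sketched in PowerShell. This is a minimal sketch assuming the classic Azure Service Management module's `Move-AzureService` cmdlet; the service and deployment names are placeholders, and parameter names should be verified against your installed module version.

```powershell
# Sketch of the Validate -> Prepare -> Commit/Abort flow (classic ASM module).
$serviceName = "ContosoCS"
$deploymentName = "ContosoDeployment"

# 1. Validate: quick check for unsupported scenarios; fix any reported issues first.
Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

# 2. Prepare: runs deeper (more expensive) validation and creates the Azure Resource
#    Manager side of the deployment. This is the longest-running step.
Move-AzureService -Prepare -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

# 3a. Commit once the prepared deployment looks healthy (irreversible) ...
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName

# 3b. ... or Abort to return to the original state and unlock the deployment
#     for update and delete operations.
# Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName
```

Abort is only available before Commit; after a successful Commit there is no rollback path.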
cloud-services-extended-support | Non Vnet Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/non-vnet-migration.md | Title: Migrate cloud services not in a virtual network to a virtual network -description: How to migrate non-vnet cloud services to a virtual network +description: How to migrate nonvnet cloud services to a virtual network Previously updated : 01/24/2024 Last updated : 07/24/2024 # Migrate cloud services not in a virtual network to a virtual network -Some legacy cloud services are still running without Vnet support. While there's a process for migrating directly through the portal, there are certain considerations that should be made prior to migration. This article walks you through the process of migrating a non Vnet supporting Cloud Service to a Vnet supporting Cloud Service. +Some legacy cloud services are still running without virtual network support. While there's a process for migrating directly through the portal, there are certain considerations to make before migration. This article walks you through the process of migrating a Cloud Service without virtual network support to one with virtual network support. ## Advantages of this approach Some legacy cloud services are still running without Vnet support. While there's ## Migration procedure using the Azure portal -1. Create a non vnet classic cloud service in the same region as the vnet you want to migrate to. In the Azure portal, select the 'Staging' drop-down. +1. Create a classic cloud service that isn't in a virtual network, in the same region as the virtual network you want to migrate to. In the Azure portal, select the 'Staging' drop-down. ![Screenshot of the staging drop-down in the Azure portal.](./media/vnet-migrate-staging.png) -1. Create a deployment with same configuration as existing deployment by selecting 'Upload' next to the staging drop-down. The platform creates a Default Vnet deployment in staging slot.
+1. Create a deployment with the same configuration as the existing deployment by selecting 'Upload' next to the staging drop-down. The platform creates a Default virtual network deployment in the staging slot. ![Screenshot of the upload button in the Azure portal.](./media/vnet-migrate-upload.png) 1. Once the staging deployment is created, the URL, IP address, and label populate. |
cloud-services-extended-support | Override Sku | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/override-sku.md | This article describes how to update the role size and instance count in Azure C ## Set the allowModelOverride property You can set the **allowModelOverride** property to `true` or `false`. -* When **allowModelOverride** is set to `true`, an API call will update the role size and instance count for the cloud service without validating the values with the .csdef and .cscfg files. +* When **allowModelOverride** is set to `true`, an API call updates the role size and instance count for the cloud service without validating the values with the .csdef and .cscfg files. > [!Note] > The .cscfg file will be updated to reflect the role instance count. The .csdef file (embedded within the .cspkg) will retain the old values. The default value is `false`. If the property is reset to `false` after being se The following samples show how to set the **allowModelOverride** property by using an Azure Resource Manager (ARM) template, PowerShell, or the SDK. 
### ARM template-Setting the **allowModelOverride** property to `true` here will update the cloud service with the role properties defined in the `roleProfile` section: +Setting the **allowModelOverride** property to `true` here updates the cloud service with the role properties defined in the `roleProfile` section: ```json "properties": { "packageUrl": "[parameters('packageSasUri')]", Setting the **allowModelOverride** property to `true` here will update the cloud ``` ### PowerShell-Setting the `AllowModelOverride` switch on the new `New-AzCloudService` cmdlet will update the cloud service with the SKU properties defined in the role profile: +Setting the `AllowModelOverride` switch on the new `New-AzCloudService` cmdlet updates the cloud service with the SKU properties defined in the role profile: ```powershell New-AzCloudService ` -Name "ContosoCS" ` New-AzCloudService ` -Tag $tag ``` ### SDK-Setting the `AllowModelOverride` variable to `true` will update the cloud service with the SKU properties defined in the role profile: +Setting the `AllowModelOverride` variable to `true` updates the cloud service with the SKU properties defined in the role profile: ```csharp CloudService cloudService = new CloudService |
cloud-services-extended-support | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/overview.md | -Cloud Services (extended support) is a new [Azure Resource Manager](../azure-resource-manager/management/overview.md) based deployment model for [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) product and is now generally available. Cloud Services (extended support) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some ARM capabilities such as role-based access and control (RBAC), tags, policy, and supports deployment templates. +Cloud Services (extended support) is a new [Azure Resource Manager](../azure-resource-manager/management/overview.md) based deployment model for the [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) product and is now generally available. Cloud Services (extended support) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some Azure Resource Manager capabilities, such as role-based access control (RBAC), tags, and policy, and supports deployment templates. -With this change, the Azure Service Manager based deployment model for Cloud Services will be renamed [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md). You will retain the ability to build and rapidly deploy your web and cloud applications and services. You will be able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs. +With this change, the Azure Service Manager based deployment model for Cloud Services is renamed to [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md).
You retain the ability to build and rapidly deploy your web and cloud applications and services. You're able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs. :::image type="content" source="media/inside-azure-for-iot.png" alt-text="YouTube video for Cloud Services (extended support)." link="https://youtu.be/H4K9xTUvNdw"::: -## What does not change +## What doesn't change - You create the code, define the configurations, and deploy it to Azure. Azure sets up the compute environment, runs your code then monitors and maintains it for you. - Cloud Services (extended support) also supports two types of roles, [web and worker](../cloud-services/cloud-services-choose-me.md). There are no changes to the design, architecture, or components of web and worker roles. -- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there is no change in the [formats](cloud-services-model-and-package.md). +- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there's no change in the [formats](cloud-services-model-and-package.md). - No changes are required to runtime code as data plane is the same and control plane is only changing. - Azure GuestOS releases and associated updates are aligned with Cloud Services (classic) - Underlying update process with respect to update domains, how upgrade proceeds, rollback and allowed service changes during an update don't change ## Changes in deployment model -Minimal changes are required to Service Configuration (.cscfg) and Service Definition (.csdef) files to deploy Cloud Services (extended support). No changes are required to runtime code. 
However, deployment scripts will need to be updated to call the new Azure Resource Manager based APIs. +Minimal changes are required to Service Configuration (.cscfg) and Service Definition (.csdef) files to deploy Cloud Services (extended support). No changes are required to runtime code. However, deployment scripts need to be updated to call the new Azure Resource Manager based APIs. :::image type="content" source="media/overview-image-1.png" alt-text="Image shows classic cloud service configuration with addition of template section. "::: The major differences between Cloud Services (classic) and Cloud Services (extended support) with respect to deployment are: -- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md), which is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. Service Configuration and Service definition file needs to be consistent with the [ARM Template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). This can be achieved either by [manually creating the ARM template](deploy-template.md) or using [PowerShell](deploy-powershell.md), [Portal](deploy-portal.md) and [Visual Studio](deploy-visual-studio.md). +- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md), which is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. 
The Service Configuration and Service Definition files need to be consistent with the [ARM Template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). This can be achieved by [manually creating the ARM template](deploy-template.md) or by using [PowerShell](deploy-powershell.md), the [Portal](deploy-portal.md), or [Visual Studio](deploy-visual-studio.md). -- Customers must use [Azure Key Vault](../key-vault/general/overview.md) to [manage certificates in Cloud Services (extended support)](certificates-and-key-vault.md). Azure Key Vault lets you securely store and manage application credentials such as secrets, keys and certificates in a central and secure cloud repository. Your applications can authenticate to Key Vault at run time to retrieve credentials. +- Customers must use [Azure Key Vault](../key-vault/general/overview.md) to [manage certificates in Cloud Services (extended support)](certificates-and-key-vault.md). Azure Key Vault lets you securely store and manage application credentials such as secrets, keys, and certificates in a central and secure cloud repository. Your applications can authenticate to Key Vault at run time to retrieve credentials. -- All resources deployed through the [Azure Resource Manager](../azure-resource-manager/templates/overview.md) must be inside a virtual network. Virtual networks and subnets are created in Azure Resource Manager using existing Azure Resource Manager APIs and will need to be referenced within the NetworkConfiguration section of the .cscfg when deploying Cloud Services (extended support). +- All resources deployed through the [Azure Resource Manager](../azure-resource-manager/templates/overview.md) must be inside a virtual network. Virtual networks and subnets are created in Azure Resource Manager using existing Azure Resource Manager APIs.
They need to be referenced within the NetworkConfiguration section of the .cscfg when deploying Cloud Services (extended support). -- Each cloud service (extended support) is a single independent deployment. Cloud services (extended support) does not support multiple slots within a single cloud service. +- Each cloud service (extended support) is a single independent deployment. Cloud Services (extended support) doesn't support multiple slots within a single cloud service. - VIP Swap capability may be used to swap between two cloud services (extended support). To test and stage a new release of a cloud service, deploy a cloud service (extended support) and tag it as VIP swappable with another cloud service (extended support) - Domain Name Service (DNS) label is optional for a cloud service (extended support). In Azure Resource Manager, the DNS label is a property of the Public IP resource associated with the cloud service. Cloud Services (extended support) provides two paths for you to migrate from [Az ### Additional migration options -When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support) you may want to investigate additional Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/service-fabric-overview.md). These services will continue to feature additional capabilities, while Cloud Services (extended support) will primarily maintain feature parity with Cloud Services (classic.) 
+When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support), you may want to investigate other Azure services such as [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/service-fabric-overview.md). These services continue to gain additional capabilities, while Cloud Services (extended support) primarily maintains feature parity with Cloud Services (classic). -Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application is not evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements. +Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application isn't evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, explore other Azure services to better address your current and future requirements. ## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). |
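The requirement above that Cloud Services (extended support) deployments reference an Azure Resource Manager virtual network in the .cscfg can be sketched as a `NetworkConfiguration` fragment. This is a minimal sketch; the subscription, resource group, virtual network, subnet, and role names are placeholders.

```xml
<!-- Sketch: NetworkConfiguration section of a .cscfg referencing an ARM virtual
     network by its full resource ID (all names below are placeholders). -->
<NetworkConfiguration>
  <VirtualNetworkSite name="/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name" />
  <AddressAssignments>
    <InstanceAddress roleName="WebRole1">
      <Subnets>
        <Subnet name="subnet-name" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```

When the virtual network is in the same resource group as the cloud service, the `name` attribute can instead be just the virtual network name.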
cloud-services-extended-support | Post Migration Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md | Title: Azure Cloud Services (extended support) post migration changes + Title: Azure Cloud Services (extended support) post-migration changes description: Overview of post migration changes after migrating to Cloud Services (extended support) The Cloud Services (classic) deployment is converted to a Cloud Services (extend ## Changes to deployment files -Minor changes are made to customer’s .csdef and .cscfg file to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. Post migration retrieves your new deployment files or update the existing files. This will be needed for update/delete operations. +Minor changes are made to the customer’s .csdef and .cscfg files to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. Post migration, retrieve your new deployment files or update the existing files; they're needed for update/delete operations. - Virtual Network uses full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name.
After migration, recreate the alerts.- - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates will be visible under settings on the tab called secrets. + - The Key Vault is created without any access policies. To view or manage your certificates, [create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault. Certificates are visible under settings on the tab called secrets. ## Changes to Certificate Management Post Migration -As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, PowerShell or REST API. +As a standard practice for managing your certificates, all the valid .pfx certificate files should be added to the certificate store in Key Vault, and updates work via any client: Portal, PowerShell, or REST API. Currently, the Azure portal does a validation for you to check if all the required certificates are uploaded in the certificate store in Key Vault and warns if a certificate isn't found. However, if you're planning to use certificates as secrets, then these certificates can't be validated for their thumbprint, and any update operation that involves the addition of secrets would fail via the Portal. Customers are recommended to use PowerShell or the REST API to continue updates involving secrets.
+If you published updates via Visual Studio directly, first download the latest .cscfg file from your deployment post-migration. Use this file as a reference to add network configuration details to your current .cscfg file in the Visual Studio project. Then build the solution and publish it. You might have to choose the Key Vault and resource group for this update. ## Next steps |
cloud-services-extended-support | Sample Create Cloud Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-create-cloud-service.md | |
cloud-services-extended-support | Sample Get Cloud Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-get-cloud-service.md | |
cloud-services-extended-support | Sample Reset Cloud Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-reset-cloud-service.md | These samples cover various ways to reset an existing Azure Cloud Service (exten $roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Invoke-AzCloudServiceReimage -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```-This command reimages 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. +This command reimages two role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. ## Reimage all roles of Cloud Service ```powershell This command reimages role instance named ContosoFrontEnd_IN_0 of cloud service $roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Invoke-AzCloudServiceRebuild -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```-This command rebuilds 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. +This command rebuilds two role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. ## Rebuild all roles of cloud service ```powershell This command rebuilds all role instances of cloud service named ContosoCS that b $roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Restart-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```-This command restarts 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. 
+This command restarts two role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg. ## Restart all roles of cloud service ```powershell |
cloud-services-extended-support | Sample Update Cloud Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-update-cloud-service.md | -Below set of commands adds a RDP extension to already existing cloud service named ContosoCS that belongs to the resource group named ContosOrg. +The following set of commands adds a Remote Desktop Protocol (RDP) extension to an existing cloud service named ContosoCS that belongs to the resource group named ContosOrg. ```powershell # Create RDP extension object $rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name "RDPExtension" -Credential $credential -Expiration $expiration -TypeHandlerVersion "1.2.1" $cloudService | Update-AzCloudService ``` ## Remove all extensions from a Cloud Service-Below set of commands removes all extensions from existing cloud service named ContosoCS that belongs to the resource group named ContosOrg. +The following set of commands removes all extensions from the existing cloud service named ContosoCS that belongs to the resource group named ContosOrg. ```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS" $cloudService | Update-AzCloudService ``` ## Remove the remote desktop extension from Cloud Service-Below set of commands removes RDP extension from existing cloud service named ContosoCS that belongs to the resource group named ContosOrg. +The following set of commands removes the RDP extension from the existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS" $cloudService | Update-AzCloudService ``` ## Scale-out / scale-in role instances-Below set of commands shows how to scale-out and scale-in role instance count for cloud service named ContosoCS that belongs to the resource group named ContosOrg. +The following set of commands shows how to scale out and scale in the role instance count for the cloud service named ContosoCS that belongs to the resource group named ContosOrg. ```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS" |
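The scale-out/scale-in pattern referenced above follows the same get/modify/update shape as the other samples. This is a sketch assuming the `Az.CloudService` module's flattened role profile object model (`RoleProfile.Role[n].SkuCapacity`); verify the property path against your installed module version.

```powershell
# Sketch: scale a role by editing SkuCapacity on the retrieved object,
# then push the change back with Update-AzCloudService.
$cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"

# Scale out: add one instance to the first role
$cloudService.RoleProfile.Role[0].SkuCapacity += 1
$cloudService | Update-AzCloudService

# Scale in: set an explicit lower instance count
$cloudService.RoleProfile.Role[0].SkuCapacity = 2
$cloudService | Update-AzCloudService
```

Because the whole object is sent on update, the instance count in the service's .cscfg is reconciled by the platform (see the **allowModelOverride** behavior described in the Override Sku article above).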
cloud-services-extended-support | Schema Cscfg File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-file.md | Title: Azure Cloud Services (extended support) Definition Schema (.cscfg File) | description: Information related to the definition schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 -The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is cscfg. +The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file and the virtual networking configuration file. The default extension for the service configuration file is .cscfg. -The service model is described by the [Cloud Service (extended support) definition schema](schema-csdef-file.md). +The [Cloud Service (extended support) definition schema](schema-csdef-file.md) describes the service model. By default, the Azure Diagnostics configuration schema file is installed to the `C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\<version>\schemas` directory. Replace `<version>` with the installed version of the [Azure SDK](https://azure.microsoft.com/downloads/). The basic format of the service configuration file is as follows.
``` ## Schema definitions-The following topics describe the schema for the `ServiceConfiguration` element: +The following articles describe the schema for the `ServiceConfiguration` element: - [Role Schema](schema-cscfg-role.md) - [NetworkConfiguration Schema](schema-cscfg-networkconfiguration.md) The following table describes the attributes of the `ServiceConfiguration` eleme | Attribute | Description | | | -- | |serviceName|Required. The name of the Cloud Service. The name given here must match the name specified in the service definition file.|-|osFamily|Optional. Specifies the Guest OS that will run on role instances in the Cloud Service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> If you do not include an `osFamily` value and you have not set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.| -|osVersion|Optional. Specifies the version of the Guest OS that will run on role instances in the Cloud Service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. 
When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and will be automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.| +|osFamily|Optional. Specifies the Guest OS that runs on role instances in the Cloud Service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> If you don't include an `osFamily` value and you haven't set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.| +|osVersion|Optional. Specifies the version of the Guest OS that runs on role instances in the Cloud Service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and are automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current, and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.| |schemaVersion|Optional.
Specifies the version of the Service Configuration schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side. For more information about schema and version compatibility, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).| The service configuration file must contain one `ServiceConfiguration` element. The `ServiceConfiguration` element may include any number of `Role` elements and zero or one `NetworkConfiguration` element. |
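Taken together, the `ServiceConfiguration` attributes described above can be sketched in a minimal service configuration file. The following is an illustrative sketch only; the service name, role name, instance count, and version values are placeholders, not values from this article:

```xml
<!-- Illustrative .cscfg sketch; names and version values are placeholders. -->
<ServiceConfiguration serviceName="ContosoService" osFamily="6" osVersion="*" schemaVersion="2015-04.2.6"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- osVersion="*" opts in to automatic Guest OS upgrades for the specified family. -->
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```

Note that `serviceName` must match the name in the service definition (.csdef) file, per the attribute table above.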
cloud-services-extended-support | Schema Cscfg Networkconfiguration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-networkconfiguration.md | Title: Azure Cloud Services (extended support) NetworkConfiguration Schema | Mic description: Information related to the network configuration schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 -The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and DNS values. These settings are optional for Cloud Services (classic). +The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and Domain Name System (DNS) values. These settings are optional for Cloud Services (classic). You can use the following resource to learn more about Virtual Networks and the associated schemas: The following table describes the child elements of the `NetworkConfiguration` e | Rule | Optional. Specifies the action that should be taken for a specified subnet range of IP addresses. The order of the rule is defined by a string value for the `order` attribute. The lower the rule number, the higher the priority. For example, rules could be specified with order numbers of 100, 200, and 300. The rule with the order number of 100 takes precedence over the rule that has an order of 200.<br /><br /> The action for the rule is defined by a string for the `action` attribute. Possible values are:<br /><br /> - `permit` – Specifies that only packets from the specified subnet range can communicate with the endpoint.<br />- `deny` – Specifies that access is denied to the endpoints in the specified subnet range.<br /><br /> The subnet range of IP addresses that are affected by the rule is defined by a string for the `remoteSubnet` attribute. The description for the rule is defined by a string for the `description` attribute.| | EndpointAcl | Optional.
Specifies the assignment of access control rules to an endpoint. The name of the role that contains the endpoint is defined by a string for the `role` attribute. The name of the endpoint is defined by a string for the `endpoint` attribute. The name of the set of `AccessControl` rules that should be applied to the endpoint is defined in a string for the `accessControl` attribute. More than one `EndpointAcl` element can be defined.| | DnsServer | Optional. Specifies the settings for a DNS server. You can specify settings for DNS servers without a Virtual Network. The name of the DNS server is defined by a string for the `name` attribute. The IP address of the DNS server is defined by a string for the `IPAddress` attribute. The IP address must be a valid IPv4 address.|-| VirtualNetworkSite | Mandatory. Specifies the name of the Virtual Network site in which you want deploy your Cloud Service. This setting does not create a Virtual Network Site. It references a site that has been previously defined in the network file for your Virtual Network. A Cloud Service (extended support) can only be a member of one Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.| -| InstanceAddress | Mandatory. Specifies the association of a role to a subnet or set of subnets in the Virtual Network. When you associate a role name to an instance address, you can specify the subnets to which you want this role to be associated. The `InstanceAddress` contains a Subnets element. The name of the role that is associated with the subnet or subnets is defined by a string for the `roleName` attribute.You need to specify one instance address for each role defined for your cloud service| +| VirtualNetworkSite | Mandatory. Specifies the name of the Virtual Network site in which you want to deploy your Cloud Service. This setting doesn't create a Virtual Network Site. It references a site previously defined in the network file for your Virtual Network.
A Cloud Service (extended support) can only be a member of one Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.| +| InstanceAddress | Mandatory. Specifies the association of a role to a subnet or set of subnets in the Virtual Network. When you associate a role name to an instance address, you can specify the subnets to which you want this role to be associated. The `InstanceAddress` contains a Subnets element. The name of the role that is associated with the subnet or subnets is defined by a string for the `roleName` attribute. You need to specify one instance address for each role defined for your cloud service.| | Subnet | Mandatory. Specifies the subnet that corresponds to the subnet name in the network configuration file. The name of the subnet is defined by a string for the `name` attribute.|-| ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. The allocation method for a reserved IP needs to be specified as `Static` for template and powershell deployments. Each deployment in a Cloud Service can be associated with only one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.| +| ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. The allocation method for a reserved IP needs to be specified as `Static` for template and PowerShell deployments. Each deployment in a Cloud Service can be associated with only one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.| ## See also [Cloud Service (extended support) Configuration Schema](schema-cscfg-file.md). |
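As an illustration of how the `NetworkConfiguration` child elements described above fit together, the following sketch shows one possible layout. All role, subnet, ACL, DNS, and reserved IP names, and all addresses, are hypothetical placeholders:

```xml
<!-- Illustrative sketch; every name and address below is a placeholder. -->
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="aclWeb">
      <!-- Lower order numbers take precedence over higher ones. -->
      <Rule action="permit" description="Allow corporate subnet" order="100" remoteSubnet="10.1.0.0/24" />
      <Rule action="deny" description="Deny everything else" order="200" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="WebRole1" endpoint="HttpIn" accessControl="aclWeb" />
  </EndpointAcls>
  <Dns>
    <DnsServers>
      <DnsServer name="dns1" IPAddress="10.1.0.4" />
    </DnsServers>
  </Dns>
  <VirtualNetworkSite name="VNet1" />
  <AddressAssignments>
    <!-- One InstanceAddress per role defined for the cloud service. -->
    <InstanceAddress roleName="WebRole1">
      <Subnets>
        <Subnet name="FrontEndSubnet" />
      </Subnets>
    </InstanceAddress>
    <ReservedIPs>
      <ReservedIP name="MyReservedIP" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```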
cloud-services-extended-support | Schema Cscfg Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-role.md | Title: Azure Cloud Services (extended support) Role Schema | Microsoft Docs description: Information related to the role schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 The `Role` element of the configuration file specifies the number of role instan For more information about the Azure Service Configuration Schema, see [Cloud Service (extended support) Configuration Schema](schema-cscfg-file.md). For more information about the Azure Service Definition Schema, see [Cloud Service (extended support) Definition Schema](schema-csdef-file.md). -## <a name="Role"></a> role element +## <a name="Role"></a> Role element The following example shows the `Role` element and its child elements. ```xml |
cloud-services-extended-support | Schema Csdef File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-file.md | Title: Azure Cloud Services (extended support) Definition Schema (csdef File) | description: Information related to the definition schema (csdef) for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 By default, the Azure Diagnostics configuration schema file is installed to the The default extension for the service definition file is csdef. ## Basic service definition schema-The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element which restricts which roles can communicate to specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element which contains customer defined health probes of endpoints. +The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition, and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element, which restricts which roles can communicate to specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element, which contains customer-defined health probes of endpoints. The basic format of the service definition file is as follows.
``` ## Schema definitions-The following topics describe the schema: +The following articles describe the schema: - [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md) - [WebRole Schema](schema-csdef-webrole.md) The following table describes the attributes of the `ServiceDefinition` element. | Attribute | Description | | -- | -- | | name |Required. The name of the service. The name must be unique within the service account.|-| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` – Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.| +| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` – Sends the update to each role instance in a sequential manner after the previous instance successfully accepts the update.| | schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side.| | upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed.
For more information, see [Update a Cloud Service role or deployment](sample-update-cloud-service.md) and [Manage the availability of virtual machines](../virtual-machines/availability.md). You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.| |
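The `ServiceDefinition` attributes in the table above can be sketched as a minimal service definition file. This is an illustrative sketch; the service name, role name, and schema version are placeholders:

```xml
<!-- Illustrative .csdef sketch; names and version values are placeholders. -->
<ServiceDefinition name="ContosoService" topologyChangeDiscovery="UpgradeDomainWalk"
    schemaVersion="2015-04.2.6" upgradeDomainCount="5"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- At least one WebRole or WorkerRole element is required; up to 25 roles total. -->
  <WebRole name="WebRole1">
    <!-- Role content (Sites, Endpoints, Certificates, and so on) goes here. -->
  </WebRole>
</ServiceDefinition>
```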
cloud-services-extended-support | Schema Csdef Loadbalancerprobe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-loadbalancerprobe.md | Title: Azure Cloud Services (extended support) Def. LoadBalancerProbe Schema | M description: Information related to the load balancer probe schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 -The load balancer probe is a customer defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` is not a standalone element; it is combined with the web role or worker role in a service definition file. A `LoadBalancerProbe` can be used by more than one role. +The load balancer probe is a customer-defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` isn't a standalone element; it's combined with the web role or worker role in a service definition file. More than one role can use a `LoadBalancerProbe`. The default extension for the service definition file is csdef. ## The function of a load balancer probe The Azure Load Balancer is responsible for routing incoming traffic to your role instances. The load balancer determines which instances can receive traffic by regularly probing each instance in order to determine the health of that instance. The load balancer probes every instance multiple times per minute. There are two different options for providing instance health to the load balancer – the default load balancer probe, or a custom load balancer probe, which is implemented by defining the LoadBalancerProbe in the csdef file.
If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role your website code typically runs in w3wp.exe which is not monitored by the Azure fabric or guest agent, which means failures in w3wp.exe (eg. HTTP 500 responses) is not be reported to the guest agent and the load balancer does not know to take that instance out of rotation. +The default load balancer probe utilizes the Guest Agent inside the virtual machine, which listens and responds with an HTTP 200 OK response only when the instance is in the Ready state (like when the instance isn't in the Busy, Recycling, Stopping, etc. states). If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role, your website code typically runs in w3wp.exe, which isn't monitored by the Azure fabric or guest agent, which means failures in w3wp.exe (for example, HTTP 500 responses) aren't reported to the guest agent, and the load balancer doesn't know to take that instance out of rotation.
This can be useful to implement your own logic to remove instances from load balancer rotation, for example returning a non-200 status if the instance is above 90% CPU. For web roles using w3wp.exe, this also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you do not define a LoadBalancerProbe in the csdef file, then the default load balancer behavior (as previously described) is be used. +The custom load balancer probe overrides the default guest agent probe and allows you to create your own custom logic to determine the health of the role instance. The load balancer regularly probes your endpoint (every 15 seconds, by default), and the instance is considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds). This can be useful to implement your own logic to remove instances from load balancer rotation, for example returning a non-200 status if the instance is above 90% CPU. For web roles using w3wp.exe, this also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you don't define a LoadBalancerProbe in the csdef file, then the default load balancer behavior (as previously described) is used. -If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When using the default load balancer probe, the instance is taken out of rotation prior to OnStop being called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you are using the OnStop event to clean up cache, stop service, or otherwise making changes that can affect the runtime behavior of your service, then you need to ensure that your custom load balancer probe logic removes the instance from rotation. 
+If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When you use the default load balancer probe, the instance is taken out of rotation before OnStop is called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you use the OnStop event to clean up caches, stop services, or otherwise make changes that can affect the runtime behavior of your service, then you need to ensure that your custom load balancer probe logic removes the instance from rotation. ## Basic service definition schema for a load balancer probe The basic format of a service definition file containing a load balancer probe is as follows. The following table describes the attributes of the `LoadBalancerProbe` element: | - | -- | --| | `name` | `string` | Required. The name of the load balancer probe. The name must be unique.| | `protocol` | `string` | Required. Specifies the protocol of the endpoint. Possible values are `http` or `tcp`. If `tcp` is specified, a received ACK is required for the probe to be successful. If `http` is specified, a 200 OK response from the specified URI is required for the probe to be successful.|-| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it is not allowed.<br /><br /> There is no default value.| -| `port` | `integer` | Optional. The port for communicating the probe. This is optional for any endpoint, as the same port will then be used for the probe. You can configure a different port for their probing, as well. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.| -| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status.
Typically, the interval is slightly less than half the allocated timeout period (in seconds) which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15, the minimum value is 5.| -| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response will result in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31, the minimum value is 11.| +| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it isn't allowed.<br /><br /> There's no default value.| +| `port` | `integer` | Optional. The port for communicating the probe. This attribute is optional for any endpoint, as the same port is used for the probe. You can also configure a different port for probing. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.| +| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds), which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15. The minimum value is 5.| +| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response results in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31. The minimum value is 11.| ## See also [Cloud Service (extended support) Definition Schema](schema-csdef-file.md). |
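The `LoadBalancerProbe` attributes in the table above can be sketched as follows. This is an illustrative sketch; the probe, role, endpoint names, and the health-check path are placeholders:

```xml
<!-- Illustrative sketch; probe, role, endpoint names, and path are placeholders. -->
<ServiceDefinition name="ContosoService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <LoadBalancerProbes>
    <!-- An HTTP probe succeeds only on a 200 OK response from the given path. -->
    <LoadBalancerProbe name="HealthProbe" protocol="http" path="/healthcheck"
        port="80" intervalInSeconds="15" timeoutInSeconds="31" />
  </LoadBalancerProbes>
  <WebRole name="WebRole1">
    <Endpoints>
      <!-- The endpoint opts in to the custom probe via the loadBalancerProbe attribute. -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" loadBalancerProbe="HealthProbe" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```

With this layout, returning a non-200 status from the probed path (for example, when the instance is overloaded) takes the instance out of rotation, as described above.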
cloud-services-extended-support | Schema Csdef Networktrafficrules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-networktrafficrules.md | Title: Azure Cloud Services (extended support) Def. NetworkTrafficRules Schema | description: Information related to the network traffic rules associated with Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 -The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` is not a standalone element; it is combined with two or more roles in a service definition file. +The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` isn't a standalone element; it's combined with two or more roles in a service definition file. The default extension for the service definition file is csdef. The basic format of a service definition file containing network traffic definit ``` ## Schema elements-The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this topic: +The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this article: [NetworkTrafficRules Element](#NetworkTrafficRules) The `NetworkTrafficRules` element specifies which roles can communicate with whi The `OnlyAllowTrafficTo` element describes a collection of destination endpoints and the roles that can communicate with them. You can specify multiple `OnlyAllowTrafficTo` nodes. 
## <a name="Destinations"></a> Destinations element-The `Destinations` element describes a collection of RoleEndpoints than can be communicated with. +The `Destinations` element describes a collection of RoleEndpoints that can be communicated with. ## <a name="RoleEndpoint"></a> RoleEndpoint element The `RoleEndpoint` element describes an endpoint on a role to allow communications with. You can specify multiple `RoleEndpoint` elements if there is more than one endpoint on the role. The `RoleEndpoint` element describes an endpoint on a role to allow communicatio The `AllowAllTraffic` element is a rule that allows all roles to communicate with the endpoints defined in the `Destinations` node. ## <a name="WhenSource"></a> WhenSource element-The `WhenSource` element describes a collection of roles than can communicate with the endpoints defined in the `Destinations` node. +The `WhenSource` element describes a collection of roles that can communicate with the endpoints defined in the `Destinations` node. | Attribute | Type | Description | | | -- | -- | |
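The `NetworkTrafficRules` elements described above can be sketched as follows. This is an illustrative sketch under the assumption of two roles; the role and endpoint names are placeholders:

```xml
<!-- Illustrative sketch; role and endpoint names are placeholders. -->
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <!-- The internal endpoints being protected. -->
    <Destinations>
      <RoleEndpoint endpointName="InternalBackend" roleName="WorkerRole1" />
    </Destinations>
    <!-- Only the listed source roles may reach the destinations above. -->
    <WhenSource matches="AnyRule">
      <FromRole roleName="WebRole1" />
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
```

Replacing the `WhenSource` node with `AllowAllTraffic` would instead let every role in the service reach the listed destinations.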
cloud-services-extended-support | Schema Csdef Webrole | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-webrole.md | Title: Azure Cloud Services (extended support) Def. WebRole Schema | Microsoft D description: Information related to the web role for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 The basic format of a service definition file containing a web role is as follow ``` ## Schema elements -The service definition file includes these elements, described in detail in subsequent sections in this topic: +The service definition file includes these elements, described in detail in subsequent sections in this article: [WebRole](#WebRole) The name of the directory allocated to the local storage resource corresponds to ## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements. -Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles. +Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, or you can allocate 25 input endpoints to a single role, or you can allocate one input endpoint each to 25 roles.
The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than is provided by the default provisioning, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota. The following table describes the attributes of the `InputEndpoint` element. |protocol|string|Required. The transport protocol for the external endpoint. For a web role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.| -|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.| -|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service.
The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.| +|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate to an internal component on a port that is different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.| +|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored, and the load balancer won't remove the endpoint. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.| |loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint -The `InternalEndpoint` element describes an internal endpoint to a web role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. Web roles that do not include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint. +The `InternalEndpoint` element describes an internal endpoint to a web role.
An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. Web roles that don't include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint. The following table describes the attributes of the `InternalEndpoint` element. | | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol, any port is allowed.| -|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.| +|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address and the port used on the private IP address. Typically, these values are set to the same value, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.| ## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a web role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer.
Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element. The following table describes the attributes of the `InstanceInputEndpoint` elem | Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.| -|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.| +|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom -The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element. +The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element. The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher. The following table describes the attributes of the `FixedPort` element. | Attribute | Type | Description | | | - | -- | -|port|int|Required. The port for the internal endpoint. 
This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| +|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| ## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint. The following table describes the attributes of the `Certificate` element. | Attribute | Type | Description | | | - | -- | -|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.| +|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.| |storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. 
The default value is `limitedOrElevated`.| The following table describes the attributes of the `Import` element. | Attribute | Type | Description | | | - | -- | -|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.| +|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.| ## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a web role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block. The following table describes the attributes of the `NetFxEntryPoint` element. | Attribute | Type | Description | | | - | -- | -|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (do not specify **\\%ROLEROOT%\Approot** in `commandLine`, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. 
The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly cannot be found relative to **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** is searched.<br /><br /> This fall back behavior for full IIS is not a recommend best practice and maybe removed in future versions.| +|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles, the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly can't be found relative to the **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** folder is searched.<br /><br /> This fallback behavior for full IIS isn't a recommended best practice and may be removed in future versions.| |targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="Sites"></a> Sites -The `Sites` element describes a collection of websites and web applications that are hosted in a web role. This element is the parent of the `Site` element. If you do not specify a `Sites` element, your web role is hosted as legacy web role and you can only have one website hosted in your web role. This element is optional and a role can have only one sites block. +The `Sites` element describes a collection of websites and web applications that are hosted in a web role.
This element is the parent of the `Site` element. If you don't specify a `Sites` element, your web role is hosted as a legacy web role, and you can only have one website hosted in your web role. This element is optional and a role can have only one sites block. The `Sites` element is only available using the Azure SDK version 1.3 or higher. The following table describes the attributes of the `VirtualApplication` element. | Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual application.| -|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.| +|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. When deployed to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.| ## <a name="VirtualDirectory"></a> VirtualDirectory The `VirtualDirectory` element specifies a directory name (also referred to as path) that you specify in IIS and map to a physical directory on a local or remote server. The following table describes the attributes of the `VirtualDirectory` element. | Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual directory.| -|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents.
In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.| +|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents. In the compute emulator, IIS is configured to retrieve content from this location. When deployed to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.| ## <a name="Bindings"></a> Bindings -The `Bindings` element describes a collection of bindings for a website. It is the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](../cloud-services/cloud-services-enable-communication-role-instances.md). +The `Bindings` element describes a collection of bindings for a website. It's the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](../cloud-services/cloud-services-enable-communication-role-instances.md). The `Bindings` element is only available using the Azure SDK version 1.3 or higher. The following table describes the attributes of the `Task` element. | Attribute | Type | Description | | | - | -- | -|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.| +|commandLine|string|Required. 
A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file don't process properly.| |executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] - Run with the same privileges as the role hosting the process.<br />- `elevated` - Run with administrator privileges.| -|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] - System waits for the task to exit before any other tasks are launched.<br />- `background` - System does not wait for the task to exit.<br />- `foreground` - Similar to background, except role is not restarted until all foreground tasks exit.| +|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] - System waits for the task to exit before any other tasks are launched.<br />- `background` - System doesn't wait for the task to exit.<br />- `foreground` - Similar to background, except the role isn't restarted until all foreground tasks exit.| ## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a web role. This element is the parent of the `Content` element. The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content -The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied. +The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied. The `Content` element is only available using the Azure SDK version 1.5 or higher. The following table describes the attributes of the `SourceDirectory` element.
| Attribute | Type | Description | | | - | -- | -|path|string|Required. Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.| +|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.| -## See also +## Next steps [Cloud Service (extended support) Definition Schema](schema-csdef-file.md).---- |
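The endpoint, certificate, and startup elements described in the WebRole article above fit together roughly as follows. This is a minimal, hypothetical sketch, not an example from the article: the role name, site name, ports, and certificate name are placeholders.

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <!-- Sites: full IIS hosting; the Binding ties the site to a named endpoint -->
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpsIn" endpointName="HttpsIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <!-- An HTTPS input endpoint must name a Certificate element -->
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="MySslCert" />
      <!-- Internal endpoints are reachable only by other role instances -->
      <InternalEndpoint name="InternalTcp" protocol="tcp" port="11000" />
    </Endpoints>
    <Certificates>
      <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated" />
    </Certificates>
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <Startup>
      <!-- Startup scripts must be saved in ANSI format (no byte-order marker) -->
      <Task commandLine="setup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```

Omitting the `Sites` element would host the role as a legacy web role with a single website, as the article notes.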
cloud-services-extended-support | Schema Csdef Workerrole | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-workerrole.md | Title: Azure Cloud Services (extended support) Def. WorkerRole Schema | Microsof description: Information related to the worker role schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024 The basic format of the service definition file containing a worker role is as f ``` ## Schema elements-The service definition file includes these elements, described in detail in subsequent sections in this topic: +The service definition file includes these elements, described in detail in subsequent sections in this article: [WorkerRole](#WorkerRole) The name of the directory allocated to the local storage resource corresponds to ## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements. -Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles. +Input and internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, you can allocate 25 input endpoints to a single role, or you can allocate one input endpoint each to 25 roles. > [!NOTE] > Each role deployed requires one instance per role.
The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than the default provisioning provides, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota. The following table describes the attributes of the `InputEndpoint` element. |protocol|string|Required. The transport protocol for the external endpoint. For a worker role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.|-|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.| -|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service.
The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.| +|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate with an internal component on a port that differs from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.| +|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint won't be removed by the load balancer. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.| |loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint-The `InternalEndpoint` element describes an internal endpoint to a worker role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints. +The `InternalEndpoint` element describes an internal endpoint to a worker role.
An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints. The following table describes the attributes of the `InternalEndpoint` element. | | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol and any port are allowed.|-|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.| +|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address and the port used on the private IP address. Typically, these values are set to the same port, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.| ## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a worker role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer.
Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element. The following table describes the attributes of the `InstanceInputEndpoint` elem | Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.|-|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.| +|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom-The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element. +The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element. The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher. The following table describes the attributes of the `FixedPort` element. | Attribute | Type | Description | | | - | -- |-|port|int|Required. The port for the internal endpoint. 
This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| +|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| ## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint. The following table describes the attributes of the `Certificate` element. | Attribute | Type | Description | | | - | -- |-|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.| +|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.| |storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. 
The default value is `limitedOrElevated`.| The following table describes the attributes of the `Import` element. | Attribute | Type | Description | | | - | -- |-|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance| +|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.| ## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a worker role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block. The following table describes the attributes of the `NetFxEntryPoint` element. | Attribute | Type | Description | | | - | -- |-|assemblyName|string|Required.
The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.| |targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="ProgramEntryPoint"></a> ProgramEntryPoint-The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that is not based on a .NET assembly. +The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that isn't based on a .NET assembly. > [!NOTE] > The `ProgramEntryPoint` element is only available using the Azure SDK version 1.5 or higher. The following table describes the attributes of the `ProgramEntryPoint` element. | Attribute | Type | Description | | | - | -- |-|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (do not specify **%ROLEROOT%\Approot** in commandLine, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.| -|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal it is started. 
This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.| +|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (don't specify **%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.| +|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal when it starts. This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.| ## <a name="Startup"></a> Startup The `Startup` element describes a collection of tasks that run when the role is started. This element can be the parent of the `Variable` element. For more information about using the role startup tasks, see [How to configure startup tasks](../cloud-services/cloud-services-startup-tasks.md). This element is optional and a role can have only one startup block. The following table describes the attributes of the `Task` element. | Attribute | Type | Description | | | - | -- |-|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.| +|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. 
File formats that set a byte-order marker at the start of the file don't process properly.| |executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] - Run with the same privileges as the role hosting the process.<br />- `elevated` - Run with administrator privileges.|-|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] - System waits for the task to exit before any other tasks are launched.<br />- `background` - System does not wait for the task to exit.<br />- `foreground` - Similar to background, except role is not restarted until all foreground tasks exit.| +|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] - System waits for the task to exit before any other tasks are launched.<br />- `background` - System doesn't wait for the task to exit.<br />- `foreground` - Similar to background, except the role isn't restarted until all foreground tasks exit.| ## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a worker role. This element is the parent of the `Content` element. The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content-The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied. +The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied. The `Content` element is only available using the Azure SDK version 1.5 or higher. The following table describes the attributes of the `SourceDirectory` element. | Attribute | Type | Description | | | - | -- |-|path|string|Required.
Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.| +|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.| ## See also [Cloud Service (extended support) Definition Schema](schema-csdef-file.md). |
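The instance input endpoint and entry point elements described in the WorkerRole article above can be sketched as follows. This is a minimal, hypothetical example, not taken from the article: the role name, ports, and command line are placeholders.

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole">
    <Endpoints>
      <!-- Each role instance is assigned one public (VIP) port from the fixed range;
           the load balancer forwards it to localPort on that specific instance -->
      <InstanceInputEndpoint name="InstanceRdp" protocol="tcp" localPort="3389">
        <AllocatePublicPortFrom>
          <FixedPortRange min="10100" max="10120" />
        </AllocatePublicPortFrom>
      </InstanceInputEndpoint>
    </Endpoints>
    <Runtime>
      <EntryPoint>
        <!-- ProgramEntryPoint runs a non-.NET program; the path is relative to %ROLEROOT%\Approot -->
        <ProgramEntryPoint commandLine="node server.js" setReadyOnProcessStart="true" />
      </EntryPoint>
    </Runtime>
  </WorkerRole>
</ServiceDefinition>
```

As the article notes, if the program named in `commandLine` exits, the role instance is recycled, so the program should run continuously rather than perform a finite task.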
cloud-services-extended-support | States | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/states.md | This table lists the different power states for Cloud Services (extended support |Started|The Role Instance is healthy and is currently running| |Stopping|The Role Instance is in the process of getting stopped| |Stopped|The Role Instance is in the Stopped State|-|Unknown|The Role Instance is either in the process of creating or is not ready to service the traffic| +|Unknown|The Role Instance is either in the process of being created or isn't ready to serve traffic| |Starting|The Role Instance is in the process of moving to healthy/running state|-|Busy|The Role Instance is not responding| +|Busy|The Role Instance isn't responding| |Destroyed|The Role instance is destroyed| |
cloud-services-extended-support | Support Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/support-help.md | |
cloud-services-extended-support | Swap Cloud Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/swap-cloud-service.md | -After you swap the deployments, you can stage and test your new release by using the new cloud service deployment. In effect, swapping promotes a new cloud service that's staged to production release. +After you swap the deployments, you can stage and test your new release by using the new cloud service deployment. In effect, swapping promotes a new cloud service staged to production release. > [!NOTE] > You can't swap between an Azure Cloud Services (classic) deployment and an Azure Cloud Services (extended support) deployment. -You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. Once the second pair of cloud service is deployed, it can not be made swappable with an existing cloud service in subsequent updates. +You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. Once the second cloud service of the pair is deployed, it can't be made swappable with an existing cloud service in subsequent updates. You can swap the deployments by using an Azure Resource Manager template (ARM template), the Azure portal, or the REST API. -Upon deployment of the second cloud service, both the cloud services have their SwappableCloudService property set to point to each other. Any subsequent update to these cloud services will need to specify this property failing which an error will be returned indicating that the SwappableCloudService property cannot be deleted or updated. +Upon deployment of the second cloud service, both the cloud services have their SwappableCloudService property set to point to each other. 
Any subsequent update to these cloud services needs to specify this property, failing which an error is returned indicating that the SwappableCloudService property can't be deleted or updated. -Once set, the SwappableCloudService property is treated as readonly. It cannot be deleted or changed to another value. Deleting one of the cloud services (of the swappable pair) will result in the SwappableCloudService property of the remaining cloud service being cleared. +Once set, the SwappableCloudService property is treated as readonly. It can't be deleted or changed to another value. Deleting one of the cloud services (of the swappable pair) results in the SwappableCloudService property of the remaining cloud service being cleared. ## ARM template To swap a deployment in the Azure portal: :::image type="content" source="media/swap-cloud-service-portal-confirm.png" alt-text="Screenshot that shows confirming the deployment swap information."::: -Deployments swap quickly because the only thing that changes is the virtual IP address for the cloud service that's deployed. +Deployments swap quickly because the only thing that changes is the virtual IP address for the deployed cloud service. To save compute costs, you can delete one of the cloud services (designated as a staging environment for your application deployment) after you verify that your swapped cloud service works as expected. |
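In an ARM template, the SwappableCloudService property sits under the cloud service's network profile. A hedged sketch of the relevant fragment (the resource names and apiVersion are placeholders, and the other required cloud service properties are omitted for brevity):

```json
{
  "type": "Microsoft.Compute/cloudServices",
  "apiVersion": "2022-09-04",
  "name": "mySecondCloudService",
  "properties": {
    "networkProfile": {
      "swappableCloudService": {
        "id": "[resourceId('Microsoft.Compute/cloudServices', 'myFirstCloudService')]"
      }
    }
  }
}
```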
cloud-services | Cloud Services Role Config Xpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-config-xpath.md | description: The various XPath settings you can use in the cloud service role co Previously updated : 02/21/2023 Last updated : 07/23/2024 Retrieves the endpoint port for the instance. | Code |var port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint.Port; | ## Example-Here is an example of a worker role that creates a startup task with an environment variable named `TestIsEmulated` set to the [@emulated xpath value](#app-running-in-emulator). +Here's an example of a worker role that creates a startup task with an environment variable named `TestIsEmulated` set to the [@emulated xpath value](#app-running-in-emulator). ```xml <WorkerRole name="Role1"> |
cloud-services | Cloud Services Role Enable Remote Desktop New Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md | Title: Use the portal to enable Remote Desktop for a Role -description: How to configure your azure cloud service application to allow remote desktop connections +description: How to configure your Azure cloud service application to allow remote desktop connections through the Azure portal. Previously updated : 02/21/2023 Last updated : 07/23/2024 -Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running. +Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs. -You can enable a Remote Desktop connection in your role during development by including the Remote Desktop modules in your service definition or you can choose to enable Remote Desktop through the Remote Desktop Extension. The preferred approach is to use the Remote Desktop extension as you can enable Remote Desktop even after the application is deployed without having to redeploy your application. +You can enable a Remote Desktop connection in your role during development by including the Remote Desktop modules in your service definition. Alternatively, you can choose to enable Remote Desktop through the Remote Desktop extension. The preferred approach is to use the Remote Desktop extension, as you can enable Remote Desktop even after the application is deployed without having to redeploy your application. ## Configure Remote Desktop from the Azure portal -The Azure portal uses the Remote Desktop Extension approach so you can enable Remote Desktop even after the application is deployed. 
The **Remote Desktop** settings for your cloud service allows you to enable Remote Desktop, change the local Administrator account used to connect to the virtual machines, the certificate used in authentication and set the expiration date. +The Azure portal uses the Remote Desktop Extension approach so you can enable Remote Desktop even after the application is deployed. The **Remote Desktop** setting for your cloud service allows you to enable Remote Desktop, change the local Administrator account used to connect to the virtual machines, update the certificate used in authentication, and set the expiration date. -1. Click **Cloud Services**, select the name of the cloud service, and then select **Remote Desktop**. +1. Select **Cloud Services**, select the name of the cloud service, and then select **Remote Desktop**. ![image shows Cloud services remote desktop](./media/cloud-services-role-enable-remote-desktop-new-portal/CloudServices_Remote_Desktop.png) 4. In **Roles**, select the role you want to update or select **All** for all roles. -5. When you finish your configuration updates, select **Save**. It will take a few moments before your role instances are ready to receive connections. +5. When you finish your configuration updates, select **Save**. It takes a few moments before your role instances are ready to receive connections. ## Remote into role instances Once Remote Desktop is enabled on the roles, you can initiate a connection directly from the Azure portal: -1. Click **Instances** to open the **Instances** settings. -2. Select a role instance that has Remote Desktop configured. -3. Click **Connect** to download an RDP file for the role instance. +1. Select **Instances** to open the **Instances** settings. +2. Choose a role instance that has Remote Desktop configured. +3. Select **Connect** to download a Remote Desktop Protocol (RDP) file for the role instance. 
![Cloud services remote desktop image](./media/cloud-services-role-enable-remote-desktop-new-portal/CloudServices_Remote_Desktop_Connect.png) -4. Click **Open** and then **Connect** to start the Remote Desktop connection. +4. Choose **Open** and then **Connect** to start the Remote Desktop connection. >[!NOTE] > If your cloud service is sitting behind a network security group (NSG), you may need to create rules that allow traffic on ports **3389** and **20000**. Remote Desktop uses port **3389**. Cloud Service instances are load balanced, so you can't directly control which instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents manage RDP traffic and allow the client to send an RDP cookie and specify an individual instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents require that port **20000** is open, which may be blocked if you have an NSG. -## Additional resources +## Next steps -[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) +* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) |
cloud-services | Cloud Services Role Enable Remote Desktop Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md | Title: Use PowerShell to enable Remote Desktop for a Role -description: How to configure your azure cloud service application using PowerShell to allow remote desktop connections +description: How to configure your Azure cloud service application using PowerShell to allow remote desktop connections through PowerShell. Previously updated : 02/21/2023 Last updated : 07/23/2024 -Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running. +Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs. This article describes how to enable remote desktop on your Cloud Service Roles using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. PowerShell utilizes the Remote Desktop Extension so you can enable Remote Desktop after the application is deployed. ## Configure Remote Desktop from PowerShell The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/set-azureserviceremotedesktopextension) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object. -If you are using PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) cmdlet. 
+If you use PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) cmdlet. ```powershell $remoteusercredentials = Get-Credential $expiry = $(Get-Date).AddDays(1) Set-AzureServiceRemoteDesktopExtension -ServiceName $servicename -Credential $remoteusercredentials -Expiration $expiry ```-You can also optionally specify the deployment slot and roles that you want to enable remote desktop on. If these parameters are not specified, the cmdlet enables remote desktop on all roles in the **Production** deployment slot. +You can also optionally specify the deployment slot and roles that you want to enable remote desktop on. If these parameters aren't specified, the cmdlet enables remote desktop on all roles in the **Production** deployment slot. The Remote Desktop extension is associated with a deployment. If you create a new deployment for the service, you have to enable remote desktop on that deployment. If you always want to have remote desktop enabled, then you should consider integrating the PowerShell scripts into your deployment workflow. ## Remote Desktop into a role instance -The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance. +The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the Remote Desktop Protocol (RDP) file locally. 
Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance. ```powershell Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -Launch ``` ## Check if Remote Desktop extension is enabled on a service -The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet displays that remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, this happens on the deployment slot and you can choose to use the staging slot instead. +The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet displays whether remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, the deployment slot is used, but you can choose to use the staging slot instead. ```powershell Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename ``` ## Remove Remote Desktop extension from a service -If you have already enabled the remote desktop extension on a deployment, and need to update the remote desktop settings, first remove the extension. And enable it again with the new settings. For example, if you want to set a new password for the remote user account, or the account expired. Doing this is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can simply apply the extension directly. 
+If you already enabled the remote desktop extension on a deployment and need to update the remote desktop settings, first remove the extension. Then, enable it again with the new settings. For example, you might want to set a new password for the remote user account, or the account might have expired. This step is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can apply the extension directly. To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/remove-azureserviceremotedesktopextension) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension. Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallConfiguration > > The **UninstallConfiguration** parameter uninstalls any extension configuration that is applied to the service. Every extension configuration is associated with the service configuration. Calling the *remove* cmdlet without **UninstallConfiguration** disassociates the **deployment** from the extension configuration, thus effectively removing the extension. However, the extension configuration remains associated with the service. -## Additional resources +## Next steps -[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) +* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) |
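Putting the remove-and-re-enable sequence together with the cmdlets above, updating the remote user's password might look like the following sketch. It assumes `$servicename` is already set, prompts interactively for credentials, and requires an active Azure subscription, so it isn't runnable offline:

```powershell
# Prompt for the new remote desktop credentials
$credential = Get-Credential

# Remove the existing extension and its configuration first
Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallConfiguration

# Re-enable Remote Desktop with the new credentials and a fresh expiration date
Set-AzureServiceRemoteDesktopExtension -ServiceName $servicename -Credential $credential -Expiration $(Get-Date).AddDays(30)
```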
cloud-services | Cloud Services Role Enable Remote Desktop Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md | Title: Using Visual Studio, enable Remote Desktop for a Role (Azure Cloud Services classic) -description: How to configure your Azure cloud service application to allow remote desktop connections +description: How to configure your Azure cloud service application to allow remote desktop connections through Visual Studio. Previously updated : 02/21/2023 Last updated : 07/23/2024 -Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running. +Remote Desktop enables you to access the desktop of a role running in Azure, using Remote Desktop Protocol (RDP). You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs. The publish wizard that Visual Studio provides for cloud services includes an option to enable Remote Desktop during the publishing process, using credentials that you provide. This option is suitable when you use Visual Studio 2017 version 15.4 and earlier. -With Visual Studio 2017 version 15.5 and later, however, it's recommended that you avoid enabling Remote Desktop through the publish wizard unless you're working only as a single developer. For any situation in which the project might be opened by other developers, you instead enable Remote Desktop through the Azure portal, through PowerShell, or from a release pipeline in a continuous deployment workflow. This recommendation is due to a change in how Visual Studio communicates with Remote Desktop on the cloud service VM, as is explained in this article. 
+With Visual Studio 2017 version 15.5 and later, we recommend you avoid enabling Remote Desktop through the publish wizard unless you're working as a single developer. For any situation in which multiple developers open the project, you should instead enable Remote Desktop through the Azure portal, through PowerShell, or from a release pipeline in a continuous deployment workflow. This recommendation is due to a change in how Visual Studio communicates with Remote Desktop on the cloud service virtual machine (VM), as is explained in this article. ## Configure Remote Desktop through Visual Studio 2017 version 15.4 and earlier When using Visual Studio 2017 version 15.4 and earlier, you can use the **Enable 6. Provide a user name and a password. You can't use an existing account. Don't use "Administrator" as the user name for the new account. -7. Choose a date on which the account will expire and after which Remote Desktop connections will be blocked. +7. Choose a date on which the account will expire. An expired account automatically blocks further Remote Desktop connections. -8. After you've provided all the required information, select **OK**. Visual Studio adds the Remote Desktop settings to your project's `.cscfg` and `.csdef` files, including the password that's encrypted using the chosen certificate. +8. After you provide all the required information, select **OK**. Visual Studio adds the Remote Desktop settings to your project's `.cscfg` and `.csdef` files, including the password that's encrypted using the chosen certificate. 9. Complete any remaining steps using the **Next** button, then select **Publish** when you're ready to publish your cloud service. If you're not ready to publish, select **Cancel** and answer **Yes** when prompted to save changes. You can publish your cloud service later with these settings. 
With Visual Studio 2017 version 15.5 and later, you can still use the publish wi If you're working as part of a team, you should instead enable remote desktop on the Azure cloud service by using either the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md). -This recommendation is due to a change in how Visual Studio 2017 version 15.5 and later communicates with the cloud service VM. When enabling Remote Desktop through the publish wizard, earlier versions of Visual Studio communicate with the VM through what's called the "RDP plugin." Visual Studio 2017 version 15.5 and later communicates instead using the "RDP extension" that is more secure and more flexible. This change also aligns with the fact that the Azure portal and PowerShell methods to enable Remote Desktop also use the RDP extension. +This recommendation is due to a change in how Visual Studio 2017 version 15.5 and later communicates with the cloud service VM. When you enable Remote Desktop through the publish wizard, earlier versions of Visual Studio communicate with the VM through the "RDP plugin." Visual Studio 2017 version 15.5 and later communicates instead using the "RDP extension" that is more secure and more flexible. This change also aligns with the fact that the Azure portal and PowerShell methods to enable Remote Desktop also use the RDP extension. -When Visual Studio communicates with the RDP extension, it transmit a plain text password over TLS. However, the project's configuration files store only an encrypted password, which can be decrypted into plain text only with the local certificate that was originally used to encrypt it. +When Visual Studio communicates with the RDP extension, it transmits a plain text password over Transport Layer Security (TLS). 
However, the project's configuration files store only an encrypted password, which can be decrypted into plain text only with the local certificate that was originally used to encrypt it. If you deploy the cloud service project from the same development computer each time, then that local certificate is available. In this case, you can still use the **Enable Remote Desktop for all roles** option in the publish wizard. -If you or other developers want to deploy the cloud service project from different computers, however, then those other computers won't have the necessary certificate to decrypt the password. As a result, you see the following error message: +However, if you or other developers want to deploy the cloud service project from different computers, then those other computers lack the necessary certificate to decrypt the password. As a result, you see the following error message: ```output-Applying remote desktop protocol (RDP) extension. +Applying remote desktop protocol extension. Certificate with thumbprint [thumbprint] doesn't exist. ``` You could change the password every time you deploy the cloud service, but that action becomes inconvenient for everyone who needs to use Remote Desktop. -If you're sharing the project with a team, then, it's best to clear the option in the publish wizard and instead enable Remote Desktop directly through the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or by using [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md). +If you're sharing the project with a team, then it's best to clear the option in the publish wizard and instead enable Remote Desktop directly through the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or by using [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md). 
### Deploying from a build server with Visual Studio 2017 version 15.5 and later To use the RDP extension from Azure DevOps Services, include the following detai 1. After the deployment step, add an **Azure PowerShell** step, set its **Display name** property to "Azure Deployment: Enable RDP Extension" (or another suitable name), and select your appropriate Azure subscription. -1. Set **Script Type** to "Inline" and paste the code below into the **Inline Script** field. (You can also create a `.ps1` file in your project with this script, set **Script Type** to "Script File Path", and set **Script Path** to point to the file.) +1. Set **Script Type** to "Inline" and paste the following script into the **Inline Script** field. (You can also create a `.ps1` file in your project with this script, set **Script Type** to "Script File Path", and set **Script Path** to point to the file.) ```ps Param( To use the RDP extension from Azure DevOps Services, include the following detai ## Connect to an Azure Role by using Remote Desktop -After you publish your cloud service on Azure and have enabled Remote Desktop, you can use Visual Studio Server Explorer to log into the cloud service VM: +After you publish your cloud service on Azure and enable Remote Desktop, you can use Visual Studio Server Explorer to log into the cloud service VM: 1. In Server Explorer, expand the **Azure** node, and then expand the node for a cloud service and one of its roles to display a list of instances. 2. Right-click an instance node and select **Connect Using Remote Desktop**. -3. Enter the user name and password that you created previously. You are now logged into your remote session. +3. Enter the user name and password that you created previously. You're now signed into your remote session. -## Additional resources +## Next steps -[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) +* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md) |
cloud-services | Cloud Services Role Lifecycle Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md | Title: Handle Cloud Service (classic) lifecycle events | Microsoft Docs description: Learn how to use the lifecycle methods of a Cloud Service role in .NET, including RoleEntryPoint, which provides methods to respond to lifecycle events. Previously updated : 02/21/2023 Last updated : 07/23/2024 -When you create a worker role, you extend the [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class which provides methods for you to override that let you respond to lifecycle events. For web roles this class is optional, so you must use it to respond to lifecycle events. +When you create a worker role, you extend the [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class, which provides methods for you to override that let you respond to lifecycle events. For web roles, this class is optional; use it only if you need to respond to lifecycle events. ## Extend the RoleEntryPoint class-The [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class includes methods that are called by Azure when it **starts**, **runs**, or **stops** a web or worker role. You can optionally override these methods to manage role initialization, role shutdown sequences, or the execution thread of the role. +The [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class includes methods that are called by Azure when it **starts**, **runs**, or **stops** a web or worker role. You can optionally override these methods to manage role initialization, role shutdown sequences, or the execution thread of the role. 
When extending **RoleEntryPoint**, you should be aware of the following behaviors of the methods: -* The [OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method returns a boolean value, so it is possible to return **false** from this method. +* The [OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method returns a boolean value, so it's possible to return **false** from this method. If your code returns **false**, the role process is abruptly terminated, without running any shutdown sequence you may have in place. In general, you should avoid returning **false** from the **OnStart** method. * Any uncaught exception within an overload of a **RoleEntryPoint** method is treated as an unhandled exception. - If an exception occurs within one of the lifecycle methods, Azure will raise the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event, and then the process is terminated. After your role has been taken offline, it will be restarted by Azure. When an unhandled exception occurs, the [Stopping](/previous-versions/azure/reference/ee758136(v=azure.100)) event is not raised, and the **OnStop** method is not called. + If an exception occurs within one of the lifecycle methods, Azure raises the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event, and then the process is terminated. After your role goes offline, Azure restarts it. When an unhandled exception occurs, the [Stopping](/previous-versions/azure/reference/ee758136(v=azure.100)) event isn't raised, and the **OnStop** method isn't called. -If your role does not start, or is recycling between the initializing, busy, and stopping states, your code may be throwing an unhandled exception within one of the lifecycle events each time the role restarts. In this case, use the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event to determine the cause of the exception and handle it appropriately. 
Your role may also be returning from the [Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method, which causes the role to restart. For more information about deployment states, see [Common Issues Which Cause Roles to Recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md). +If your role doesn't start, or is recycling between the initializing, busy, and stopping states, your code may be throwing an unhandled exception within one of the lifecycle events each time the role restarts. In this case, use the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event to determine the cause of the exception and handle it appropriately. Your role may also be returning from the [Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method, which causes the role to restart. For more information about deployment states, see [Common Issues Which Cause Roles to Recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md). > [!NOTE] > If you are using the **Azure Tools for Microsoft Visual Studio** to develop your application, the role project templates automatically extend the **RoleEntryPoint** class for you, in the *WebRole.cs* and *WorkerRole.cs* files. If your role does not start, or is recycling between the initializing, busy, and > ## OnStart method-The **OnStart** method is called when your role instance is brought online by Azure. While the OnStart code is executing, the role instance is marked as **Busy** and no external traffic will be directed to it by the load balancer. You can override this method to perform initialization work, such as implementing event handlers and starting [Azure Diagnostics](cloud-services-how-to-monitor.md). +The **OnStart** method is called when your role instance is brought online by Azure. While the OnStart code is executing, the role instance is marked as **Busy** and the load balancer doesn't direct any external traffic to it. 
You can override this method to perform initialization work, such as implementing event handlers and starting [Azure Diagnostics](cloud-services-how-to-monitor.md). If **OnStart** returns **true**, the instance is successfully initialized and Azure calls the **RoleEntryPoint.Run** method. If **OnStart** returns **false**, the role terminates immediately, without executing any planned shutdown sequences. public override bool OnStart() ``` ## OnStop method-The **OnStop** method is called after a role instance has been taken offline by Azure and before the process exits. You can override this method to call code required for your role instance to cleanly shut down. +The **OnStop** method is called after Azure takes a role instance offline and before the process exits. You can override this method to call code required for your role instance to cleanly shut down. > [!IMPORTANT] > Code running in the **OnStop** method has a limited time to finish when it is called for reasons other than a user-initiated shutdown. After this time elapses, the process is terminated, so you must make sure that code in the **OnStop** method can run quickly or tolerates not running to completion. The **OnStop** method is called after the **Stopping** event is raised. The **OnStop** method is called after a role instance has been taken offline by ## Run method You can override the **Run** method to implement a long-running thread for your role instance. -Overriding the **Run** method is not required; the default implementation starts a thread that sleeps forever. If you do override the **Run** method, your code should block indefinitely. If the **Run** method returns, the role is automatically gracefully recycled; in other words, Azure raises the **Stopping** event and calls the **OnStop** method so that your shutdown sequences may be executed before the role is taken offline. +Overriding the **Run** method isn't required; the default implementation starts a thread that sleeps forever.
If you do override the **Run** method, your code should block indefinitely. If the **Run** method returns, the role is automatically recycled; in other words, Azure raises the **Stopping** event and calls the **OnStop** method so that your shutdown sequences may be executed before the role is taken offline. ### Implementing the ASP.NET lifecycle methods for a web role-You can use the ASP.NET lifecycle methods, in addition to those provided by the **RoleEntryPoint** class, to manage initialization and shutdown sequences for a web role. This may be useful for compatibility purposes if you are porting an existing ASP.NET application to Azure. The ASP.NET lifecycle methods are called from within the **RoleEntryPoint** methods. The **Application\_Start** method is called after the **RoleEntryPoint.OnStart** method finishes. The **Application\_End** method is called before the **RoleEntryPoint.OnStop** method is called. +You can use the ASP.NET lifecycle methods, in addition to the methods provided by the **RoleEntryPoint** class, to manage initialization and shutdown sequences for a web role. This approach may be useful for compatibility purposes if you're porting an existing ASP.NET application to Azure. The ASP.NET lifecycle methods are called from within the **RoleEntryPoint** methods. The **Application\_Start** method is called after the **RoleEntryPoint.OnStart** method finishes. The **Application\_End** method is called before the **RoleEntryPoint.OnStop** method is called. ## Next steps Learn how to [create a cloud service package](cloud-services-model-and-package.md). |
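The RoleEntryPoint lifecycle described above (OnStart returning a boolean, Run blocking until shutdown, Stopping and then OnStop on a graceful recycle) amounts to a small contract. The sketch below is an illustrative Python analogy only, not the Azure runtime or its API; the real methods live on the C# **RoleEntryPoint** class:

```python
# Illustrative sketch of the RoleEntryPoint lifecycle contract (NOT the real Azure API).
class Role:
    def on_start(self):
        # Returning False terminates the role process abruptly,
        # with no shutdown sequence.
        return True

    def run(self):
        # Should block indefinitely; returning triggers a graceful recycle.
        pass

    def on_stop(self):
        # Cleanup; runs after the Stopping event, but not after an
        # unhandled exception.
        pass


def host(role):
    """Mimic the order in which the platform drives the lifecycle methods."""
    events = []
    if not role.on_start():
        events.append("terminated")          # no Stopping event, no OnStop
        return events
    events.append("running")                 # OnStart returned True -> Run is called
    role.run()                               # the default implementation sleeps forever
    events.extend(["stopping", "stopped"])   # Run returned: Stopping, then OnStop
    role.on_stop()
    return events


print(host(Role()))  # → ['running', 'stopping', 'stopped']
```

A `Run` that returns produces the `stopping`/`stopped` sequence, while an `on_start` that returns `False` skips the shutdown sequence entirely, mirroring the behavior described above.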
cloud-services | Cloud Services Sizes Specs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md | description: Lists the different virtual machine sizes (and IDs) for Azure cloud Previously updated : 02/21/2023 Last updated : 07/23/2024 -This topic describes the available sizes and options for Cloud Service role instances (web roles and worker roles). It also provides deployment considerations to be aware of when planning to use these resources. Each size has an ID that you put in your [service definition file](cloud-services-model-and-package.md#csdef). Prices for each size are available on the [Cloud Services Pricing](https://azure.microsoft.com/pricing/details/cloud-services/) page. +This article describes the available sizes and options for Cloud Service role instances (web roles and worker roles). It also provides deployment considerations to be aware of when planning to use these resources. Each size has an ID that you put in your [service definition file](cloud-services-model-and-package.md#csdef). Prices for each size are available on the [Cloud Services Pricing](https://azure.microsoft.com/pricing/details/cloud-services/) page. > [!NOTE]-> To see related Azure limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../azure-resource-manager/management/azure-subscription-service-limits.md) -> -> +> To see related Azure limits, visit [Azure Subscription and Service Limits, Quotas, and Constraints](../azure-resource-manager/management/azure-subscription-service-limits.md) ## Sizes for web and worker role instances There are multiple standard sizes to choose from on Azure. Considerations for some of these sizes include: * D-series VMs are designed to run applications that demand higher compute power and temporary disk performance. D-series VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk. 
For details, see the announcement on the Azure blog, [New D-Series Virtual Machine Sizes](https://azure.microsoft.com/updates/d-series-virtual-machine-sizes).-* Dv3-series, Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series. +* Dv3-series, Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It's based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series. * G-series VMs offer the most memory and run on hosts that have Intel Xeon E5 V3 family processors.-* The A-series VMs can be deployed on various hardware types and processors. The size is throttled, based on the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. -* The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may impact the performance of your running workload. The relative performance is outlined below as the expected baseline, subject to an approximate variability of 15 percent. +* The A-series VMs can be deployed on various hardware types and processors. The size is throttled based on the hardware to offer consistent processor performance for the running instance, regardless of the hardware it's deployed on.
To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. +* The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may affect the performance of your running workload. We outline the expected baseline of relative performance, subject to an approximate variability of 15 percent, later in the article. The size of the virtual machine affects the pricing. The size also affects the processing, memory, and storage capacity of the virtual machine. Storage costs are calculated separately based on used pages in the storage account. For details, see [Cloud Services Pricing Details](https://azure.microsoft.com/pricing/details/cloud-services/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/). The following considerations might help you decide on a size: -* The A8-A11 and H-series sizes are also known as *compute-intensive instances*. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHZ and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz. For detailed information and considerations about using these sizes, see [High performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). +* The A8-A11 and H-series sizes are also known as *compute-intensive instances*. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHz and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz. 
For detailed information and considerations about using these sizes, see [High performance compute virtual machine (VM) sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). * Dv3-series, Dv2-series, D-series, G-series, are ideal for applications that demand faster CPUs, better local disk performance, or have higher memory demands. They offer a powerful combination for many enterprise-grade applications. * Some of the physical hosts in Azure data centers may not support larger virtual machine sizes, such as A5 – A11. As a result, you may see the error message **Failed to configure virtual machine {machine name}** or **Failed to create virtual machine {machine name}** when resizing an existing virtual machine to a new size; creating a new virtual machine in a virtual network created before April 16, 2013; or adding a new virtual machine to an existing cloud service. See [Error: “Failed to configure virtual machine”](https://social.msdn.microsoft.com/Forums/9693f56c-fcd3-4d42-850e-5e3b56c7d6be/error-failed-to-configure-virtual-machine-with-a5-a6-or-a7-vm-size?forum=WAVirtualMachinesforWindows) on the support forum for workarounds for each deployment scenario. * Your subscription might also limit the number of cores you can deploy in certain size families. To increase a quota, contact Azure Support. ## Performance considerations-We have created the concept of the Azure Compute Unit (ACU) to provide a way of comparing compute (CPU) performance across Azure SKUs and to identify which SKU is most likely to satisfy your performance needs. ACU is currently standardized on a Small (Standard_A1) VM being 100 and all other SKUs then represent approximately how much faster that SKU can run a standard benchmark. +We created the concept of the Azure Compute Unit (ACU) to provide a way of comparing compute (CPU) performance across Azure SKUs and to identify which SKU is most likely to satisfy your performance needs. 
ACU is currently standardized on a Small (Standard_A1) VM being 100. Following that standard, all other SKUs represent approximately how much faster that SKU can run a standard benchmark. > [!IMPORTANT] > The ACU is only a guideline. The results for your workload may vary.-> -> <br> ACUs marked with a * use Intel® Turbo technology to increase CPU frequency and ## Size tables The following tables show the sizes and the capacities they provide. -* Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB +* Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB * Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec. * Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**.-* Maximum network bandwidth is the maximum aggregated bandwidth allocated and assigned per VM type. The maximum bandwidth provides guidance for selecting the right VM type to ensure adequate network capacity is available. When moving between Low, Moderate, High and Very High, the throughput increases accordingly. Actual network performance will depend on many factors including network and application loads, and application network settings. +* Maximum network bandwidth is the maximum aggregated bandwidth allocated and assigned per VM type. The maximum bandwidth provides guidance for selecting the right VM type to ensure adequate network capacity is available.
When moving between Low, Moderate, High, and Very High, the throughput increases accordingly. Actual network performance depends on many factors, including network and application loads, and application network settings. ## A-series | Size | CPU cores | Memory: GiB | Temporary Storage: GiB | Max NICs / Network bandwidth | In addition to the substantial CPU power, the H-series offers diverse options fo ## Configure sizes for Cloud Services You can specify the Virtual Machine size of a role instance as part of the service model described by the [service definition file](cloud-services-model-and-package.md#csdef). The size of the role determines the number of CPU cores, the memory capacity, and the local file system size that is allocated to a running instance. Choose the role size based on your application's resource requirement. -Here is an example for setting the role size to be Standard_D2 for a Web Role instance: +Here's an example for setting the role size to be Standard_D2 for a Web Role instance: ```xml <WorkerRole name="Worker1" vmsize="Standard_D2"> Here is an example for setting the role size to be Standard_D2 for a Web Role in ## Changing the size of an existing role -As the nature of your workload changes or new VM sizes become available, you may want to change the size of your role. To do so, you must change the VM size in your service definition file (as shown above), repackage your Cloud Service, and deploy it. +As the nature of your workload changes or new VM sizes become available, you may want to change the size of your role. To do so, you must change the VM size in your service definition file (as previously shown), repackage your Cloud Service, and deploy it. >[!TIP] > You may want to use different VM sizes for your role in different environments (for example, test vs. production).
One way to do this is to create multiple service definition (.csdef) files in your project, then create different cloud service packages per environment during your automated build using the CSPack tool. To learn more about the elements of a cloud services package and how to create them, see [What is the cloud services model and how do I package it?](cloud-services-model-and-package.md) As the nature of your workload changes or new VM sizes become available, you may > ## Get a list of sizes-You can use PowerShell or the REST API to get a list of sizes. The REST API is documented [here](/previous-versions/azure/reference/dn469422(v=azure.100)). The following code is a PowerShell command that will list all the sizes available for Cloud Services. +You can use PowerShell or the REST API to get a list of sizes. The REST API is documented [here](/previous-versions/azure/reference/dn469422(v=azure.100)). The following code is a PowerShell command that lists all the sizes available for Cloud Services. ```powershell Get-AzureRoleSize | where SupportedByWebWorkerRoles -eq $true | select InstanceSize, RoleSizeLabel ``` ## Next steps-* Learn about [azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). +* Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). * Learn more [about high performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) for HPC workloads. |
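The GiB-versus-GB note in the size tables above (1023 GiB = 1098.4 GB) can be sanity-checked with one line of arithmetic; Python is used here purely as a calculator, not as part of the cloud service:

```python
# 1 GiB = 1024^3 bytes; 1 GB = 1000^3 bytes, so the same disk shows a
# smaller number when quoted in GiB than when quoted in GB.
gib = 1023
gb = gib * 1024**3 / 1000**3
print(round(gb, 1))  # → 1098.4, matching the example in the size tables
```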
cloud-services | Cloud Services Startup Tasks Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks-common.md | Title: Common startup tasks for Cloud Services (classic) | Microsoft Docs description: Provides some examples of common startup tasks you may want to perform in your cloud services web role or worker role. Previously updated : 02/21/2023 Last updated : 07/23/2024 -This article provides some examples of common startup tasks you may want to perform in your cloud service. You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process. +This article provides some examples of common startup tasks you may want to perform in your cloud service. You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering Component Object Model (COM) components, setting registry keys, or starting a long-running process. See [this article](cloud-services-startup-tasks.md) to understand how startup tasks work, and specifically how to create the entries that define a startup task. See [this article](cloud-services-startup-tasks.md) to understand how startup ta > ## Define environment variables before a role starts+ If you need environment variables defined for a specific task, use the [Environment] element inside the [Task] element. ```xml Variables can also use a [valid Azure XPath value](cloud-services-role-config-xp ## Configure IIS startup with AppCmd.exe-The [AppCmd.exe](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj635852(v=ws.11)) command-line tool can be used to manage IIS settings at startup on Azure. *AppCmd.exe* provides convenient, command-line access to configuration settings for use in startup tasks on Azure.
Using *AppCmd.exe*, Website settings can be added, modified, or removed for applications and sites. ++The [AppCmd.exe](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj635852(v=ws.11)) command-line tool can be used to manage Internet Information Services (IIS) settings at startup on Azure. *AppCmd.exe* provides convenient, command-line access to configuration settings for use in startup tasks on Azure. When you use *AppCmd.exe*, website settings can be added, modified, or removed for applications and sites. However, there are a few things to watch out for in the use of *AppCmd.exe* as a startup task: * Startup tasks can be run more than once between reboots. For instance, when a role recycles. * If an *AppCmd.exe* action is performed more than once, it may generate an error. For example, attempting to add a section to *Web.config* twice could generate an error.-* Startup tasks fail if they return a non-zero exit code or **errorlevel**. For example, when *AppCmd.exe* generates an error. +* Startup tasks fail if they return a nonzero exit code or **errorlevel**. For example, when *AppCmd.exe* generates an error. -It is a good practice to check the **errorlevel** after calling *AppCmd.exe*, which is easy to do if you wrap the call to *AppCmd.exe* with a *.cmd* file. If you detect a known **errorlevel** response, you can ignore it, or pass it back. +It's a good practice to check the **errorlevel** after calling *AppCmd.exe*, which is easy to do if you wrap the call to *AppCmd.exe* with a *.cmd* file. If you detect a known **errorlevel** response, you can ignore it, or pass it back. -The errorlevel returned by *AppCmd.exe* are listed in the winerror.h file, and can also be seen on [MSDN](/windows/desktop/Debug/system-error-codes--0-499-). +The errorlevel values returned by *AppCmd.exe* are listed in the winerror.h file and can also be seen on the [Microsoft Developer Network (MSDN)](/windows/desktop/Debug/system-error-codes--0-499-).
### Example of managing the error level+ This example adds a compression section and a compression entry for JSON to the *Web.config* file, with error handling and logging. The relevant sections of the [ServiceDefinition.csdef] file are shown here, which include setting the [executionContext](/previous-versions/azure/reference/gg557552(v=azure.100)#task) attribute to `elevated` to give *AppCmd.exe* sufficient permissions to change the settings in the *Web.config* file: EXIT %ERRORLEVEL% ``` ## Add firewall rules-In Azure, there are effectively two firewalls. The first firewall controls connections between the virtual machine and the outside world. This firewall is controlled by the [EndPoints] element in the [ServiceDefinition.csdef] file. -The second firewall controls connections between the virtual machine and the processes within that virtual machine. This firewall can be controlled by the `netsh advfirewall firewall` command-line tool. +In Azure, there are effectively two firewalls. The first firewall controls connections between the virtual machine and the outside world. The [EndPoints] element in the [ServiceDefinition.csdef] file controls this firewall. ++The second firewall controls connections between the virtual machine and the processes within that virtual machine. You can control this firewall with the `netsh advfirewall firewall` command-line tool. -Azure creates firewall rules for the processes started within your roles.
For example, when you start a service or program, Azure automatically creates the necessary firewall rules to allow that service to communicate with the Internet. However, if you create a service started by a process outside your role (like a COM+ service or a Windows Scheduled Task), you need to manually create a firewall rule to allow access to that service. These firewall rules can be created by using a startup task. A startup task that creates a firewall rule must have an [executionContext][Task] of **elevated**. Add the following startup task to the [ServiceDefinition.csdef] file. A startup task that creates a firewall rule must have an [executionContext][Task </ServiceDefinition> ``` -To add the firewall rule, you must use the appropriate `netsh advfirewall firewall` commands in your startup batch file. In this example, the startup task requires security and encryption for TCP port 80. +To add the firewall rule, you must use the appropriate `netsh advfirewall firewall` commands in your startup batch file. In this example, the startup task requires security and encryption for Transmission Control Protocol (TCP) port 80. ```cmd REM Add a firewall rule in a startup task. EXIT /B %errorlevel% ``` ## Block a specific IP address-You can restrict an Azure web role access to a set of specified IP addresses by modifying your IIS **web.config** file. You also need to use a command file which unlocks the **ipSecurity** section of the **ApplicationHost.config** file. -To do unlock the **ipSecurity** section of the **ApplicationHost.config** file, create a command file that runs at role start. Create a folder at the root level of your web role called **startup** and, within this folder, create a batch file called **startup.cmd**. Add this file to your Visual Studio project and set the properties to **Copy Always** to ensure that it is included in your package. 
+You can restrict access to an Azure web role to a set of specified IP addresses by modifying your IIS **web.config** file. You also need to use a command file that unlocks the **ipSecurity** section of the **ApplicationHost.config** file. ++To unlock the **ipSecurity** section of the **ApplicationHost.config** file, create a command file that runs at role start. Create a folder at the root level of your web role called **startup** and, within this folder, create a batch file called **startup.cmd**. Add this file to your Visual Studio project and set the properties to **Copy Always** to ensure you include it in your package. Add the following startup task to the [ServiceDefinition.csdef] file. This sample config **denies** all IPs from accessing the server except for the t ``` ## Create a PowerShell startup task-Windows PowerShell scripts cannot be called directly from the [ServiceDefinition.csdef] file, but they can be invoked from within a startup batch file. -PowerShell (by default) does not run unsigned scripts. Unless you sign your script, you need to configure PowerShell to run unsigned scripts. To run unsigned scripts, the **ExecutionPolicy** must be set to **Unrestricted**. The **ExecutionPolicy** setting that you use is based on the version of Windows PowerShell. +Windows PowerShell scripts can't be called directly from the [ServiceDefinition.csdef] file, but they can be invoked from within a startup batch file. ++PowerShell (by default) doesn't run unsigned scripts. Unless you sign your script, you need to configure PowerShell to run unsigned scripts. To run unsigned scripts, the **ExecutionPolicy** must be set to **Unrestricted**. The **ExecutionPolicy** setting that you use is based on the version of Windows PowerShell.
```cmd REM Run an unsigned PowerShell script and log the output EXIT /B %errorlevel% ``` ## Create files in local storage from a startup task-You can use a local storage resource to store files created by your startup task that is accessed later by your application. ++You can use a local storage resource to store files created by your startup task that your application later accesses. To create the local storage resource, add a [LocalResources] section to the [ServiceDefinition.csdef] file and then add the [LocalStorage] child element. Give the local storage resource a unique name and an appropriate size for your startup task. string fileContent = System.IO.File.ReadAllText(System.IO.Path.Combine(localStor ``` ## Run in the emulator or cloud-You can have your startup task perform different steps when it is operating in the cloud compared to when it is in the compute emulator. For example, you may want to use a fresh copy of your SQL data only when running in the emulator. Or you may want to do some performance optimizations for the cloud that you don't need to do when running in the emulator. ++You can have your startup task perform different steps when it's operating in the cloud compared to when it is in the compute emulator. For example, you may want to use a fresh copy of your SQL data only when running in the emulator. Or you may want to do some performance optimizations for the cloud that you don't need to do when running in the emulator. This ability to perform different actions on the compute emulator and the cloud can be accomplished by creating an environment variable in the [ServiceDefinition.csdef] file. You then test that environment variable for a value in your startup task. To create the environment variable, add the [Variable]/[RoleInstanceValue] eleme </ServiceDefinition> ``` -The task can now check the **%ComputeEmulatorRunning%** environment variable to perform different actions based on whether the role is running in the cloud or the emulator. 
Here is a .cmd shell script that checks for that environment variable. +The task can now check the **%ComputeEmulatorRunning%** environment variable to perform different actions based on whether the role is running in the cloud or the emulator. Here's a .cmd shell script that checks for that environment variable. ```cmd REM Check if this task is running on the compute emulator. IF "%ComputeEmulatorRunning%" == "true" ( ## Detect that your task has already run-The role may recycle without a reboot causing your startup tasks to run again. There is no flag to indicate that a task has already run on the hosting VM. You may have some tasks where it doesn't matter that they run multiple times. However, you may run into a situation where you need to prevent a task from running more than once. -The simplest way to detect that a task has already run is to create a file in the **%TEMP%** folder when the task is successful and look for it at the start of the task. Here is a sample cmd shell script that does that for you. +The role may recycle without a reboot, causing your startup tasks to run again. There's no flag to indicate that a task already ran on the host virtual machine (VM). You may have some tasks where it doesn't matter that they run multiple times. However, you may run into a situation where you need to prevent a task from running more than once. ++The simplest way to detect that a task has already run is to create a file in the **%TEMP%** folder when the task is successful and look for it at the start of the task. Here's a sample cmd shell script that does that for you. ```cmd REM If Task1_Success.txt exists, then Application 1 is already installed. EXIT /B 0 ``` ## Task best practices+ Here are some best practices you should follow when configuring tasks for your web or worker role. ### Always log startup activities-Visual Studio does not provide a debugger to step through batch files, so it's good to get as much data on the operation of batch files as possible.
Logging the output of batch files, both **stdout** and **stderr**, can give you important information when trying to debug and fix batch files. To log both **stdout** and **stderr** to the StartupLog.txt file in the directory pointed to by the **%TEMP%** environment variable, add the text `>> "%TEMP%\\StartupLog.txt" 2>&1` to the end of specific lines you want to log. For example, to execute setup.exe in the **%PathToApp1Install%** directory: `"%PathToApp1Install%\setup.exe" >> "%TEMP%\StartupLog.txt" 2>&1` ++Visual Studio doesn't provide a debugger to step through batch files, so it's good to get as much data on the operation of batch files as possible. Logging the output of batch files, both **stdout** and **stderr**, can give you important information when trying to debug and fix batch files. To log both **stdout** and **stderr** to the StartupLog.txt file in the directory pointed to by the **%TEMP%** environment variable, add the text `>> "%TEMP%\\StartupLog.txt" 2>&1` to the end of specific lines you want to log. For example, to execute setup.exe in the **%PathToApp1Install%** directory: `"%PathToApp1Install%\setup.exe" >> "%TEMP%\StartupLog.txt" 2>&1` To simplify your XML, you can create a wrapper *cmd* file that calls all of your startup tasks along with logging and ensures each child-task shares the same environment variables. -You may find it annoying though to use `>> "%TEMP%\StartupLog.txt" 2>&1` on the end of each startup task. You can enforce task logging by creating a wrapper that handles logging for you. This wrapper calls the real batch file you want to run. Any output from the target batch file will be redirected to the *Startuplog.txt* file. +You may find it tedious, though, to add `>> "%TEMP%\StartupLog.txt" 2>&1` to the end of each startup task. You can enforce task logging by creating a wrapper that handles logging for you. This wrapper calls the real batch file you want to run.
Any output from the target batch file redirects to the *Startuplog.txt* file. The following example shows how to redirect all output from a startup batch file. In this example, the ServerDefinition.csdef file creates a startup task that calls *logwrap.cmd*. *logwrap.cmd* calls *Startup2.cmd*, redirecting all output to **%TEMP%\\StartupLog.txt**. Sample output in the **StartupLog.txt** file: > ### Set executionContext appropriately for startup tasks+ Set privileges appropriately for the startup task. Sometimes startup tasks must run with elevated privileges even though the role runs with normal privileges. The [executionContext][Task] attribute sets the privilege level of the startup task. Using `executionContext="limited"` means the startup task has the same privilege level as the role. Using `executionContext="elevated"` means the startup task has administrator privileges, which allows the startup task to perform administrator tasks without giving administrator privileges to your role. The [executionContext][Task] attribute sets the privilege level of the startup t An example of a startup task that requires elevated privileges is a startup task that uses **AppCmd.exe** to configure IIS. **AppCmd.exe** requires `executionContext="elevated"`. ### Use the appropriate taskType+ The [taskType][Task] attribute determines the way the startup task is executed. There are three values: **simple**, **background**, and **foreground**. The background and foreground tasks are started asynchronously, and then the simple tasks are executed synchronously one at a time. -With **simple** startup tasks, you can set the order in which the tasks run by the order in which the tasks are listed in the ServiceDefinition.csdef file. If a **simple** task ends with a non-zero exit code, then the startup procedure stops and the role does not start. 
+With **simple** startup tasks, you can set the order in which the tasks run by the order in which the tasks are listed in the ServiceDefinition.csdef file. If a **simple** task ends with a nonzero exit code, then the startup procedure stops and the role doesn't start. -The difference between **background** startup tasks and **foreground** startup tasks is that **foreground** tasks keep the role running until the **foreground** task ends. This also means that if the **foreground** task hangs or crashes, the role will not recycle until the **foreground** task is forced closed. For this reason, **background** tasks are recommended for asynchronous startup tasks unless you need that feature of the **foreground** task. +The difference between **background** startup tasks and **foreground** startup tasks is that **foreground** tasks keep the role running until the **foreground** task ends. This structure means that if the **foreground** task hangs or crashes, the role doesn't recycle until the **foreground** task is forced closed. For this reason, **background** tasks are recommended for asynchronous startup tasks unless you need that feature of the **foreground** task. ### End batch files with EXIT /B 0-The role will only start if the **errorlevel** from each of your simple startup task is zero. Not all programs set the **errorlevel** (exit code) correctly, so the batch file should end with an `EXIT /B 0` if everything ran correctly. -A missing `EXIT /B 0` at the end of a startup batch file is a common cause of roles that do not start. +The role only starts if the **errorlevel** from each of your simple startup tasks is zero. Not all programs set the **errorlevel** (exit code) correctly, so the batch file should end with an `EXIT /B 0` if everything ran correctly. ++A missing `EXIT /B 0` at the end of a startup batch file is a common cause of roles that don't start.
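The ordering and **taskType** behavior described above can be sketched in the csdef. The task file names here are illustrative, not from this article:

```xml
<Startup>
  <!-- simple tasks run synchronously, in this order, before the role starts -->
  <Task commandLine="InstallRuntime.cmd" executionContext="elevated" taskType="simple" />
  <Task commandLine="ConfigureIIS.cmd" executionContext="elevated" taskType="simple" />
  <!-- background tasks start asynchronously and don't block or pin the role -->
  <Task commandLine="Monitor.cmd" executionContext="limited" taskType="background" />
</Startup>
```

If either **simple** task exits with a nonzero errorlevel, the role doesn't start; the **background** task has no such effect.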
> [!NOTE] > I've noticed that nested batch files sometimes stop responding when using the `/B` parameter. You may want to make sure that this problem does not happen if another batch file calls your current batch file, like if you use the [log wrapper](#always-log-startup-activities). You can omit the `/B` parameter in this case. A missing `EXIT /B 0` at the end of a startup batch file is a common cause of ro > ### Expect startup tasks to run more than once-Not all role recycles include a reboot, but all role recycles include running all startup tasks. This means that startup tasks must be able to run multiple times between reboots without any problems. This is discussed in the [preceding section](#detect-that-your-task-has-already-run). ++Not all role recycles include a reboot, but all role recycles include running all startup tasks. This design means that startup tasks must be able to run multiple times between reboots without any problems, which is discussed in the [preceding section](#detect-that-your-task-has-already-run). ### Use local storage to store files that must be accessed in the role+ If you want to copy or create a file during your startup task that is then accessible to your role, then that file must be placed in local storage. See the [preceding section](#create-files-in-local-storage-from-a-startup-task). ## Next steps |
cloud-services | Cloud Services Startup Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks.md | Title: Run Startup Tasks in Azure Cloud Services (classic) | Microsoft Docs -description: Startup tasks help prepare your cloud service environment for your app. This teaches you how startup tasks work and how to make them +description: Startup tasks help prepare your cloud service environment for your app. This article teaches you how startup tasks work and how to make them Previously updated : 02/21/2023 Last updated : 07/23/2024 -You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process. +You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering Component Object Model (COM) components, setting registry keys, or starting a long running process. > [!NOTE] > Startup tasks are not applicable to Virtual Machines, only to Cloud Service Web and Worker roles. You can use startup tasks to perform operations before a role starts. Operations > ## How startup tasks work-Startup tasks are actions that are taken before your roles begin and are defined in the [ServiceDefinition.csdef] file by using the [Task] element within the [Startup] element. Frequently startup tasks are batch files, but they can also be console applications, or batch files that start PowerShell scripts. -Environment variables pass information into a startup task, and local storage can be used to pass information out of a startup task. For example, an environment variable can specify the path to a program you want to install, and files can be written to local storage that can then be read later by your roles. +Startup tasks are actions taken before your roles begin. 
The [ServiceDefinition.csdef] file defines startup tasks by using the [Task] element within the [Startup] element. Frequently startup tasks are batch files, but they can also be console applications, or batch files that start PowerShell scripts. ++Environment variables pass information into a startup task, and local storage can be used to pass information out of a startup task. For example, an environment variable can specify the path to a program you want to install, and files can be written to local storage. From there, your roles can read the files. Your startup task can log information and errors to the directory specified by the **TEMP** environment variable. During the startup task, the **TEMP** environment variable resolves to the *C:\\Resources\\temp\\[guid].[rolename]\\RoleTemp* directory when running on the cloud. -Startup tasks can also be executed several times between reboots. For example, the startup task will be run each time the role recycles, and role recycles may not always include a reboot. Startup tasks should be written in a way that allows them to run several times without problems. +Startup tasks can also be executed several times between reboots. For example, the startup task runs each time the role recycles, and role recycles may not always include a reboot. Startup tasks should be written in a way that allows them to run several times without problems. -Startup tasks must end with an **errorlevel** (or exit code) of zero for the startup process to complete. If a startup task ends with a non-zero **errorlevel**, the role will not start. +Startup tasks must end with an **errorlevel** (or exit code) of zero for the startup process to complete. If a startup task ends with a nonzero **errorlevel**, the role fails to start. ## Role startup order+ The following lists the role startup procedure in Azure: -1. The instance is marked as **Starting** and does not receive traffic. +1. The instance is marked as **Starting** and doesn't receive traffic. 
2. All startup tasks are executed according to their **taskType** attribute. * The **simple** tasks are executed synchronously, one at a time. The following lists the role startup procedure in Azure: > IIS may not be fully configured during the startup task stage in the startup process, so role-specific data may not be available. Startup tasks that require role-specific data should use [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)). > > -3. The role host process is started and the site is created in IIS. +3. The role host process is started and the site is created in Internet Information Services (IIS). 4. The [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method is called. 5. The instance is marked as **Ready** and traffic is routed to the instance. 6. The [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method is called. ## Example of a startup task-Startup tasks are defined in the [ServiceDefinition.csdef] file, in the **Task** element. The **commandLine** attribute specifies the name and parameters of the startup batch file or console command, the **executionContext** attribute specifies the privilege level of the startup task, and the **taskType** attribute specifies how the task will be executed. ++Startup tasks are defined in the [ServiceDefinition.csdef] file, in the **Task** element. The **commandLine** attribute specifies the name and parameters of the startup batch file or console command, the **executionContext** attribute specifies the privilege level of the startup task, and the **taskType** attribute specifies how the task executes. In this example, an environment variable, **MyVersionNumber**, is created for the startup task and set to the value "**1.0.0.0**". 
EXIT /B 0 > ## Description of task attributes+ The following describes the attributes of the **Task** element in the [ServiceDefinition.csdef] file: **commandLine** - Specifies the command line for the startup task: * The command, with optional command line parameters, which begins the startup task.-* Frequently this is the filename of a .cmd or .bat batch file. -* The task is relative to the AppRoot\\Bin folder for the deployment. Environment variables are not expanded in determining the path and file of the task. If environment expansion is required, you can create a small .cmd script that calls your startup task. +* Frequently this attribute is the filename of a .cmd or .bat batch file. +* The task is relative to the AppRoot\\Bin folder for the deployment. Environment variables aren't expanded in determining the path and file of the task. If environment expansion is required, you can create a small .cmd script that calls your startup task. * Can be a console application or a batch file that starts a [PowerShell script](cloud-services-startup-tasks-common.md#create-a-powershell-startup-task). **executionContext** - Specifies the privilege level for the startup task. The privilege level can be limited or elevated: The following describes the attributes of the **Task** element in the [ServiceDe * **limited** The startup task runs with the same privileges as the role. When the **executionContext** attribute for the [Runtime] element is also **limited**, then user privileges are used. * **elevated** - The startup task runs with administrator privileges. This allows startup tasks to install programs, make IIS configuration changes, perform registry changes, and other administrator level tasks, without increasing the privilege level of the role itself. + The startup task runs with administrator privileges. 
These privileges allow startup tasks to install programs, make IIS configuration changes, perform registry changes, and other administrator level tasks, without increasing the privilege level of the role itself. > [!NOTE] > The privilege level of a startup task does not need to be the same as the role itself. The following describes the attributes of the **Task** element in the [ServiceDe **taskType** - Specifies the way a startup task is executed. * **simple** - Tasks are executed synchronously, one at a time, in the order specified in the [ServiceDefinition.csdef] file. When one **simple** startup task ends with an **errorlevel** of zero, the next **simple** startup task is executed. If there are no more **simple** startup tasks to execute, then the role itself will be started. + Tasks are executed synchronously, one at a time, in the order specified in the [ServiceDefinition.csdef] file. When one **simple** startup task ends with an **errorlevel** of zero, the next **simple** startup task is executed. If there are no more **simple** startup tasks to execute, then the role itself starts. > [!NOTE] > If the **simple** task ends with a non-zero **errorlevel**, the instance will be blocked. Subsequent **simple** startup tasks, and the role itself, will not start. The following describes the attributes of the **Task** element in the [ServiceDe * **background** Tasks are executed asynchronously, in parallel with the startup of the role. * **foreground** - Tasks are executed asynchronously, in parallel with the startup of the role. The key difference between a **foreground** and a **background** task is that a **foreground** task prevents the role from recycling or shutting down until the task has ended. The **background** tasks do not have this restriction. + Tasks are executed asynchronously, in parallel with the startup of the role. 
The key difference between a **foreground** and a **background** task is that a **foreground** task prevents the role from recycling or shutting down until the task ends. The **background** tasks don't have this restriction. ## Environment variables-Environment variables are a way to pass information to a startup task. For example, you can put the path to a blob that contains a program to install, or port numbers that your role will use, or settings to control features of your startup task. ++Environment variables are a way to pass information to a startup task. For example, you can put the path to a blob that contains a program to install, or port numbers that your role uses, or settings to control features of your startup task. There are two kinds of environment variables for startup tasks: static environment variables and environment variables based on members of the [RoleEnvironment] class. Both are in the [Environment] section of the [ServiceDefinition.csdef] file, and both use the [Variable] element and **name** attribute. -Static environment variables uses the **value** attribute of the [Variable] element. The example above creates the environment variable **MyVersionNumber** which has a static value of "**1.0.0.0**". Another example would be to create a **StagingOrProduction** environment variable which you can manually set to values of "**staging**" or "**production**" to perform different startup actions based on the value of the **StagingOrProduction** environment variable. +Static environment variables use the **value** attribute of the [Variable] element. The preceding example creates the environment variable **MyVersionNumber**, which has a static value of "**1.0.0.0**". Another example would be to create a **StagingOrProduction** environment variable, which you can manually set to values of "**staging**" or "**production**" to perform different startup actions based on the value of the **StagingOrProduction** environment variable.
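The **StagingOrProduction** variable just described can be sketched in the csdef like this; the script name is illustrative:

```xml
<Startup>
  <Task commandLine="Startup.cmd" executionContext="limited" taskType="simple">
    <Environment>
      <!-- Manually edit this value to "staging" or "production" before deploying. -->
      <Variable name="StagingOrProduction" value="staging" />
    </Environment>
  </Task>
</Startup>
```

Inside the startup batch file, a line such as `IF "%StagingOrProduction%" == "staging" (...)` can then branch on the value.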
-Environment variables based on members of the RoleEnvironment class do not use the **value** attribute of the [Variable] element. Instead, the [RoleInstanceValue] child element, with the appropriate **XPath** attribute value, are used to create an environment variable based on a specific member of the [RoleEnvironment] class. Values for the **XPath** attribute to access various [RoleEnvironment] values can be found [here](cloud-services-role-config-xpath.md). +Environment variables based on members of the RoleEnvironment class don't use the **value** attribute of the [Variable] element. Instead, the [RoleInstanceValue] child element, with the appropriate **XPath** attribute value, is used to create an environment variable based on a specific member of the [RoleEnvironment] class. Values for the **XPath** attribute to access various [RoleEnvironment] values can be found [here](cloud-services-role-config-xpath.md). For example, to create an environment variable that is "**true**" when the instance is running in the compute emulator, and "**false**" when running in the cloud, use the following [Variable] and [RoleInstanceValue] elements: For example, to create an environment variable that is "**true**" when the insta ``` ## Next steps+ Learn how to perform some [common startup tasks](cloud-services-startup-tasks-common.md) with your Cloud Service. [Package](cloud-services-model-and-package.md) your Cloud Service. |
cloud-services | Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md | Title: Common causes of Cloud Service (classic) roles recycling | Microsoft Docs description: A cloud service role that suddenly recycles can cause significant downtime. Here are some common issues that cause roles to be recycled, which may help you reduce downtime. Previously updated : 02/21/2023 Last updated : 07/23/2024 This article discusses some of the common causes of deployment problems and prov [!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] ## Missing runtime dependencies-If a role in your application relies on any assembly that is not part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks are not available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package. -Before you build and package your application, verify the following: +If a role in your application relies on any assembly that isn't part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks aren't available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package. -* If using Visual studio, make sure the **Copy Local** property is set to **True** for each referenced assembly in your project that is not part of the Azure SDK or the .NET Framework. -* Make sure the web.config file does not reference any unused assemblies in the compilation element. -* The **Build Action** of every .cshtml file is set to **Content**. 
This ensures that the files will appear correctly in the package and enables other referenced files to appear in the package. +Before you build and package your application, verify the following statements are true: ++* If you're using Visual Studio, make sure the **Copy Local** property is set to **True** for each referenced assembly in your project that isn't part of the Azure SDK or the .NET Framework. +* Make sure the web.config file doesn't reference any unused assemblies in the compilation element. +* The **Build Action** of every .cshtml file is set to **Content**. This setting ensures that the files appear correctly in the package and enables other referenced files to appear in the package. ## Assembly targets wrong platform-Azure is a 64-bit environment. Therefore, .NET assemblies compiled for a 32-bit target won't work on Azure. ++Azure is a 64-bit environment. Therefore, .NET assemblies compiled for a 32-bit target aren't compatible with Azure. ## Role throws unhandled exceptions while initializing or stopping-Any exceptions that are thrown by the methods of the [RoleEntryPoint] class, which includes the [OnStart], [OnStop], and [Run] methods, are unhandled exceptions. If an unhandled exception occurs in one of these methods, the role will recycle. If the role is recycling repeatedly, it may be throwing an unhandled exception each time it tries to start. ++Any exceptions thrown by the methods of the [RoleEntryPoint] class, which includes the [OnStart], [OnStop], and [Run] methods, are unhandled exceptions. If an unhandled exception occurs in one of these methods, the role recycles. If the role is recycling repeatedly, it may be throwing an unhandled exception each time it tries to start. ## Role returns from Run method+ The [Run] method is intended to run indefinitely. If your code overrides the [Run] method, it should sleep indefinitely. If the [Run] method returns, the role recycles.
## Incorrect DiagnosticsConnectionString setting+ If your application uses Azure Diagnostics, your service configuration file must specify the `DiagnosticsConnectionString` configuration setting. This setting should specify an HTTPS connection to your storage account in Azure. -To ensure that your `DiagnosticsConnectionString` setting is correct before you deploy your application package to Azure, verify the following: +To ensure that your `DiagnosticsConnectionString` setting is correct before you deploy your application package to Azure, verify the following statements are true: * The `DiagnosticsConnectionString` setting points to a valid storage account in Azure. - By default, this setting points to the emulated storage account, so you must explicitly change this setting before you deploy your application package. If you do not change this setting, an exception is thrown when the role instance attempts to start the diagnostic monitor. This may cause the role instance to recycle indefinitely. + By default, this setting points to the emulated storage account, so you must explicitly change this setting before you deploy your application package. If you don't change this setting, an exception is thrown when the role instance attempts to start the diagnostic monitor. This event may cause the role instance to recycle indefinitely. * The connection string is specified in the following [format](../storage/common/storage-configure-connection-string.md). (The protocol must be specified as HTTPS.) Replace *MyAccountName* with the name of your storage account, and *MyAccountKey* with your access key: ```console DefaultEndpointsProtocol=https;AccountName=MyAccountName;AccountKey=MyAccountKey ``` - If you are developing your application by using Azure Tools for Microsoft Visual Studio, you can use the property pages to set this value. + If you're developing your application by using Azure Tools for Microsoft Visual Studio, you can use the property pages to set this value.
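Assuming the classic service configuration schema, the setting might look like the following sketch in ServiceConfiguration.cscfg. The setting name follows this article; verify it against your SDK version, as newer SDKs use a plugin-prefixed name:

```xml
<ConfigurationSettings>
  <Setting name="DiagnosticsConnectionString"
           value="DefaultEndpointsProtocol=https;AccountName=MyAccountName;AccountKey=MyAccountKey" />
</ConfigurationSettings>
```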
++## Exported certificate doesn't include private key -## Exported certificate does not include private key -To run a web role under TLS, you must ensure that your exported management certificate includes the private key. If you use the *Windows Certificate Manager* to export the certificate, be sure to select **Yes** for the **Export the private key** option. The certificate must be exported in the PFX format, which is the only format currently supported. +To run a web role under Transport Layer Security (TLS), you must ensure that your exported management certificate includes the private key. If you use the *Windows Certificate Manager* to export the certificate, be sure to select **Yes** for the **Export the private key** option. The certificate must be exported in the .pfx format, which is the only format currently supported. ## Next steps+ View more [troubleshooting articles](../index.yml?product=cloud-services&tag=top-support-issue) for cloud services. View more role recycling scenarios at [Kevin Williamson's blog series](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data). |
cloud-services | Cloud Services Troubleshoot Constrained Allocation Failed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md | -In this article, you'll troubleshoot allocation failures where Azure Cloud services (classic) can't deploy because of allocation constraints. +In this article, you troubleshoot allocation failures where Azure Cloud services (classic) can't deploy because of allocation constraints. When you deploy instances to a Cloud service (classic) or add new web or worker role instances, Microsoft Azure allocates compute resources. In Azure portal, navigate to your Cloud service (classic) and in the sidebar sel ![Image shows the Operation log (classic) blade.](./media/cloud-services-troubleshoot-constrained-allocation-failed/cloud-services-troubleshoot-allocation-logs.png) -When you're inspecting the logs of your Cloud service (classic), you'll see the following exception: +When you inspect the logs of your Cloud service (classic), you see the following exception: |Exception Type |Error Message | |||-|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.| +|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. 
The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there's an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Retry later or try reducing the virtual machine (VM) size or number of role instances. Alternatively, if possible, remove the constraints or try deploying to a different region.| ## Cause When the first instance is deployed to a Cloud service (in either staging or production), that Cloud service gets pinned to a cluster. -Over time, the resources in this cluster may become fully utilized. If a Cloud service (classic) makes an allocation request for more resources when insufficient resources are available in the pinned cluster, the request will result in an allocation failure. For more information, see the [allocation failure common issues](cloud-services-allocation-failures.md#common-issues). +Over time, the resources in this cluster may become fully utilized. If a Cloud service (classic) makes an allocation request for more resources when insufficient resources are available in the pinned cluster, the request results in an allocation failure. For more information, see the [allocation failure common issues](cloud-services-allocation-failures.md#common-issues). ## Solution -Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster. +Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster. When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*). 
For more allocation failure solutions and background information: > [!div class="nextstepaction"] > [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md) -If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. +If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. |
cloud-services | Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md | Title: Default TEMP folder size is too small for a role | Microsoft Docs description: A cloud service role has a limited amount of space for the TEMP folder. This article provides some suggestions on how to avoid running out of space. Previously updated : 02/21/2023 Last updated : 07/24/2024 The default temporary directory of a cloud service worker or web role has a maxi [!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] ## Why do I run out of space?-The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data that is stored in this directory is not persisted across the lifecycle of the cloud service; if the role instances in a cloud service are recycled, the directory is cleaned. +The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data stored in this directory isn't persisted across the lifecycle of the cloud service. If the role instances in a cloud service are recycled, the directory is cleaned. ## Suggestion to fix the problem Implement one of the following alternatives: |
cloud-services | Cloud Services Troubleshoot Deployment Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md | Title: Troubleshoot cloud service (classic) deployment problems | Microsoft Docs description: There are a few common problems you may run into when deploying a cloud service to Azure. This article provides solutions to some of them. Previously updated : 02/21/2023 Last updated : 07/24/2024 When you deploy a cloud service application package to Azure, you can obtain inf You can find the **Properties** pane as follows: -* In the Azure portal, click the deployment of your cloud service, click **All settings**, and then click **Properties**. +* In the Azure portal, choose the deployment of your cloud service, select **All settings**, and then select **Properties**. > [!NOTE] > You can copy the contents of the **Properties** pane to the clipboard by clicking the icon in the upper-right corner of the pane. You can find the **Properties** pane as follows: [!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] -## Problem: I cannot access my website, but my deployment is started and all role instances are ready -The website URL link shown in the portal does not include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website. +## Problem: I can't access my website, but my deployment is started and all role instances are ready +The website URL link shown in the portal doesn't include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website. -1. In the Azure portal, click the deployment of your cloud service. +1. In the Azure portal, choose the deployment of your cloud service. 2. 
In the **Properties** pane of the Azure portal, check the ports for the role instances (under **Input Endpoints**).-3. If the port is not 80, add the correct port value to the URL when you access the application. To specify a non-default port, type the URL, followed by a colon (:), followed by the port number, with no spaces. +3. If the port isn't 80, add the correct port value to the URL when you access the application. To specify a nondefault port, type the URL, followed by a colon (:), followed by the port number, with no spaces. ## Problem: My role instances recycled without me doing anything-Service healing occurs automatically when Azure detects problem nodes and therefore moves role instances to new nodes. When this occurs, you might see your role instances recycling automatically. To find out if service healing occurred: +Service healing occurs automatically when Azure detects problem nodes and therefore moves role instances to new nodes. When these moves occur, you might see your role instances recycling automatically. To find out if service healing occurred: -1. In the Azure portal, click the deployment of your cloud service. +1. In the Azure portal, choose the deployment of your cloud service. 2. In the **Properties** pane of the Azure portal, review the information and determine whether service healing occurred during the time that you observed the roles recycling. -Roles will also recycle roughly once per month during host-OS and guest-OS updates. +Roles recycle roughly once per month during host-OS and guest-OS updates. For more information, see the blog post [Role Instance Restarts Due to OS Upgrades](/archive/blogs/kwill/role-instance-restarts-due-to-os-upgrades) -## Problem: I cannot do a VIP swap and receive an error -A VIP swap is not allowed if a deployment update is in progress. 
Deployment updates can occur automatically when: +## Problem: I can't do a VIP swap and receive an error +A VIP swap isn't allowed if a deployment update is in progress. Deployment updates can occur automatically when: -* A new guest operating system is available and you are configured for automatic updates. +* A new guest operating system is available and you're configured for automatic updates. * Service healing occurs. To find out if an automatic update is preventing you from doing a VIP swap: -1. In the Azure portal, click the deployment of your cloud service. -2. In the **Properties** pane of the Azure portal, look at the value of **Status**. If it is **Ready**, then check **Last operation** to see if one recently happened that might prevent the VIP swap. +1. In the Azure portal, choose the deployment of your cloud service. +2. In the **Properties** pane of the Azure portal, look at the value of **Status**. If it's **Ready**, then check **Last operation** to see if one recently happened that might prevent the VIP swap. 3. Repeat steps 1 and 2 for the production deployment. 4. If an automatic update is in process, wait for it to finish before trying to do the VIP swap. ## Problem: A role instance is looping between Started, Initializing, Busy, and Stopped-This condition could indicate a problem with your application code, package, or configuration file. In that case, you should be able to see the status changing every few minutes and the Azure portal may say something like **Recycling**, **Busy**, or **Initializing**. This indicates that there is something wrong with the application that is keeping the role instance from running. +This condition could indicate a problem with your application code, package, or configuration file. In that case, you should be able to see the status changing every few minutes and the Azure portal may say something like **Recycling**, **Busy**, or **Initializing**.
This fluctuation of status indicates that there's something wrong with the application that is keeping the role instance from running. For more information on how to troubleshoot for this problem, see the blog post [Azure PaaS Compute Diagnostics Data](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data) and [Common issues that cause roles to recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md). ## Problem: My application stopped working-1. In the Azure portal, click the role instance. +1. In the Azure portal, choose the role instance. 2. In the **Properties** pane of the Azure portal, consider the following conditions to resolve your problem:- * If the role instance has recently stopped (you can check the value of **Abort count**), the deployment could be updating. Wait to see if the role instance resumes functioning on its own. + * If the role instance recently stopped (you can check the value of **Abort count**), the deployment could be updating. Wait to see if the role instance resumes functioning on its own. * If the role instance is **Busy**, check your application code to see if the [StatusCheck](/previous-versions/azure/reference/ee758135(v=azure.100)) event is handled. You might need to add or fix some code that handles this event. * Go through the diagnostic data and troubleshooting scenarios in the blog post [Azure PaaS Compute Diagnostics Data](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data). |
cloud-services | Cloud Services Troubleshoot Fabric Internal Server Error | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md | -In this article, you'll troubleshoot allocation failures where the fabric controller cannot allocate when deploying an Azure Cloud service (classic). +In this article, you troubleshoot allocation failures where the fabric controller can't allocate resources when you deploy an Azure Cloud service (classic). When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. In Azure portal, navigate to your Cloud service (classic) and in the sidebar sel ![Image shows the Operation log (classic) blade.](./media/cloud-services-troubleshoot-fabric-internal-server-error/cloud-services-troubleshoot-allocation-logs.png) -When you're inspecting the logs of your Cloud service (classic), you'll see the following exception: +When you inspect the logs of your Cloud service (classic), you see the following exception: |Exception |Error Message | ||| Follow the guidance for allocation failures in the following scenarios. ### Not pinned to a cluster -The first time you deploy a Cloud service (classic), the cluster hasn't been selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because: +The first time you deploy a Cloud service (classic), a cluster hasn't been selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because: -- You've selected a particular size that isn't available in the region.+- You selected a particular size that isn't available in the region. - The combination of sizes that are needed across different roles isn't available in the region. When you experience an allocation error in this scenario, the recommended course of action is to check the available sizes in the region and change the size you previously specified.
When you experience an allocation error in this scenario, the recommended course ### Pinned to a cluster -Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster. +Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster. When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*). For more allocation failure solutions and background information: > [!div class="nextstepaction"] > [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md) -If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. +If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. |
cloud-services | Cloud Services Troubleshoot Location Not Found For Role Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md | In the [Azure portal](https://portal.azure.com/), navigate to your Cloud service :::image type="content" source="./media/cloud-services-troubleshoot-location-not-found-for-role-size/cloud-services-troubleshoot-allocation-logs.png" alt-text="Screenshot shows the Operation log (classic) pane."::: -When you inspect the logs of your Cloud service (classic), you'll see the following exception: +When you inspect the logs of your Cloud service (classic), you see the following exception: |Exception Type |Error Message | ||| When you inspect the logs of your Cloud service (classic), you'll see the follow ## Cause -There's a capacity issue with the region or cluster that you're deploying to. The `LocationNotFoundForRoleSize` exception occurs when the resource SKU you've selected, the virtual machine size, isn't available for the region specified. +There's a capacity issue with the region or cluster that you're deploying to. The `LocationNotFoundForRoleSize` exception occurs when the resource SKU you selected, the virtual machine size, isn't available for the region specified. ## Find SKUs in a region -In this scenario, you should select a different region or SKU for your Cloud service (classic) deployment. Before you deploy or upgrade your Cloud service (classic), determine which SKUs are available in a region or availability zone. Follow the [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes below. +In this scenario, you should select a different region or SKU for your Cloud service (classic) deployment. Before you deploy or upgrade your Cloud service (classic), determine which SKUs are available in a region or availability zone. 
Use the following [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes. ### List SKUs in region using Azure CLI You can use the [Resource Skus - List](/rest/api/compute/resourceskus/list) oper ## Next steps -For more allocation failure solutions and to better understand how they're generated: +For more allocation failure solutions and to better understand how allocation failures occur: > [!div class="nextstepaction"] > [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md) |
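As a concrete sketch of the Azure CLI process the article references, a single `az vm list-skus` call scoped to one region is usually enough to check size availability before deploying; the region name here is only a placeholder:

```shell
# List the VM SKUs available in one region; replace eastus with your region.
az vm list-skus --location eastus --output table
```

The PowerShell (`Get-AzComputeResourceSku`) and REST ([Resource Skus - List](/rest/api/compute/resourceskus/list)) processes return the same SKU data.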
cloud-services | Cloud Services Troubleshoot Overconstrained Allocation Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md | -In this article, you'll troubleshoot over constrained allocation failures that prevent deployment of Azure Cloud Services (classic). +In this article, you troubleshoot overconstrained allocation failures that prevent deployment of Azure Cloud Services (classic). When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors during these operations even before you reac |Exception Type |Error Message | |||-|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.| +|OverconstrainedAllocationRequest |The virtual machine (VM) size (or combination of VM sizes) required by this deployment can't be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings. Also try deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group. You can try deploying to a different region altogether.| ## Cause Follow the guidance for allocation failures in the following scenarios. ### Not pinned to a cluster -The first time you deploy a Cloud service (classic), the cluster hasn't been selected yet, so the cloud service isn't *pinned*. +The first time you deploy a Cloud service (classic), a cluster hasn't been selected yet, so the cloud service isn't *pinned*.
Azure may have a deployment failure because: -- You've selected a particular size that isn't available in the region.+- You selected a particular size that isn't available in the region. - The combination of sizes that are needed across different roles isn't available in the region. When you experience an allocation error in this scenario, the recommended course of action is to check the available sizes in the region and change the size you previously specified. When you experience an allocation error in this scenario, the recommended course ### Pinned to a cluster -Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster. +Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster. When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*). For more allocation failure solutions and background information: > [!div class="nextstepaction"] > [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md) -If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. +If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. 
To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*. |
cloud-services | Cloud Services Troubleshoot Roles That Fail Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md | Title: Troubleshoot roles that fail to start | Microsoft Docs description: Here are some common reasons why a Cloud Service role may fail to start. Solutions to these problems are also provided. Previously updated : 02/21/2023 Last updated : 07/24/2024 Here are some common problems and solutions related to Azure Cloud Services role [!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] ## Missing DLLs or dependencies-Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing DLLs or assemblies. +Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing dynamic link libraries (DLLs) or assemblies. Symptoms of missing DLLs or assemblies can be: * Your role instance is cycling through **Initializing**, **Busy**, and **Stopping** states.-* Your role instance has moved to **Ready** but if you navigate to your web application, the page does not appear. +* Your role instance moved to **Ready**, but when you navigate to your web application, the page doesn't appear. There are several recommended methods for investigating these issues. ## Diagnose missing DLL issues in a web role-When you navigate to a website that is deployed in a web role, and the browser displays a server error similar to the following, it may indicate that a DLL is missing. +When you navigate to a website deployed in a web role and the browser displays a server error similar to the following, it may indicate that a DLL is missing. ![Server Error in '/' Application.](./media/cloud-services-troubleshoot-roles-that-fail-start/ic503388.png) To view more complete errors without using Remote Desktop: 4. Save the file. 5.
Repackage and redeploy the service. -Once the service is redeployed, you will see an error message with the name of the missing assembly or DLL. +Once the service redeploys, you see an error message with the name of the missing assembly or DLL. ## Diagnose issues by viewing the error remotely You can use Remote Desktop to access the role and view more complete error information remotely. Use the following steps to view the errors by using Remote Desktop: You can use Remote Desktop to access the role and view more complete error infor 9. Open Internet Explorer. 10. Type the address and the name of the web application. For example, `http://<IPV4 Address>/default.aspx`. -Navigating to the website will now return more explicit error messages: +Navigating to the website now returns more explicit error messages: * Server Error in '/' Application. * Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. For best results in using this method of diagnosis, you should use a computer or 1. Install the standalone version of the [Azure SDK](https://azure.microsoft.com/downloads/). 2. On the development machine, build the cloud service project. 3. In Windows Explorer, navigate to the bin\debug folder of the cloud service project.-4. Copy the .csx folder and .cscfg file to the computer that you are using to debug the issues. +4. Copy the .csx folder and .cscfg file to the computer you're using to debug the issues. 5. On the clean machine, open an Azure SDK Command Prompt window and type `csrun.exe /devstore:start`. 6. At the command prompt, type `run csrun <path to .csx folder> <path to .cscfg file> /launchBrowser`.-7. When the role starts, you will see detailed error information in Internet Explorer. You can also use standard Windows troubleshooting tools to further diagnose the problem. +7. 
When the role starts, you see detailed error information in Internet Explorer. You can also use standard Windows troubleshooting tools to further diagnose the problem. ## Diagnose issues by using IntelliTrace For worker and web roles that use .NET Framework 4, you can use [IntelliTrace](/visualstudio/debugger/intellitrace), which is available in Microsoft Visual Studio Enterprise. Follow these steps to deploy the service with IntelliTrace enabled: 3. Once the instance starts, open the **Server Explorer**. 4. Expand the **Azure\\Cloud Services** node and locate the deployment. 5. Expand the deployment until you see the role instances. Right-click on one of the instances.-6. Choose **View IntelliTrace logs**. The **IntelliTrace Summary** will open. -7. Locate the exceptions section of the summary. If there are exceptions, the section will be labeled **Exception Data**. +6. Choose **View IntelliTrace logs**. The **IntelliTrace Summary** opens. +7. Locate the exceptions section of the summary. If there are exceptions, the section is labeled **Exception Data**. 8. Expand the **Exception Data** and look for **System.IO.FileNotFoundException** errors similar to the following: ![Exception data, missing file, or assembly](./media/cloud-services-troubleshoot-roles-that-fail-start/ic503390.png) To address missing DLL and assembly errors, follow these steps: 1. Open the solution in Visual Studio. 2. In **Solution Explorer**, open the **References** folder.-3. Click the assembly identified in the error. +3. Select the assembly identified in the error. 4. In the **Properties** pane, locate **Copy Local property** and set the value to **True**. 5. Redeploy the cloud service. -Once you have verified that all errors have been corrected, you can deploy the service without checking the **Enable IntelliTrace for .NET 4 roles** check box. +Once you verify all errors are corrected, you can deploy the service without checking the **Enable IntelliTrace for .NET 4 roles** check box. 
## Next steps View more [troubleshooting articles](../index.yml?product=cloud-services&tag=top-support-issue) for cloud services. |
cloud-services | Cloud Services Update Azure Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-update-azure-service.md | Title: How to update a cloud service (classic) | Microsoft Docs description: Learn how to update cloud services in Azure. Learn how an update on a cloud service proceeds to ensure availability. Previously updated : 02/21/2023 Last updated : 07/24/2024 -Updating a cloud service, including both its roles and guest OS, is a three step process. First, the binaries and configuration files for the new cloud service or OS version must be uploaded. Next, Azure reserves compute and network resources for the cloud service based on the requirements of the new cloud service version. Finally, Azure performs a rolling upgrade to incrementally update the tenant to the new version or guest OS, while preserving your availability. This article discusses the details of this last step – the rolling upgrade. +The process to update a cloud service, including both its roles and guest OS, takes three steps. First, the binaries and configuration files for the new cloud service or OS version must be uploaded. Next, Azure reserves compute and network resources for the cloud service based on the requirements of the new cloud service version. Finally, Azure performs a rolling upgrade to incrementally update the tenant to the new version or guest OS, while preserving your availability. This article discusses the details of this last step – the rolling upgrade. ## Update an Azure Service-Azure organizes your role instances into logical groupings called upgrade domains (UD). Upgrade domains (UD) are logical sets of role instances that are updated as a group. Azure updates a cloud service one UD at a time, which allows instances in other UDs to continue serving traffic. +Azure organizes your role instances into logical groupings called upgrade domains (UD).
Upgrade domains (UD) are logical sets of role instances that are updated as a group. Azure updates a cloud service one UD at a time, which allows instances in other UDs to continue serving traffic. The default number of upgrade domains is 5. You can specify a different number of upgrade domains by including the upgradeDomainCount attribute in the service's definition file (.csdef). For more information about the upgradeDomainCount attribute, see [Azure Cloud Services Definition Schema (.csdef File)](./schema-csdef-file.md). When you perform an in-place update of one or more roles in your service, Azure > [!NOTE] > While the terms **update** and **upgrade** have slightly different meanings in the context of Azure, they can be used interchangeably for the processes and descriptions of the features in this document.-> -> -Your service must define at least two instances of a role for that role to be updated in-place without downtime. If the service consists of only one instance of one role, your service will be unavailable until the in-place update has finished. +Your service must define at least two instances of a role for that role to be updated in-place without downtime. If the service consists of only one instance of one role, your service is unavailable until the in-place update finishes.
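The upgradeDomainCount attribute mentioned above sits on the ServiceDefinition root element of the .csdef file. A minimal sketch; the service name and the count of 3 are illustrative:

```xml
<!-- Illustrative .csdef root element; the name and upgradeDomainCount value are placeholders. -->
<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
                   upgradeDomainCount="3">
  <!-- Web and worker role definitions go here. -->
</ServiceDefinition>
```

With three upgrade domains, roughly a third of the role instances are taken offline at a time during an in-place upgrade.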
-This topic covers the following information about Azure updates: +This article covers the following information about Azure updates: * [Allowed service changes during an update](#AllowedChanges) * [How an upgrade proceeds](#howanupgradeproceeds) This topic covers the following information about Azure updates: ## Allowed service changes during an update The following table shows the allowed changes to a service during an update: -| Changes permitted to hosting, services, and roles | In-place update | Staged (VIP swap) | Delete and re-deploy | +| Changes permitted to hosting, services, and roles | In-place update | Staged (VIP swap) | Delete and redeploy | | | | | | | Operating system version |Yes |Yes |Yes | | .NET trust level |Yes |Yes |Yes | The following table shows the allowed changes to a service during an update: > > -The following items are not supported during an update: +The following items aren't supported during an update: * Changing the name of a role. Remove and then add the role with the new name. * Changing of the Upgrade Domain count. * Decreasing the size of the local resources. -If you are making other updates to your service's definition, such as decreasing the size of local resource, you must perform a VIP swap update instead. For more information, see [Swap Deployment](/previous-versions/azure/reference/ee460814(v=azure.100)). +If you make other updates to your service's definition, such as decreasing the size of local resource, you must perform a VIP swap update instead. For more information, see [Swap Deployment](/previous-versions/azure/reference/ee460814(v=azure.100)). <a name="howanupgradeproceeds"></a> ## How an upgrade proceeds-You can decide whether you want to update all of the roles in your service or a single role in the service. In either case, all instances of each role that is being upgraded and belong to the first upgrade domain are stopped, upgraded, and brought back online. 
Once they are back online, the instances in the second upgrade domain are stopped, upgraded, and brought back online. A cloud service can have at most one upgrade active at a time. The upgrade is always performed against the latest version of the cloud service. +You can decide whether you want to update all of the roles in your service or a single role in the service. In either case, all instances of each role being upgraded that belong to the first upgrade domain are stopped, upgraded, and brought back online. Once they're back online, the instances in the second upgrade domain are stopped, upgraded, and brought back online. A cloud service can have at most one upgrade active at a time. The upgrade is always performed against the latest version of the cloud service. -The following diagram illustrates how the upgrade proceeds if you are upgrading all of the roles in the service: +The following diagram illustrates how the upgrade proceeds if you upgrade all of the roles in the service: ![Upgrade service](media/cloud-services-update-azure-service/IC345879.png "Upgrade service") -This next diagram illustrates how the update proceeds if you are upgrading only a single role: +This next diagram illustrates how the update proceeds if you upgrade only a single role: ![Upgrade role](media/cloud-services-update-azure-service/IC345880.png "Upgrade role") -During an automatic update, the Azure Fabric Controller periodically evaluates the health of the cloud service to determine when it's safe to walk the next UD.
This health evaluation is performed on a per-role basis and considers only instances in the latest version (that is, instances from UDs that were already walked). It verifies that, for each role, a minimum number of role instances achieved a satisfactory terminal state. ### Role Instance Start Timeout-The Fabric Controller will wait 30 minutes for each role instance to reach a Started state. If the timeout duration elapses, the Fabric Controller will continue walking to the next role instance. +The Fabric Controller waits 30 minutes for each role instance to reach a Started state. If the timeout duration elapses, the Fabric Controller continues walking to the next role instance. ### Impact to drive data during Cloud Service upgrades -When upgrading a service from a single instance to multiple instances your service will be brought down while the upgrade is performed due to the way Azure upgrades services. The service level agreement guaranteeing service availability only applies to services that are deployed with more than one instance. The following list describes how the data on each drive is affected by each Azure service upgrade scenario: +When you upgrade a service from a single instance to multiple instances, Azure brings your service down while the upgrade is performed. The service level agreement guaranteeing service availability only applies to services that are deployed with more than one instance. The following list describes how each Azure service upgrade scenario affects the data on each drive: |Scenario|C Drive|D Drive|E Drive| |--|-|-|-|-|VM reboot|Preserved|Preserved|Preserved| +|Virtual machine (VM) reboot|Preserved|Preserved|Preserved| |Portal reboot|Preserved|Preserved|Destroyed| |Portal reimage|Preserved|Destroyed|Destroyed| |In-Place Upgrade|Preserved|Preserved|Destroyed| |Node migration|Destroyed|Destroyed|Destroyed| -Note that, in the above list, the E: drive represents the role's root drive, and should not be hard-coded.
Instead, use the **%RoleRoot%** environment variable to represent the drive. +In the preceding list, the E: drive represents the role's root drive, and shouldn't be hard-coded. Instead, use the **%RoleRoot%** environment variable to represent the drive. To minimize the downtime when upgrading a single-instance service, deploy a new multi-instance service to the staging server and perform a VIP swap. <a name="RollbackofanUpdate"></a> ## Rollback of an update-Azure provides flexibility in managing services during an update by letting you initiate additional operations on a service, after the initial update request is accepted by the Azure Fabric Controller. A rollback can only be performed when an update (configuration change) or upgrade is in the **in progress** state on the deployment. An update or upgrade is considered to be in-progress as long as there is at least one instance of the service which has not yet been updated to the new version. To test whether a rollback is allowed, check the value of the RollbackAllowed flag, returned by [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations, is set to true. +Azure provides flexibility in managing services during an update by letting you initiate more operations on a service, after the Azure Fabric Controller accepts the initial update request. A rollback can only be performed when an update (configuration change) or upgrade is in the **in progress** state on the deployment. An update or upgrade is considered to be in-progress as long as there is at least one instance of the service that isn't yet updated to the new version. To test whether a rollback is allowed, check that the RollbackAllowed flag is set to true.
[Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations return the RollbackAllowed flag for your reference.

> [!NOTE]
> It only makes sense to call Rollback on an **in-place** update or upgrade because VIP swap upgrades involve replacing one entire running instance of your service with another.

Rollback of an in-progress update has the following effects on the deployment:

* Any role instances that weren't yet updated or upgraded to the new version aren't updated or upgraded, because those instances are already running the target version of the service.
* Any role instances that were already updated or upgraded to the new version of the service package (\*.cspkg) file or the service configuration (\*.cscfg) file (or both files) are reverted to the preupgrade version of these files.

The following features provide this functionality:
* The [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation, which can be called on a configuration update (triggered by calling [Change Deployment Configuration](/previous-versions/azure/reference/ee460809(v=azure.100))) or an upgrade (triggered by calling [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100))) as long as there is at least one instance in the service that hasn't yet been updated to the new version.
* The Locked element and the RollbackAllowed element, which are returned as part of the response body of the [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations:

  1. The Locked element allows you to detect when a mutating operation can be invoked on a given deployment.
  2. The RollbackAllowed element allows you to detect when the [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation can be called on a given deployment.

  To perform a rollback, you don't have to check both the Locked and the RollbackAllowed elements. It suffices to confirm that RollbackAllowed is set to true. These elements are only returned if these methods are invoked by using the request header set to "x-ms-version: 2011-10-01" or a later version. For more information about versioning headers, see [the versioning of the classic deployment model](/previous-versions/azure/gg592580(v=azure.100)).
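As a minimal sketch of checking these elements, the following helper inspects a Get Deployment response body that has already been retrieved. The Locked and RollbackAllowed element names come from this article; the namespace-free XML shape is a simplifying assumption (the real response is namespaced), so treat this as illustrative rather than production code.

```python
import xml.etree.ElementTree as ET

def rollback_allowed(deployment_xml: str) -> bool:
    """Return True when the deployment reports RollbackAllowed as true.

    Assumes a simplified, namespace-free Get Deployment response body.
    Per the article, confirming RollbackAllowed alone is sufficient;
    the Locked element doesn't also need to be checked.
    """
    root = ET.fromstring(deployment_xml)
    element = root.find("RollbackAllowed")
    return element is not None and element.text.strip().lower() == "true"

# Hypothetical response fragment for illustration only.
sample = "<Deployment><Locked>true</Locked><RollbackAllowed>true</RollbackAllowed></Deployment>"
print(rollback_allowed(sample))  # True
```

Remember that the elements only appear in the response when the request was made with the "x-ms-version: 2011-10-01" (or later) header.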
A rollback of an update or upgrade isn't supported in the following situations:

* Reduction in local resources - If the update increases the local resources for a role, the Azure platform doesn't allow rolling back.
* Quota limitations - If the update was a scale-down operation, you may no longer have sufficient compute quota to complete the rollback operation. Each Azure subscription has a quota associated with it. The quota specifies the maximum number of cores that all hosted services belonging to that subscription can consume. If performing a rollback of a given update would put your subscription over quota, the rollback isn't enabled.
* Race condition - If the initial update completes, a rollback isn't possible.
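The three restrictions above can be expressed as a simple guard. This is an illustrative sketch only; the function and its parameters are hypothetical, and Azure itself reports the effective result through the RollbackAllowed element.

```python
def can_roll_back(increases_local_resources: bool,
                  cores_after_rollback: int,
                  subscription_core_quota: int,
                  update_completed: bool) -> bool:
    """Mirror the documented restrictions on rolling back an update.

    All parameters are hypothetical inputs for illustration.
    """
    if increases_local_resources:
        # Reduction in local resources: rolling back isn't allowed.
        return False
    if cores_after_rollback > subscription_core_quota:
        # Quota limitations: rollback would put the subscription over quota.
        return False
    if update_completed:
        # Race condition: the initial update already finished.
        return False
    return True
```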
An example of when the rollback of an update might be useful is if you use the [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) operation in manual mode to control the rate at which a major in-place upgrade rolls out to your Azure hosted service.

During the rollout of the upgrade, you call [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) in manual mode and begin to walk upgrade domains. If at some point, as you monitor the upgrade, you note that some role instances in the first upgrade domains are unresponsive, you can call the [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation on the deployment. This operation leaves untouched the instances that weren't yet upgraded and rolls back upgraded instances to the previous service package and configuration.

<a name="multiplemutatingoperations"></a>

## Initiating multiple mutating operations on an ongoing deployment

In some cases, you may want to initiate multiple simultaneous mutating operations on an ongoing deployment. For example, you may perform a service update and, while the update rolls out across your service, you want to make some change, like rolling back the update, applying a different update, or even deleting the deployment. A case in which this scenario might arise is if a service upgrade contains buggy code that causes an upgraded role instance to repeatedly crash. In this case, the Azure Fabric Controller can't make progress in applying that upgrade because an insufficient number of instances in the upgraded domain are healthy. This state is referred to as a *stuck deployment*. You can unstick the deployment by rolling back the update or applying a fresh update over top of the failing one.

Once the Azure Fabric Controller receives the initial request to update or upgrade the service, you can start subsequent mutating operations. That is, you don't have to wait for the initial operation to complete before you can start another mutating operation.

Initiating a second update operation while the first update is ongoing plays out similarly to the rollback operation. If the second update is in automatic mode, the first upgrade domain upgrades immediately, possibly leading to instances from multiple upgrade domains being offline at the same time.

The mutating operations are as follows: [Change Deployment Configuration](/previous-versions/azure/reference/ee460809(v=azure.100)), [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)), [Update Deployment Status](/previous-versions/azure/reference/ee460808(v=azure.100)), [Delete Deployment](/previous-versions/azure/reference/ee460815(v=azure.100)), and [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)).

Two operations, [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)), return the Locked flag. You can examine the Locked flag to determine whether you can invoke a mutating operation on a given deployment.
To call the version of these methods that returns the Locked flag, you must set the request header to "x-ms-version: 2011-10-01" or later. For more information about versioning headers, see [the versioning of the classic deployment model](/previous-versions/azure/gg592580(v=azure.100)).

<a name="distributiondfroles"></a>

## Distribution of roles across upgrade domains

Azure distributes instances of a role evenly across a set number of upgrade domains, which can be configured as part of the service definition (.csdef) file. The maximum number of upgrade domains is 20, and the default is 5. For more information about how to modify the service definition file, see [Azure Service Definition Schema (.csdef File)](cloud-services-model-and-package.md#csdef).

For example, if your role has 10 instances, by default each upgrade domain contains two instances. If your role has 14 instances, then four of the upgrade domains contain three instances, and a fifth domain contains two.

Upgrade domains are identified with a zero-based index: the first upgrade domain has an ID of 0, the second upgrade domain has an ID of 1, and so on.

The following diagram illustrates how the roles in a service containing two roles are distributed when the service defines two upgrade domains. The service is running eight instances of the web role and nine instances of the worker role.

![Distribution of Upgrade Domains](media/cloud-services-update-azure-service/IC345533.png "Distribution of Upgrade Domains") |
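The even distribution described above can be sketched as a short calculation. This helper is illustrative, not an Azure API; the assumption that earlier domains receive the remainder is inferred from the 14-instance example in the text.

```python
def instances_per_upgrade_domain(instance_count: int, upgrade_domains: int = 5):
    """Spread role instances evenly across upgrade domains (default 5, max 20).

    Earlier domains receive the remainder, matching the example in the text:
    14 instances across 5 domains -> [3, 3, 3, 3, 2].
    """
    upgrade_domains = min(upgrade_domains, 20)  # documented maximum is 20
    base, extra = divmod(instance_count, upgrade_domains)
    # Upgrade domains use a zero-based index: UD 0, UD 1, and so on.
    return [base + 1 if ud < extra else base for ud in range(upgrade_domains)]

print(instances_per_upgrade_domain(14))  # [3, 3, 3, 3, 2]
print(instances_per_upgrade_domain(10))  # [2, 2, 2, 2, 2]
```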
cloud-services | Cloud Services Workflow Process | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-workflow-process.md | Title: Workflow of Microsoft Azure Virtual Machine (VM) Architecture | Microsoft Docs
description: This article provides an overview of the workflow processes used when you deploy a service.
Last updated : 07/24/2024

# Workflow of Microsoft Azure classic Virtual Machine (VM) Architecture

[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]

The following diagram presents the architecture of Azure resources.

## Workflow basics

**A**. RDFE / FFE is the communication path from the user to the fabric. RDFE (RedDog Front End) is the publicly exposed API that is the front end to the Management Portal and the classic deployment model API, such as Visual Studio, Azure MMC, and so on. All requests from the user go through RDFE. FFE (Fabric Front End) is the layer that translates requests from RDFE into fabric commands. All requests from RDFE go through the FFE to reach the fabric controllers.

**B**. The fabric controller is responsible for maintaining and monitoring all the resources in the data center. It communicates with fabric host agents on the fabric OS, sending information such as the Guest OS version, service package, service configuration, and service state.
**C**. The Host Agent lives on the Host OS and is responsible for setting up the Guest OS. It also communicates with the Guest Agent (WindowsAzureGuestAgent) to update the role toward an intended goal state and performs heartbeat checks with the Guest Agent. If the Host Agent doesn't receive a heartbeat response for 10 minutes, it restarts the Guest OS.

**C2**. WaAppAgent is responsible for installing, configuring, and updating WindowsAzureGuestAgent.exe.

**D**. WindowsAzureGuestAgent is responsible for the following tasks:

* Configuring the Guest OS, including firewall, ACLs, LocalStorage resources, service package and configuration, and certificates.
* Setting up the SID for the user account that the role runs under.
* Communicating the role status to the fabric.
* Starting WaHostBootstrapper and monitoring it to make sure that the role is in goal state.

**E**. WaHostBootstrapper is responsible for:
* Reading the role configuration, and starting all the appropriate tasks and processes to configure and run the role.
* Monitoring all its child processes.
* Raising the StatusCheck event on the role host process.

**F**. IISConfigurator runs if the role is configured as a Full IIS web role. It's responsible for:

* Starting the standard IIS services
* Configuring the rewrite module in the web configuration
* Setting up the AppPool for the configured role in the service model
* Setting up IIS logging to point to the DiagnosticStore LocalStorage folder
* Configuring permissions and ACLs
* Setting up the website, which resides in %roleroot%:\sitesroot\0; the AppPool points to this location to run IIS.

**G**. The role model defines startup tasks, and WaHostBootstrapper starts them. Startup tasks can be configured to run in the background asynchronously; the host bootstrapper starts the startup task and then continues on to other startup tasks.
Startup tasks can also be configured to run in Simple (default) mode. In Simple mode, the host bootstrapper waits for the startup task to finish running and return a success (0) exit code before continuing to the next startup task.

**H**. These tasks are part of the SDK and are defined as plugins in the role's service definition (.csdef). When expanded into startup tasks, the **DiagnosticsAgent** and **RemoteAccessAgent** are unique in that they each define two startup tasks, one regular and one that has a **/blockStartup** parameter. The normal startup task is defined as a Background startup task so that it can run in the background while the role itself is running. The **/blockStartup** startup task is defined as a Simple startup task so that WaHostBootstrapper waits for it to exit before continuing. The **/blockStartup** task waits for the regular task to finish initializing, and then it exits and allows the host bootstrapper to continue.
This process is done so that diagnostics and RDP access can be configured before the role processes start, which is done through the /blockStartup task. This process also allows diagnostics and RDP access to continue running after the host bootstrapper finishes the startup tasks, which is done through the Normal task.

**I**. WaWorkerHost is the standard host process for normal worker roles. This host process hosts all the role's DLLs and entry point code, such as OnStart and Run.

**J**. WaIISHost is the host process for role entry point code for web roles that use Full IIS. This process loads the first DLL found that uses the **RoleEntryPoint** class and executes the code from this class (OnStart, Run, OnStop). Any **RoleEnvironment** events (such as StatusCheck and Changed) that are created in the RoleEntryPoint class are raised in this process.

**K**. W3WP is the standard IIS worker process used if the role is configured to use Full IIS. This process runs the AppPool configured from IISConfigurator. Any RoleEnvironment events (such as StatusCheck and Changed) that are created here are raised in this process. RoleEnvironment events fire in both locations (WaIISHost and w3wp.exe) if you subscribe to events in both processes.

## Workflow processes

1. A user makes a request, such as uploading ".cspkg" and ".cscfg" files, telling a resource to stop, or making a configuration change. Requests can be made through the Azure portal or tools that use the classic deployment model API, such as the Visual Studio Publish feature. This request goes to RDFE to do all the subscription-related work and then communicate the request to FFE. The rest of these workflow steps are to deploy a new package and start it.
2. FFE finds the correct machine pool (based on customer input, such as affinity group or geographical location, plus input from the fabric, such as machine availability) and communicates with the master fabric controller in that machine pool.
3. The fabric controller finds a host that has available CPU cores (or spins up a new host). The service package and configuration are copied to the host, and the fabric controller communicates with the host agent on the host OS to deploy the package (configure DIPs, ports, guest OS, and so on).
4. The host agent starts the Guest OS and communicates with the guest agent (WindowsAzureGuestAgent). The host sends heartbeats to the guest to make sure that the role is working towards its goal state.
5. WindowsAzureGuestAgent sets up the guest OS (firewall, ACLs, LocalStorage, and so on), copies a new XML configuration file to c:\Config, and then starts the WaHostBootstrapper process.
6. For Full IIS web roles, WaHostBootstrapper starts IISConfigurator and tells it to delete any existing AppPools for the web role from IIS.
7. WaHostBootstrapper reads the **Startup** tasks from E:\RoleModel.xml and begins executing startup tasks. WaHostBootstrapper waits until all Simple startup tasks finish and return a success message.
8. For Full IIS web roles, WaHostBootstrapper tells IISConfigurator to configure the IIS AppPool and points the site to `E:\Sitesroot\<index>`, where `<index>` is a zero-based index into the number of `<Sites>` elements defined for the service.
9. WaHostBootstrapper starts the host process depending on the role type:
   1. **Worker Role**: WaWorkerHost.exe is started. WaHostBootstrapper executes the OnStart() method. After it returns, WaHostBootstrapper starts to execute the Run() method, and then simultaneously marks the role as Ready and puts it into the load balancer rotation (if InputEndpoints are defined). WaHostBootstrapper then goes into a loop of checking the role status.
   2. **Full IIS Web Role**: WaIISHost is started. WaHostBootstrapper executes the OnStart() method. After it returns, it starts to execute the Run() method, and then simultaneously marks the role as Ready and puts it into the load balancer rotation. WaHostBootstrapper then goes into a loop of checking the role status.
10. Incoming web requests to a Full IIS web role trigger IIS to start the W3WP process and serve the request, the same as it would in an on-premises IIS environment.

## Log File locations

**WindowsAzureGuestAgent**

- C:\Logs\AppAgentRuntime.Log. This log contains changes to the service, including starts, stops, and new configurations. If the service doesn't change, you can expect to see large gaps of time in this log file.
- C:\Logs\WaAppAgent.Log. This log contains status updates and heartbeat notifications and is updated every 2-3 seconds. This log contains a historic view of the status of the instance and tells you when the instance wasn't in the Ready state.

**WaHostBootstrapper** |
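The heartbeat behavior described in part C of the workflow article above can be sketched as a simple time check. The 10-minute threshold comes from the article; the function, its parameters, and the epoch-based timing model are illustrative assumptions.

```python
HEARTBEAT_TIMEOUT_SECONDS = 10 * 60  # Host Agent restarts the Guest OS after 10 minutes

def guest_os_needs_restart(last_heartbeat_epoch: float, now_epoch: float) -> bool:
    """Return True when no heartbeat response arrived within the timeout window.

    A simplified model of the Host Agent's check; real timing details are
    internal to the platform.
    """
    return (now_epoch - last_heartbeat_epoch) > HEARTBEAT_TIMEOUT_SECONDS
```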
cloud-services | Diagnostics Extension To Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-extension-to-storage.md | Diagnostic data isn't permanently stored unless you transfer it to the Microsoft Azure Storage Emulator or to Azure Storage. Once in storage, it can be viewed with one of several available tools.

## Specify a storage account

You specify the storage account that you want to use in the ServiceConfiguration.cscfg file. The account information is defined as a connection string in a configuration setting. The following example shows the default connection string created for a new Cloud Service project in Visual Studio:

Depending on the type of diagnostic data that is being collected, Azure Diagnostics uses either Blob storage or Table storage.

## Transfer diagnostic data

For SDK 2.5 and later, the request to transfer diagnostic data can occur through the configuration file. You can transfer diagnostic data at scheduled intervals as specified in the configuration.

For SDK 2.4 and earlier, you can request to transfer the diagnostic data programmatically and through the configuration file. The programmatic approach also allows you to do on-demand transfers.

> [!IMPORTANT]
> When you transfer diagnostic data to an Azure storage account, you incur costs for the storage resources that your diagnostic data uses.

Log data is stored in either Blob or Table storage with the following names:

**Tables**

* **WadLogsTable** - Logs written in code using the trace listener.
* **WADDiagnosticInfrastructureLogsTable** - Diagnostic monitor and configuration changes.
* **WADDirectoriesTable** - Directories that the diagnostic monitor is monitoring. These directories include IIS logs, IIS failed request logs, and custom directories. The location of the blob log file is specified in the Container field, and the name of the blob is in the RelativePath field. The AbsolutePath field indicates the location and name of the file as it existed on the Azure virtual machine.
* **WADPerformanceCountersTable** - Performance counters.
* **WADWindowsEventLogsTable** - Windows Event logs.

**Blobs**

* **wad-control-container** - (Only for SDK 2.4 and earlier) Contains the XML configuration files that control the Azure diagnostics.
* **wad-iis-failedreqlogfiles** - Contains information from IIS Failed Request logs.
* **wad-iis-logfiles** - Contains information about IIS logs.
* **"custom"** - A custom container based on configuring directories that are monitored by the diagnostic monitor. WADDirectoriesTable specifies the name of this blob container.

## Tools to view diagnostic data

Several tools are available to view the data after it transfers to storage. For example:

* Server Explorer in Visual Studio - If you installed the Azure Tools for Microsoft Visual Studio, you can use the Azure Storage node in Server Explorer to view read-only blob and table data from your Azure storage accounts. You can display data from your local storage emulator account and also from storage accounts you created for Azure. For more information, see [Browsing and Managing Storage Resources with Server Explorer](/visualstudio/azure/vs-azure-tools-storage-resources-server-explorer-browse-manage).
* [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) is a standalone app that enables you to easily work with Azure Storage data on Windows, OSX, and Linux.
* [Azure Management Studio](https://cerebrata.com/blog/introducing-azure-management-studio-and-azure-explorer) includes Azure Diagnostics Manager, which allows you to view, download, and manage the diagnostics data collected by the applications running on Azure.
## Next Steps [Trace the flow in a Cloud Services application with Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md) |
cloud-services | Diagnostics Performance Counters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-performance-counters.md | Title: Collect on Performance Counters in Azure Cloud Services (classic) | Micro description: Learn how to discover, use, and create performance counters in Cloud Services with Azure Diagnostics and Application Insights. Previously updated : 02/21/2023 Last updated : 07/24/2024 A performance counter can be added to your cloud service for either Azure Diagno Azure Application Insights for Cloud Services allows you to specify what performance counters you want to collect. After you [add Application Insights to your project](../azure-monitor/app/azure-web-apps-net-core.md), a config file named **ApplicationInsights.config** is added to your Visual Studio project. This config file defines what type of information Application Insights collects and sends to Azure. -Open the **ApplicationInsights.config** file and find the **ApplicationInsights** > **TelemetryModules** element. Each `<Add>` child-element defines a type of telemetry to collect, along with its configuration. The performance counter telemetry module type is `Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector`. If this element is already defined, do not add it a second time. Each performance counter to collect is defined under a node named `<Counters>`. Here is an example that collects drive performance counters: +Open the **ApplicationInsights.config** file and find the **ApplicationInsights** > **TelemetryModules** element. Each `<Add>` child-element defines a type of telemetry to collect, along with its configuration. The performance counter telemetry module type is `Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector`. If this element is already defined, don't add it a second time. 
Each performance counter to collect is defined under a node named `<Counters>`. Here's an example that collects drive performance counters: ```xml <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings"> Open the **ApplicationInsights.config** file and find the **ApplicationInsights* <!-- ... cut to save space ... --> ``` -Each performance counter is represented as an `<Add>` element under `<Counters>`. The `PerformanceCounter` attribute defines which performance counter to collect. The `ReportAs` attribute is the title to display in the Azure portal for the performance counter. Any performance counter you collect is put into a category named **Custom** in the portal. Unlike Azure Diagnostics, you cannot set the interval these performance counters are collected and sent to Azure. With Application Insights, performance counters are collected and sent every minute. +Each performance counter is represented as an `<Add>` element under `<Counters>`. The `PerformanceCounter` attribute defines which performance counter to collect. The `ReportAs` attribute is the title to display in the Azure portal for the performance counter. Any performance counter you collect is put into a category named **Custom** in the portal. Unlike Azure Diagnostics, you can't set the interval these performance counters are collected and sent to Azure. With Application Insights, performance counters are collected and sent every minute. Application Insights automatically collects the following performance counters: For more information, see [System performance counters in Application Insights]( The Azure Diagnostics extension for Cloud Services allows you to specify what performance counters you want to collect. To set up Azure Diagnostics, see [Cloud Service Monitoring Overview](cloud-services-how-to-monitor.md#setup-diagnostics-extension). -The performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. 
Open this file (it is defined per role) in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. This element has two attributes: `counterSpecifier` and `sampleRate`. The `counterSpecifier` attribute defines which system performance counter set (outlined in the previous section) to collect. The `sampleRate` value indicates how often that value is polled. As a whole, all performance counters are transferred to Azure according to the parent `PerformanceCounters` element's `scheduledTransferPeriod` attribute value. +The performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. This element has two attributes: `counterSpecifier` and `sampleRate`. The `counterSpecifier` attribute defines which system performance counter set (outlined in the previous section) to collect. The `sampleRate` value indicates how often that value is polled. As a whole, all performance counters are transferred to Azure according to the parent `PerformanceCounters` element's `scheduledTransferPeriod` attribute value. For more information about the `PerformanceCounters` schema element, see the [Azure Diagnostics Schema](../azure-monitor/agents/diagnostics-extension-schema-windows.md#performancecounters-element). -The period defined by the `sampleRate` attribute uses the XML duration data type to indicate how often the performance counter is polled. In the example below, the rate is set to `PT3M`, which means `[P]eriod[T]ime[3][M]inutes`: every three minutes. 
+The period defined by the `sampleRate` attribute uses the XML duration data type to indicate how often the performance counter is polled. In the following example, the rate is set to `PT3M`, which means `[P]eriod[T]ime[3][M]inutes`: every three minutes. For more information about how the `sampleRate` and `scheduledTransferPeriod` are defined, see the **Duration Data Type** section in the [W3 XML Date and Time Date Types](https://www.w3schools.com/XML/schema_dtypes_date.asp) tutorial. For more information about how the `sampleRate` and `scheduledTransferPeriod` ar ## Create a new perf counter -A new performance counter can be created and used by your code. Your code that creates a new performance counter must be running elevated, otherwise it will fail. Your cloud service `OnStart` startup code can create the performance counter, requiring you to run the role in an elevated context. Or you can create a startup task that runs elevated and creates the performance counter. For more information about startup tasks, see [How to configure and run startup tasks for a cloud service](cloud-services-startup-tasks.md). +A new performance counter can be created and used by your code. Your code that creates a new performance counter must be running elevated, otherwise it fails. Your cloud service `OnStart` startup code can create the performance counter, requiring you to run the role in an elevated context. Or you can create a startup task that runs elevated and creates the performance counter. For more information about startup tasks, see [How to configure and run startup tasks for a cloud service](cloud-services-startup-tasks.md). To configure your role to run elevated, add a `<Runtime>` element to the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file. 
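The elevated-runtime step described above can be sketched as a minimal .csdef fragment, assuming a hypothetical worker role named `WorkerRole1` (the service and role names are placeholders):

```xml
<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <!-- Run the role's entry point (including OnStart) with administrator
         privileges so code that creates performance counters doesn't fail -->
    <Runtime executionContext="elevated" />
  </WorkerRole>
</ServiceDefinition>
```

Without `executionContext="elevated"`, code in `OnStart` that creates a new performance counter category fails with an access-denied error, as noted above.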
As previously stated, the performance counters for Application Insights are defi ### Azure Diagnostics -As previously stated, the performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file (it is defined per role) in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. Set the `counterSpecifier` attribute to the category and name of the performance counter you created in your code. +As previously stated, the performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. Set the `counterSpecifier` attribute to the category and name of the performance counter you created in your code. ```xml <?xml version="1.0" encoding="utf-8"?> As previously stated, the performance counters you want to collect are defined i </DiagnosticsConfiguration> ``` -## More information +## Next steps - [Application Insights for Azure Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md) - [System performance counters in Application Insights](../azure-monitor/app/performance-counters.md) |
cloud-services | Mitigate Se | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md | keywords: spectre,meltdown,specter vm-windows Previously updated : 02/21/2023 Last updated : 07/24/2024 |
cloud-services | Resource Health For Cloud Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md | description: This article talks about Resource Health Check (RHC) Support for Mi Previously updated : 02/21/2023 Last updated : 07/24/2024 This article talks about Resource Health Check (RHC) Support for [Microsoft Azu [Azure Resource Health](../service-health/resource-health-overview.md) for cloud services helps you diagnose and get support for service problems that affect your Cloud Service deployment, Roles & Role Instances. It reports on the current and past health of your cloud services at Deployment, Role & Role Instance level. -Azure status reports on problems that affect a broad set of Azure customers. Resource Health gives you a personalized dashboard of the health of your resources. Resource Health shows all the times that your resources have been unavailable because of Azure service problems. This data makes it easy for you to see if an SLA was violated. +Azure status reports on problems that affect a broad set of Azure customers. Resource Health gives you a personalized dashboard of the health of your resources. Resource Health shows all the times that your resources were unavailable because of Azure service problems. This data makes it easy for you to see if a Service Level Agreement (SLA) was violated. :::image type="content" source="media/cloud-services-allocation-failure/rhc-blade-cloud-services.png" alt-text="Image shows the resource health check blade in the Azure portal."::: ## How health is checked and reported?-Resource health is reported at a deployment or role level. The health check happens at role instance level, we aggregate the status and report it on Role level. E.g. If all role instances are available, then the role status is available. Similarly, we aggregate the health status of all roles and report it on deployment level. E.g. 
If all roles are available then deployment status becomes available. +Resource health is reported at a deployment or role level. The health check happens at role instance level. We aggregate the status and report it on Role level. For example, if all role instances are available, then the role status is available. Similarly, we aggregate the health status of all roles and report it on deployment level. For example, if all roles are available, then deployment status becomes available. -## Why I cannot see health status for my staging slot deployment? -Resource health checks only work for production slot deployment. Staging slot deployment is not yet supported. +## Why I can't see health status for my staging slot deployment? +Resource health checks only work for production slot deployment. Staging slot deployment isn't yet supported. ## Does Resource Health Check also check the health of the application?-No, health check only happens for role instances and it does not monitor Application health. E.g. Even if 1 out of 3 role instances are unhealthy, the application can still be available. RHC does not use [load balancer probes](../load-balancer/load-balancer-custom-probe-overview.md) or Guest agent probe. Therefore, +No, health check only happens for role instances and it doesn't monitor Application health. For example, even if one out of three role instances are unhealthy, the application can still be available. RHC doesn't use [load balancer probes](../load-balancer/load-balancer-custom-probe-overview.md) or Guest agent probe. Therefore, Customers should continue using load balancer probes to monitor the health of their application. ## What are the annotations for Cloud Services? Annotations are the health status of the deployment or roles. There are different annotations based on health status, reason for status change, etc. ## What does it mean by Role Instance being "unavailable"?-This means the role instance is not emitting a healthy signal to the platform. 
Please check the role instance status for detailed explanation of why healthy signal is not being emitted. +Unavailable means the role instance isn't emitting a healthy signal to the platform. Check the role instance status for detailed explanation of why healthy signal isn't being emitted. ## What does it mean by deployment being "unknown"?-Unknown means the aggregated health of the Cloud Service deployment cannot be determined. Usually this indicates either there is no production deployment created for the Cloud Service, the deployment was newly created (and that Azure is starting to collect health events), or platform is having issues collecting health events for this deployment. +Unknown means the aggregated health of the Cloud Service deployment can't be determined. Usually, unknown indicates one of the following scenarios: +* There's no production deployment created for the Cloud Service +* The deployment was newly created (and that Azure is starting to collect health events) +* The platform is having issues collecting health events for this deployment. -## Why does Role Instance Annotations mentions VMs instead of Role Instances? -Since Role Instances are basically VMs and the health check for VMs is reused for Role Instances, the VM term is used to represent Role Instances. +## Why does Role Instance Annotations mention VMs instead of Role Instances? +Since Role Instances are, in essence, virtual machines (VMs), and the health check for VMs is reused for Role Instances, the VM term is used to represent Role Instances. ## Cloud Services (Deployment Level) Annotations & their meanings | Annotation | Description | | | | | Available| There aren't any known Azure platform problems affecting this Cloud Service deployment |-| Unknown | We are currently unable to determine the health of this Cloud Service deployment | -| Setting up Resource Health | Setting up Resource health for this resource. 
Resource health watches your Azure resources to provide details about ongoing and past events that have impacted them| -| Degraded | Your Cloud Service deployment is degraded. We're working to automatically recover your Cloud Service deployment and to determine the source of the problem. No additional action is required from you at this time | +| Unknown | We're currently unable to determine the health of this Cloud Service deployment | +| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that affected them| +| Degraded | Your Cloud Service deployment is degraded. We're working to automatically recover your Cloud Service deployment and to determine the source of the problem. No further action is required from you at this time | | Unhealthy | Your Cloud Service deployment is unhealthy because {0} out of {1} role instances are unavailable | | Degraded | Your Cloud Service deployment is degraded because {0} out of {1} role instances are unavailable | -| Available and maybe impacted | Your Cloud Service deployment is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity will be restored once the outage is resolved | -| Unavailable and maybe impacted | The health of this Cloud Service deployment may be impacted by an Azure service outage. Your Cloud Service deployment will automatically recover when the outage is resolved | -| Unknown and maybe impacted | We are currently unable to determine the health of this Cloud Service deployment. This could be caused by an ongoing Azure service outage that may be impacting this virtual machine, which will automatically recover when the outage is resolved | +| Available and maybe impacted | Your Cloud Service deployment is running, however an ongoing Azure service outage may prevent you from connecting to it. 
Connectivity restores once the outage is resolved | +| Unavailable and maybe impacted | An Azure service outage possibly affected the health of this Cloud Service deployment. Your Cloud Service deployment recovers automatically when the outage is resolved | +| Unknown and maybe impacted | We're currently unable to determine the health of this Cloud Service deployment. This status could be a result of an ongoing Azure service outage that may be impacting this virtual machine, which recovers automatically when the outage is resolved | ## Cloud Services (Role Instance Level) Annotations & their meanings | Annotation | Description | | | | | Available | There aren't any known Azure platform problems affecting this virtual machine | -| Unknown | We are currently unable to determine the health of this virtual machine | +| Unknown | We're currently unable to determine the health of this virtual machine | | Stopped and deallocating | This virtual machine is stopping and deallocating as requested by an authorized user or process |-| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that have impacted them | -| Unavailable | Your virtual machine is unavailable. We're working to automatically recover your virtual machine and to determine the source of the problem. No additional action is required from you at this time | -| Degraded | Your virtual machine is degraded. We're working to automatically recover your virtual machine and to determine the source of the problem. No additional action is required from you at this time | -| Host server hardware failure | This virtual machine is impacted by a fatal {HardwareCategory} failure on the host server. Azure will redeploy your virtual machine to a healthy host server | -| Migration scheduled due to degraded hardware | Azure has identified that the host server has a degraded {0} that is predicted to fail soon. 
If feasible, we will Live Migrate your virtual machine as soon as possible, or otherwise redeploy it after {1} UTC time. To minimize risk to your service, and in case the hardware fails before the system initiated migration occurs, we recommend that you self-redeploy your virtual machine as soon as possible | -| Available and maybe impacted | Your virtual machine is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity will be restored once the outage is resolved | -| Unavailable and maybe impacted | The health of this virtual machine may be impacted by an Azure service outage. Your virtual machine will automatically recover when the outage is resolved | -| Unknown and maybe impacted | We are currently unable to determine the health of this virtual machine. This could be caused by an ongoing Azure service outage that may be impacting this virtual machine, which will automatically recover when the outage is resolved | -| Hardware resources allocated | Hardware resources have been assigned to the virtual machine and it will be online shortly | +| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that affected them | +| Unavailable | Your virtual machine is unavailable. We're working to automatically recover your virtual machine and to determine the source of the problem. No further action is required from you at this time | +| Degraded | Your virtual machine is degraded. We're working to automatically recover your virtual machine and to determine the source of the problem. No further action is required from you at this time | +| Host server hardware failure | A fatal {HardwareCategory} failure on the host server affected this virtual machine. 
Azure redeploys your virtual machine to a healthy host server | +| Migration scheduled due to degraded hardware | Azure identified that the host server has a degraded {0} that is predicted to fail soon. If feasible, we Live Migrate your virtual machine as soon as possible, or otherwise redeploy it after {1} UTC time. To minimize risk to your service, and in case the hardware fails before the system initiated migration occurs, we recommend you self-redeploy your virtual machine as soon as possible | +| Available and maybe impacted | Your virtual machine is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity restores once the outage is resolved | +| Unavailable and maybe impacted | An Azure service outage possibly affected the health of this virtual machine. Your virtual machine recovers automatically when the outage is resolved | |