Updates from: 07/25/2024 01:12:36
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md
The following table provides links to language support reference articles by sup
| Azure AI Language support | Description |
| --- | --- |
-|![Content Moderator icon](medi) (retired) | Detect potentially offensive or unwanted content. |
+|![Content Moderator icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Detect potentially offensive or unwanted content. |
|![Document Intelligence icon](~/reusable-content/ce-skilling/azure/medi) | Turn documents into intelligent data-driven solutions. |
-|![Immersive Reader icon](medi) | Help users read and comprehend text. |
+|![Immersive Reader icon](~/reusable-content/ce-skilling/azure/medi) | Help users read and comprehend text. |
|![Language icon](~/reusable-content/ce-skilling/azure/medi) | Build apps with industry-leading natural language understanding capabilities. |
-|![Language Understanding icon](medi) (retired) | Understand natural language in your apps. |
-|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. |
+|![Language Understanding icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Understand natural language in your apps. |
+|![QnA Maker icon](~/reusable-content/ce-skilling/azure/medi) (retired) | Distill information into easy-to-navigate questions and answers. |
|![Speech icon](~/reusable-content/ce-skilling/azure/medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. |
|![Translator icon](~/reusable-content/ce-skilling/azure/medi) | Translate more than 100 in-use, at-risk, and endangered languages and dialects.|
-|![Video Indexer icon](media/service-icons/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
+|![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
|![Vision icon](~/reusable-content/ce-skilling/azure/medi) | Analyze content in images and videos. |

## Language independent services
These Azure AI services are language agnostic and don't have limitations based o
| Azure AI service | Description |
| --- | --- |
-|![Anomaly Detector icon](media/service-icons/anomaly-detector.svg)</br>[Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on. |
-|![Custom Vision icon](media/service-icons/custom-vision.svg)</br>[Custom Vision](./custom-vision-service/index.yml) |Customize image recognition for your business. |
+|![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg)</br>[Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on. |
+|![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg)</br>[Custom Vision](./custom-vision-service/index.yml) |Customize image recognition for your business. |
|![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |
-|![Personalizer icon](media/service-icons/personalizer.svg)</br>[Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for users. |
+|![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg)</br>[Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for users. |
## See also
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
The multi-service resource enables access to the following Azure AI services wit
| Service | Description |
| --- | --- |
-| ![Content Moderator icon](./media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
-| ![Custom Vision icon](./media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
+| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
+| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
| ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions. |
| ![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |
| ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities. |
| ![Speech icon](~/reusable-content/ce-skilling/azure/media/ai-services/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation, and speaker recognition. |
-| ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects.. |
+| ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects. |
| ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos. |

::: zone pivot="azportal"
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Along with using Elasticsearch databases in Azure OpenAI Studio, you can also us
## Deploy to a copilot (preview), Teams app (preview), or web app
-After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.
+After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI Studio.
:::image type="content" source="../media/use-your-data/deploy-model.png" alt-text="A screenshot showing the model deployment button in Azure OpenAI Studio." lightbox="../media/use-your-data/deploy-model.png":::
This gives you multiple options for deploying your solution.
#### [Copilot (preview)](#tab/copilot)
-You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
+You can deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI Studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
> [!NOTE]
> Deploying to a copilot in Copilot Studio (preview) is only available in US regions.
A Teams app lets you bring conversational experience to your users in Teams to i
**Prerequisites**
- The latest version of [Visual Studio Code](https://code.visualstudio.com/) installed.
-- The latest version of [Teams toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app.
-- [Node.js](https://nodejs.org/en/download/) (version 16 or 17) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type).
+- The latest version of [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) installed. This is a VS Code extension that creates a project scaffolding for your app.
+- [Node.js](https://nodejs.org/en/download/) (version 16 or 18) installed. For more information, see [Node.js version compatibility table for project type](/microsoftteams/platform/toolkit/build-environments#nodejs-version-compatibility-table-for-project-type).
- [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) installed.
- Sign in to your [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) (using this link to get a test account: [Developer program](https://developer.microsoft.com/microsoft-365/dev-program)).
- Enable **custom Teams apps** and turn on **custom app uploading** in your account (instructions [here](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant#enable-custom-teams-apps-and-turn-on-custom-app-uploading)).
token_output = TokenEstimator.estimate_tokens(input_text)
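The digest preserves only this single line from the article's token-counting snippet. As a rough, non-authoritative sketch of what could sit behind that call (the `tiktoken` package and the `cl100k_base` encoding are assumptions, not taken from the excerpt above), a minimal estimator might look like this:

```python
import tiktoken

class TokenEstimator:
    """Rough token counter. The cl100k_base encoding here is an assumption, not from the article."""

    ENCODER = tiktoken.get_encoding("cl100k_base")

    @classmethod
    def estimate_tokens(cls, text: str) -> int:
        # Encode the text and count the resulting tokens.
        return len(cls.ENCODER.encode(text))

input_text = "Example prompt to estimate before sending it to the model."
token_output = TokenEstimator.estimate_tokens(input_text)
print(token_output)
```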
## Troubleshooting
-To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure OpenAI studio. Here are some of the common errors and warnings:
+To troubleshoot failed operations, always look out for errors or warnings specified either in the API response or Azure OpenAI Studio. Here are some of the common errors and warnings:
### Failed ingestion jobs
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
Azure OpenAI offers three types of deployments. These provide a varied level of
| **Getting started** | [Model deployment](./create-resource.md) | [Model deployment](./create-resource.md) | [Provisioned onboarding](./provisioned-throughput-onboarding.md) |
| **Cost** | [Global deployment pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
| **What you get** | Easy access to all new models with highest default pay-per-call limits.<br><br> Customers with high volume usage may see higher latency variability | Easy access with [SLA on availability](https://azure.microsoft.com/support/legal/sl#estimate-provisioned-throughput-and-cost) |
-| **What you don’t get** | ❌Data residency guarantees | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
+| **What you don’t get** |❌Data processing guarantee<br> <br> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
| **Per-call Latency** | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time. |
| **Sku Name in code** | `GlobalStandard` | `Standard` | `ProvisionedManaged` |
| **Billing model** | Pay-per-token | Pay-per-token | Monthly Commitments |
Standard deployments are optimized for low to medium volume workloads with high
## Global standard
+> [!IMPORTANT]
+> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
+ Global deployments are available in the same Azure OpenAI resources as non-global offers but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard will provide the highest default quota for new models and eliminates the need to load balance across multiple resources. The deployment type is optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md).
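To make the **Sku Name in code** row above concrete, here's a hedged Python sketch of creating a global standard deployment through the Azure management REST API. The resource names, model version, and API version are placeholders rather than values from the article, and `GlobalStandard` may require a newer API version than the one shown:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; substitute your subscription, resource group, and Azure OpenAI resource.
subscription = "<subscription-id>"
resource_group = "<resource-group>"
account = "<azure-openai-resource>"
deployment_name = "gpt-4o-global"

# Acquire a management-plane (ARM) token.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{account}/deployments/{deployment_name}"
    "?api-version=2023-05-01"  # Placeholder API version.
)

body = {
    # sku.name selects the deployment type: GlobalStandard, Standard, or ProvisionedManaged.
    "sku": {"name": "GlobalStandard", "capacity": 1},
    "properties": {"model": {"format": "OpenAI", "name": "gpt-4o", "version": "2024-05-13"}},
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.json())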
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 07/18/2024 Last updated : 07/24/2024
The following sections provide you with a quick guide to the default quotas and
|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
| --- | :---: | :---: |
-|Enterprise agreement | 10 M | 60 K |
+|Enterprise agreement | 30 M | 60 K |
|Default | 450 K | 2.7 K |

M = million | K = thousand
ai-services Rest Api Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/reference/rest-api-resources.md
Select a service from the table to learn how it can help you meet your developme
| Service documentation | Description | Reference documentation |
| :--- | :--- | :--- |
-| ![Azure AI Search icon](../media/service-icons/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
-| ![Azure OpenAI Service icon](../medi)</br>&bullet; [fine-tuning](/rest/api/azureopenai/fine-tuning) |
-| ![Bot service icon](../media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
+| ![Azure AI Search icon](~/reusable-content/ce-skilling/azure/media/ai-services/search.svg) [Azure AI Search](../../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | [Azure AI Search API](/rest/api/searchservice) |
+| ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/medi)</br>&bullet; [fine-tuning](/rest/api/azureopenai/fine-tuning) |
+| ![Bot service icon](~/reusable-content/ce-skilling/azure/media/ai-services/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | [Bot Service API](/azure/bot-service/rest-api/bot-framework-rest-connector-api-reference?view=azure-bot-service-4.0&preserve-view=true) |
| ![Content Safety icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg) [Content Safety](../content-safety/index.yml) | An AI service that detects unwanted content | [Content Safety API](https://westus.dev.cognitive.microsoft.com/docs/services/content-safety-service-2023-10-15-preview/operations/TextBlocklists_AddOrUpdateBlocklistItems) |
-| ![Custom Vision icon](../media/service-icons/custom-vision.svg) [Custom Vision](../custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>&bullet; [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>&bullet; [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
+| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](../custom-vision-service/index.yml) | Customize image recognition for your business applications. |**Custom Vision APIs**<br>&bullet; [prediction](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)<br>&bullet; [training](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddebd)|
| ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](../document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions | [Document Intelligence API](/rest/api/aiservices/document-models?view=rest-aiservices-2023-07-31&preserve-view=true) |
| ![Face icon](~/reusable-content/ce-skilling/azure/medi) |
| ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](../language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | [REST API](/rest/api/language/) |
| ![Speech icon](~/reusable-content/ce-skilling/azure/medi) |
| ![Translator icon](~/reusable-content/ce-skilling/azure/medi)|
-| ![Video Indexer icon](../media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
+| ![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer) | Extract actionable insights from your videos | [Video Indexer API](/rest/api/videoindexer/accounts?view=rest-videoindexer-2024-01-01&preserve-view=true) |
| ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](../computer-vision/index.yml) | Analyze content in images and videos | [Vision API](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2024-02-01/operations/61d65934cd35050c20f73ab6) |

## Deprecated services

| Service documentation | Description | Reference documentation |
| --- | --- | --- |
-| ![Anomaly Detector icon](../media/service-icons/anomaly-detector.svg) [Anomaly Detector](../Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
-| ![Content Moderator icon](../medi) |
-| ![Language Understanding icon](../media/service-icons/luis.svg) [Language understanding (LUIS)](../luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
-| ![Metrics Advisor icon](../media/service-icons/metrics-advisor.svg) [Metrics Advisor](../metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that detects unwanted contents | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
-| ![Personalizer icon](../media/service-icons/personalizer.svg) [Personalizer](../personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
-| ![QnA Maker icon](../media/service-icons/luis.svg) [QnA maker](../qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |
+| ![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg) [Anomaly Detector](../Anomaly-Detector/index.yml) <br>(deprecated 2023) | Identify potential problems early on | [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/CreateMultivariateModel) |
+| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/medi) |
+| ![Language Understanding icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [Language understanding (LUIS)](../luis/index.yml) <br>(deprecated 2023) | Understand natural language in your apps | [LUIS API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) |
+| ![Metrics Advisor icon](~/reusable-content/ce-skilling/azure/media/ai-services/metrics-advisor.svg) [Metrics Advisor](../metrics-advisor/index.yml) <br>(deprecated 2023) | An AI service that monitors metrics and diagnoses issues | [Metrics Advisor API](https://westus.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed) |
+| ![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg) [Personalizer](../personalizer/index.yml) <br>(deprecated 2023) | Create rich, personalized experiences for each user | [Personalizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) |
+| ![QnA Maker icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [QnA maker](../qnamaker/index.yml) <br>(deprecated 2022) | Distill information into easy-to-navigate questions and answers | [QnA Maker API](https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff) |
## Next steps
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
Previously updated : 5/21/2024 Last updated : 7/23/2024

# What is speech to text?
-In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), or [fast transcription](./fast-transcription-create.md) of audio streams into text.
+Azure AI Speech service offers advanced speech to text capabilities. This feature supports both real-time and batch transcription, providing versatile solutions for converting audio streams into text.
-> [!NOTE]
-> To compare pricing of [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), and [fast transcription](./fast-transcription-create.md), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+## Core features
-For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).
+The speech to text service offers the following core features:
+- [Real-time](#real-time-speech-to-text) transcription: Instant transcription with intermediate results for live audio inputs.
+- [Fast transcription](#fast-transcription-preview): Fastest synchronous output for situations with predictable latency.
+- [Batch transcription](#batch-transcription-api): Efficient processing for large volumes of prerecorded audio.
+- [Custom speech](#custom-speech): Models with enhanced accuracy for specific domains and conditions.
## Real-time speech to text
-With real-time speech to text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as:
-- Transcriptions, captions, or subtitles for live meetings
-- [Diarization](get-started-stt-diarization.md)
-- [Pronunciation assessment](how-to-pronunciation-assessment.md)
-- Contact center agents assist
-- Dictation
-- Voice agents
+Real-time speech to text transcribes audio as it's recognized from a microphone or file. It's ideal for applications requiring immediate transcription, such as:
+- **Transcriptions, captions, or subtitles for live meetings**: Real-time audio transcription for accessibility and record-keeping.
+- **Diarization**: Identifying and distinguishing between different speakers in the audio.
+- **Pronunciation assessment**: Evaluating and providing feedback on pronunciation accuracy.
+- **Call center agents assist**: Providing real-time transcription to assist customer service representatives.
+- **Dictation**: Transcribing spoken words into written text for documentation purposes.
+- **Voice agents**: Enabling interactive voice response systems to transcribe user queries and commands.
-Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+Real-time speech to text is available via the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), and REST APIs such as the [Fast transcription API](fast-transcription-create.md), allowing integration into various applications and workflows.
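For illustration, a minimal single-shot recognition sketch with the Python Speech SDK (`azure-cognitiveservices-speech`) looks roughly like the following; the key, region, and language values are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; use your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
speech_config.speech_recognition_language = "en-US"

# Recognize a single utterance from the default microphone.
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

result = recognizer.recognize_once_async().get()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
```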
## Fast transcription (Preview)
-Fast transcription API is used to transcribe audio files with returning results synchronously and much faster than real-time audio. Use fast transcription in the scenarios that you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
+The fast transcription API transcribes audio files and returns results synchronously, faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
-- Quick audio or video transcription, subtitles, and edit.
-- Video translation
+- **Quick audio or video transcription and subtitles**: Quickly get a transcription of an entire video or audio file in one go.
+- **Video translation**: Immediately get new subtitles for a video if you have audio in different languages.
> [!NOTE]
> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview and later.
To get started with fast transcription, see [use the fast transcription API (pre
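As a rough sketch of the synchronous call pattern (the endpoint path and multipart request shape shown here are assumptions to verify against the linked how-to), a fast transcription request might look like this in Python:

```python
import json
import requests

# Placeholder values; substitute your Speech resource key, region, and audio file.
region = "<your-region>"
key = "<your-speech-key>"
url = (
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe"
    "?api-version=2024-05-15-preview"
)

with open("meeting.wav", "rb") as audio_file:
    response = requests.post(
        url,
        headers={"Ocp-Apim-Subscription-Key": key},
        files={"audio": audio_file},
        # The definition field carries transcription options such as the locale.
        data={"definition": json.dumps({"locales": ["en-US"]})},
    )

# Results are returned synchronously in the response body.
print(response.json())
```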
## Batch transcription API
-[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
-- Transcriptions, captions, or subtitles for prerecorded audio
-- Contact center post-call analytics
-- Diarization
+[Batch transcription](batch-transcription.md) is designed for transcribing large amounts of audio stored in files. This method processes audio asynchronously and is suited for:
+- **Transcriptions, captions, or subtitles for prerecorded audio**: Converting stored audio content into text.
+- **Contact center post-call analytics**: Analyzing recorded calls to extract valuable insights.
+- **Diarization**: Differentiating between speakers in recorded audio.
Batch transcription is available via:
-- [Speech to text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
-- The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command:
+- [Speech to text REST API](rest-speech-to-text.md): Facilitates batch processing with the flexibility of RESTful calls. To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+- [Speech CLI](spx-overview.md): Supports both real-time and batch transcription, making it easy to manage transcription tasks. For Speech CLI help with batch transcriptions, run the following command:
+ ```azurecli-interactive
+ spx help batch transcription
+ ```
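For the REST route, a hedged Python sketch of creating a batch transcription job follows; the key, region, REST API version, and SAS URL are placeholders rather than values from the article:

```python
import requests

# Placeholder values; substitute your Speech resource key, region, and audio SAS URL.
region = "<your-region>"
key = "<your-speech-key>"
url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"

job = {
    "displayName": "My batch transcription",
    "locale": "en-US",
    # Point to audio files in storage via shared access signature (SAS) URIs.
    "contentUrls": ["https://<storage-account>.blob.core.windows.net/audio/call1.wav?<sas-token>"],
    "properties": {"diarizationEnabled": False},
}

response = requests.post(url, json=job, headers={"Ocp-Apim-Subscription-Key": key})

# The job runs asynchronously; poll the returned transcription URL for status and results.
print(response.json()["self"])
```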
With [custom speech](./custom-speech-overview.md), you can evaluate and improve
Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works well in most speech recognition scenarios.
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [custom speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
+Custom speech allows you to tailor the speech recognition model to better suit your application's specific needs. This can be particularly useful for:
+- **Improving recognition of domain-specific vocabulary**: Train the model with text data relevant to your field.
+- **Enhancing accuracy for specific audio conditions**: Use audio data with reference transcriptions to refine the model.
+
+For more information about custom speech, see the [custom speech overview](./custom-speech-overview.md) and the [speech to text REST API](rest-speech-to-text.md) documentation.
+
+For details about customization options per language and locale, see the [language and voice support for the Speech service](./language-support.md?tabs=stt) documentation.
+
+## Usage examples
+
+Here are some practical examples of how you can utilize Azure AI speech to text:
-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).
+| Use case | Scenario | Solution |
+| --- | --- | --- |
+| **Live meeting transcriptions and captions** | A virtual event platform needs to provide real-time captions for webinars. | Integrate real-time speech to text using the Speech SDK to transcribe spoken content into captions displayed live during the event. |
+| **Customer service enhancement** | A call center wants to assist agents by providing real-time transcriptions of customer calls. | Use real-time speech to text via the Speech CLI to transcribe calls, enabling agents to better understand and respond to customer queries. |
+| **Video subtitling** | A video-hosting platform wants to quickly generate a set of subtitles for a video. | Use fast transcription to quickly get a set of subtitles for the entire video. |
+| **Educational tools** | An e-learning platform aims to provide transcriptions for video lectures. | Apply batch transcription through the speech to text REST API to process prerecorded lecture videos, generating text transcripts for students. |
+| **Healthcare documentation** | A healthcare provider needs to document patient consultations. | Use real-time speech to text for dictation, allowing healthcare professionals to speak their notes and have them transcribed instantly. Use a custom model to enhance recognition of specific medical terms. |
+| **Media and entertainment** | A media company wants to create subtitles for a large archive of videos. | Use batch transcription to process the video files in bulk, generating accurate subtitles for each video. |
+| **Market research** | A market research firm needs to analyze customer feedback from audio recordings. | Employ batch transcription to convert audio feedback into text, enabling easier analysis and insights extraction. |
## Responsible AI
An AI system includes not only the technology, but also the people who use it, t
* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context) * [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
-## Next steps
+## Related content
- [Get started with speech to text](get-started-speech-to-text.md) - [Create a batch transcription](batch-transcription-create.md)
+- For detailed pricing information, visit the [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
ai-services What Are Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md
keywords: Azure AI services, cognitive Previously updated : 03/01/2024 Last updated : 8/1/2024 - build-2023
# What are Azure AI services?
-Azure AI services help developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box and prebuilt and customizable APIs and models. Example applications include natural language processing for conversations, search, monitoring, translation, speech, vision, and decision-making.
-
-> [!TIP]
-> Try Azure AI services including Azure OpenAI, Content Safety, Speech, Vision, and more in [Azure AI Studio](https://ai.azure.com). For more information, see [What is Azure AI Studio?](../ai-studio/what-is-ai-studio.md).
-
-Most [Azure AI services](../ai-services/index.yml) are available through REST APIs and client library SDKs in popular development languages. For more information, see each service's documentation.
## Available Azure AI services
Learn how an Azure AI service can help you enhance applications and optimize yo
| Service | Description |
| --- | --- |
-| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on. |
-| ![Azure AI Search icon](media/service-icons/search.svg) [Azure AI Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps. |
+| ![Anomaly Detector icon](~/reusable-content/ce-skilling/azure/media/ai-services/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on. |
+| ![Azure AI Search icon](~/reusable-content/ce-skilling/azure/media/ai-services/search.svg) [Azure AI Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps. |
| ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/media/ai-services/azure-openai.svg) [Azure OpenAI](./openai/index.yml) | Perform a wide variety of natural language tasks. |
-| ![Bot service icon](media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels. |
-| ![Content Moderator icon](media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
+| ![Bot service icon](~/reusable-content/ce-skilling/azure/media/ai-services/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels. |
+| ![Content Moderator icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content. |
| ![Content Safety icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg) [Content Safety](./content-safety/index.yml) | An AI service that detects unwanted content. |
-| ![Custom Vision icon](media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
+| ![Custom Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition for your business. |
| ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions. |
| ![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images. |
-| ![Immersive Reader icon](media/service-icons/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text. |
+| ![Immersive Reader icon](~/reusable-content/ce-skilling/azure/media/ai-services/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text. |
| ![Language icon](~/reusable-content/ce-skilling/azure/media/ai-services/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities. |
-| ![Language Understanding icon](media/service-icons/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps. |
-| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | An AI service that detects unwanted contents. |
-| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user. |
-| ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers. |
+| ![Language Understanding icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps. |
+| ![Metrics Advisor icon](~/reusable-content/ce-skilling/azure/media/ai-services/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | An AI service that monitors metrics and diagnoses issues. |
+| ![Personalizer icon](~/reusable-content/ce-skilling/azure/media/ai-services/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user. |
+| ![QnA Maker icon](~/reusable-content/ce-skilling/azure/media/ai-services/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers. |
| ![Speech icon](~/reusable-content/ce-skilling/azure/media/ai-services/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation, and speaker recognition. |
| ![Translator icon](~/reusable-content/ce-skilling/azure/media/ai-services/translator.svg) [Translator](./translator/index.yml) | Use AI-powered translation technology to translate more than 100 in-use, at-risk, and endangered languages and dialects. |
-| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos. |
+| ![Video Indexer icon](~/reusable-content/ce-skilling/azure/media/ai-services/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos. |
| ![Vision icon](~/reusable-content/ce-skilling/azure/media/ai-services/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos. |

## Pricing tiers and billing
ai-studio Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md
Title: How to deploy Mistral family of models with Azure AI Studio
-description: Learn how to deploy Mistral Large with Azure AI Studio.
+description: Learn how to deploy the Mistral family of models with Azure AI Studio.
In this article, you learn how to use Azure AI Studio to deploy the Mistral family of models as serverless APIs with pay-as-you-go token-based billing. Mistral AI offers two categories of models in the [Azure AI Studio](https://ai.azure.com). These models are available in the [model catalog](model-catalog-overview.md):
-* __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing.
-* __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription.
+* __Premium models__: Mistral Large (2402), Mistral Large (2407), and Mistral Small.
+* __Open models__: Mistral Nemo, Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01.
+
+All the premium models and Mistral Nemo (an open model) can be deployed as serverless APIs with pay-as-you-go token-based billing. The other open models can be deployed to managed computes in your own Azure subscription.
You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection.
# [Mistral Large](#tab/mistral-large)
-Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities.
+Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities. There are two variants available for the Mistral Large model version:
+
+- Mistral Large (2402)
+- Mistral Large (2407)
-Additionally, Mistral Large is:
+Additionally, some attributes of _Mistral Large (2402)_ include:
* __Specialized in RAG.__ Crucial information isn't lost in the middle of long context windows (up to 32-K tokens).
* __Strong in coding.__ Code generation, review, and comments. Supports all mainstream coding languages.
* __Multi-lingual by design.__ Best-in-class performance in French, German, Spanish, Italian, and English. Dozens of other languages are supported.
* __Responsible AI compliant.__ Efficient guardrails baked in the model and extra safety layer with the `safe_mode` option.
+And attributes of _Mistral Large (2407)_ include:
+
+- **Multi-lingual by design.** Supports dozens of languages, including English, French, German, Spanish, and Italian.
+- **Proficient in coding.** Trained on more than 80 coding languages, including Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
+- **Agent-centric.** Possesses agentic capabilities with native function calling and JSON outputting.
+- **Advanced in reasoning.** Demonstrates state-of-the-art mathematical and reasoning capabilities.
++
# [Mistral Small](#tab/mistral-small)

Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency.

Mistral Small is:

-- **A small model optimized for low latency.** Very efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model, it outperforms Mixtral-8x7B and has lower latency.
+- **A small model optimized for low latency.** Efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model; it outperforms Mixtral-8x7B and has lower latency.
- **Specialized in RAG.** Crucial information isn't lost in the middle of long context windows (up to 32K tokens).
- **Strong in coding.** Code generation, review, and comments. Supports all mainstream coding languages.
- **Multi-lingual by design.** Best-in-class performance in French, German, Spanish, Italian, and English. Dozens of other languages are supported.
- **Responsible AI compliant.** Efficient guardrails baked in the model, and extra safety layer with the `safe_mode` option.
+
+# [Mistral Nemo](#tab/mistral-nemo)
+
+Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
+
+Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.
+
+Additionally, Mistral Nemo is:
+
+- **Jointly developed with Nvidia.** This collaboration has resulted in a powerful 12B model that pushes the boundaries of language understanding and generation.
+- **Multilingual proficient.** Mistral Nemo is equipped with a tokenizer called Tekken, which is designed for multilingual applications. It supports over 100 languages, such as English, French, German, and Spanish. Tekken is more efficient than the Llama 3 tokenizer in compressing text for approximately 85% of all languages, with significant improvements in Malayalam, Hindi, Arabic, and prevalent European languages.
+- **Agent-centric.** Mistral Nemo possesses top-tier agentic capabilities, including native function calling and JSON outputting.
+- **Advanced in reasoning.** Mistral Nemo demonstrates state-of-the-art mathematical and reasoning capabilities within its size category.
+
## Deploy Mistral family of models as a serverless API

Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
-**Mistral Large** and **Mistral Small** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models.
+**Mistral Large (2402)**, **Mistral Large (2407)**, **Mistral Small**, and **Mistral Nemo** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models.
### Prerequisites
Certain models in the model catalog can be deployed as a serverless API with pay
### Create a new deployment
-The following steps demonstrate the deployment of Mistral Large, but you can use the same steps to deploy Mistral Small by replacing the model name.
+The following steps demonstrate the deployment of Mistral Large (2402), but you can use the same steps to deploy Mistral Nemo or any of the premium Mistral models by replacing the model name.
To create a deployment:

1. Sign in to [Azure AI Studio](https://ai.azure.com).
1. Select **Model catalog** from the left sidebar.
-1. Search for and select **Mistral-large** to open its Details page.
+1. Search for and select the Mistral Large (2402) model to open its Details page.
:::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-directly-from-catalog.png" alt-text="A screenshot showing how to access the model details page by going through the model catalog." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-directly-from-catalog.png":::
To create a deployment:
1. From the left sidebar of your project, select **Components** > **Deployments**.
1. Select **+ Create deployment**.
- 1. Search for and select **Mistral-large**. to open the Model's Details page.
+ 1. Search for and select the Mistral Large (2402) model to open the Model's Details page.
:::image type="content" source="../media/deploy-monitor/mistral/mistral-large-deploy-starting-from-project.png" alt-text="A screenshot showing how to access the model details page by going through the Deployments page in your project." lightbox="../media/deploy-monitor/mistral/mistral-large-deploy-starting-from-project.png":::
To learn about billing for the Mistral AI model deployed as a serverless API wit
### Consume the Mistral family of models as a service
-You can consume Mistral family models by using the chat API.
+You can consume Mistral models by using the chat API.
1. From your **Project overview** page, go to the left sidebar and select **Components** > **Deployments**.
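As an illustrative, non-authoritative sketch of calling the chat API from Python (the endpoint host, key, and OpenAI-compatible `/v1/chat/completions` route are assumptions to check against your deployment's details page):

```python
import requests

# Placeholder values from your serverless deployment's details page.
endpoint = "https://<your-deployment>.<your-region>.models.ai.azure.com"
key = "<your-api-key>"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of serverless API deployments."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    f"{endpoint}/v1/chat/completions",
    headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"},
    json=payload,
)

# Print the first returned chat completion.
print(response.json()["choices"][0]["message"]["content"])
```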
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Network isolation | [Configure managed networks for Azure AI Studio hubs.](confi
Model | Managed compute | Serverless API (pay-as-you-go)
--|--|--
Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat
-Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large <br> Mistral-small
+Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-Nemo
Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual
JAIS | Not available | jais-30b-chat
-Phi3 family models | Phi-3-small-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct
+Phi3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct
Nixtla | Not available | TimeGEN-1
Other models | Available | Not available
Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/p
Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3 | West US 3
Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, West US, West US 3 | Not available
Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
-Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
Mistral Large (2402) <br> Mistral Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
+Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
-Phi-3-mini-4k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Canada Central, Sweden Central, West US 3 | Not available
-Phi-3-mini-128k-instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Phi-3-small-8k-instruct <br> Phi-3-small-128k-Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
<!-- docutune:enable -->
aks Container Insights Live Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/container-insights-live-data.md
+
+ Title: View Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time
+description: Learn how to view Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time using Container Insights.
+Last updated : 11/01/2023
+# View Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time
+
+In this article, you learn how to use the *live data* feature in Container Insights to view Azure Kubernetes Service (AKS) container logs, events, and pod metrics in real time. This feature provides direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods` to help you troubleshoot issues in real time.
+
+> [!NOTE]
+> AKS uses [Kubernetes cluster-level logging architectures][kubernetes-cluster-architecture]. The container logs are located inside `/var/log/containers` on the node. To access a node, see [Connect to Azure Kubernetes Service (AKS) cluster nodes][node-access].
+
+## Before you begin
+
+For help with setting up the *live data* feature, see [Configure live data in Container Insights][configure-live-data]. This feature directly accesses the Kubernetes API. For more information about the authentication model, see [Kubernetes API][kubernetes-api].
+
+## View AKS resource live logs
+
+> [!NOTE]
+> To access logs from a private cluster, you need to be on a machine on the same private network as the cluster.
+
+1. In the [Azure portal][azure-portal], navigate to your AKS cluster.
+2. Under **Kubernetes resources**, select **Workloads**.
+3. Select the *Deployment*, *Pod*, *Replica Set*, *Stateful Set*, *Job* or *Cron Job* that you want to view logs for, and then select **Live Logs**.
+4. Select the resource you want to view logs for.
+
+ The following example shows the logs for a *Pod* resource:
+
+ :::image type="content" source="./media/container-insights-live-data/live-data-deployment.png" alt-text="Screenshot that shows the deployment of live logs." lightbox="./media/container-insights-live-data/live-data-deployment.png":::
+
+## View live logs
+
+You can view real-time log data as it's generated by the container engine on the *Cluster*, *Nodes*, *Controllers*, or *Containers*.
+
+1. In the [Azure portal][azure-portal], navigate to your AKS cluster.
+2. Under **Monitoring**, select **Insights**.
+3. Select the *Cluster*, *Nodes*, *Controllers*, or *Containers* tab, and then select the object you want to view logs for.
+4. On the resource **Overview**, select **Live Logs**.
+
+ > [!NOTE]
+ > To view the data from your Log Analytics workspace, select **View Logs in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query].
+
+ After successful authentication, if data can be retrieved, it begins streaming to the Live Logs tab. You can view log data here in a continuous stream. The following image shows the logs for a *Container* resource:
+
+ :::image type="content" source="./media/container-insights-live-data/container-live-logs.png" alt-text="Screenshot that shows the container Live Logs view data option." lightbox="./media/container-insights-live-data/container-live-logs.png":::
+
+## View live events
+
+You can view real-time event data as it's generated by the container engine on the *Cluster*, *Nodes*, *Controllers*, or *Containers*.
+
+1. In the [Azure portal][azure-portal], navigate to your AKS cluster.
+2. Under **Monitoring**, select **Insights**.
+3. Select the *Cluster*, *Nodes*, *Controllers*, or *Containers* tab, and then select the object you want to view events for.
+4. On the resource **Overview** page, select **Live Events**.
+
+ > [!NOTE]
+ > To view the data from your Log Analytics workspace, select **View Events in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query].
+
+ After successful authentication, if data can be retrieved, it begins streaming to the Live Events tab. The following image shows the events for a *Container* resource:
+
+ :::image type="content" source="./media/container-insights-live-data/container-live-events.png" alt-text="Screenshot that shows the container Live Events view data option." lightbox="./media/container-insights-live-data/container-live-events.png":::
+
+## View metrics
+
+You can view real-time metrics data as it's generated by the container engine on the *Nodes* or *Controllers* by selecting a *Pod* resource.
+
+1. In the [Azure portal][azure-portal], navigate to your AKS cluster.
+2. Under **Monitoring**, select **Insights**.
+3. Select the *Nodes* or *Controllers* tab, and then select the *Pod* object you want to view metrics for.
+4. On the resource **Overview** page, select **Live Metrics**.
+
+ > [!NOTE]
+ > To view the data from your Log Analytics workspace, select **View Events in Log Analytics**. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container Insights][log-query].
+
+ After successful authentication, if data can be retrieved, it begins streaming to the Live Metrics tab. The following image shows the metrics for a *Pod* resource:
+
+ :::image type="content" source="./media/container-insights-live-data/pod-live-metrics.png" alt-text="Screenshot that shows the pod Live Metrics view data option." lightbox="./media/container-insights-live-data/pod-live-metrics.png":::
+
+## Next steps
+
+For more information about monitoring on AKS, see the following articles:
+
+* [Azure Kubernetes Service (AKS) diagnose and solve problems][aks-diagnose-solve-problems]
+* [Monitor Kubernetes events for troubleshooting][aks-monitor-events]
+
+<!-- LINKS -->
+[kubernetes-cluster-architecture]: https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures
+[node-access]: ./node-access.md
+[configure-live-data]: ../azure-monitor/containers/container-insights-livedata-setup.md
+[kubernetes-api]: https://kubernetes.io/docs/concepts/overview/kubernetes-api/
+[azure-portal]: https://portal.azure.com/
+[log-query]: ../azure-monitor/containers/container-insights-log-query.md
+[aks-diagnose-solve-problems]: ./aks-diagnostics.md
+[aks-monitor-events]: ./events.md
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
Title: Monitor Azure Kubernetes Service control plane metrics (preview)
+ Title: Monitor Azure Kubernetes Service (AKS) control plane metrics (preview)
description: Learn how to collect metrics from the Azure Kubernetes Service (AKS) control plane and view the telemetry in Azure Monitor. --++ Last updated 01/31/2024 -
-#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues
+#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues.
# Monitor Azure Kubernetes Service (AKS) control plane metrics (preview)
-The Azure Kubernetes Service (AKS) [control plane](concepts-clusters-workloads.md#control-plane) health is critical for the performance and reliability of the cluster. Control plane metrics (preview) provide more visibility into its availability and performance, allowing you to maximize overall observability and maintain operational excellence. These metrics are fully compatible with Prometheus and Grafana, and can be customized to only store what you consider necessary. With these new metrics, you can collect all metrics from API server, ETCD, Scheduler, Autoscaler, and controller manager.
-
-This article helps you understand this new feature, how to implement it, and how to observe the telemetry collected.
+This article shows you how to use the control plane metrics (preview) feature in Azure Kubernetes Service (AKS) to collect metrics from the control plane and view the telemetry in Azure Monitor. The control plane metrics feature is fully compatible with Prometheus and Grafana and provides more visibility into the availability and performance of the control plane components, such as the API server, ETCD, Scheduler, Autoscaler, and controller manager. You can use these metrics to maximize overall observability and maintain operational excellence for your AKS cluster.
## Prerequisites and limitations -- Only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview].
+- Control plane metrics (preview) only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview].
- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported.-- Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported.-- The cluster must use [managed identity authentication](use-managed-identity.md).
+- You can only customize the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps). No other customizations are supported.
+- The AKS cluster must use [managed identity authentication](use-managed-identity.md).
### Install or update the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](~/reusable-content/ce-skilling/azure/includes/aks/includes/preview/preview-callout.md)]
-Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+- Install or update the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command.
-```azurecli-interactive
-az extension add --name aks-preview
-```
+ ```azurecli-interactive
+ # Install the aks-preview extension
+ az extension add --name aks-preview
+
+ # Update the aks-preview extension
+ az extension update --name aks-preview
+ ```
-If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command.
+### Register the `AzureMonitorMetricsControlPlanePreview` feature flag
-```azurecli-interactive
-az extension update --name aks-preview
-```
+1. Register the `AzureMonitorMetricsControlPlanePreview` feature flag using the [`az feature register`][az-feature-register] command.
-### Register the 'AzureMonitorMetricsControlPlanePreview' feature flag
-
-Register the `AzureMonitorMetricsControlPlanePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+ ```
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
-```
+ It takes a few minutes for the status to show *Registered*.
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
-```
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+ ```
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-```azurecli-interactive
-az provider register --namespace "Microsoft.ContainerService"
-```
+ ```azurecli-interactive
+ az provider register --namespace "Microsoft.ContainerService"
+ ```
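+
+Optionally, you can confirm the provider registration state from the command line. This is just a quick check and isn't required:
+
+```bash
+az provider show --namespace "Microsoft.ContainerService" --query registrationState --output tsv
+```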
## Enable control plane metrics on your AKS cluster
-You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on during cluster creation or for an existing cluster. To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for Kubernetes clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster.
+You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on when creating a new cluster or updating an existing cluster.
-If your cluster already has the Prometheus addon deployed, then you can simply run an `az aks update` to ensure the cluster updates to start collecting control plane metrics.
+## Enable control plane metrics on a new AKS cluster
-```azurecli
-az aks update --name <cluster-name> --resource-group <resource-group>
-```
+To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for AKS clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster.
+
+## Enable control plane metrics on an existing AKS cluster
+
+- If your cluster already has the Prometheus add-on, update the cluster to ensure it starts collecting control plane metrics using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP
+ ```
> [!NOTE]
-> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace.
+> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed Prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace.
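+
+To confirm that the managed Prometheus add-on is enabled on the cluster, you can inspect the cluster's Azure Monitor profile. This is an illustrative check; the exact shape of the output can vary by CLI version:
+
+```bash
+az aks show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query azureMonitorProfile
+```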
-## Querying control plane metrics
+## Query control plane metrics
-Control plane metrics are stored in an Azure monitor workspace in the cluster's region. They can be queried directly from the workspace or through the Azure Managed Grafana instance connected to the workspace. To find the Azure Monitor workspace associated with the cluster, from the left-hand pane of your selected AKS cluster, navigate to the **Monitoring** section and select **Insights**. On the Container Insights page for the cluster, select **Monitor Settings**.
+Control plane metrics are stored in an Azure Monitor workspace in the cluster's region. You can query the metrics directly from the workspace or through the Azure managed Grafana instance connected to the workspace.
+View the control plane metrics in the Azure Monitor workspace using the following steps:
-If you're using Azure Managed Grafana to visualize the data, you can import the following dashboards. AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time.
+1. In the [Azure portal][azure-portal], navigate to your AKS cluster.
+2. Under **Monitoring**, select **Insights**.
-* [API server][grafana-dashboard-template-api-server]
-* [ETCD][grafana-dashboard-template-etcd]
+ :::image type="content" source="media/monitor-control-plane-metrics/insights-azmon.png" alt-text="Screenshot of Azure Monitor workspace." lightbox="media/monitor-control-plane-metrics/insights-azmon.png":::
+
+> [!NOTE]
+> AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time. If you're using Azure managed Grafana to visualize the data, you can import the following dashboards:
+>
+> - [API server][grafana-dashboard-template-api-server]
+> - [ETCD][grafana-dashboard-template-etcd]
## Customize control plane metrics
-By default, AKs includes a pre-configured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. This list can be customized through the [ama-settings-configmap][ama-metrics-settings-configmap]. The list of `minimal-ingestion` profile metrics are available [here][list-of-default-metrics-aks-control-plane].
+AKS includes a preconfigured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. You can customize this list through the [`ama-settings-configmap`][ama-metrics-settings-configmap].
-The following lists the default targets:
+The default targets include the following:
```yaml
controlplane-apiserver = true
controlplane-kube-controller-manager = false
controlplane-etcd = true
```
-The various options are similar to Azure Managed Prometheus listed [here][prometheus-metrics-scrape-configuration-minimal].
+All ConfigMaps should be applied to the `kube-system` namespace for any cluster.
-All ConfigMaps should be applied to `kube-system` namespace for any cluster.
+For more information about `minimal-ingestion` profile metrics, see [Minimal ingestion profile for control plane metrics in managed Prometheus][list-of-default-metrics-aks-control-plane].
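+
+If you want to check whether a settings ConfigMap is already applied to your cluster before customizing it, a quick check might look like the following, assuming the default ConfigMap name used by the metrics add-on:
+
+```bash
+kubectl get configmap ama-metrics-settings-configmap -n kube-system -o yaml
+```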
-### Ingest only minimal metrics for the default targets
+### Ingest only minimal metrics from default targets
-This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only metrics listed later in this article are ingested for each of the default targets, which in this case is `controlplane-apiserver` and `controlplane-etcd`.
+When setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`, only the minimal set of metrics are ingested for each of the default targets: `controlplane-apiserver` and `controlplane-etcd`.
### Ingest all metrics from all targets
-Perform the following steps to collect all metrics from all targets on the cluster.
+Collect all metrics from all targets on the cluster using the following steps:
1. Download the ConfigMap file [ama-metrics-settings-configmap.yaml][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.-
-1. Set `minimalingestionprofile = false` and verify the targets under `default-scrape-settings-enabled` that you want to scrape, are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
-
-1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+2. Set `minimalingestionprofile = false`.
+3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+4. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command.
```bash
kubectl apply -f configmap-controlplane.yaml
```
### Ingest a few other metrics in addition to minimal metrics
-`Minimal ingestion profile` is a setting that helps reduce ingestion volume of metrics, as only metrics used by default dashboards, default recording rules & default alerts are collected. Perform the following steps to customize this behavior.
+The `minimal ingestion profile` setting helps reduce the ingestion volume of metrics, as it only collects metrics used by default dashboards, default recording rules, and default alerts. To customize this setting, use the following steps:
1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.-
-1. Set `minimalingestionprofile = true` and verify the targets under `default-scrape-settings-enabled` that you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
-
-1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example,
+2. Set `minimalingestionprofile = true`.
+3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+4. Under `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example:
```yaml
controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
```

-- Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+5. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command.
```bash
kubectl apply -f configmap-controlplane.yaml
```
### Ingest only specific metrics from some targets 1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.-
-1. Set `minimalingestionprofile = false` and verify the targets under `default-scrape-settings-enabled` that you want to scrape are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`,`controlplane-kube-controller-manager`, and `controlplane-etcd`.
-
-1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example,
+2. Set `minimalingestionprofile = false`.
+3. Under `default-scrape-settings-enabled`, verify that the targets you want to scrape are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`,`controlplane-kube-controller-manager`, and `controlplane-etcd`.
+4. Under `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example:
```yaml
controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
```

-- Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+5. Apply the ConfigMap using the [`kubectl apply`][kubectl-apply] command.
```bash
kubectl apply -f configmap-controlplane.yaml
```
## Troubleshoot control plane metrics issues
-Make sure to check that the feature flag `AzureMonitorMetricsControlPlanePreview` is enabled and the `ama-metrics` pods are running.
+Make sure the feature flag `AzureMonitorMetricsControlPlanePreview` is enabled and the `ama-metrics` pods are running.
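+
+For example, a quick check from the command line might look like the following (the `grep` filter is only an illustration):
+
+```bash
+# Confirm the preview feature flag is registered
+az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" --query properties.state
+
+# Confirm the ama-metrics pods are running
+kubectl get pods -n kube-system | grep ama-metrics
+```
+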
> [!NOTE]
-> The [troubleshooting methods][prometheus-troubleshooting] for Azure managed service Prometheus won't translate directly here as the components scraping the control plane aren't present in the managed prometheus add-on.
+> The [troubleshooting methods][prometheus-troubleshooting] for Azure managed service Prometheus don't directly translate here, as the components scraping the control plane aren't present in the managed Prometheus add-on.
-## ConfigMap formatting or errors
+### ConfigMap formatting
-Make sure to double check the formatting of the ConfigMap, and if the fields are correctly populated with the intended values. Specifically the `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`.
+Make sure you're using proper formatting in the ConfigMap and that the fields, specifically `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`, are correctly populated with their intended values.
-### Isolate control plane from data plane issue
+### Isolate control plane from data plane
Start by setting some of the [node related metrics][node-metrics] to `true` and verify the metrics are being forwarded to the workspace. This helps determine if the issue is specific to scraping control plane metrics. ### Events ingested
-Once you applied the changes, you can open metrics explorer from the **Azure Monitor overview** page, or from the **Monitoring** section the selected cluster. In the Azure portal, select **Metrics**. Check for an increase or decrease in the number of events ingested per minute. It should help you determine if the specific metric is missing or all metrics are missing.
+Once you've applied the changes, you can open metrics explorer from the **Azure Monitor overview** page or from the **Monitoring** section of the selected cluster and check for an increase or decrease in the number of events ingested per minute. This should help you determine if a specific metric is missing or if all metrics are missing.
-### Specific metric is not exposed
+### Specific metric isn't exposed
-There were cases where the metrics are documented, but not exposed from the target and wasn't forwarded to the Azure Monitor workspace. In this case, it's necessary to verify other metrics are being forwarded to the workspace.
+There have been cases where metrics are documented, but aren't exposed from the target and aren't forwarded to the Azure Monitor workspace. In this case, it's necessary to verify other metrics are being forwarded to the workspace.
### No access to the Azure Monitor workspace
-When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics are not being collected and forwarded. Make sure that you create a new workspace while enabling the add-on or while creating the cluster.
+When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics aren't being collected and forwarded. Make sure that you create a new workspace while enabling the add-on or while creating the cluster.
## Disable control plane metrics on your AKS cluster
-You can disable control plane metrics at any time, by either disabling the feature flag, disabling managed Prometheus, or by deleting the AKS cluster.
-
-## Preview flag enabled after Managed Prometheus setup
-If the preview flag(`AzureMonitorMetricsControlPlanePreview`) was enabled on an existing Managed Prometheus cluster, it will require forcing an update for the cluster to emit control plane metrics
+You can disable control plane metrics at any time by disabling the managed Prometheus add-on and unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag.
-You can run an az aks update to ensure the cluster updates to start collecting control plane metrics.
+1. Remove the metrics add-on that scrapes Prometheus metrics using the [`az aks update`][az-aks-update] command.
-```azurecli
-az aks update -n <cluster-name> -g <resource-group>
-```
-
-> [!NOTE]
-> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+ ```azurecli-interactive
+ az aks update --disable-azure-monitor-metrics --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP
+ ```
-Run the following command to remove the metrics add-on that scrapes Prometheus metrics.
+2. Disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [`az feature unregister`][az-feature-unregister] command.
-```azurecli-interactive
-az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
-```
+ ```azurecli-interactive
+    az feature unregister --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+ ```
-Run the following command to disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [az feature unregister][az-feature-unregister] command.
+## FAQ
-```azurecli-interactive
-az feature unregister "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
-```
+### Can I scrape control plane metrics with self-hosted Prometheus?
-## FAQs
-* Can these metrics be scraped with self hosted prometheus?
- * The control plane metrics currently cannot be scraped with self hosted prometheus. Self hosted prometheus will be able to scrape the single instance depending on the load balancer. These metrics are notaccurate as there are often multiple replicas of the control plane metrics which will only be visible through Managed Prometheus
+No, you currently can't scrape control plane metrics with self-hosted Prometheus. Self-hosted Prometheus can only scrape a single instance, depending on the load balancer, so the metrics aren't reliable because there are often multiple replicas of the control plane. Control plane metrics are only visible through managed Prometheus.
-* Why is the user agent not available through the control plane metrics?
- * [Control plane metrics in Kubernetes](https://kubernetes.io/docs/reference/instrumentation/metrics/) do not have the user agent. The user agent is only available through Control Plane logs available through [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md)
+### Why is the user agent not available through the control plane metrics?
+[Control plane metrics in Kubernetes](https://kubernetes.io/docs/reference/instrumentation/metrics/) don't have the user agent. The user agent is only available through the control plane logs available in the [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md).
## Next steps After evaluating this preview feature, [share your feedback][share-feedback]. We're interested in hearing what you think. -- Learn more about the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane].
+To learn more about AKS control plane metrics, see the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane].
<!-- EXTERNAL LINKS --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
After evaluating this preview feature, [share your feedback][share-feedback]. We
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [enable-monitoring-kubernetes-cluster]: ../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana
-[prometheus-metrics-scrape-configuration-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md#scenarios
[prometheus-troubleshooting]: ../azure-monitor/containers/prometheus-metrics-troubleshoot.md [node-metrics]: ../azure-monitor/containers/prometheus-metrics-scrape-default.md [list-of-default-metrics-aks-control-plane]: control-plane-metrics-default-list.md [az-feature-unregister]: /cli/azure/feature#az-feature-unregister
-[release-tracker]: https://releases.aks.azure.com/#tabversion
-
+[azure-portal]: https://portal.azure.com
+[az-aks-update]: /cli/azure/aks#az-aks-update
aks Use Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-vertical-pod-autoscaler.md
+
+ Title: Use the Vertical Pod Autoscaler in Azure Kubernetes Service (AKS)
+description: Learn how to deploy, upgrade, or disable the Vertical Pod Autoscaler on your Azure Kubernetes Service (AKS) cluster.
+ Last updated: 02/22/2024
+# Use the Vertical Pod Autoscaler in Azure Kubernetes Service (AKS)
+
+This article shows you how to use the Vertical Pod Autoscaler (VPA) on your Azure Kubernetes Service (AKS) cluster. The VPA automatically adjusts the CPU and memory requests for your pods to match the usage patterns of your workloads. This feature helps to optimize the performance of your applications and reduce the cost of running your workloads in AKS.
+
+For more information, see the [Vertical Pod Autoscaler overview](./vertical-pod-autoscaler.md).
+
+## Before you begin
+
+* If you have an existing AKS cluster, make sure it's running Kubernetes version 1.24 or higher.
+* You need the Azure CLI version 2.52.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* If enabling VPA on an existing cluster, make sure `kubectl` is installed and configured to connect to your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --name <cluster-name> --resource-group <resource-group-name>
+ ```
+
+## Deploy the Vertical Pod Autoscaler on a new cluster
+
+* Create a new AKS cluster with the VPA enabled using the [`az aks create`][az-aks-create] command with the `--enable-vpa` flag.
+
+ ```azurecli-interactive
+ az aks create --name <cluster-name> --resource-group <resource-group-name> --enable-vpa --generate-ssh-keys
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+## Update an existing cluster to use the Vertical Pod Autoscaler
+
+* Update an existing cluster to use the VPA using the [`az aks update`][az-aks-update] command with the `--enable-vpa` flag.
+
+ ```azurecli-interactive
+ az aks update --name <cluster-name> --resource-group <resource-group-name> --enable-vpa
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+## Disable the Vertical Pod Autoscaler on an existing cluster
+
+* Disable the VPA on an existing cluster using the [`az aks update`][az-aks-update] command with the `--disable-vpa` flag.
+
+ ```azurecli-interactive
+ az aks update --name <cluster-name> --resource-group <resource-group-name> --disable-vpa
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+## Test Vertical Pod Autoscaler installation
+
+In the following example, we create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. We also create a VPA config pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, updates the pods to request 500 millicores.
+
+1. Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository:
+
+ ```yml
+ apiVersion: "autoscaling.k8s.io/v1"
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: hamster-vpa
+ spec:
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: hamster
+ resourcePolicy:
+ containerPolicies:
+ - containerName: '*'
+ minAllowed:
+ cpu: 100m
+ memory: 50Mi
+ maxAllowed:
+ cpu: 1
+ memory: 500Mi
+ controlledResources: ["cpu", "memory"]
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hamster
+ spec:
+ selector:
+ matchLabels:
+ app: hamster
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: hamster
+ spec:
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534
+ containers:
+ - name: hamster
+ image: registry.k8s.io/ubuntu-slim:0.1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+ args:
+ - "-c"
+            - "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"
+ ```
+
+2. Deploy the `hamster.yaml` Vertical Pod Autoscaler example using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f hamster.yaml
+ ```
+
+    The output shows that the VerticalPodAutoscaler and Deployment resources were created.
+
+3. View the running pods using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods -l app=hamster
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ hamster-78f9dcdd4c-hf7gk 1/1 Running 0 24s
+ hamster-78f9dcdd4c-j9mc7 1/1 Running 0 24s
+ ```
+
+4. View the CPU and Memory reservations on one of the pods using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of the pod IDs returned in your output from the previous step.
+
+ ```bash
+ kubectl describe pod hamster-<example-pod>
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ hamster:
+ Container ID: containerd://
+ Image: k8s.gcr.io/ubuntu-slim:0.1
+ Image ID: sha256:
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /bin/sh
+ Args:
+ -c
+      while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
+ State: Running
+ Started: Wed, 28 Sep 2022 15:06:14 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 100m
+ memory: 50Mi
+ Environment: <none>
+ ```
+
+    In this example, the pod has 100 millicpu and 50 mebibytes of memory reserved. The sample application needs more than 100 millicpu to run, so there's no spare CPU capacity available, and the pod also reserves less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
+
+5. Monitor the pods using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get --watch pods -l app=hamster
+ ```
+
+6. When the new hamster pod starts, you can view the updated CPU and Memory reservations using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of the pod IDs returned in your output from the previous step.
+
+ ```bash
+ kubectl describe pod hamster-<example-pod>
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+    In the previous output, you can see that the CPU reservation increased to 587 millicpu, which is over five times the original value. The memory increased to 262,144 kilobytes, which is around 250 mebibytes, or five times the original value. This pod was under-resourced, and the Vertical Pod Autoscaler corrected the estimate with a much more appropriate value.
+
+7. View updated recommendations from VPA using the [`kubectl describe`][kubectl-describe] command to describe the hamster-vpa resource information.
+
+ ```bash
+ kubectl describe vpa/hamster-vpa
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+## Set Vertical Pod Autoscaler requests
+
+The `VerticalPodAutoscaler` object automatically sets resource requests on pods with an `updateMode` of `Auto`. You can set a different value depending on your requirements and testing. In this example, we create and test a deployment manifest with two pods, each running a container that requests 100 milliCPU and 50 MiB of Memory, and sets the `updateMode` to `Recreate`.
+
+1. Create a file named `azure-autodeploy.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: vpa-auto-deployment
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: vpa-auto-deployment
+ template:
+ metadata:
+ labels:
+ app: vpa-auto-deployment
+ spec:
+ containers:
+ - name: mycontainer
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+          args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
+ ```
+
+2. Create the pod using the [`kubectl create`][kubectl-create] command.
+
+ ```bash
+ kubectl create -f azure-autodeploy.yaml
+ ```
+
+    The output shows that the deployment was created.
+
+3. View the running pods using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ vpa-auto-deployment-54465fb978-kchc5 1/1 Running 0 52s
+ vpa-auto-deployment-54465fb978-nhtmj 1/1 Running 0 52s
+ ```
+
+4. Create a file named `azure-vpa-auto.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: autoscaling.k8s.io/v1
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: vpa-auto
+ spec:
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: vpa-auto-deployment
+ updatePolicy:
+ updateMode: "Recreate"
+ ```
+
+    The `targetRef.name` value specifies that any pod controlled by a deployment named `vpa-auto-deployment` belongs to this `VerticalPodAutoscaler` object. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and Memory requests, and then create a new pod.
+
+5. Apply the manifest to the cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+    kubectl apply -f azure-vpa-auto.yaml
+ ```
+
+6. Wait a few minutes and then view the running pods using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ vpa-auto-deployment-54465fb978-qbhc4 1/1 Running 0 2m49s
+ vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s
+ ```
+
+7. Get detailed information about one of your running pods using the [`kubectl get`][kubectl-get] command. Make sure you replace `<pod-name>` with the name of one of your pods from your previous output.
+
+ ```bash
+ kubectl get pod <pod-name> --output yaml
+ ```
+
+    Your output should look similar to the following example output, which shows that the VPA controller increased the memory request to 262144k and reduced the CPU request to 25 milliCPU:
+
+ ```output
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ annotations:
+ vpaObservedContainers: mycontainer
+ vpaUpdates: 'Pod resources updated by vpa-auto: container 0: cpu request, memory
+ request'
+ creationTimestamp: "2022-09-29T16:44:37Z"
+ generateName: vpa-auto-deployment-54465fb978-
+ labels:
+ app: vpa-auto-deployment
+
+ spec:
+ containers:
+ - args:
+ - -c
+      - while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
+ command:
+ - /bin/sh
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ imagePullPolicy: IfNotPresent
+ name: mycontainer
+ resources:
+ requests:
+ cpu: 25m
+ memory: 262144k
+ ```
+
+8. Get detailed information about the Vertical Pod Autoscaler and its recommendations for CPU and Memory using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get vpa vpa-auto --output yaml
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ recommendation:
+ containerRecommendations:
+ - containerName: mycontainer
+ lowerBound:
+ cpu: 25m
+ memory: 262144k
+ target:
+ cpu: 25m
+ memory: 262144k
+ uncappedTarget:
+ cpu: 25m
+ memory: 262144k
+ upperBound:
+ cpu: 230m
+ memory: 262144k
+ ```
+
+ In this example, the results in the `target` attribute specify that it doesn't need to change the CPU or the Memory target for the container to run optimally. However, results can vary depending on the application and its resource utilization.
+
+ The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute.
+
+## Extra Recommender for Vertical Pod Autoscaler
+
+The Recommender provides recommendations for resource usage based on real-time resource consumption. AKS deploys a Recommender when a cluster enables VPA. You can deploy a customized Recommender or an extra Recommender with the same image as the default one. The benefit of having a customized Recommender is that you can customize your recommendation logic. With an extra Recommender, you can partition VPAs to use different Recommenders.
+
+In the following example, we create an extra Recommender, apply it to an existing AKS cluster, and configure the VPA object to use the extra Recommender.
+
+1. Create a file named `extra_recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: extra-recommender
+ namespace: kube-system
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: extra-recommender
+ template:
+ metadata:
+ labels:
+ app: extra-recommender
+ spec:
+ serviceAccountName: vpa-recommender
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534
+ containers:
+ - name: recommender
+ image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0
+ imagePullPolicy: Always
+ args:
+ - --recommender-name=extra-recommender
+ resources:
+ limits:
+ cpu: 200m
+ memory: 1000Mi
+ requests:
+ cpu: 50m
+ memory: 500Mi
+ ports:
+ - name: prometheus
+ containerPort: 8942
+ ```
+
+2. Deploy the `extra_recommender.yaml` Vertical Pod Autoscaler example using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+    kubectl apply -f extra_recommender.yaml
+ ```
+
+    The output shows that the extra Recommender deployment was created.
+
+3. Create a file named `hamster-extra-recommender.yaml` and copy in the following manifest:
+
+ ```yml
+ apiVersion: "autoscaling.k8s.io/v1"
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: hamster-vpa
+ spec:
+ recommenders:
+ - name: 'extra-recommender'
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: hamster
+ updatePolicy:
+ updateMode: "Auto"
+ resourcePolicy:
+ containerPolicies:
+ - containerName: '*'
+ minAllowed:
+ cpu: 100m
+ memory: 50Mi
+ maxAllowed:
+ cpu: 1
+ memory: 500Mi
+ controlledResources: ["cpu", "memory"]
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hamster
+ spec:
+ selector:
+ matchLabels:
+ app: hamster
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: hamster
+ spec:
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ containers:
+ - name: hamster
+ image: k8s.gcr.io/ubuntu-slim:0.1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+ args:
+ - "-c"
+            - "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"
+ ```
+
+    If `memory` isn't specified in `controlledResources`, the Recommender doesn't respond to OOM events. In this example, `controlledResources` includes both CPU and memory. The optional `controlledValues` field allows you to choose whether to update only the container's resource requests, using the `RequestsOnly` option, or both resource requests and limits, using the `RequestsAndLimits` option. The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, requests are computed based on actual usage, and limits are calculated based on the current pod's request and limit ratio.
+
+ For example, if you start with a pod that requests 2 CPUs and limits to 4 CPUs, VPA always sets the limit to be twice as much as requests. The same principle applies to Memory. When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits.
+
+ You can simplify the VPA object using `Auto` mode and computing recommendations for both CPU and Memory.
+
+4. Deploy the `hamster-extra-recommender.yaml` example using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f hamster-extra-recommender.yaml
+ ```
+
+5. Monitor your pods using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get --watch pods -l app=hamster
+    ```
+
+6. When the new hamster pod starts, view the updated CPU and Memory reservations using the [`kubectl describe`][kubectl-describe] command. Make sure you replace `<example-pod>` with one of your pod IDs.
+
+ ```bash
+ kubectl describe pod hamster-<example-pod>
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+7. View updated recommendations from VPA using the [`kubectl describe`][kubectl-describe] command.
+
+ ```bash
+ kubectl describe vpa/hamster-vpa
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ Spec:
+ recommenders:
+ Name: customized-recommender
+ ```
+
+## Troubleshoot the Vertical Pod Autoscaler
+
+If you encounter issues with the Vertical Pod Autoscaler, you can troubleshoot the system components and custom resource definition to identify the problem.
+
+1. Verify that all system components are running using the following command:
+
+ ```bash
+ kubectl --namespace=kube-system get pods|grep vpa
+ ```
+
+ Your output should list *three pods*: recommender, updater, and admission-controller, all with a status of `Running`.
+
+2. For each of the pods returned in your previous output, verify that the system components are logging any errors using the following command:
+
+ ```bash
+ kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
+ ```
+
+3. Verify that the custom resource definition was created using the following command:
+
+ ```bash
+ kubectl get customresourcedefinition | grep verticalpodautoscalers
+ ```
+
+## Next steps
+
+To learn more about the VPA object, see the [Vertical Pod Autoscaler API reference](./vertical-pod-autoscaler-api-reference.md).
+
+<!-- EXTERNAL LINKS -->
+[kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+
+<!-- INTERNAL LINKS -->
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
-description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster.
-
+ Title: Vertical pod autoscaling in Azure Kubernetes Service (AKS)
+description: Learn about vertical pod autoscaling in Azure Kubernetes Service (AKS) using the Vertical Pod Autoscaler (VPA).
+ Last updated 09/28/2023
-# Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
+# Vertical pod autoscaling in Azure Kubernetes Service (AKS)
-This article provides an overview of Vertical Pod Autoscaler (VPA) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA frees up CPU and Memory for the other pods and helps make effective utilization of your AKS cluster.
+This article provides an overview of using the Vertical Pod Autoscaler (VPA) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version.
-Vertical Pod autoscaling provides recommendations for resource usage over time. To manage sudden increases in resource usage, use the [Horizontal Pod Autoscaler][horizontal-pod-autoscaling], which scales the number of pod replicas as needed.
+When configured, the VPA automatically sets resource requests and limits on containers per workload based on past usage. The VPA frees up CPU and Memory for other pods and helps ensure effective utilization of your AKS clusters. The Vertical Pod Autoscaler provides recommendations for resource usage over time. To manage sudden increases in resource usage, use the [Horizontal Pod Autoscaler][horizontal-pod-autoscaling], which scales the number of pod replicas as needed.
## Benefits
-Vertical Pod Autoscaler provides the following benefits:
+The Vertical Pod Autoscaler offers the following benefits:
-* It analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down based on their resource use over time.
+* Analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down based on resource use over time.
+* A pod with a scaling mode set to *auto* or *recreate* is evicted if it needs to change its resource requests.
+* You can set CPU and memory constraints for individual containers by specifying a resource policy.
+* Ensures nodes have correct resources for pod scheduling.
+* Offers configurable logging of any adjustments made to processor or memory resources.
+* Improves cluster resource utilization and frees up CPU and memory for other pods.
-* A pod is evicted if it needs to change its resource requests if its scaling mode is set to *auto* or *recreate*.
+## Limitations and considerations
-* Set CPU and memory constraints for individual containers by specifying a resource policy
+Consider the following limitations and considerations when using the Vertical Pod Autoscaler:
-* Ensures nodes have correct resources for pod scheduling
-
-* Configurable logging of any adjustments to processor or memory resources made
-
-* Improve cluster resource utilization and frees up CPU and memory for other pods.
-
-## Limitations
-
-* Vertical Pod autoscaling supports a maximum of 1,000 pods associated with `VerticalPodAutoscaler` objects per cluster.
-
-* VPA might recommend more resources than available in the cluster. As a result, this prevents the pod from being assigned to a node and run, because the node doesn't have sufficient resources. You can overcome this limitation by setting the *LimitRange* to the maximum available resources per namespace, which ensures pods don't ask for more resources than specified. Additionally, you can set maximum allowed resource recommendations per pod in a `VerticalPodAutoscaler` object. Be aware that VPA cannot fully overcome an insufficient node resource issue. The limit range is fixed, but the node resource usage is changed dynamically.
-
-* We don't recommend using Vertical Pod Autoscaler with [Horizontal Pod Autoscaler][horizontal-pod-autoscaler-overview], which scales based on the same CPU and memory usage metrics.
-
-* VPA Recommender only stores up to eight days of historical data.
-
-* VPA does not support JVM-based workloads due to limited visibility into actual memory usage of the workload.
-
-* It is not recommended or supported to run your own implementation of VPA alongside this managed implementation of VPA. Having an extra or customized recommender is supported.
-
-* AKS Windows containers are not supported.
-
-## Before you begin
-
-* AKS cluster is running Kubernetes version 1.24 and higher.
-
-* The Azure CLI version 2.52.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-* `kubectl` should be connected to the cluster you want to install VPA.
+* VPA supports a maximum of 1,000 pods associated with `VerticalPodAutoscaler` objects per cluster.
+* VPA might recommend more resources than available in the cluster, which prevents the pod from being assigned to a node and run due to insufficient resources. You can overcome this limitation by setting the *LimitRange* to the maximum available resources per namespace, which ensures pods don't ask for more resources than specified (see the sketch after this list). You can also set maximum allowed resource recommendations per pod in a `VerticalPodAutoscaler` object. The VPA can't completely overcome an insufficient node resource issue: the limit range is fixed, but node resource usage changes dynamically.
+* We don't recommend using VPA with the [Horizontal Pod Autoscaler (HPA)][horizontal-pod-autoscaler-overview], which scales based on the same CPU and memory usage metrics.
+* The VPA Recommender only stores up to *eight days* of historical data.
+* VPA doesn't support JVM-based workloads due to limited visibility into actual memory usage of the workload.
+* VPA doesn't support running your own implementation of VPA alongside it. Having an extra or customized recommender is supported.
+* AKS Windows containers aren't supported.
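+
+The following is a minimal sketch of the *LimitRange* approach mentioned in the limitations above; the namespace and values are placeholders that you'd adapt to your own cluster:
+
+```bash
+# Cap per-container CPU and memory in a namespace so VPA recommendations
+# can't exceed what the nodes can actually schedule.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: vpa-limit-range
+  namespace: my-namespace
+spec:
+  limits:
+  - type: Container
+    max:
+      cpu: "2"
+      memory: 2Gi
+EOF
+```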
## VPA overview
-### API object
-
-The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported is 0.11 and higher, and can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
- The VPA object consists of three components: -- **Recommender** - it monitors the current and past resource consumption and, based on it, provides recommended values for the containers' cpu and memory requests/limits. The **Recommender** monitors the metric history, Out of Memory (OOM) events, and the VPA deployment spec, and suggests fair requests. By providing a proper resource request and limits configuration, the limits are raised and lowered.--- **Updater** - it checks which of the managed pods have correct resources set and, if not, kills them so that they can be recreated by their controllers with the updated requests.--- **VPA Admission controller** - it sets the correct resource requests on new pods (either created or recreated by their controller due to the Updater's activity).
+* **Recommender**: The Recommender monitors current and past resource consumption, including metric history, Out of Memory (OOM) events, and VPA deployment specs, and uses the information it gathers to provide recommended values for container CPU and Memory requests/limits.
+* **Updater**: The Updater monitors managed pods to ensure that their resource requests are set correctly. If not, it removes those pods so that their controllers can recreate them with the updated requests.
+* **VPA Admission Controller**: The VPA Admission Controller sets the correct resource requests on new pods either created or recreated by their controller based on the Updater's activity.
### VPA admission controller
-VPA admission controller is a binary that registers itself as a Mutating Admission Webhook. With each pod created, it gets a request from the apiserver and it evaluates if there's a matching VPA configuration, or find a corresponding one and use the current recommendation to set resource requests in the pod.
-
-A standalone job runs outside of the VPA admission controller, called `overlay-vpa-cert-webhook-check`. The `overlay-vpa-cert-webhook-check` is used to create and renew the certificates, and register the VPA admission controller as a `MutatingWebhookConfiguration`.
+The VPA Admission Controller is a binary that registers itself as a *Mutating Admission Webhook* (as sketched below). When a new pod is created, the VPA Admission Controller gets a request from the API server, evaluates whether there's a matching VPA configuration, and, if so, uses the current recommendation to set resource requests in the pod.
-For high availability, AKS supports two admission controller replicas.
+A standalone job, `overlay-vpa-cert-webhook-check`, runs outside of the VPA Admission Controller. The `overlay-vpa-cert-webhook-check` job creates and renews the certificates and registers the VPA Admission Controller as a `MutatingWebhookConfiguration`.
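For illustration only, a mutating webhook registration has roughly the following shape. The names, namespace, and rules in this sketch are assumptions; they aren't the exact object that the `overlay-vpa-cert-webhook-check` job creates on AKS:

```yml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-vpa-webhook-config      # hypothetical name used only for illustration
webhooks:
- name: vpa.example.k8s.io              # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                 # don't block pod creation if the webhook is unavailable
  clientConfig:
    service:
      name: vpa-webhook                 # assumed service fronting the admission controller
      namespace: kube-system
      port: 443
  rules:
  - operations: ["CREATE"]              # pods are mutated at creation time
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
```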
### VPA object operation modes
-A Vertical Pod Autoscaler resource is inserted for each controller that you want to have automatically computed resource requirements. This is most commonly a *deployment*. There are four modes in which VPAs operate:
-
-* `Auto` - VPA assigns resource requests during pod creation and updates existing pods using the preferred update mechanism. Currently, `Auto` is equivalent to `Recreate`, and also is the default mode. Once restart free ("in-place") update of pod requests is available, it may be used as the preferred update mechanism by the `Auto` mode. When using `Recreate` mode, VPA evicts a pod if it needs to change its resource requests. It may cause the pods to be restarted all at once, thereby causing application inconsistencies. You can limit restarts and maintain consistency in this situation by using a [PodDisruptionBudget][pod-disruption-budget].
-* `Recreate` - VPA assigns resource requests during pod creation as well as update existing pods by evicting them when the requested resources differ significantly from the new recommendation (respecting the Pod Disruption Budget, if defined). This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Otherwise, the `Auto` mode is preferred, which may take advantage of restart-free updates once they are available.
-* `Initial` - VPA only assigns resource requests during pod creation and never changes afterwards.
-* `Off` - VPA doesn't automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
+A Vertical Pod Autoscaler resource is inserted for each controller that you want to have automatically computed resource requirements, most commonly a *deployment*.
-## Deployment pattern during application development
+There are four modes in which the VPA operates:
-A common deployment pattern recommended for you if you're unfamiliar with VPA is to perform the following steps during application development in order to identify its unique resource utilization characteristics, test VPA to verify it is functioning properly, and test alongside other Kubernetes components to optimize resource utilization of the cluster.
+* `Auto`: VPA assigns resource requests during pod creation and updates existing pods using the preferred update mechanism. `Auto`, which is currently equivalent to `Recreate`, is the default mode. Once restart-free, or *in-place*, updates of pod requests are available, the `Auto` mode can use them as the preferred update mechanism. With the `Auto` mode, VPA evicts a pod if it needs to change its resource requests, which might cause the pods to be restarted all at once and lead to application inconsistencies. You can limit restarts and maintain consistency in this situation using a [PodDisruptionBudget][pod-disruption-budget] (see the sketch after this list).
+* `Recreate`: VPA assigns resource requests during pod creation and updates existing pods by evicting them when the requested resources differ significantly from the new recommendations (respecting the PodDisruptionBudget, if defined). You should only use this mode if you need to ensure that the pods are restarted whenever the resource request changes. Otherwise, we recommend using `Auto` mode, which takes advantage of restart-free updates once available.
+* `Initial`: VPA only assigns resource requests during pod creation. It doesn't update existing pods. This mode is useful for testing and understanding the VPA behavior without affecting the running pods.
+* `Off`: VPA doesn't automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
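As a minimal sketch of the mitigation mentioned for the `Auto` and `Recreate` modes, a PodDisruptionBudget like the following limits how many replicas VPA-initiated evictions can take down at once. The `app: my-app` label selector is a hypothetical example; match it to the labels on your own pods:

```yml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1        # evictions honor this budget, so only one pod restarts at a time
  selector:
    matchLabels:
      app: my-app          # assumed label on the pods managed by your deployment
```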
-1. Set UpdateMode = "Off" in your production cluster and run VPA in recommendation mode so you can test and gain familiarity with VPA. UpdateMode = "Off" can avoid introducing a misconfiguration that can cause an outage.
+## Deployment pattern for application development
-2. Establish observability first by collecting actual resource utilization telemetry over a given period of time. This helps you understand the behavior and signs of symptoms or issues from container and pod resources influenced by the workloads running on them.
-
-3. Get familiar with the monitoring data to understand the performance characteristics. Based on this insight, set the desired requests/limits accordingly and then in the next deployment or upgrade
+If you're unfamiliar with VPA, we recommend the following deployment pattern during application development to identify your application's unique resource utilization characteristics, test VPA to verify it's functioning properly, and test it alongside other Kubernetes components to optimize the cluster's resource utilization:
+1. Set `updateMode: "Off"` in your production cluster and run VPA in recommendation mode so you can test and gain familiarity with VPA, as shown in the sketch after these steps. Running in recommendation mode avoids introducing a misconfiguration that can cause an outage.
+2. Establish observability first by collecting actual resource utilization telemetry over a given period of time, which helps you understand the behavior and any signs of issues from container and pod resources influenced by the workloads running on them.
+3. Get familiar with the monitoring data to understand the performance characteristics. Based on this insight, set the desired requests/limits accordingly and apply them in the next deployment or upgrade.
4. Set `updateMode` value to `Auto`, `Recreate`, or `Initial` depending on your requirements.
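The following is a minimal sketch of the recommendation-only configuration described in step 1. The deployment name `my-app` is a placeholder; once you're comfortable with the recommendations, change `updateMode` as described in step 4:

```yml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app           # placeholder for your own workload
  updatePolicy:
    updateMode: "Off"      # recommendations are computed but no pods are evicted or updated
```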
-## Deploy, upgrade, or disable VPA on a cluster
-
-In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on your cluster.
-
-1. To enable VPA on a new cluster, use the `--enable-vpa` parameter with the [az aks create][az-aks-create] command.
-
- ```azurecli-interactive
- az aks create \
- --name myAKSCluster \
- --resource-group myResourceGroup \
- --enable-vpa \
- --generate-ssh-keys
- ```
-
- After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` parameter with the [az aks update](/cli/azure/aks#az-aks-update) command.
-
- ```azurecli-interactive
- az aks update --name myAKSCluster --resource-group myResourceGroup --enable-vpa
- ```
-
- After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` parameter with the [az aks update](/cli/azure/aks#az-aks-update) command.
-
- ```azurecli-interactive
- az aks update --name myAKSCluster --resource-group myResourceGroup --disable-vpa
- ```
-
- After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-4. To verify that the Vertical Pod Autoscaler pods have been created successfully, use the [kubectl get][kubectl-get] command.
-
-```bash
-kubectl get pods --namespace kube-system
-```
-
-The output of the command includes the following results specific to the VPA pods. The pods should show a *running* status.
-
-```output
-NAME READY STATUS RESTARTS AGE
-vpa-admission-controller-7867874bc5-vjfxk 1/1 Running 0 41m
-vpa-recommender-5fd94767fb-ggjr2 1/1 Running 0 41m
-vpa-updater-56f9bfc96f-jgq2g 1/1 Running 0 41m
-```
-
-## Test your Vertical Pod Autoscaler installation
-
-The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. A VPA config is also created, pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
-
-1. Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository.
-
-1. Deploy the `hamster.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
- ```bash
- kubectl apply -f hamster.yaml
- ```
-
- The command completes and reports that the resources defined in `hamster.yaml` were created.
-
-1. Run the following [kubectl get][kubectl-get] command to get the pods from the hamster example application:
-
- ```bash
- kubectl get pods -l app=hamster
- ```
-
- The example output resembles the following:
-
- ```output
- hamster-78f9dcdd4c-hf7gk 1/1 Running 0 24s
- hamster-78f9dcdd4c-j9mc7 1/1 Running 0 24s
- ```
-
-1. Use the [kubectl describe][kubectl-describe] command on one of the pods to view its CPU and memory reservation. Replace "exampleID" with one of the pod IDs returned in your output from the previous step.
-
- ```bash
- kubectl describe pod hamster-exampleID
- ```
-
- The example output is a snippet of the information about the cluster:
-
- ```output
- hamster:
- Container ID: containerd://
- Image: k8s.gcr.io/ubuntu-slim:0.1
- Image ID: sha256:
- Port: <none>
- Host Port: <none>
- Command:
- /bin/sh
- Args:
- -c
- while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
- State: Running
- Started: Wed, 28 Sep 2022 15:06:14 -0400
- Ready: True
- Restart Count: 0
- Requests:
- cpu: 100m
- memory: 50Mi
- Environment: <none>
- ```
-
- The pod has 100 millicpu and 50 mebibytes (MiB) of memory reserved in this example. For this sample application, the pod needs more than 100 millicpu to run, so there's no spare CPU capacity available. The pods also reserve much less memory than needed. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
-
-1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
-
- ```bash
- kubectl get --watch pods -l app=hamster
- ```
-
-1. When a new hamster pod is started, describe the pod by running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.
-
- ```bash
- kubectl describe pod hamster-<exampleID>
- ```
-
- The example output is a snippet of the information describing the pod:
-
- ```output
- State: Running
- Started: Wed, 28 Sep 2022 15:09:51 -0400
- Ready: True
- Restart Count: 0
- Requests:
- cpu: 587m
- memory: 262144k
- Environment: <none>
- ```
-
- In the previous output, you can see that the CPU reservation increased to 587 millicpu, which is over five times the original value. The memory increased to 262,144 kilobytes, which is around 250 mebibytes (MiB), or five times the original value. This pod was under-resourced, and the Vertical Pod Autoscaler corrected the estimate with a much more appropriate value.
-
-1. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.
-
- ```bash
- kubectl describe vpa/hamster-vpa
- ```
-
- The example output is a snippet of the information about the resource utilization:
-
- ```output
- State: Running
- Started: Wed, 28 Sep 2022 15:09:51 -0400
- Ready: True
- Restart Count: 0
- Requests:
- cpu: 587m
- memory: 262144k
- Environment: <none>
- ```
-
-## Set Pod Autoscaler requests
-
-Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on pods when the `updateMode` is set to `Auto`. You can set a different value depending on your requirements and testing. In this example, `updateMode` is set to `Recreate`.
-
-1. Enable VPA for your cluster by running the following command. Replace cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.
-
- ```azurecli-interactive
- az aks update --name myAKSCluster --resource-group myResourceGroup --enable-vpa
- ```
-
-2. Create a file named `azure-autodeploy.yaml`, and copy in the following manifest.
-
- ```yml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: vpa-auto-deployment
- spec:
- replicas: 2
- selector:
- matchLabels:
- app: vpa-auto-deployment
- template:
- metadata:
- labels:
- app: vpa-auto-deployment
- spec:
- containers:
- - name: mycontainer
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 50Mi
- command: ["/bin/sh"]
- args: ["-c", "while true; do timeout 0.5s yes >; sleep 0.5s; done"]
- ```
-
- This manifest describes a deployment that has two pods. Each pod has one container that requests 100 milliCPU and 50 MiB of memory.
-
-3. Create the pod with the [kubectl create][kubectl-create] command, as shown in the following example:
-
- ```bash
- kubectl create -f azure-autodeploy.yaml
- ```
-
- The command completes and reports that the deployment was created.
-
-4. Run the following [kubectl get][kubectl-get] command to get the pods:
-
- ```bash
- kubectl get pods
- ```
-
- The output resembles the following example showing the name and status of the pods:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- vpa-auto-deployment-54465fb978-kchc5 1/1 Running 0 52s
- vpa-auto-deployment-54465fb978-nhtmj 1/1 Running 0 52s
- ```
-
-5. Create a file named `azure-vpa-auto.yaml`, and copy in the following manifest that describes a `VerticalPodAutoscaler`:
-
- ```yml
- apiVersion: autoscaling.k8s.io/v1
- kind: VerticalPodAutoscaler
- metadata:
- name: vpa-auto
- spec:
- targetRef:
- apiVersion: "apps/v1"
- kind: Deployment
- name: vpa-auto-deployment
- updatePolicy:
- updateMode: "Recreate"
- ```
-
- The `targetRef.name` value specifies that any pod that's controlled by a deployment named `vpa-auto-deployment` belongs to `VerticalPodAutoscaler`. The `updateMode` value of `Recreate` means that the Vertical Pod Autoscaler controller can delete a pod, adjust the CPU and memory requests, and then create a new pod.
-
-6. Apply the manifest to the cluster using the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f azure-vpa-auto.yaml
- ```
-
-7. Wait a few minutes, and view the running pods again by running the following [kubectl get][kubectl-get] command:
-
- ```bash
- kubectl get pods
- ```
-
- The output resembles the following example showing the pod names have changed and status of the pods:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- vpa-auto-deployment-54465fb978-qbhc4 1/1 Running 0 2m49s
- vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s
- ```
-
-8. Get detailed information about one of your running pods by using the [Kubectl get][kubectl-get] command. Replace `podName` with the name of one of your pods that you retrieved in the previous step.
-
- ```bash
- kubectl get pod podName --output yaml
- ```
-
- The output resembles the following example, showing that the Vertical Pod Autoscaler controller set the memory request to 262144k and the CPU request to 25 milliCPU.
-
- ```output
- apiVersion: v1
- kind: Pod
- metadata:
- annotations:
- vpaObservedContainers: mycontainer
- vpaUpdates: 'Pod resources updated by vpa-auto: container 0: cpu request, memory
- request'
- creationTimestamp: "2022-09-29T16:44:37Z"
- generateName: vpa-auto-deployment-54465fb978-
- labels:
- app: vpa-auto-deployment
-
- spec:
- containers:
- - args:
- - -c
- - while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
- command:
- - /bin/sh
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- imagePullPolicy: IfNotPresent
- name: mycontainer
- resources:
- requests:
- cpu: 25m
- memory: 262144k
- ```
-
-9. To get detailed information about the Vertical Pod Autoscaler and its recommendations for CPU and memory, use the [kubectl get][kubectl-get] command:
-
- ```bash
- kubectl get vpa vpa-auto --output yaml
- ```
-
- The output resembles the following example:
-
- ```output
- recommendation:
- containerRecommendations:
- - containerName: mycontainer
- lowerBound:
- cpu: 25m
- memory: 262144k
- target:
- cpu: 25m
- memory: 262144k
- uncappedTarget:
- cpu: 25m
- memory: 262144k
- upperBound:
- cpu: 230m
- memory: 262144k
- ```
-
- The results show that the `target` attribute recommends the CPU and memory the container needs to run optimally; in this case, the current requests already match the target, so no change is needed. Your results may vary, with higher target CPU and memory recommendations.
-
- The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute.
-
-## Extra Recommender for Vertical Pod Autoscaler
-
-In the VPA, one of the core components is the Recommender. It provides recommendations for resource usage based on real time resource consumption. AKS deploys a recommender when a cluster enables VPA. You can deploy a customized recommender or an extra recommender with the same image as the default one. The benefit of having a customized recommender is that you can customize your recommendation logic. With an extra recommender, you can partition VPAs to multiple recommenders if there are many VPA objects.
-
-The following example is an extra recommender that you apply to your existing AKS cluster. You then configure the VPA object to use the extra recommender.
-
-1. Create a file named `extra_recommender.yaml` and copy in the following manifest:
-
- ```yml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: extra-recommender
- namespace: kube-system
- spec:
- replicas: 1
- selector:
- matchLabels:
- app: extra-recommender
- template:
- metadata:
- labels:
- app: extra-recommender
- spec:
- serviceAccountName: vpa-recommender
- securityContext:
- runAsNonRoot: true
- runAsUser: 65534 # nobody
- containers:
- - name: recommender
- image: registry.k8s.io/autoscaling/vpa-recommender:0.13.0
- imagePullPolicy: Always
- args:
- - --recommender-name=extra-recommender
- resources:
- limits:
- cpu: 200m
- memory: 1000Mi
- requests:
- cpu: 50m
- memory: 500Mi
- ports:
- - name: prometheus
- containerPort: 8942
- ```
-
-2. Deploy the `extra_recommender.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
-
- ```bash
- kubectl apply -f extra_recommender.yaml
- ```
-
- The command completes and reports that the extra recommender deployment was created.
-
-3. Create a file named `hamster_extra_recommender.yaml` and copy in the following manifest:
-
- ```yml
- apiVersion: "autoscaling.k8s.io/v1"
- kind: VerticalPodAutoscaler
- metadata:
- name: hamster-vpa
- spec:
- recommenders:
- - name: 'extra-recommender'
- targetRef:
- apiVersion: "apps/v1"
- kind: Deployment
- name: hamster
- updatePolicy:
- updateMode: "Auto"
- resourcePolicy:
- containerPolicies:
- - containerName: '*'
- minAllowed:
- cpu: 100m
- memory: 50Mi
- maxAllowed:
- cpu: 1
- memory: 500Mi
- controlledResources: ["cpu", "memory"]
-
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: hamster
- spec:
- selector:
- matchLabels:
- app: hamster
- replicas: 2
- template:
- metadata:
- labels:
- app: hamster
- spec:
- securityContext:
- runAsNonRoot: true
- runAsUser: 65534 # nobody
- containers:
- - name: hamster
- image: k8s.gcr.io/ubuntu-slim:0.1
- resources:
- requests:
- cpu: 100m
- memory: 50Mi
- command: ["/bin/sh"]
- args:
- - "-c"
- - "while true; do timeout 0.5s yes >; sleep 0.5s; done"
- ```
-
- If `memory` isn't specified in `controlledResources`, the Recommender doesn't respond to OOM events. The `controlledValues` field lets you choose whether to update only the container's resource requests (`RequestsOnly`) or both resource requests and limits (`RequestsAndLimits`). The default value is `RequestsAndLimits`. If you use the `RequestsAndLimits` option, **requests** are computed based on actual usage, and **limits** are calculated based on the current pod's request-to-limit ratio.
-
- For example, if you start with a pod that requests 2 CPUs and limits to 4 CPUs, VPA always sets the limit to be twice as much as requests. The same principle applies to memory. When you use the `RequestsAndLimits` mode, it can serve as a blueprint for your initial application resource requests and limits.
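As a hedged illustration of the `RequestsOnly` option described above (not part of the preceding example), a VPA object that updates only requests and leaves the authored limits untouched might look like the following sketch. The object name is hypothetical:

```yml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: requests-only-vpa              # hypothetical name used only for illustration
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hamster
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: '*'
      controlledResources: ["cpu", "memory"]
      controlledValues: RequestsOnly   # adjust requests only; limits stay as authored
```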
-
-You can simplify the VPA object by using `Auto` mode and computing recommendations for both CPU and memory.
-
-4. Deploy the `hamster_extra_recommender.yaml` example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.
-
- ```bash
- kubectl apply -f hamster_extra_recommender.yaml
- ```
-
-5. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
-
- ```bash
- kubectl get --watch pods -l app=hamster
- ```
-
-6. When a new hamster pod is started, describe the pod by running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.
-
- ```bash
- kubectl describe pod hamster-<exampleID>
- ```
-
- The example output is a snippet of the information describing the pod:
-
- ```output
- State: Running
- Started: Wed, 28 Sep 2022 15:09:51 -0400
- Ready: True
- Restart Count: 0
- Requests:
- cpu: 587m
- memory: 262144k
- Environment: <none>
- ```
-
-7. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.
-
- ```bash
- kubectl describe vpa/hamster-vpa
- ```
-
- The example output is a snippet of the information about the resource utilization:
-
- ```output
- State: Running
- Started: Wed, 28 Sep 2022 15:09:51 -0400
- Ready: True
- Restart Count: 0
- Requests:
- cpu: 587m
- memory: 262144k
- Environment: <none>
- Spec:
- recommenders:
- Name: extra-recommender
- ```
-
-## Troubleshooting
-
-To diagnose problems with a VPA installation, perform the following steps.
-
-1. Check if all system components are running using the following command:
-
- ```bash
- kubectl --namespace=kube-system get pods|grep vpa
- ```
-
-The output should list three pods: recommender, updater, and admission-controller, all with a status of `Running`.
-
-2. Confirm if the system components log any errors. For each of the pods returned by the previous command, run the following command:
-
- ```bash
- kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
- ```
-
-3. Confirm that the custom resource definition was created by running the following command:
-
- ```bash
- kubectl get customresourcedefinition | grep verticalpodautoscalers
- ```
- ## Next steps
-This article showed you how to automatically scale resource utilization, such as CPU and memory, of your pods to match application requirements.
-
-* You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
-
-* See the Vertical Pod Autoscaler [API reference] to learn more about the definitions for related VPA objects.
+To learn how to set up the Vertical Pod Autoscaler on your AKS cluster, see [Use the Vertical Pod Autoscaler in AKS](./use-vertical-pod-autoscaler.md).
<!-- EXTERNAL LINKS -->
-[kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[github-autoscaler-repo-v011]: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.11/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go
[pod-disruption-budget]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ <!-- INTERNAL LINKS -->
-[get-started-with-aks]: /azure/architecture/reference-architectures/containers/aks-start-here
-[install-azure-cli]: /cli/azure/install-azure-cli
-[az-aks-create]: /cli/azure/aks#az-aks-create
-[az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade
[horizontal-pod-autoscaling]: concepts-scale.md#horizontal-pod-autoscaler
-[scale-applications-in-aks]: tutorial-kubernetes-scale.md
-[az-provider-register]: /cli/azure/provider#az-provider-register
-[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-show]: /cli/azure/feature#az-feature-show
[horizontal-pod-autoscaler-overview]: concepts-scale.md#horizontal-pod-autoscaler-
api-management Developer Portal Enable Usage Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-enable-usage-logs.md
To configure a diagnostic setting for developer portal usage logs:
1. **Category groups**: Optionally make a selection for your scenario.
1. Under **Categories**: Select **Logs related to Developer Portal usage**. Optionally select other categories as needed.
1. Under **Destination details**, select one or more options and specify details for the destination. For example, archive logs to a storage account or stream them to an event hub. [Learn more](../azure-monitor/essentials/diagnostic-settings.md)
- > [!NOTE]
- > Currently, the **Send to Log Analytics workspace** destination isn't supported for developer portal usage logs.
-
1. Select **Save**.

## View diagnostic log data
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
description: Learn how to migrate your App Service Environment to App Service En
Previously updated : 7/18/2024 Last updated : 7/24/2024 zone_pivot_groups: app-service-cli-portal
The in-place migration feature doesn't support the following scenarios. See the
- App Service Environment v1 in a [Classic virtual network](/previous-versions/azure/virtual-network/create-virtual-network-classic) - ELB App Service Environment v2 with IP SSL addresses - ELB App Service Environment v1 with IP SSL addresses
+- App Service Environment with a name that doesn't meet the character limits. The entire name, including the domain suffix, must be 64 characters or fewer. For example: *my-ase-name.appserviceenvironment.net* for ILB and *my-ase-name.p.azurewebsites.net* for ELB must be 64 characters or fewer. If you don't meet the character limit, you must migrate manually. The character limits specifically for the App Service Environment name are as follows:
+ - ILB App Service Environment name character limit: 36 characters
+ - ELB App Service Environment name character limit: 42 characters
The App Service platform reviews your App Service Environment to confirm in-place migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the in-place migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 7/23/2024 Last updated : 7/24/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
The side-by-side migration feature doesn't support the following scenarios. See
- If you have an App Service Environment v1, you can migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). - ELB App Service Environment v2 with IP SSL addresses - [Zone pinned](zone-redundancy.md) App Service Environment v2
+- App Service Environment with a name that doesn't meet the character limits. The entire name, including the domain suffix, must be 64 characters or fewer. For example: *my-ase-name.appserviceenvironment.net* for ILB and *my-ase-name.p.azurewebsites.net* for ELB must be 64 characters or fewer. If you don't meet the character limit, you must migrate manually. The character limits specifically for the App Service Environment name are as follows:
+ - ILB App Service Environment name character limit: 36 characters
+ - ELB App Service Environment name character limit: 42 characters
The App Service platform reviews your App Service Environment to confirm side-by-side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side-by-side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
application-gateway Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/cli-samples.md
- Title: Azure CLI examples for Azure Application Gateway
-description: This article has links to Azure CLI examples so you can quickly deploy Azure Application Gateway configured in various ways.
---- Previously updated : 11/16/2019----
-# Azure CLI examples for Azure Application Gateway
-
-The following table includes links to Azure CLI script examples for Azure Application Gateway.
-
-| Example | Description |
-|-- | -- |
-| [Manage web traffic](./scripts/create-vmss-cli.md) | Creates an application gateway and all related resources. |
-| [Restrict web traffic](./scripts/create-vmss-waf-cli.md) | Creates an application gateway that restricts traffic using OWASP rules.|
application-gateway Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/resource-manager-template-samples.md
- Title: Azure Resource Manager templates-
-description: This article has links to Azure Resource Manager template examples so you can quickly deploy Azure Application Gateway configured in various ways.
----- Previously updated : 11/16/2019---
-# Azure Resource Manager templates for Azure Application Gateway
-
-The following table includes links to Azure Resource Manager templates for Azure Application Gateway.
-
-| Example | Description |
-|-- | -- |
-| [Application Gateway v2 with Web Application Firewall](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/) | Creates an Application Gateway v2 with Web Application Firewall v2.|
application-gateway Create Vmss Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md
- Title: Azure CLI Script Sample - Manage web traffic | Microsoft Docs
-description: Azure CLI Script Sample - Manage web traffic with an application gateway and a virtual machine scale set.
---- Previously updated : 01/29/2018----
-# Manage web traffic using the Azure CLI
-
-This script creates an application gateway that uses a virtual machine scale set for backend servers. The application gateway can then be configured to manage web traffic. After running the script, you can test the application gateway using its public IP address.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/application-gateway/create-vmss/create-vmss.sh "Create application gateway")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, application gateway, and all related resources.
-
-```azurecli-interactive
-az group delete --name myResourceGroupAG --yes
-```
-
-## Script explanation
-
-This script uses the following commands to create the deployment. Each item in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az network vnet create](/cli/azure/network/vnet) | Creates a virtual network. |
-| [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) | Creates a subnet in a virtual network. |
-| [az network public-ip create](/cli/azure/network/public-ip) | Creates the public IP address for the application gateway. |
-| [az network application-gateway create](/cli/azure/network/application-gateway) | Create an application gateway. |
-| [az vmss create](/cli/azure/vmss) | Creates a virtual machine scale set. |
-| [az network public-ip show](/cli/azure/network/public-ip) | Gets the public IP address of the application gateway. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure/overview).
-
-Additional application gateway CLI script samples can be found in the [Azure Application Gateway documentation](../cli-samples.md).
application-gateway Create Vmss Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md
- Title: Azure PowerShell Script Sample - Manage web traffic | Microsoft Docs
-description: Azure PowerShell Script Sample - Manage web traffic with an application gateway and a virtual machine scale set.
---- Previously updated : 01/29/2018----
-# Manage web traffic with Azure PowerShell
-
-This script creates an application gateway that uses a virtual machine scale set for backend servers. The application gateway can then be configured to manage web traffic. After running the script, you can test the application gateway using its public IP address.
---
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/application-gateway/create-vmss/create-vmss.ps1 "Create application gateway")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, application gateway, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroupAG
-```
-
-## Script explanation
-
-This script uses the following commands to create the deployment. Each item in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using the subnet configurations. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. |
-| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. |
-| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. |
-| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. |
-| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. |
-| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. |
-| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. |
-| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. |
-| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. |
-| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. |
-| [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) | Create a storage profile for the scale set. |
-| [Set-AzVmssOsProfile](/powershell/module/az.compute/set-azvmssosprofile) | Define the operating system for the scale set. |
-| [Add-AzVmssNetworkInterfaceConfiguration](/powershell/module/az.compute/add-azvmssnetworkinterfaceconfiguration) | Define the network interface for the scale set. |
-| [New-AzVmss](/powershell/module/az.compute/new-azvm) | Create a virtual machine scale set. |
-| [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress) | Gets the public IP address of an application gateway. |
-|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. |
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional application gateway PowerShell script samples can be found in the [Azure Application Gateway documentation](../powershell-samples.md).
application-gateway Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/waf-custom-rules-powershell.md
- Title: Azure PowerShell Script Sample - Create WAF custom rules
-description: Azure PowerShell Script Sample - Create Web Application Firewall custom rules
--- Previously updated : 6/7/2019----
-# Create Web Application Firewall (WAF) custom rules with Azure PowerShell
-
-This script creates an Application Gateway Web Application Firewall that uses custom rules. The custom rule blocks traffic if the request header contains User-Agent *evilbot*.
-
-## Prerequisites
-
-### Azure PowerShell module
-
-If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later.
-
-1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
-2. To create a connection with Azure, run `Connect-AzAccount`.
--
-## Sample script
-
-[!code-powershell[main](../../../powershell_scripts/application-gateway/waf-rules/waf-custom-rules.ps1 "Custom WAF rules")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, application gateway, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name CustomRulesTest
-```
-
-## Script explanation
-
-This script uses the following commands to create the deployment. Each item in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using the subnet configurations. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. |
-| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. |
-| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. |
-| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. |
-| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. |
-| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. |
-| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. |
-| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. |
-| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. |
-| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. |
-|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. |
-|[New-AzApplicationGatewayAutoscaleConfiguration](/powershell/module/az.network/New-AzApplicationGatewayAutoscaleConfiguration)|Creates an autoscale configuration for the Application Gateway.|
-|[New-AzApplicationGatewayFirewallMatchVariable](/powershell/module/az.network/New-AzApplicationGatewayFirewallMatchVariable)|Creates a match variable for firewall condition.|
-|[New-AzApplicationGatewayFirewallCondition](/powershell/module/az.network/New-AzApplicationGatewayFirewallCondition)|Creates a match condition for custom rule.|
-|[New-AzApplicationGatewayFirewallCustomRule](/powershell/module/az.network/New-AzApplicationGatewayFirewallCustomRule)|Creates a new custom rule for the application gateway firewall policy.|
-|[New-AzApplicationGatewayFirewallPolicy](/powershell/module/az.network/New-AzApplicationGatewayFirewallPolicy)|Creates an application gateway firewall policy.|
-|[New-AzApplicationGatewayWebApplicationFirewallConfiguration](/powershell/module/az.network/New-AzApplicationGatewayWebApplicationFirewallConfiguration)|Creates a WAF configuration for an application gateway.|
-
-## Next steps
-
-- For more information about WAF custom rules, see [Custom rules for Web Application Firewall](../../web-application-firewall/ag/custom-waf-rules-overview.md)
-- For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-- Additional application gateway PowerShell script samples can be found in the [Azure Application Gateway documentation](../powershell-samples.md).
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md
Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 04/18/2024- Last updated : 07/23/2024+ description: "Learn about the latest releases of Arc-enabled Kubernetes." # What's new with Azure Arc-enabled Kubernetes
-Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases of the [Azure Arc-enabled Kubernetes agents](conceptual-agent-overview.md).
+Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about recent releases of the [Azure Arc-enabled Kubernetes agents](conceptual-agent-overview.md).
When any of the Arc-enabled Kubernetes agents are updated, all of the agents in the `azure-arc` namespace are incremented with a new version number, so that the version numbers are consistent across agents. When a new version is released, all of the agents are upgraded together to the newest version (whether or not there are functionality changes in a given agent), unless you have [disabled automatic upgrades](agent-upgrade.md) for the cluster. We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2).
+## Version 1.18.x (July 2024)
+
+- Fixed `logCollector` pod restarts
+- Updated to Microsoft Go v1.22.5
+- Other bug fixes
+
+## Version 1.17.x (June 2024)
+
+- Upgraded to use [Microsoft Go 1.22 to be FIPS compliant](https://github.com/microsoft/go/blob/microsoft/main/eng/doc/fips/README.md#tls-with-fips-compliant-settings)
+
+## Version 1.16.x (May 2024)
+
+- Migrated to use Microsoft Go w/ OpenSSL and fixed some vulnerabilities
+ ## Version 1.15.3 (March 2024) - Various enhancements and bug fixes
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual up
Before upgrading an Arc resource bridge, the following prerequisites must be met:
+- The appliance VM must be on a General Availability version (1.0.15 or higher). If not, the Arc resource bridge VM needs to be redeployed. If you are using Arc-enabled VMware/AVS, then you have the option to [perform disaster recovery](../vmware-vsphere/recover-from-resource-bridge-deletion.md). If you are using Arc-enabled SCVMM, then follow this [disaster recovery guide](../system-center-virtual-machine-manager/disaster-recovery.md).
+ - The appliance VM must be online, healthy with a Status of "Running". You can check the Azure resource of your Arc resource bridge to verify. - The [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be up-to-date. To test that the credentials within the Arc resource bridge VM are valid, perform an operation on an Arc-enabled VM from Azure or [update the credentials](/azure/azure-arc/resource-bridge/maintenance) to be certain.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Connected Machine agent
-description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent.
Previously updated : 05/04/2023
+description: This article describes the different management tasks that you'll typically perform during the lifecycle of the Azure Connected Machine agent.
Last updated : 07/24/2024 -
- - ignite-2023
# Managing and maintaining the Connected Machine agent
Microsoft recommends using the most recent version of the Azure Connected Machin
### [Windows](#tab/windows)
-Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than 6 months old, check out the [release notes archive](agent-release-notes-archive.md).
+Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than six months old, check out the [release notes archive](agent-release-notes-archive.md).
### [Linux - apt](#tab/linux-apt)
Links to the current and previous releases of the Windows agents are available b
## Upgrade the agent
-The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of the machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
+The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that aren't using the latest version of the machine agent and recommends that you upgrade to the latest version. It notifies you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent will not require you to restart your server.
+The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent doesn't require you to restart your server.
The following table describes the methods supported to perform the agent upgrade:
For Windows Servers that belong to a domain and connect to the Internet to check
1. Select **OK**.
-The next time computers in your selected scope refresh their policy, they will start to check for updates in both Windows Update and Microsoft Update.
+The next time computers in your selected scope refresh their policy, they'll start to check for updates in both Windows Update and Microsoft Update.
For organizations that use Microsoft Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration:
Once the updates are being synchronized, you can optionally add the Azure Connec
1. Run **AzureConnectedMachineAgent.msi** to start the Setup Wizard.
-If the Setup Wizard discovers a previous version of the agent, it will upgrade it automatically. When the upgrade completes, the Setup Wizard closes automatically.
+If the Setup Wizard discovers a previous version of the agent, it upgrades it automatically. When the upgrade completes, the Setup Wizard closes automatically.
#### To upgrade from the command line
The Azure Connected Machine agent doesn't automatically upgrade itself when a ne
## Renaming an Azure Arc-enabled server resource
-When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name.
+When you change the name of a Linux or Windows machine connected to Azure Arc-enabled servers, the new name isn't recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you must delete the resource and re-create it in order to use the new name.
For Azure Arc-enabled servers, before you rename the machine, it's necessary to remove the VM extensions before proceeding:
For Azure Arc-enabled servers, before you rename the machine, it's necessary to
3. Use the **azcmagent** tool with the [Disconnect](azcmagent-disconnect.md) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
- Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you do not need to remove the agent as part of this process.
+ Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you don't need to remove the agent as part of this process.
4. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](azcmagent-connect.md) parameter to complete this step. The agent will default to using the computer's current hostname, but you can choose your own resource name by passing the `--resource-name` parameter to the connect command.
For guidance on how to identify and remove any extensions on your Azure Arc-enab
### Step 2: Disconnect the server from Azure Arc
-Disconnecting the agent deletes the corresponding Azure resource for the server and clears the local state of the agent. To disconnect the agent, run the `azcmagent disconnect` command as an administrator on the server. You'll be prompted to log in with an Azure account that has permission to delete the resource in your subscription. If the resource has already been deleted in Azure, you'll need to pass an additional flag to clean up the local state: `azcmagent disconnect --force-local-only`.
+Disconnecting the agent deletes the corresponding Azure resource for the server and clears the local state of the agent. To disconnect the agent, run the `azcmagent disconnect` command as an administrator on the server. You'll be prompted to sign in with an Azure account that has permission to delete the resource in your subscription. If the resource has already been deleted in Azure, pass an additional flag to clean up the local state: `azcmagent disconnect --force-local-only`.
### Step 3a: Uninstall the Windows agent
-Both of the following methods remove the agent, but they do not remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine.
+Both of the following methods remove the agent, but they don't remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine.
#### Uninstall from Control Panel
You do not need to restart any services when reconfiguring the proxy settings wi
Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Microsoft Entra ID and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network.
-The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server. The location parameter refers to the Azure region of the Arc Server(s).
+The proxy bypass feature doesn't require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that shouldn't use the proxy server. The location parameter refers to the Azure region of the Arc Server(s).
When the proxy bypass value is set to `ArcData`, only the traffic of the Azure extension for SQL Server bypasses the proxy; the Arc agent traffic still goes through it.
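For example, a minimal sketch of setting the proxy and the bypass list with the agent's `config` command might look like the following; confirm the accepted service names (such as `AAD`, `ARM`, `Arc`, or `ArcData`) against the azcmagent documentation for your agent version.

```powershell
# Sketch: send agent traffic through a proxy, but let Azure Arc service traffic skip it.
# Property names and bypass values are taken from the azcmagent config documentation; verify them for your agent version.
azcmagent config set proxy.url "http://ProxyServerFQDN:443"
azcmagent config set proxy.bypass "Arc"
```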
If you're already using environment variables to configure the proxy server for
1. Remove the unused environment variables by following the steps for [Windows](#windows-environment-variables) or [Linux](#linux-environment-variables).
+## Alerting for Azure Arc-enabled server disconnection
+
+The Connected Machine agent [sends a regular heartbeat message](overview.md#agent-status) to the service every five minutes. If an Arc-enabled server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it's offline, the network connection has been blocked, or the agent isn't running. Develop a plan for how you'll respond and investigate these incidents, including setting up [Resource Health alerts](../../service-health/resource-health-alert-monitor-guide.md) to get notified when such incidents occur.
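As a complement to alerting, a quick spot check from PowerShell can list machines that aren't reporting as connected. This is a minimal sketch that assumes the Az.ConnectedMachine module is installed and that the returned objects expose a `Status` property; adjust the names to your environment.

```powershell
# Sketch: list Arc-enabled servers in a resource group that aren't currently reporting as connected.
# Assumes the Az.ConnectedMachine module and an authenticated Azure PowerShell session.
Get-AzConnectedMachine -ResourceGroupName "<resourceGroup>" |
    Where-Object { $_.Status -ne "Connected" } |
    Select-Object Name, Status
```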
+
+
## Next steps
* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
azure-arc Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md
To recover from Arc resource bridge VM deletion, you need to deploy a new resour
>[!Note]
> DHCP-based Arc Resource Bridge deployment is no longer supported.<br><br> If you had deployed Arc Resource Bridge earlier using DHCP, you must clean up your deployment by removing your resources from Azure and perform a [fresh onboarding](./quickstart-connect-system-center-virtual-machine-manager-to-arc.md).
+>
+## Prerequisites
+
+1. The disaster recovery script must be run from the same folder where the config (.yaml) files are present. These files are located on the machine that was used to run the script to deploy Arc resource bridge.
+
+1. The machine used to run the script must have bidirectional connectivity to the Arc resource bridge VM on ports 6443 (Kubernetes API server) and 22 (SSH), and outbound connectivity to the Arc resource bridge VM on port 443 (HTTPS). A quick way to verify these ports is sketched below.
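To verify the port requirements above from the management machine, a hedged sketch using the built-in `Test-NetConnection` cmdlet (Windows PowerShell) follows; the IP address is a placeholder.

```powershell
# Sketch: check TCP reachability of the Arc resource bridge VM on the required ports.
$bridgeVm = "<resource-bridge-VM-IP>"
foreach ($port in 22, 443, 6443) {
    Test-NetConnection -ComputerName $bridgeVm -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```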
+ ### Recover Arc resource bridge from a Windows machine
If the recovery steps mentioned above are unsuccessful in restoring Arc resource
- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
The script execution will take up to half an hour and you'll be prompted for var
Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM.
+>[!IMPORTANT]
+>After Azure Arc Resource Bridge is successfully installed, we recommend that you retain a copy of the resource bridge config (.yaml) files in a secure place that facilitates easy retrieval. These files are needed later to run commands that perform management operations (for example, [az arcappliance upgrade](/cli/azure/arcappliance/upgrade#az-arcappliance-upgrade-vmware)) on the resource bridge. You can find the three config files (.yaml files) in the same folder where you ran the onboarding script.
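For example, a simple way to keep that copy is to copy the .yaml files from the onboarding folder to a secured location; the paths below are placeholders.

```powershell
# Sketch: back up the resource bridge config (.yaml) files from the onboarding folder (paths are placeholders).
Copy-Item -Path "C:\arc-onboarding\*.yaml" -Destination "\\secure-share\arc-resource-bridge-config\"
```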
+
+
### Retry command - Windows
If the appliance creation fails for any reason, you need to retry it. Run the command with ```-Force``` to clean up and onboard again.
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
If all 5 tries fail, Azure Functions adds a message to a Storage queue named *we
## Memory usage and concurrency
::: zone pivot="programming-language-csharp"
-When you bind to an [output type](#usage) that doesn't support steaming, such as `string`, or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types).
+When you bind to an [output type](#usage) that doesn't support streaming, such as `string` or `Byte[]`, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types).
::: zone-end
::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-python,programming-language-powershell,programming-language-java"
At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
This example section creates a Standard general purpose v2 storage account:
### [Bicep](#tab/bicep)

```bicep
-resource storageAccountName 'Microsoft.Storage/storageAccounts@2023-05-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
If you're sending a high log volume through rsyslog and your system is set up to
1. `sudo systemctl restart rsyslog`
-### Azure Monitor Agent for Linux event buffer is filling a disk
-
-If you observe the `/var/opt/microsoft/azuremonitor/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket). For **Summary**, enter **Azure Monitor Agent Event Buffer is filling disk**. For **Problem type**, enter **I need help configuring data collection from a VM**.
-
azure-monitor Data Collection Log Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-text.md
Adhere to the following recommendations to ensure that you don't experience data
## Incoming stream
The incoming stream of data includes the columns in the following table.
- | Column | Type | Description |
+| Column | Type | Description |
|:|:|:|
| `TimeGenerated` | datetime | The time the record was generated. This value will be automatically populated with the time the record is added to the Log Analytics workspace. You can override this value using a transformation to set `TimeGenerated` to another value. |
| `RawData` | string | The entire log entry in a single column. You can use a transformation if you want to break down this data into multiple columns before sending to the table. |
| `FilePath` | string | If you add this column to the incoming stream in the DCR, it will be populated with the path to the log file. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. |
-| `Computer` | string | If you add this column to the incoming stream in the DCR, it will be populated with the name of the computer. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. |
## Custom table
$tableParams = @'
{ "name": "FilePath", "type": "String"
- },
- {
- "name": "Computer",
- "type": "String"
}
]
}
Use the following ARM template to create or modify a DCR for collecting text log
{ "name": "FilePath", "type": "string"
- },
- {
- "name": "Computer",
- "type": "string"
}
]
}
azure-monitor Alerts Create Rule Cli Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
> [!NOTE]
> When you create a metric alert on a single resource, the syntax uses the `TargetResourceId`. When you create a metric alert on multiple resources, the syntax contains the `TargetResourceScope`, `TargetResourceType`, and `TargetResourceRegion`.
- To create a log search alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
-- To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
+- To create an activity log alert rule using PowerShell, use the [New-AzActivityLogAlert](/powershell/module/az.monitor/new-azactivitylogalert) cmdlet.
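For the metric alert case described in the note above, a minimal sketch using the Az.Monitor `New-AzMetricAlertRuleV2` and `New-AzMetricAlertRuleV2Criteria` cmdlets might look like the following; the resource IDs, metric name, and threshold are placeholders.

```powershell
# Sketch: a metric alert on a single resource, identified by TargetResourceId (all values are placeholders).
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

New-AzMetricAlertRuleV2 -Name "high-cpu-alert" -ResourceGroupName "<resourceGroup>" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vmName>" `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 1) `
    -Condition $criteria -Severity 3 `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/<resourceGroup>/providers/microsoft.insights/actionGroups/<actionGroupName>"
```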
## Create a new alert rule using an ARM template
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-application-security-detection-pack.md
Smart detection automatically analyzes the telemetry generated by your application and detects potential security issues. You can mitigate these issues by fixing the application or by taking the necessary security measures.
-This feature requires no special setup, other than [configuring your app to send telemetry](../app/usage-overview.md).
+This feature requires no special setup, other than [configuring your app to send telemetry](../app/usage.md).
## When would I get this type of smart detection notification?
There are three types of security issues that are detected:
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
telemetry.trackEvent({name: "WinGame"});
### Custom events in Log Analytics
-The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage-overview.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md).
+The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md).
If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackEvent()`, the sampling process transmitted only one of them. To get a correct count of custom events, use code such as `customEvents | summarize sum(itemCount)`.
The function is asynchronous for the [server telemetry channel](https://www.nuge
## Authenticated users
-In a web app, users are [identified by cookies](./usage-segmentation.md#the-users-sessions-and-events-segmentation-tool) by default. A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies.
+In a web app, users are [identified by cookies](./usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) by default. A user might be counted more than once if they access your app from a different machine or browser, or if they delete cookies.
If users sign in to your app, you can get a more accurate count by setting the authenticated user ID in the browser code:
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Azure Monitor Application Insights, a feature of [Azure Monitor](..\overview.md)
Application Insights provides many experiences to enhance the performance, reliability, and quality of your applications.

### Investigate
-- [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance.
-- [Application map](app-map.md): A visual overview of application architecture and components' interactions.
-- [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.
-- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.
-- [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints.
-- [Failures view](failures-and-performance-views.md?tabs=failures-view): Identify and analyze failures in your application to minimize downtime.
-- [Performance view](failures-and-performance-views.md?tabs=performance-view): Review application performance metrics and potential bottlenecks.
+
+* [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance.
+* [Application map](app-map.md): A visual overview of application architecture and components' interactions.
+* [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.
+* [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance.
+* [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints.
+* [Failures view](failures-and-performance-views.md?tabs=failures-view): Identify and analyze failures in your application to minimize downtime.
+* [Performance view](failures-and-performance-views.md?tabs=performance-view): Review application performance metrics and potential bottlenecks.
### Monitoring
-- [Alerts](../alerts/alerts-overview.md): Monitor a wide range of aspects of your application and trigger various actions.
-- [Metrics](../essentials/metrics-getting-started.md): Dive deep into metrics data to understand usage patterns and trends.
-- [Diagnostic settings](../essentials/diagnostic-settings.md): Configure streaming export of platform logs and metrics to the destination of your choice.
-- [Logs](../logs/log-analytics-overview.md): Retrieve, consolidate, and analyze all data collected into Azure Monitoring Logs.
-- [Workbooks](../visualize/workbooks-overview.md): Create interactive reports and dashboards that visualize application monitoring data.
+
+* [Alerts](../alerts/alerts-overview.md): Monitor a wide range of aspects of your application and trigger various actions.
+* [Metrics](../essentials/metrics-getting-started.md): Dive deep into metrics data to understand usage patterns and trends.
+* [Diagnostic settings](../essentials/diagnostic-settings.md): Configure streaming export of platform logs and metrics to the destination of your choice.
+* [Logs](../logs/log-analytics-overview.md): Retrieve, consolidate, and analyze all data collected into Azure Monitoring Logs.
+* [Workbooks](../visualize/workbooks-overview.md): Create interactive reports and dashboards that visualize application monitoring data.
### Usage
-- [Users, sessions, and events](usage-segmentation.md): Determine when, where, and how users interact with your web app.
-- [Funnels](usage-funnels.md): Analyze conversion rates to identify where users progress or drop off in the funnel.
-- [Flows](usage-flows.md): Visualize user paths on your site to identify high engagement areas and exit points.
-- [Cohorts](usage-cohorts.md): Group users by shared characteristics to simplify trend identification, segmentation, and performance troubleshooting.
+
+* [Users, sessions, and events](usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives): Determine when, where, and how users interact with your web app.
+* [Funnels](usage.md#funnelsdiscover-how-customers-use-your-application): Analyze conversion rates to identify where users progress or drop off in the funnel.
+* [Flows](usage.md#user-flowsanalyze-user-navigation-patterns): Visualize user paths on your site to identify high engagement areas and exit points.
+* [Cohorts](usage.md#cohortsanalyze-a-specific-set-of-users-sessions-events-or-operations): Group users by shared characteristics to simplify trend identification, segmentation, and performance troubleshooting.
### Code analysis
-- [Profiler](../profiler/profiler-overview.md): Capture, identify, and view performance traces for your application.
-- [Code optimizations](../insights/code-optimizations.md): Harness AI to create better and more efficient applications.
-- [Snapshot debugger](../snapshot-debugger/snapshot-debugger.md): Automatically collect debug snapshots when exceptions occur in .NET application
+
+* [Profiler](../profiler/profiler-overview.md): Capture, identify, and view performance traces for your application.
+* [Code optimizations](../insights/code-optimizations.md): Harness AI to create better and more efficient applications.
+* [Snapshot debugger](../snapshot-debugger/snapshot-debugger.md): Automatically collect debug snapshots when exceptions occur in .NET applications.
## Logic model
Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we
- [Live metrics](live-stream.md)
- [Transaction search](transaction-search-and-diagnostics.md?tabs=transaction-search)
- [Availability overview](availability-overview.md)
-- [Users, sessions, and events](usage-segmentation.md)
+- [Users, sessions, and events](usage.md)
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
HttpContext.Features.Get<RequestTelemetry>().Properties["myProp"] = someData
## Enable client-side telemetry for web applications
-The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage-overview.md) using JavaScript (Web) SDK Loader Script injection by configuration.
+The preceding steps are enough to help you start collecting server-side telemetry. If your application has client-side components, follow the next steps to start collecting [usage telemetry](./usage.md) using JavaScript (Web) SDK Loader Script injection by configuration.
1. In `_ViewImports.cshtml`, add injection:
Our [Service Updates](https://azure.microsoft.com/updates/?service=application-i
## Next steps
-* [Explore user flows](./usage-flows.md) to understand how users move through your app.
+* [Explore user flows](./usage.md#user-flowsanalyze-user-navigation-patterns) to understand how users move through your app.
* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown.
* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
* Use [availability tests](./availability-overview.md) to check your app constantly from around the world.
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Application Insights IP address collection | Microsoft Docs
description: Understand how Application Insights handles IP addresses and geolocation.
Previously updated : 06/23/2023
--
Last updated : 07/24/2024
+
# Geolocation and IP address handling
-This article explains how geolocation lookup and IP address handling work in Application Insights, along with how to modify the default behavior.
+This article explains how geolocation lookup and IP address handling work in [Application Insights](app-insights-overview.md#application-insights-overview).
## Default behavior
-By default, IP addresses are temporarily collected but not stored in Application Insights. This process follows some basic steps.
+By default, IP addresses are temporarily collected but not stored.
-When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
-
-To remove geolocation data, see the following articles:
-
-* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md)
-* [Use a custom initializer](../app/api-filtering-sampling.md)
+When telemetry is sent to Azure, the IP address is used in a geolocation lookup. The result is used to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
The telemetry types are:

* **Browser telemetry**: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address.
-* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields.
+* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address when the `X-Forwarded-For` header isn't set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields.
-This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. Whenever possible, we recommend avoiding the collection of personal data.
+This behavior is by design to help avoid unnecessary collection of personal data and IP address location information.
-> [!NOTE]
-> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations.
->
-> To learn more about handling personal data in Application Insights, see [Guidance for personal data](../logs/personal-data-mgmt.md).
+When IP addresses aren't collected, city and other geolocation attributes also aren't collected.
+
+## Storage of IP address data
-When IP addresses aren't collected, city and other geolocation attributes populated by our pipeline by using the IP address also aren't collected. You can mask IP collection at the source. There are two ways to do it. You can:
+> [!WARNING]
+> The default and our recommendation is to not collect IP addresses. If you override this behavior, verify the collection doesn't break any compliance requirements or local regulations.
+>
+> To learn more about handling personal data, see [Guidance for personal data](../logs/personal-data-mgmt.md).
-* Remove the client IP initializer. For more information, see [Configuration with Applications Insights Configuration](configuration-with-applicationinsights-config.md).
-* Provide your own custom initializer. For more information, see an [API filtering example](api-filtering-sampling.md).
+To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`.
-## Storage of IP address data
+Options to set this property include:
-To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates (ARM templates) or by calling the REST API.
+- [ARM template](#arm-template)
+- [Portal](#portal)
+- [REST API](#rest-api)
+- [PowerShell](#powershell)
### ARM template
If you need to modify the behavior for only a single Application Insights resour
1. After the deployment is complete, new telemetry data will be recorded.
- If you select and edit the template again, you'll see only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands:
+ If you select and edit the template again, you see only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands:
```powershell
# Replace `Fabrikam-dev` with the appropriate resource and resource group name.
If you need to modify the behavior for only a single Application Insights resour
$AppInsights.Properties
```
- A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property won't exist.
+ A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property doesn't exist.
### REST API
The following [REST API](/rest/api/azure/) payload makes the same modifications:
-```
+```json
PATCH https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/microsoft.insights/components/<resource-name>?api-version=2018-05-01-preview HTTP/1.1
Host: management.azure.com
Authorization: AUTH_TOKEN
Content-Length: 54
### PowerShell
-The PoweShell 'Update-AzApplicationInsights' cmdlet can disable IP masking with the `DisableIPMasking` parameter.
+The PowerShell `Update-AzApplicationInsights` cmdlet can disable IP masking with the `DisableIPMasking` parameter.
```powershell
Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -DisableIPMasking:$true
```
-For more information on the 'Update-AzApplicationInsights' cmdlet, see [Update-AzApplicationInsights](/powershell/module/az.applicationinsights/update-azapplicationinsights)
-
-## Telemetry initializer
-
-If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. The code for this class is the same across .NET versions.
-
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.DataContracts;
-using Microsoft.ApplicationInsights.Extensibility;
-
-namespace MyWebApp
-{
- public class CloneIPAddress : ITelemetryInitializer
- {
- public void Initialize(ITelemetry telemetry)
- {
- ISupportProperties propTelemetry = telemetry as ISupportProperties;
-
- if (propTelemetry !=null && !propTelemetry.Properties.ContainsKey("client-ip"))
- {
- string clientIPValue = telemetry.Context.Location.Ip;
- propTelemetry.Properties.Add("client-ip", clientIPValue);
- }
- }
- }
-}
-```
-
-> [!NOTE]
-> If you can't access `ISupportProperties`, make sure you're running the latest stable release of the Application Insights SDK. `ISupportProperties` is intended for high cardinality values. `GlobalProperties` is more appropriate for low cardinality values like region name and environment name.
--
-# [.NET 6.0+](#tab/framework)
-
-```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- using CustomInitializer.Telemetry;
-
-builder.services.AddSingleton<ITelemetryInitializer, CloneIPAddress>();
-```
-
-# [.NET 5.0](#tab/dotnet5)
-
-```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- using CustomInitializer.Telemetry;
-
- public void ConfigureServices(IServiceCollection services)
-{
- services.AddSingleton<ITelemetryInitializer, CloneIPAddress>();
-}
-```
-
-# [ASP.NET Framework](#tab/dotnet6)
-
-```csharp
-using Microsoft.ApplicationInsights.Extensibility;
-
-namespace MyWebApp
-{
- public class MvcApplication : System.Web.HttpApplication
- {
- protected void Application_Start()
- {
- //Enable your telemetry initializer:
- TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloneIPAddress());
- }
- }
-}
-
-```
---
-# [Node.js](#tab/nodejs)
-
-### Node.js
-
-```javascript
-appInsights.defaultClient.addTelemetryProcessor((envelope) => {
- const baseData = envelope.data.baseData;
- if (appInsights.Contracts.domainSupportsProperties(baseData)) {
- const ipAddress = envelope.tags[appInsights.defaultClient.context.keys.locationIp];
- if (ipAddress) {
- baseData.properties["client-ip"] = ipAddress;
- }
- }
-});
-```
-# [Client-side JavaScript](#tab/javascript)
-
-### Client-side JavaScript
-
-Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure.
-
-If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default.
-
-To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic.
-
-```javascript
-appInsights.addTelemetryInitializer((item) => {
- const ipAddress = item.tags && item.tags["ai.location.ip"];
- if (ipAddress) {
- item.baseData.properties = {
- ...item.baseData.properties,
- "client-ip": ipAddress
- };
- }
-});
-
-```
-
-If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client.
---
-### View the results of your telemetry initializer
-
-If you send new traffic to your site and wait a few minutes, you can then run a query to confirm that the collection is working:
-
-```kusto
-requests
-| where timestamp > ago(1h)
-| project appName, operation_Name, url, resultCode, client_IP, customDimensions.["client-ip"]
-```
-
-Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out.
-
-If you're testing from localhost, and the value for `customDimensions_client-ip` is `::1`, this value is expected behavior. The `::1` value represents the loopback address in IPv6. It's equivalent to `127.0.0.1` in IPv4.
-
-## Frequently asked questions
-
-This section provides answers to common questions.
-
-### How is city, country/region, and other geolocation data calculated?
-
-We look up the IP address (IPv4 or IPv6) of the web client:
-
-* Browser telemetry: We collect the sender's IP address.
-* Server telemetry: The Application Insights module collects the client IP address. It's not collected if `X-Forwarded-For` is set.
-* To learn more about how IP address and geolocation data is collected in Application Insights, see [Geolocation and IP address handling](./ip-collection.md).
-
-You can configure `ClientIpHeaderTelemetryInitializer` to take the IP address from a different header. In some systems, for example, it's moved by a proxy, load balancer, or CDN to `X-Originating-IP`. [Learn more](https://apmtips.com/posts/2016-07-05-client-ip-address/).
-
-You can [use Power BI](../logs/log-powerbi.md) to display your request telemetry on a map if you've [migrated to a workspace-based resource](./convert-classic-resource.md).
+For more information on the `Update-AzApplicationInsights` cmdlet, see [Update-AzApplicationInsights](/powershell/module/az.applicationinsights/update-azapplicationinsights).
## Next steps
-* Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Application Insights.
-* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) works in Application Insights. This article is an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`. The article goes into greater depth on the mechanics of the built-in telemetry initializer.
+* Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Azure Monitor.
+* Learn how to [set the user IP](opentelemetry-add-modify.md#set-the-user-ip) using OpenTelemetry.
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
appInsights.loadAppInsights();
If you want to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext).
-If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
+If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage.md#confirm-that-data-is-flowing).
## Use the plug-in
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
## Next steps
-- [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).
-- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics.
-- See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in.
-- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
-- See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query.
-- Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.
+* [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).
+* See the [documentation on utilizing HEART workbook](usage.md#heartfive-dimensions-of-customer-experience) for expanded product analytics.
+* See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in.
+* Use [Events Analysis in the Usage experience](usage.md#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) to analyze top clicks and slice by available dimensions.
+* See [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query.
+* Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
In this scenario, a 502 or 503 response might be returned to a client because of
## Next steps
-* [Track usage](usage-overview.md)
+* [Track usage](usage.md)
* [Custom events and metrics](api-custom-events-metrics.md)
-* [Build-measure-learn](usage-overview.md)
* [Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy) <!-- Remote URLs -->
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Yes, the Application Insights JavaScript SDK is open source. To view the source
## Next steps
-* [Explore Application Insights usage experiences](usage-overview.md)
+* [Explore Application Insights usage experiences](usage.md)
* [Track page views](api-custom-events-metrics.md#page-views) * [Track custom events and metrics](api-custom-events-metrics.md) * [Insert a JavaScript telemetry initializer](api-filtering-sampling.md#javascript-telemetry-initializers)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
If you open live metrics, the SDKs switch to a higher frequency mode and send ne
## Next steps
-* [Monitor usage with Application Insights](./usage-overview.md)
+* [Monitor usage with Application Insights](./usage.md)
* [Use Diagnostic Search](./transaction-search-and-diagnostics.md?tabs=transaction-search)
* [Profiler](./profiler.md)
* [Snapshot Debugger](./snapshot-debugger.md)
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```C#
// Add the client IP address to the activity as a tag.
// only applicable in case of activity.Kind == Server
-activity.SetTag("http.client_ip", "<IP Address>");
+activity.SetTag("client.address", "<IP Address>");
```

##### [Java](#tab/java)
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Application Insights Logs provides a rich query language that you can use to ana
## Next steps
-- [Funnels](./usage-funnels.md)
-- [Retention](./usage-retention.md)
-- [User flows](./usage-flows.md)
-- In the tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study.
+* [Funnels](./usage.md#funnelsdiscover-how-customers-use-your-application)
+* [Retention](./usage.md#user-retention-analysis)
+* [User flows](./usage.md#user-flowsanalyze-user-navigation-patterns)
+* In the tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study.
> [!div class="nextstepaction"] > [Deep diagnostics](../app/devops.md)
azure-monitor Usage Cohorts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md
- Title: Application Insights usage cohorts | Microsoft Docs
-description: Analyze different sets or users, sessions, events, or operations that have something in common.
- Previously updated : 07/01/2024--
-# Application Insights cohorts
-
-A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in.
-
-## Cohorts vs. basic filters
-
-You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them.
-
-You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future.
-> [!NOTE]
-> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools.
-
-## Example: Engaged users
-
-Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users.
-
-1. Select **Create a Cohort**.
-1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
-1. Select **Engaged Users -- by Days Used**.
-
- There are three parameters for this cohort:
- * **Activities**: Where you choose which events and page views count as usage.
- * **Period**: The definition of a month.
- * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged.
-
-1. Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days.
-
- Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days.
-
-1. Select **Save**.
-
- > [!TIP]
- > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort.
-
-1. Select **Back to Gallery**.
-
-### What can you do by using this cohort?
-
-Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**.
--
-Important points to notice:
-
-* You can't create this set through normal filters. The date logic is more advanced.
-* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days.
-
-These filters support more sophisticated questions that are impossible to express through the query builder. An example is _people who were engaged in the past 28 days. How did those same people behave over the past 60 days?_
-
-## Example: Events cohort
-
-You can also make cohorts of events. In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature.
-
-1. Select **Create a Cohort**.
-1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
-1. Select **Events Picker**.
-1. In the **Activities** dropdown box, select the events you want to be in the cohort.
-1. Save the cohort and give it a name.
-
-## Example: Active users where you modify a query
-
-The previous two cohorts were defined by using dropdown boxes. You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom.
-
-1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**.
-
- :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png":::
-
- There are three sections:
-
- * **Markdown text**: Where you describe the cohort in more detail for other members on your team.
- * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples.
- * **Query**: Where you define the cohort by using an analytics query.
-
- In the query section, you [write an analytics query](/azure/kusto/query). The query selects the certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query. This data appears as a preview underneath the query in a table, so you can make sure your query is returning results.
-
- > [!NOTE]
- > If you don't see the query, resize the section to make it taller and reveal the query.
-
-1. Copy and paste the following text into the query editor:
-
- ```KQL
- union customEvents, pageViews
- | where client_CountryOrRegion == "United Kingdom"
- ```
-
-1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users.
-
-1. Save and name the cohort.
-
-## Frequently asked question
-
-### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results?
-
-Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom`:
-
-* The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. If you split by country or region, you likely see many countries and regions.
-* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom.
-
-## Learn more
-
-* [Analytics query language](../logs/log-analytics-tutorial.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Users, sessions, events](usage-segmentation.md)
-* [User flows](usage-flows.md)
-* [Usage overview](usage-overview.md)
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
- Title: Application Insights User Flows analyzes navigation flows
-description: Analyze how users move between the pages and features of your web app.
- Previously updated : 12/15/2023---
-# Analyze user navigation patterns with User Flows in Application Insights
--
-The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like:
-
-* How do users move away from a page on your site?
-* What do users select on a page on your site?
-* Where are the places that users churn most from your site?
-* Are there places where users repeat the same action over and over?
-
-The User Flows tool starts from an initial custom event, exception, dependency, page view or request that you specify. From this initial event, User Flows shows the events that happened before and after user sessions. Lines of varying thickness show how many times users followed each path. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site.
-
-> [!NOTE]
-> Your Application Insights resource must contain page views or custom events to use the User Flows tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md).
->
-
-## Choose an initial event
--
-To begin answering questions with the User Flows tool, choose an initial custom event, exception, dependency, page view or request to serve as the starting point for the visualization:
-
-1. Select the link in the **What do users do after?** title or select **Edit**.
-1. Select a custom event, exception, dependency, page view or request from the **Initial event** dropdown list.
-1. Select **Create graph**.
-
-The **Step 1** column of the visualization shows what users did most frequently after the initial event. The items are ordered from top to bottom and from most to least frequent. The **Step 2** and subsequent columns show what users did next. The information creates a picture of all the ways that users moved through your site.
-
-By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from your site. You can increase the time range and change the balance of performance and accuracy for random sampling on the **Edit** menu.
-
-If some of the page views, custom events, and exceptions aren't relevant to you, select **X** on the nodes you want to hide. After you've selected the nodes you want to hide, select **Create graph**. To see all the nodes you've hidden, select **Edit** and look at the **Excluded events** section.
-
-If page views or custom events are missing that you expect to see in the visualization:
-
-* Check the **Excluded events** section on the **Edit** menu.
-* Use the plus buttons on **Others** nodes to include less-frequent events in the visualization.
-* If the page view or custom event you expect is sent infrequently by users, increase the time range of the visualization on the **Edit** menu.
-* Make sure the custom event, exception, dependency, page view or request you expect is set up to be collected by the Application Insights SDK in the source code of your site. Learn more about [collecting custom events](./api-custom-events-metrics.md).
-
-If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdown lists above the visualization.
-
-## After users visit a page or feature, where do they go and what do they select?
--
-If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page.
-
-Open your site in a window next to the User Flows visualization. Compare your expectations of how users interact with the page to the list of events in the **Step 1** column. Often, a UI element on the page that seems insignificant to your team can be among the most used on the page. It can be a great starting point for design improvements to your site.
-
-If your initial event is a custom event, the first column shows what users did after they performed that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations.
-
-If your selected initial event is **Added Item to Shopping Cart**, for example, look to see if **Go to Checkout** and **Completed Purchase** appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design.
-
-## Where are the places that users churn most from your site?
-
-Watch for **Session Ended** nodes that appear high up in a column in the visualization, especially early in a flow. This positioning means many users probably churned from your site after they followed the preceding path of pages and UI interactions.
-
-Sometimes churn is expected. For example, it's expected after a user makes a purchase on an e-commerce site. But usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved.
-
-Keep in mind that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users might have interacted with your site in those ways after the User Flows tool says the session ended.
-
-## Are there places where users repeat the same action over and over?
-
-Look for a page view or custom event that's repeated by many users across subsequent steps in the visualization. This activity usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk edit functionality if you find users performing repetitive actions on each row of a table element.
-
-## Frequently asked questions
-
-This section provides answers to common questions.
-
-### Does the initial event represent the first time the event appears in a session or any time it appears in a session?
-
-The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the **Step 1** column only shows how users behave after the *first* instance of an initial event, not all instances.
-
-### Some of the nodes in my visualization have a level that's too high. How can I get more detailed nodes?
-
-Use the **Split by** options on the **Edit** menu:
-
-1. Select the event you want to break down on the **Event** menu.
-1. Select a dimension on the **Dimension** menu. For example, if you have an event called **Button Clicked**, try a custom property called **Button Name**.
-
-## Next steps
-
-* [Usage overview](usage-overview.md)
-* [Users, sessions, and events](usage-segmentation.md)
-* [Retention](usage-retention.md)
-* [Adding custom events to your app](./api-custom-events-metrics.md)
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
- Title: Application Insights funnels
-description: Learn how you can use funnels to discover how customers are interacting with your application.
- Previously updated : 01/31/2024---
-# Discover how customers are using your application with Application Insights funnels
-
-Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates.
-
-## Create your funnel
-Before you create your funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket.
-
-To create a funnel:
-
-1. On the **Funnels** tab, select **Edit**.
-1. Choose your **Top Step**.
-
- :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png":::
-
-1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step.
-1. Then choose your **Second Step** and so on.
-
- > [!NOTE]
- > Funnels are limited to a maximum of six steps.
-
-1. Select the **View** tab to see your funnel results.
-
- :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png":::
-
-1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels.
-
-### Funnels features
-
-Funnels have the following features:
-
-- If your app is sampled, you'll see a sampling banner. Selecting the banner opens a context pane that explains how to turn off sampling.
-- Select a step to see more details on the right.
-- The historical conversion graph shows the conversion rates over the last 90 days.
-- Understand your users better by accessing the users tool. You can use filters in each step.
-
-## Next steps
-
- * [Usage overview](usage-overview.md)
- * [Users, sessions, and events](usage-segmentation.md)
- * [Retention](usage-retention.md)
- * [Workbooks](../visualize/workbooks-overview.md)
- * [Add user context](./usage-overview.md)
- * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
- Title: HEART analytics workbook
-description: Product teams can use the HEART workbook to measure success across five user-centric dimensions to deliver better software.
- Previously updated : 07/01/2024---
-# Analyze product usage with HEART
-This article describes how to enable and use the Heart Workbook on Azure Monitor. The HEART workbook is based on the HEART measurement framework, which was originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software.
-
-## Overview
-HEART is an acronym that stands for happiness, engagement, adoption, retention, and task success. It helps product teams deliver better software by focusing on five dimensions of customer experience:
-
-- **Happiness**: Measure of user attitude
-- **Engagement**: Level of active user involvement
-- **Adoption**: Target audience penetration
-- **Retention**: Rate at which users return
-- **Task success**: Productivity empowerment
-
-These dimensions are measured independently, but they interact with each other.
-
-
-- Adoption, engagement, and retention form a user activity funnel. Only a portion of users who adopt the tool come back to use it.
-- Task success is the driver that progresses users down the funnel and moves them from adoption to retention.
-- Happiness is an outcome of the other dimensions and not a stand-alone measurement. Users who have progressed down the funnel and are showing a higher level of activity are ideally happier.
-
-## Get started
-### Prerequisites
-
- | Source | Attribute | Description |
- |--|-|--|
- | customEvents | session_Id | Unique session identifier |
- | customEvents | appName | Unique Application Insights app identifier |
- | customEvents | itemType | Category of customEvents record |
- | customEvents | timestamp | Datetime of event |
- | customEvents | operation_Id | Correlate telemetry events |
- | customEvents | user_Id | Unique user identifier |
- | customEvents ¹ | parentId | Name of feature |
- | customEvents ¹ | pageName | Name of page |
- | customEvents ¹ | actionType | Category of Click Analytics record |
- | pageViews | user_AuthenticatedId | Unique authenticated user identifier |
- | pageViews | session_Id | Unique session identifier |
- | pageViews | appName | Unique Application Insights app identifier |
- | pageViews | timestamp | Datetime of event |
- | pageViews | operation_Id | Correlate telemetry events |
- | pageViews | user_Id | Unique user identifier |
-
-- If you're setting up the authenticated user context, instrument the following attributes:
-
-| Source | Attribute | Description |
-|--|-|--|
-| customEvents | user_AuthenticatedId | Unique authenticated user identifier |
-
-**Footnotes**
-
-¹: To emit these attributes, use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm.
-
->[!TIP]
-> To understand how to effectively use the Click Analytics plug-in, see [Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](javascript-feature-extensions.md#use-the-plug-in).
-
-### Open the workbook
-You can find the workbook in the gallery under **Public Templates**. The workbook appears in the section **Product Analytics using the Click Analytics Plugin**.
--
-There are seven workbooks.
--
-You only have to interact with the main workbook, **HEART Analytics - All Sections**. This workbook contains the other six workbooks as tabs. You can also access the individual workbooks related to each tab through the gallery.
-
-### Confirm that data is flowing
-
-To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab.
-
-> [!IMPORTANT]
-> Unless you [set the authenticated user context](./javascript-feature-extensions.md#optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
--
-If data isn't flowing as expected, this tab shows the specific attributes with issues.
--
-## Workbook structure
-The workbook shows metric trends for the HEART dimensions split over seven tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them.
-
-The tabs are:
-
-- **Summary**: Summarizes usage funnel metrics for a high-level view of visits, interactions, and repeat usage.
-- **Adoption**: Helps you understand the penetration among the target audience, acquisition velocity, and total user base.
-- **Engagement**: Shows frequency, depth, and breadth of usage.
-- **Retention**: Shows repeat usage.
-- **Task success**: Enables understanding of user flows and their time distributions.
-- **Happiness**: We recommend using a survey tool to measure customer satisfaction score (CSAT) over a five-point scale. On this tab, we've provided the likelihood of happiness via usage and performance metrics.
-- **Feature metrics**: Enables understanding of HEART metrics at feature granularity.
-
-> [!WARNING]
-> The HEART workbook is currently built on logs, so its metrics are effectively [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics is negatively affected by sampling and filtering.
-
-## How HEART dimensions are defined and measured
-
-### Happiness
-
-Happiness is a user-reported dimension that measures how users feel about the product offered to them.
-
-A common approach to measure happiness is to ask users a CSAT question like, "How satisfied are you with this product?" Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals.
-
-Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources).
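-
-For example, if you send survey responses as a custom event, a Log Analytics query along these lines could chart an average CSAT trend. The event name `SurveyResponse` and the `score` custom measurement are hypothetical; adjust them to match your own ingestion.
-
-```kusto
-// Sketch: daily average CSAT from a hypothetical "SurveyResponse" custom event
-// that carries a 1-5 "score" custom measurement.
-customEvents
-| where timestamp > ago(30d)
-| where name == "SurveyResponse"
-| extend score = todouble(customMeasurements["score"])
-| summarize AvgCsat = avg(score) by bin(timestamp, 1d)
-| render timechart
-```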
-
-### Engagement
-
-Engagement is a measure of user activity. Specifically, it measures intentional user actions, such as clicks. Active usage can be broken down into three subdimensions:
-
-- **Activity frequency**: Measures how often a user interacts with the product. For example, users typically interact daily, weekly, or monthly.
-- **Activity breadth**: Measures the number of features users interact with over a specific time period. For example, users interacted with a total of five features in June 2021.
-- **Activity depth**: Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch.
-
-Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, which makes it an important metric to track. But for a product like a paycheck portal, measurement might make more sense at a monthly or weekly level.
-
->[!IMPORTANT]
->A user who performs an intentional action, such as clicking a button or typing an input, is counted as an active user. For this reason, engagement metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
-
-### Adoption
-
-Adoption enables understanding of penetration among the relevant users, who you're gaining as your user base, and how you're gaining them. Adoption metrics are useful for measuring:
-
-- Newly released products.
-- Newly updated products.
-- Marketing campaigns.
-
-### Retention
-
-A retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics.
-
-| Metric | Definition | Question answered |
-|-|-|-|
-| Retained users | Count of active users who were also active the previous period | How many users are staying engaged with the product? |
-| Retention | Proportion of active users from the previous period who are also active this period | What percent of users are staying engaged with the product? |
-
->[!IMPORTANT]
->Because active users must have at least one telemetry event with an action type, retention metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
-
-### Task success
-
-Task success tracks whether users can do a task efficiently and effectively by using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include:
-
-- Adding items to a cart and then completing a purchase.
-- Searching a keyword and then selecting a result.
-- Starting a new account and then completing account registration.
-
-A successful task meets three requirements:
-- **Expected task flow**: The intended task flow of the feature was completed by the user and aligns with the expected task flow.
-- **High performance**: The intended functionality of the feature was accomplished in a reasonable amount of time.
-- **High reliability**: The intended functionality of the feature was accomplished without failure.
-
-A task is considered unsuccessful if any of the preceding requirements isn't met.
-
->[!IMPORTANT]
->Task success metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
-
-Set up a custom task by using the following parameters.
-
-| Parameter | Description |
-|-|-|
-| First step | The feature that starts the task. In the cart/purchase example, **Adding items to a cart** is the first step. |
-| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint are considered a failure. Not all tasks necessarily have a time constraint. For such tasks, select **No Time Expectation**. |
-| Last step | The feature that completes the task. In the cart/purchase example, **Purchasing items from the cart** is the last step. |
-
-## Frequently asked questions
-
-### How do I view the data at different grains (daily, monthly, or weekly)?
-You can select the **Date Grain** filter to change the grain. The filter is available across all the dimension tabs.
--
-### How do I access insights from my application that aren't available on the HEART workbooks?
-
-You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, under the **Monitoring** section, select **Logs** and query the `customEvents` table. Some of the Click Analytics attributes are contained within the `customDimensions` field. A sample query is shown here.
--
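-A minimal sketch of such a query follows. It assumes the Click Analytics plug-in is populating `pageName`, `parentId`, and `actionType` in `customDimensions` (see the prerequisites table); it isn't necessarily the exact query the original article showed.
-
-```kusto
-// Sketch: count Click Analytics events by page, feature, and action type
-customEvents
-| where timestamp > ago(7d)
-| extend pageName = tostring(customDimensions["pageName"]),
-         parentId = tostring(customDimensions["parentId"]),
-         actionType = tostring(customDimensions["actionType"])
-| where isnotempty(actionType)
-| summarize Clicks = count(), Users = dcount(user_Id) by pageName, parentId, actionType
-| order by Clicks desc
-```
-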
-To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../logs/data-platform-logs.md).
-
-### Can I edit visuals in the workbook?
-
-Yes. When you select the public template of the workbook:
-
-1. Select **Edit** and make your changes.
-
- :::image type="content" source="media/usage-overview/workbook-edit-faq.png" alt-text="Screenshot that shows the Edit button in the upper-left corner of the workbook template.":::
-
-1. After you make your changes, select **Done Editing**, and then select the **Save** icon.
-
- :::image type="content" source="media/usage-overview/workbook-save-faq.png" alt-text="Screenshot that shows the Save icon at the top of the workbook template that becomes available after you make edits.":::
-
-1. To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab.
-
- A copy of your customized workbook appears there. You can make any further changes you want in this copy.
-
- :::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot that shows the Workbooks tab next to the Public Templates tab, where the edited copy of the workbook is located.":::
-
-For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md).
-
-## Next steps
-- Check out the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection plug-in.
-- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
-- Find click data under the content field within the `customDimensions` attribute in the `CustomEvents` table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for more guidance.
-- Learn more about the [Google HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf).
azure-monitor Usage Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-impact.md
- Title: Application Insights usage impact - Azure Monitor
-description: Analyze how different properties potentially affect conversion rates for parts of your apps.
- Previously updated : 07/01/2024--
-# Impact analysis with Application Insights
-
-Impact analyzes how load times and other properties influence conversion rates for various parts of your app. To put it more precisely, it discovers how any dimension of a page view, custom event, or request affects the usage of a different page view or custom event.
-
-## Still not sure what Impact does?
-
-One way to think of Impact is as the ultimate tool for settling arguments with someone on your team about how slowness in some aspect of your site is affecting whether users stick around. Users might tolerate some slowness, but Impact gives you insight into how best to balance optimization and performance to maximize user conversion.
-
-Analyzing performance is only a subset of Impact's capabilities. Impact supports custom events and dimensions, so you can easily answer questions like, "How does user browser choice correlate with different rates of conversion?"
-
-> [!NOTE]
-> Your Application Insights resource must contain page views or custom events to use the Impact analysis workbook. Learn how to [set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also, because you're analyzing correlation, sample size matters.
-
-## Impact analysis workbook
-
-To use the Impact analysis workbook, in your Application Insights resource, go to **Usage** > **More** and select **User Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**.
--
-### Use the workbook
--
-1. From the **Selected event** dropdown list, select an event.
-1. From the **analyze how its** dropdown list, select a metric.
-1. From the **Impacting event** dropdown list, select an event.
-1. To add a filter, use the **Add selected event filters** tab or the **Add impacting event filters** tab.
-
-## Is page load time affecting how many people convert on my page?
-
-To begin answering questions with the Impact workbook, choose an initial page view, custom event, or request.
-
-1. From the **Selected event** dropdown list, select an event.
-1. Leave the **analyze how its** dropdown list on the default selection of **Duration**. (In this context, **Duration** is an alias for **Page Load Time**.)
-1. From the **Impacting event** dropdown list, select a custom event. This event should correspond to a UI element on the page view you selected in step 1.
-
- :::image type="content" source="./media/usage-impact/impact.png" alt-text="Screenshot that shows an example with the selected event as Home Page analyzed by duration." lightbox="./media/usage-impact/impact.png":::
-
-## What if I'm tracking page views or load times in custom ways?
-
-Impact supports both standard and custom properties and measurements. Use whatever you want. Instead of duration, use filters on the primary and secondary events to get more specific.
-
-## Do users from different countries or regions convert at different rates?
-
-1. From the **Selected event** dropdown list, select an event.
-1. From the **analyze how its** dropdown list, select **Country or region**.
-1. From the **Impacting event** dropdown list, select a custom event that corresponds to a UI element on the page view you chose in step 1.
-
- :::image type="content" source="./media/usage-impact/regions.png" alt-text="Screenshot that shows an example with the selected event as GET analyzed by country and region." lightbox="./media/usage-impact/regions.png":::
-
-## How does the Impact analysis workbook calculate these conversion rates?
-
-Under the hood, the Impact analysis workbook relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). Results are computed between -1 and 1. The coefficient -1 represents a negative linear correlation and 1 represents a positive linear correlation.
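-
-For reference, the Pearson correlation coefficient between paired samples $x$ and $y$ is:
-
-$$ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}} $$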
-
-The basic breakdown of how Impact analysis works is listed here:
-
-* Let _A_ = the main page view, custom event, or request you select in the **Selected event** dropdown list.
-* Let _B_ = the secondary page view or custom event you select in the **impacts the usage of** dropdown list.
-
-Impact looks at a sample of all the sessions from users in the selected time range. For each session, it looks for each occurrence of _A_.
-
-Sessions are then broken into two different kinds of _subsessions_ based on one of two conditions:
-
-- A converted subsession consists of a session ending with a _B_ event and encompasses all _A_ events that occur prior to _B_.
-- An unconverted subsession occurs when all *A*s occur without a terminal _B_.
-
-How Impact is ultimately calculated varies based on whether we're analyzing by metric or by dimension. For metrics, all *A*s in a subsession are averaged. For dimensions, the value of each _A_ contributes _1/N_ to the value assigned to _B_, where _N_ is the number of *A*s in the subsession.
-
-## Next steps
-
-- To learn more about workbooks, see the [Workbooks overview](../visualize/workbooks-overview.md).
-- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).
-- If you already send custom events or page views, explore the Usage tools to learn how users use your service:
- - [Funnels](usage-funnels.md)
- - [Retention](usage-retention.md)
- - [User flows](usage-flows.md)
- - [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](./usage-overview.md)
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
- Title: Usage analysis with Application Insights | Azure Monitor
-description: Understand your users and what they do with your app.
- Previously updated : 09/12/2023---
-# Usage analysis with Application Insights
-
-Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data-driven decisions about your next development cycles.
-
-## Send telemetry from your app
-
-The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis.
-
-1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app.
-
- * If you don't want to install server code, [create an Application Insights resource](./create-workspace-resource.md).
-
-1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
-
- [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
-
- To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
-
-1. **Mobile app code:** Use the App Center SDK to collect events from your app. Then send copies of these events to Application Insights for analysis by [following this guide](https://github.com/Microsoft/appcenter).
-
-1. **Get telemetry:** Run your project in debug mode for a few minutes. Then look for results in the **Overview** pane in Application Insights.
-
- Publish your app to monitor your app's performance and find out what your users are doing with your app.
-
-## Explore usage demographics and statistics
-
-Find out when people use your app and what pages they're most interested in. You can also find out where your users are located and what browsers and operating systems they use.
-
-The **Users** and **Sessions** reports filter your data by pages or custom events. The reports segment the data by properties such as location, environment, and page. You can also add your own filters.
--
-Insights on the right point out interesting patterns in the set of data.
-
-* The **Users** report counts the numbers of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they're counted more than once.
-* The **Sessions** report tabulates the number of user sessions that access your site. A session represents a period of activity initiated by a user and concludes with a period of inactivity exceeding half an hour.
-
-For more information about the Users, Sessions, and Events tools, see [Users, sessions, and events analysis in Application Insights](usage-segmentation.md).
-
-## Retention: How many users come back?
-
-Retention helps you understand how often your users return to use your app, based on cohorts of users that performed some business action during a certain time bucket. You can:
-
-- Understand what specific features cause users to come back more than others.
-- Form hypotheses based on real user data.
-- Determine whether retention is a problem in your product.
-
-
-You can use the retention controls on top to define specific events and time ranges to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a specific time period. This level of detail allows you to understand what your users are doing and what might affect returning users on a more detailed granularity.
-
-For more information about the Retention workbook, see [User retention analysis for web applications with Application Insights](usage-retention.md).
-
-## Custom business events
-
-To understand user interactions in your app, insert code lines to log custom events. These events track various user actions, like button selections, or important business events, such as purchases or game victories.
-
-You can also use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) to collect custom events.
-
-In some cases, page views can represent useful events, but that isn't always the case. A user can open a product page without buying the product.
-
-With specific business events, you can chart your users' progress through your site. You can find out their preferences for different options and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
-
-Events can be logged from the client side of the app:
-
-```JavaScript
- appInsights.trackEvent({name: "incrementCount"});
-```
-
-Or events can be logged from the server side:
-
-```csharp
- var tc = new Microsoft.ApplicationInsights.TelemetryClient();
- tc.TrackEvent("CreatedAccount", new Dictionary<string,string> {"AccountType":account.Type}, null);
- ...
- tc.TrackEvent("AddedItemToCart", new Dictionary<string,string> {"Item":item.Name}, null);
- ...
- tc.TrackEvent("CompletedPurchase");
-```
-
-You can attach property values to these events so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
-
-Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties).
-
-### Slice and dice events
-
-In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties.
--
-Whenever you're in any usage experience, select the **Open the last run query** icon to take you back to the underlying query.
--
-You can then modify the underlying query to get the kind of information you're looking for.
-
-Here's an example of an underlying query about page views. Go ahead and paste it directly into the query editor to test it out.
-
-```kusto
-// average pageView duration by name
-let timeGrain=5m;
-let dataset=pageViews
-// additional filters can be applied here
-| where timestamp > ago(1d)
-| where client_Type == "Browser" ;
-// calculate average pageView duration for all pageViews
-dataset
-| summarize avg(duration) by bin(timestamp, timeGrain)
-| extend pageView='Overall'
-// render result in a chart
-| render timechart
-```
-
-## Design the telemetry with the app
-
-When you design each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
-
-## A/B testing
-
-If you're unsure which feature variant is more successful, release both and let different users access each variant. Measure the success of each variant, and then transition to a unified version.
-
-In this technique, you attach unique property values to all the telemetry sent by each version of your app. You can do it by defining properties in the active TelemetryContext. These default properties get included in every telemetry message sent by the application, including both custom messages and standard telemetry.
-
-In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions.
-
-To do this step, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer):
-
-```csharp
- // Telemetry initializer class
- public class MyTelemetryInitializer : ITelemetryInitializer
- {
- // In this example, to differentiate versions, we use the value specified in the AssemblyInfo.cs
- // for ASP.NET apps, or in your project file (.csproj) for the ASP.NET Core apps. Make sure that
- // you set a different assembly version when you deploy your application for A/B testing.
- static readonly string _version =
- System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString();
-
- public void Initialize(ITelemetry item)
- {
- item.Context.Component.Version = _version;
- }
- }
-```
-
-# [.NET 6.0+](#tab/aspnetcore)
-
-For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, add a new telemetry initializer to the Dependency Injection service collection in the `Program.cs` class.
-
-```csharp
-using Microsoft.ApplicationInsights.Extensibility;
-
-builder.Services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
-```
-
-# [.NET Framework 4.8](#tab/aspnet-framework)
-
-In the web app initializer, such as `Global.asax.cs`:
-
-```csharp
-
- protected void Application_Start()
- {
- // ...
- TelemetryConfiguration.Active.TelemetryInitializers
- .Add(new MyTelemetryInitializer());
- }
-```
---
-## Next steps
-
- - [Users, sessions, and events](usage-segmentation.md)
- - [Funnels](usage-funnels.md)
- - [Retention](usage-retention.md)
- - [User Flows](usage-flows.md)
- - [Workbooks](../visualize/workbooks-overview.md)
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-retention.md
- Title: Analyze web app user retention with Application Insights
-description: This article shows you how to determine how many users return to your app.
- Previously updated : 06/23/2023---
-# User retention analysis for web applications with Application Insights
-
-The retention feature in [Application Insights](./app-insights-overview.md) helps you analyze how many users return to your app, and how often they perform particular tasks or achieve goals. For example, if you run a game site, you could compare the numbers of users who return to the site after losing a game with the number who return after winning. This knowledge can help you improve your user experience and your business strategy.
-
-## Get started
-
-If you don't yet see data in the retention tool in the Application Insights portal, [learn how to get started with the usage tools](usage-overview.md).
-
-## The Retention workbook
-
-To use the Retention workbook, in your Application Insights resource, go to **Usage** > **Retention** > **Retention Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Retention Analysis**.
--
-### Use the workbook
--
-Workbook capabilities:
-
-- By default, retention shows all users who did anything and then came back and did anything else over a defined period. You can select different combinations of events to narrow the focus on specific user activities.
-- To add one or more filters on properties, select **Add Filters**. For example, you can focus on users in a particular country or region.
-- The **Overall Retention** chart shows a summary of user retention across the selected time period.
-- The grid shows the number of users retained. Each row represents a cohort of users who performed any event in the time period shown. Each cell in the row shows how many of that cohort returned at least once in a later period. Some users might return in more than one period.
-- The insights cards show the top five initiating events and the top five returned events. This information gives users a better understanding of their retention report.
-
- :::image type="content" source="./media/usage-retention/retention-2.png" alt-text="Screenshot that shows the Retention workbook showing the User returned after # of weeks chart." lightbox="./media/usage-retention/retention-2.png":::
-
-## Use business events to track retention
-
-You should measure events that represent significant business activities to get the most useful retention analysis.
-
-For more information and example code, see [Custom business events](usage-overview.md#custom-business-events).
-
-To learn more, see [writing custom events](./api-custom-events-metrics.md#trackevent).
-
-## Next steps
-
-- To learn more about workbooks, see the [workbooks overview](../visualize/workbooks-overview.md).
-- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).
-- If you already send custom events or page views, explore the Usage tools to learn how users use your service:
- - [Users, sessions, events](usage-segmentation.md)
- - [Funnels](usage-funnels.md)
- - [User flows](usage-flows.md)
- - [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](./usage-overview.md)
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
- Title: User, session, and event analysis in Application Insights
-description: Demographic analysis of users of your web app.
- Previously updated : 07/01/2024---
-# User, session, and event analysis in Application Insights
-
-Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using [Application Insights](./app-insights-overview.md).
--
-## Get started
-
-If you don't yet see data in the **Users**, **Sessions**, or **Events** panes in the Application Insights portal, [learn how to get started with the Usage tools](usage-overview.md).
-
-## The Users, Sessions, and Events segmentation tool
-
-Three of the **Usage** panes use the same tool to slice and dice telemetry from your web app from three perspectives. By filtering and splitting the data, you can uncover insights about the relative use of different pages and features.
-
-* **Users tool**: How many people used your app and its features? Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user.
-* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use.
-* **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md).
-
- A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension.
-
-> [!NOTE]
-> For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id).
-
-Clicking **View More Insights** displays the following information:
-- Application Performance: Sessions, Events, and a Performance evaluation related to users' perception of responsiveness.
-- Properties: Charts containing up to six user properties such as browser version, country or region, and operating system.
-- Meet Your Users: View timelines of user activity.
-
-## Query for certain users
-
-Explore different groups of users by adjusting the query options at the top of the Users tool:
-
-- **During**: Choose a time range.
-- **Show**: Choose a cohort of users to analyze.
-- **Who used**: Choose custom events, requests, and page views.
-- **Events**: Choose multiple events, requests, and page views that will show users who did at least one, not necessarily all, of the selected options.
-- **By value x-axis**: Choose how to categorize the data, either by time range or by another property, such as browser or city.
-- **Split By**: Choose a property to use to split or segment the data.
-- **Add Filters**: Limit the query to certain users, sessions, or events based on their properties, such as browser or city.
-
-## Meet your users
-
-The **Meet your users** section shows information about five sample users matched by the current query. Exploring the behaviors of individuals and in aggregate can provide insights about how people use your app.
-
-## Next steps
-
-- To enable usage experiences, start sending [custom events](./api-custom-events-metrics.md#trackevent) or [page views](./api-custom-events-metrics.md#page-views).
-- If you already send custom events or page views, explore the **Usage** tools to learn how users use your service.
- - [Funnels](usage-funnels.md)
- - [Retention](usage-retention.md)
- - [User flows](usage-flows.md)
- - [Workbooks](../visualize/workbooks-overview.md)
- - [Add user context](./usage-overview.md)
azure-monitor Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage.md
+
+ Title: Usage analysis with Application Insights | Azure Monitor
+description: Understand your users and what they do with your application.
+ Last updated : 07/16/2024+++
+# Usage analysis with Application Insights
+
+Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later?
+
+[Application Insights](./app-insights-overview.md) is a powerful tool for monitoring the performance and usage of your applications. It provides insights into how users interact with your app, identifies areas for improvement, and helps you understand the impact of changes. With this knowledge, you can make data-driven decisions about your next development cycles.
+
+This article covers the following areas:
+
+* [Users, Sessions & Events](#users-sessions-and-eventsanalyze-telemetry-from-three-perspectives) - Track and analyze user interaction with your application, session trends, and specific events to gain insights into user behavior and app performance.
+* [Funnels](#funnelsdiscover-how-customers-use-your-application) - Understand how users progress through a series of steps in your application and where they might be dropping off.
+* [User Flows](#user-flowsanalyze-user-navigation-patterns) - Visualize user paths to identify the most common routes and pinpoint areas where users are most engaged or may encounter issues.
+* [Cohorts](#cohortsanalyze-a-specific-set-of-users-sessions-events-or-operations) - Group users or events by common characteristics to analyze behavior patterns, feature usage, and the impact of changes over time.
+* [Impact Analysis](#impact-analysisdiscover-how-different-properties-influence-conversion-rates) - Analyze how application performance metrics, like load times, influence user experience and behavior, to help you to prioritize improvements.
+* [HEART](#heartfive-dimensions-of-customer-experience) - Utilize the HEART framework to measure and understand user Happiness, Engagement, Adoption, Retention, and Task success.
+
+## Send telemetry from your application
+
+To optimize your experience, consider integrating Application Insights into both your app server code and your webpages. This dual implementation enables telemetry collection from both the client and server components of your application.
+
+1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./opentelemetry-enable.md?tabs=java), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app.
+
+ If you don't want to install server code, [create an Application Insights resource](./create-workspace-resource.md).
+
+1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
+
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
+
+ To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
+
+1. **Mobile app code:** Use the App Center SDK to collect events from your app. Then send copies of these events to Application Insights for analysis by [following this guide](https://github.com/Microsoft/appcenter).
+
+1. **Get telemetry:** Run your project in debug mode for a few minutes. Then look for results in the **Overview** pane in Application Insights.
+
+ Publish your app to monitor your app's performance and find out what your users are doing with your app.
+
+## Users, Sessions, and Events - Analyze telemetry from three perspectives
+
+Three of the **Usage** panes use the same tool to slice and dice telemetry from your web app from three perspectives. By filtering and splitting the data, you can uncover insights about the relative use of different pages and features.
+
+* **Users tool**: How many people used your app and its features? Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user.
+
+* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use.
+
+* **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md).
+
+ A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md) extension.
+
+> [!NOTE]
+> For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id).
+
+Clicking **View More Insights** displays the following information:
+
+* **Application Performance:** Sessions, Events, and a Performance evaluation related to users' perception of responsiveness.
+* **Properties:** Charts containing up to six user properties such as browser version, country or region, and operating system.
+* **Meet Your Users:** View timelines of user activity.
+
+### Explore usage demographics and statistics
+
+Find out when people use your web app, what pages they're most interested in, where your users are located, and what browsers and operating systems they use. Analyze business and usage telemetry by using Application Insights.
++
+* The **Users** report counts the numbers of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they're counted more than once.
+
+* The **Sessions** report tabulates the number of user sessions that access your site. A session represents a period of activity initiated by a user and concludes with a period of inactivity exceeding half an hour.
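+
+As a rough sketch of how these counts map to the underlying logs, the following query approximates daily unique users and sessions from page view telemetry. The portal reports apply additional logic, so treat this as an approximation rather than the exact calculation.
+
+```kusto
+// Sketch: approximate daily unique users and sessions from page views
+pageViews
+| where timestamp > ago(30d)
+| summarize Users = dcount(user_Id), Sessions = dcount(session_Id) by bin(timestamp, 1d)
+| render timechart
+```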
+
+#### Query for certain users
+
+Explore different groups of users by adjusting the query options at the top of the Users pane:
+
+| Option | Description |
+|--|-|
+| During | Choose a time range. |
+| Show | Choose a cohort of users to analyze. |
+| Who used | Choose custom events, requests, and page views. |
+| Events | Choose multiple events, requests, and page views that will show users who did at least one, not necessarily all, of the selected options. |
+| By value x-axis | Choose how to categorize the data, either by time range or by another property, such as browser or city. |
+| Split By | Choose a property to use to split or segment the data. |
+| Add Filters | Limit the query to certain users, sessions, or events based on their properties, such as browser or city. |
+
+#### Meet your users
+
+The **Meet your users** section shows information about five sample users matched by the current query. Exploring the behaviors of individuals and in aggregate can provide insights about how people use your app.
+
+### User retention analysis
+
+The Application Insights retention feature provides valuable insights into user engagement by tracking the frequency and patterns of users returning to your app and their interactions with specific features. It enables you to compare user behaviors, such as the difference in return rates between users who win or lose a game, offering actionable data to enhance user experience and inform business strategies.
+
+By analyzing cohorts of users based on their actions within a given timeframe, you can identify which features drive repeat usage. This knowledge can help you:
+
+* Understand what specific features cause users to come back more than others.
+* Determine whether retention is a problem in your product.
+* Form hypotheses based on real user data to help you improve the user experience and your business strategy.
++
+You can use the retention controls on top to define specific events and time ranges to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a specific time period. This level of detail allows you to understand what your users are doing and what might affect returning users on a more detailed granularity.
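+
+If you prefer to approximate retention directly in Log Analytics instead of the workbook, a sketch like the following computes, for each week, how many active users returned the following week. The workbook's own logic is more detailed, so expect the numbers to differ.
+
+```kusto
+// Sketch: week-over-week retention from custom events and page views
+let activity = (union customEvents, pageViews
+    | where timestamp > ago(90d)
+    | summarize by user_Id, week = startofweek(timestamp));
+activity
+| join kind=inner (activity | project user_Id, nextWeek = week) on user_Id
+| where nextWeek == week + 7d
+| summarize ReturnedUsers = dcount(user_Id) by week
+| join kind=inner (activity | summarize ActiveUsers = dcount(user_Id) by week) on week
+| project week, ActiveUsers, ReturnedUsers, RetentionRate = round(100.0 * ReturnedUsers / ActiveUsers, 1)
+| order by week asc
+```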
+
+For more information about the Retention workbook, see the section below.
+
+#### The retention workbook
+
+To use the retention workbook in Application Insights, navigate to the **Workbooks** pane, select **Public Templates** at the top, and locate the **User Retention Analysis** workbook listed under the **Usage** category.
++
+**Workbook capabilities:**
+
+* By default, retention shows all users who did anything and then came back and did anything else over a defined period. You can select different combinations of events to narrow the focus on specific user activities.
+
+* To add one or more filters on properties, select **Add Filters**. For example, you can focus on users in a particular country or region.
+
+* The **Overall Retention** chart shows a summary of user retention across the selected time period.
+
+* The grid shows the number of users retained. Each row represents a cohort of users who performed any event in the time period shown. Each cell in the row shows how many of that cohort returned at least once in a later period. Some users might return in more than one period.
+
+* The insights cards show the top five initiating events and the top five returned events. This information gives users a better understanding of their retention report.
+
+ :::image type="content" source="./media/usage-retention/retention-2.png" alt-text="Screenshot that shows the Retention workbook showing the User returned after number of weeks chart." lightbox="./media/usage-retention/retention-2.png":::
+
+#### Use business events to track retention
+
+You should measure events that represent significant business activities to get the most useful retention analysis.
+
+For more information and example code, see the section below.
+
+### Track user interactions with custom events
+
+To understand user interactions in your app, insert code lines to log custom events. These events track various user actions, like button selections, or important business events, such as purchases or game victories.
+
+You can also use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) to collect custom events.
+
+> [!TIP]
+> When you design each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
+
+In some cases, page views can represent useful events, but that isn't always the case. A user can open a product page without buying the product.
+
+With specific business events, you can chart your users' progress through your site. You can find out their preferences for different options and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
+
+Events can be logged from the client side of the app:
+
+```javascript
+appInsights.trackEvent({name: "incrementCount"});
+```
+
+Or events can be logged from the server side:
+
+```csharp
+var tc = new Microsoft.ApplicationInsights.TelemetryClient();
+tc.TrackEvent("CreatedAccount", new Dictionary<string,string> {"AccountType":account.Type}, null);
+...
+tc.TrackEvent("AddedItemToCart", new Dictionary<string,string> {"Item":item.Name}, null);
+...
+tc.TrackEvent("CompletedPurchase");
+```
+
+You can attach property values to these events so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
+
+Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties).
+
+#### Slice and dice events
+
+In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties.
++
+Whenever you're in any usage experience, select the **Open the last run query** icon to take you back to the underlying query.
++
+You can then modify the underlying query to get the kind of information you're looking for.
+
+Here's an example of an underlying query about page views. Go ahead and paste it directly into the query editor to test it out.
+
+```kusto
+// average pageView duration by name
+let timeGrain=5m;
+let dataset=pageViews
+// additional filters can be applied here
+| where timestamp > ago(1d)
+| where client_Type == "Browser" ;
+// calculate average pageView duration for all pageViews
+dataset
+| summarize avg(duration) by bin(timestamp, timeGrain)
+| extend pageView='Overall'
+// render result in a chart
+| render timechart
+```
+
+### Determine feature success with A/B testing
+
+If you're unsure which feature variant is more successful, release both and let different users access each variant. Measure the success of each variant, and then transition to a unified version.
+
+In this technique, you attach unique property values to all the telemetry sent by each version of your app. You can do it by defining properties in the active TelemetryContext. These default properties get included in every telemetry message sent by the application, including both custom messages and standard telemetry.
+
+In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions.
+
+To do this step, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer):
+
+```csharp
+// Telemetry initializer class
+public class MyTelemetryInitializer : ITelemetryInitializer
+{
+ // In this example, to differentiate versions, we use the value specified in the AssemblyInfo.cs
+ // for ASP.NET apps, or in your project file (.csproj) for the ASP.NET Core apps. Make sure that
+ // you set a different assembly version when you deploy your application for A/B testing.
+ static readonly string _version =
+ System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString();
+
+ public void Initialize(ITelemetry item)
+ {
+ item.Context.Component.Version = _version;
+ }
+}
+```
+
+#### [.NET Core](#tab/aspnetcore)
+
+For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, add a new telemetry initializer to the Dependency Injection service collection in the `Program.cs` class:
+
+```csharp
+using Microsoft.ApplicationInsights.Extensibility;
+
+builder.Services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
+```
+
+#### [.NET Framework 4.8](#tab/aspnet-framework)
+
+In the web app initializer, such as `Global.asax.cs`:
+
+```csharp
+protected void Application_Start()
+{
+ // ...
+ TelemetryConfiguration.Active.TelemetryInitializers
+ .Add(new MyTelemetryInitializer());
+}
+```
+++
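+
+Once the initializer is deployed, you can also compare versions directly in Log Analytics. A minimal sketch, assuming a hypothetical success event named `CompletedPurchase` like the one tracked in the earlier example:
+
+```kusto
+// Sketch: compare a success event across the versions stamped by the telemetry initializer
+customEvents
+| where timestamp > ago(7d)
+| where name == "CompletedPurchase"
+| summarize Purchases = count(), Users = dcount(user_Id) by application_Version
+| order by application_Version asc
+```
+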
+## Funnels - Discover how customers use your application
+
+Understanding the customer experience is of great importance to your business. If your application involves multiple stages, you need to know if customers are progressing through the entire process or ending the process at some point. The progression through a series of steps in a web application is known as a *funnel*. You can use Application Insights funnels to gain insights into your users and monitor step-by-step conversion rates.
+
+**Funnel features:**
+
+* If your app is sampled, you'll see a banner. Selecting it opens a context pane that explains how to turn off sampling.
+* Select a step to see more details on the right.
+* The historical conversion graph shows the conversion rates over the last 90 days.
+* Understand your users better by accessing the users tool. You can use filters in each step.
+
+### Create a funnel
+
+#### Prerequisites
+
+Before you create a funnel, decide on the question you want to answer. For example, you might want to know how many users view the home page, view a customer profile, and create a ticket.
+
+#### Get started
+
+To create a funnel:
+
+1. On the **Funnels** tab, select **Edit**.
+
+1. Choose your **Top Step**.
+
+ :::image type="content" source="./media/usage-funnels/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the Edit tab." lightbox="./media/usage-funnels/funnel.png":::
+
+1. To apply filters to the step, select **Add filters**. This option appears after you choose an item for the top step.
+
+1. Then choose your **Second Step** and so on.
+
+ > [!NOTE]
+ > Funnels are limited to a maximum of six steps.
+
+1. Select the **View** tab to see your funnel results.
+
+ :::image type="content" source="./media/usage-funnels/funnel-2.png" alt-text="Screenshot that shows the Funnels View tab that shows results from the top and second steps." lightbox="./media/usage-funnels/funnel-2.png":::
+
+1. To save your funnel to view at another time, select **Save** at the top. Use **Open** to open your saved funnels.
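+
+The Funnels pane calculates these conversion rates for you, but if you want to reproduce a simple two-step, session-based funnel in Log Analytics, a sketch like the following could work. The page and event names are hypothetical; substitute your own.
+
+```kusto
+// Sketch of a two-step funnel: sessions that viewed the home page and later raised a "CreateTicket" event
+let step1 = pageViews
+    | where timestamp > ago(14d) and name == "Home Page"    // hypothetical page name
+    | distinct session_Id;
+let step2 = customEvents
+    | where timestamp > ago(14d) and name == "CreateTicket" // hypothetical event name
+    | distinct session_Id;
+let step1Count = toscalar(step1 | count);
+let step2Count = toscalar(step1 | join kind=inner step2 on session_Id | count);
+print Step1Sessions = step1Count, Step2Sessions = step2Count,
+      ConversionRate = round(100.0 * step2Count / step1Count, 1)
+```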
+
+## User Flows - Analyze user navigation patterns
++
+The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like:
+
+* How do users move away from a page on your site?
+* What do users select on a page on your site?
+* Where are the places that users churn most from your site?
+* Are there places where users repeat the same action over and over?
+
+The User Flows tool starts from an initial custom event, exception, dependency, page view or request that you specify. From this initial event, User Flows shows the events that happened before and after user sessions. Lines of varying thickness show how many times users followed each path. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site.
+
+> [!NOTE]
+> Your Application Insights resource must contain page views or custom events to use the User Flows tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md).
+
+### Choose an initial event
++
+To begin answering questions with the User Flows tool, choose an initial custom event, exception, dependency, page view or request to serve as the starting point for the visualization:
+
+1. Select the link in the **What do users do after?** title or select **Edit**.
+1. Select a custom event, exception, dependency, page view or request from the **Initial event** dropdown list.
+1. Select **Create graph**.
+
+The **Step 1** column of the visualization shows what users did most frequently after the initial event. The items are ordered from top to bottom and from most to least frequent. The **Step 2** and subsequent columns show what users did next. The information creates a picture of all the ways that users moved through your site.
+
+By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from your site. You can increase the time range and change the balance of performance and accuracy for random sampling on the **Edit** menu.
+
+If some of the page views, custom events, and exceptions aren't relevant to you, select **X** on the nodes you want to hide. After you've selected the nodes you want to hide, select **Create graph**. To see all the nodes you've hidden, select **Edit** and look at the **Excluded events** section.
+
+If page views or custom events that you expect to see in the visualization are missing:
+
+* Check the **Excluded events** section on the **Edit** menu.
+* Use the plus buttons on **Others** nodes to include less-frequent events in the visualization.
+* If the page view or custom event you expect is sent infrequently by users, increase the time range of the visualization on the **Edit** menu.
+* Make sure the custom event, exception, dependency, page view or request you expect is set up to be collected by the Application Insights SDK in the source code of your site.
+
+If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdown lists above the visualization.
+
+### After users visit a page or feature, where do they go and what do they select?
++
+If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page.
+
+Open your site in a window next to the User Flows visualization. Compare your expectations of how users interact with the page to the list of events in the **Step 1** column. Often, a UI element on the page that seems insignificant to your team can be among the most used on the page. It can be a great starting point for design improvements to your site.
+
+If your initial event is a custom event, the first column shows what users did after they performed that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations.
+
+If your selected initial event is **Added Item to Shopping Cart**, for example, look to see if **Go to Checkout** and **Completed Purchase** appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design.
+
+### Where are the places that users churn most from your site?
+
+Watch for **Session Ended** nodes that appear high up in a column in the visualization, especially early in a flow. This positioning means many users probably churned from your site after they followed the preceding path of pages and UI interactions.
+
+Sometimes churn is expected. For example, it's expected after a user makes a purchase on an e-commerce site. But usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved.
+
+Keep in mind that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users might have interacted with your site in those ways after the User Flows tool says the session ended.
+
+### Are there places where users repeat the same action over and over?
+
+Look for a page view or custom event that's repeated by many users across subsequent steps in the visualization. This activity usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk edit functionality if you find users performing repetitive actions on each row of a table element.
+
+## Cohorts - Analyze a specific set of users, sessions, events, or operations
+
+A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in.
+
+### Cohorts vs basic filters
+
+You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them.
+
+You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future.
+
+> [!NOTE]
+> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools.
+
+### Example: Engaged users
+
+Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users.
+
+1. Select **Create a Cohort**.
+
+1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
+
+1. Select **Engaged Users -- by Days Used**.
+
+ There are three parameters for this cohort:
+ * **Activities**: Where you choose which events and page views count as usage.
+ * **Period**: The definition of a month.
+ * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged.
+
+1. Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days.
+
+ Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days (a query sketch follows these steps).
+
+1. Select **Save**.
+
+ > [!TIP]
+ > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort.
+
+1. Select **Back to Gallery**.
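+
+Under the hood, the cohort you just saved resolves to a set of user IDs. As a rough illustration only, assuming the default **Activities** selection of all custom events and page views, the equivalent Log Analytics query looks something like the following sketch:
+
+```KQL
+// Sketch: user IDs that sent any custom event or page view on at least
+// 5 distinct days during the past 28 days.
+union customEvents, pageViews
+| where timestamp > ago(28d)
+| summarize daysUsed = dcount(startofday(timestamp)) by user_Id
+| where daysUsed >= 5
+```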
+
+#### What can you do by using this cohort?
+
+Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**.
++
+Important points to notice:
+
+* You can't create this set through normal filters. The date logic is more advanced.
+* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days.
+
+These filters support more sophisticated questions that are impossible to express through the query builder. An example is *people who were engaged in the past 28 days. How did those same people behave over the past 60 days?*
+
+### Example: Events cohort
+
+You can also make cohorts of events. In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers *active usage* or a set related to a certain new feature.
+
+1. Select **Create a Cohort**.
+1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
+1. Select **Events Picker**.
+1. In the **Activities** dropdown box, select the events you want to be in the cohort.
+1. Save the cohort and give it a name.
+
+### Example: Active users where you modify a query
+
+The previous two cohorts were defined by using dropdown boxes. You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom.
+
+1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**.
+
+ :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png":::
+
+ There are three sections:
+
+ * **Markdown text**: Where you describe the cohort in more detail for other members on your team.
+ * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples.
+ * **Query**: Where you define the cohort by using an analytics query.
+
+ In the query section, you [write an analytics query](/azure/kusto/query). The query selects the specific set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query (a sketch of the effective query follows these steps). The data appears as a preview underneath the query in a table, so you can make sure your query is returning results.
+
+ > [!NOTE]
+ > If you don't see the query, resize the section to make it taller and reveal the query.
+
+1. Copy and paste the following text into the query editor:
+
+ ```KQL
+ union customEvents, pageViews
+ | where client_CountryOrRegion == "United Kingdom"
+ ```
+
+1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users.
+
+1. Save and name the cohort.
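+
+In other words, the effective query behind this cohort is roughly the following sketch; the final `summarize` line is the clause the Cohorts tool adds implicitly:
+
+```KQL
+union customEvents, pageViews
+| where client_CountryOrRegion == "United Kingdom"
+| summarize by user_Id  // added implicitly by the Cohorts tool
+```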
+
+## Impact Analysis - Discover how different properties influence conversion rates
+
+Impact Analysis discovers how any dimension of a page view, custom event, or request affects the usage of a different page view or custom event.
+
+One way to think of Impact is as the ultimate tool for settling arguments with someone on your team about how slowness in some aspect of your site is affecting whether users stick around. Users might tolerate some slowness, but Impact gives you insight into how best to balance optimization and performance to maximize user conversion.
+
+Analyzing performance is only a subset of Impact's capabilities. Impact supports custom events and dimensions, so you can easily answer questions like: *How does user browser choice correlate with different rates of conversion?*
+
+> [!NOTE]
+> Your Application Insights resource must contain page views or custom events to use the Impact analysis workbook. Learn how to [set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md). Also, because you're analyzing correlation, sample size matters.
+
+### Impact analysis workbook
+
+To use the Impact analysis workbook, in your Application Insights resource, go to **Usage** > **More** and select **User Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**.
++
+#### Use the workbook
++
+1. From the **Selected event** dropdown list, select an event.
+1. From the **analyze how its** dropdown list, select a metric.
+1. From the **Impacting event** dropdown list, select an event.
+1. To add a filter, use the **Add selected event filters** tab or the **Add impacting event filters** tab.
+
+### Is page load time affecting how many people convert on my page?
+
+To begin answering questions with the Impact workbook, choose an initial page view, custom event, or request.
+
+1. From the **Selected event** dropdown list, select an event.
+
+1. Leave the **analyze how its** dropdown list on the default selection of **Duration**. (In this context, **Duration** is an alias for **Page Load Time**.)
+
+1. From the **Impacting event** dropdown list, select a custom event. This event should correspond to a UI element on the page view you selected in step 1.
+
+ :::image type="content" source="./media/usage-impact/impact.png" alt-text="Screenshot that shows an example with the selected event as Home Page analyzed by duration." lightbox="./media/usage-impact/impact.png":::
+
+### What if I'm tracking page views or load times in custom ways?
+
+Impact supports both standard and custom properties and measurements. Use whatever you want. Instead of duration, use filters on the primary and secondary events to get more specific.
+
+### Do users from different countries or regions convert at different rates?
+
+1. From the **Selected event** dropdown list, select an event.
+
+1. From the **analyze how its** dropdown list, select **Country or region**.
+
+1. From the **Impacting event** dropdown list, select a custom event that corresponds to a UI element on the page view you chose in step 1.
+
+ :::image type="content" source="./media/usage-impact/regions.png" alt-text="Screenshot that shows an example with the selected event as GET analyzed by country and region." lightbox="./media/usage-impact/regions.png":::
+
+### How does the Impact analysis workbook calculate these conversion rates?
+
+Under the hood, the Impact analysis workbook relies on the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). Results are computed between -1 and 1. The coefficient -1 represents a negative linear correlation and 1 represents a positive linear correlation.
+
+The basic breakdown of how Impact analysis works is listed here:
+
+* Let *A* = the main page view, custom event, or request you select in the **Selected event** dropdown list.
+* Let *B* = the secondary page view or custom event you select in the **impacts the usage of** dropdown list.
+
+Impact looks at a sample of all the sessions from users in the selected time range. For each session, it looks for each occurrence of *A*.
+
+Sessions are then broken into two different kinds of *subsessions* based on one of two conditions:
+
+* A converted subsession consists of a session ending with a *B* event and encompasses all *A* events that occur prior to *B*.
+* An unconverted subsession occurs when all *A*s occur without a terminal *B*.
+
+How Impact is ultimately calculated varies based on whether we're analyzing by metric or by dimension. For metrics, all *A*s in a subsession are averaged. For dimensions, the value of each *A* contributes *1/N* to the value assigned to *B*, where *N* is the number of *A*s in the subsession.
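+
+The workbook's exact query isn't shown here, but the idea can be sketched in Log Analytics. The following hedged example uses placeholder event names (*Home Page* and *Clicked Buy*) and simply correlates each session's average page load time with whether the session converted; the workbook's real calculation additionally applies the subsession logic described above.
+
+```KQL
+// Rough sketch only: per-session average load time of a placeholder "Home Page" view,
+// correlated with whether the session also sent a placeholder "Clicked Buy" event.
+let pageLoads = pageViews
+    | where name == "Home Page"
+    | summarize avgDuration = avg(duration) by session_Id;
+let conversions = customEvents
+    | where name == "Clicked Buy"
+    | distinct session_Id
+    | extend converted = 1.0;
+pageLoads
+| join kind=leftouter (conversions) on session_Id
+| extend converted = coalesce(converted, 0.0)
+| summarize durations = make_list(avgDuration), outcomes = make_list(converted)
+| project correlation = series_pearson_correlation(durations, outcomes)
+```
+
+In this sketch, a coefficient near -1 would suggest that longer load times go together with fewer conversions.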
+
+## HEART - Five dimensions of customer experience
+
+This article describes how to enable and use the HEART workbook in Azure Monitor. The HEART workbook is based on the HEART measurement framework, which was originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software.
+
+### Overview
+
+HEART is an acronym that stands for happiness, engagement, adoption, retention, and task success. It helps product teams deliver better software by focusing on five dimensions of customer experience:
+
+* **Happiness**: Measure of user attitude
+* **Engagement**: Level of active user involvement
+* **Adoption**: Target audience penetration
+* **Retention**: Rate at which users return
+* **Task success**: Productivity empowerment
+
+These dimensions are measured independently, but they interact with each other.
++
+* Adoption, engagement, and retention form a user activity funnel. Only a portion of users who adopt the tool come back to use it.
+
+* Task success is the driver that progresses users down the funnel and moves them from adoption to retention.
+
+* Happiness is an outcome of the other dimensions and not a stand-alone measurement. Users who have progressed down the funnel and are showing a higher level of activity are ideally happier.
+
+### Get started
+
+#### Prerequisites
+
+* **Azure subscription**: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
+
+* **Application Insights resource**: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
+
+* **Click Analytics**: Set up the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md).
+
+* **Specific attributes**: Instrument the following attributes to calculate HEART metrics.
+
+ | Source | Attribute | Description |
+ |-|-|--|
+ | customEvents | session_Id | Unique session identifier |
+ | customEvents | appName | Unique Application Insights app identifier |
+ | customEvents | itemType | Category of customEvents record |
+ | customEvents | timestamp | Datetime of event |
+ | customEvents | operation_Id | Correlate telemetry events |
+ | customEvents | user_Id | Unique user identifier |
+ | customEvents ¹ | parentId | Name of feature |
+ | customEvents ¹ | pageName | Name of page |
+ | customEvents ¹ | actionType | Category of Click Analytics record |
+ | pageViews | user_AuthenticatedId | Unique authenticated user identifier |
+ | pageViews | session_Id | Unique session identifier |
+ | pageViews | appName | Unique Application Insights app identifier |
+ | pageViews | timestamp | Datetime of event |
+ | pageViews | operation_Id | Correlate telemetry events |
+ | pageViews | user_Id | Unique user identifier |
+
+* If you're setting up the authenticated user context, instrument the following attributes:
+
+| Source | Attribute | Description |
+|--|-|--|
+| customEvents | user_AuthenticatedId | Unique authenticated user identifier |
+
+**Footnotes**
+
+¹: To emit these attributes, use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm.
+
+>[!TIP]
+> To understand how to effectively use the Click Analytics plug-in, see [Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](javascript-feature-extensions.md#use-the-plug-in).
+
+#### Open the workbook
+
+You can find the workbook in the gallery under **Public Templates**. The workbook appears in the section **Product Analytics using the Click Analytics Plugin**.
++
+There are seven workbooks.
++
+You only have to interact with the main workbook, **HEART Analytics - All Sections**. This workbook contains the other six workbooks as tabs. You can also access the individual workbooks related to each tab through the gallery.
+
+#### Confirm that data is flowing
+
+To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab.
+
+> [!IMPORTANT]
+> Unless you [set the authenticated user context](./javascript-feature-extensions.md#optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
++
+If data isn't flowing as expected, this tab shows the specific attributes with issues.
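+
+If you want to spot-check the underlying telemetry yourself, a query along the following lines, run against the `customEvents` table in **Logs**, counts how many recent events carry the Click Analytics attributes the workbook expects. This is a hedged sketch; it assumes the plug-in surfaces these attributes through `customDimensions`.
+
+```KQL
+// Sketch: how many recent custom events carry each Click Analytics attribute.
+customEvents
+| where timestamp > ago(1d)
+| extend parentId = tostring(customDimensions.parentId),
+         pageName = tostring(customDimensions.pageName),
+         actionType = tostring(customDimensions.actionType)
+| summarize total = count(),
+            withParentId = countif(isnotempty(parentId)),
+            withPageName = countif(isnotempty(pageName)),
+            withActionType = countif(isnotempty(actionType))
+```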
++
+### Workbook structure
+
+The workbook shows metric trends for the HEART dimensions split over seven tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them.
+
+The tabs are:
+
+* **Summary**: Summarizes usage funnel metrics for a high-level view of visits, interactions, and repeat usage.
+* **Adoption**: Helps you understand the penetration among the target audience, acquisition velocity, and total user base.
+* **Engagement**: Shows frequency, depth, and breadth of usage.
+* **Retention**: Shows repeat usage.
+* **Task success**: Enables understanding of user flows and their time distributions.
+* **Happiness**: We recommend using a survey tool to measure customer satisfaction score (CSAT) over a five-point scale. On this tab, we've provided the likelihood of happiness via usage and performance metrics.
+* **Feature metrics**: Enables understanding of HEART metrics at feature granularity.
+
+> [!WARNING]
+> The HEART workbook is currently built on logs, so its metrics are effectively [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics is negatively affected by sampling and filtering.
+
+### How HEART dimensions are defined and measured
+
+#### Happiness
+
+Happiness is a user-reported dimension that measures how users feel about the product offered to them.
+
+A common approach to measure happiness is to ask users a CSAT question like: *How satisfied are you with this product?* Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals.
+
+Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources).
+
+#### Engagement
+
+Engagement is a measure of user activity. Specifically, it measures intentional user actions, such as clicks. Active usage can be broken down into three subdimensions:
+
+* **Activity frequency**: Measures how often a user interacts with the product. For example, users typically interact daily, weekly, or monthly.
+
+* **Activity breadth**: Measures the number of features users interact with over a specific time period. For example, users interacted with a total of five features in June 2021.
+
+* **Activity depth**: Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch.
+
+Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, which makes it an important metric to track. But for a product like a paycheck portal, measurement might make more sense at a monthly or weekly level.
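+
+As a rough illustration, activity breadth for a month could be approximated with a query like the following sketch. It assumes the Click Analytics `parentId` attribute, surfaced through `customDimensions`, identifies the feature; adapt the names to your own telemetry.
+
+```KQL
+// Sketch: distinct features each user touched during June 2021.
+customEvents
+| where timestamp between (datetime(2021-06-01) .. datetime(2021-07-01))
+| extend feature = tostring(customDimensions.parentId)
+| where isnotempty(feature)
+| summarize featuresUsed = dcount(feature) by user_Id
+```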
+
+>[!IMPORTANT]
+>A user who performs an intentional action, such as clicking a button or typing an input, is counted as an active user. For this reason, engagement metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
+
+#### Adoption
+
+Adoption enables understanding of penetration among the relevant users, who you're gaining as your user base, and how you're gaining them. Adoption metrics are useful for measuring:
+
+* Newly released products.
+* Newly updated products.
+* Marketing campaigns.
+
+#### Retention
+
+A retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics.
+
+| Metric | Definition | Question answered |
+|-|-|-|
+| Retained users | Count of active users who were also active the previous period | How many users are staying engaged with the product? |
+| Retention | Proportion of active users from the previous period who are also active this period | What percent of users are staying engaged with the product? |
+
+>[!IMPORTANT]
+>Because active users must have at least one telemetry event with an action type, retention metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
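+
+With that plug-in in place, a retained-user count for a 28-day reporting period could be approximated with a query like the following sketch. The `actionType` filter reflects the requirement above and assumes the attribute is surfaced through `customDimensions`.
+
+```KQL
+// Sketch: users active (with an actionType) in the current 28-day period
+// who were also active in the previous 28-day period.
+let currentPeriod = customEvents
+    | where timestamp > ago(28d)
+    | where isnotempty(tostring(customDimensions.actionType))
+    | distinct user_Id;
+let previousPeriod = customEvents
+    | where timestamp between (ago(56d) .. ago(28d))
+    | where isnotempty(tostring(customDimensions.actionType))
+    | distinct user_Id;
+currentPeriod
+| join kind=inner (previousPeriod) on user_Id
+| summarize retainedUsers = dcount(user_Id)
+```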
+
+#### Task success
+
+Task success tracks whether users can do a task efficiently and effectively by using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include:
+
+* Adding items to a cart and then completing a purchase.
+* Searching a keyword and then selecting a result.
+* Starting a new account and then completing account registration.
+
+A successful task meets three requirements:
+
+* **Expected task flow**: The intended task flow of the feature was completed by the user and aligns with the expected task flow.
+* **High performance**: The intended functionality of the feature was accomplished in a reasonable amount of time.
+* **High reliability**: The intended functionality of the feature was accomplished without failure.
+
+A task is considered unsuccessful if any of the preceding requirements isn't met.
+
+>[!IMPORTANT]
+>Task success metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
+
+Set up a custom task by using the following parameters.
+
+| Parameter | Description |
+|||
+| First step | The feature that starts the task. In the cart/purchase example, **Adding items to a cart** is the first step. |
+| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint are considered a failure. Not all tasks necessarily have a time constraint. For such tasks, select **No Time Expectation**. |
+| Last step | The feature that completes the task. In the cart/purchase example, **Purchasing items from the cart** is the last step. |
+
+## Frequently asked questions
+
+### Does the initial event represent the first time the event appears in a session or any time it appears in a session?
+
+The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the **Step 1** column only shows how users behave after the *first* instance of an initial event, not all instances.
+
+### Some of the nodes in my visualization have a level that's too high. How can I get more detailed nodes?
+
+Use the **Split by** options on the **Edit** menu:
+
+1. Select the event you want to break down on the **Event** menu.
+
+1. Select a dimension on the **Dimension** menu. For example, if you have an event called **Button Clicked**, try a custom property called **Button Name**.
+
+### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results?
+
+Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom` (a query sketch follows this list):
+
+* The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. If you split by country or region, you likely see many countries and regions.
+
+* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom.
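+
+As a rough sketch, the cohort behaves like the following query; the plain filter, by contrast, keeps only the rows where `client_CountryOrRegion == "United Kingdom"`.
+
+```KQL
+// Sketch of the cohort's semantics: every event from any user who sent at
+// least one event from the United Kingdom in the time range.
+let ukUsers = union customEvents, pageViews
+    | where client_CountryOrRegion == "United Kingdom"
+    | distinct user_Id;
+union customEvents, pageViews
+| where user_Id in (ukUsers)
+```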
+
+### How do I view the data at different grains (daily, monthly, or weekly)?
+
+You can select the **Date Grain** filter to change the grain. The filter is available across all the dimension tabs.
++
+### How do I access insights from my application that aren't available on the HEART workbooks?
+
+You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, under the **Monitoring** section, select **Logs** and query the `customEvents` table. Some of the Click Analytics attributes are contained within the `customDimensions` field. A sample query is shown here.
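+
+The following is a hedged example of such a query; it assumes the Click Analytics plug-in is emitting the `actionType`, `pageName`, and `parentId` attributes.
+
+```KQL
+// Sketch: recent Click Analytics events with the plug-in attributes pulled
+// out of customDimensions for easier reading.
+customEvents
+| where timestamp > ago(7d)
+| extend actionType = tostring(customDimensions.actionType),
+         pageName = tostring(customDimensions.pageName),
+         parentId = tostring(customDimensions.parentId)
+| where isnotempty(actionType)
+| project timestamp, name, user_Id, session_Id, pageName, parentId, actionType
+| take 100
+```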
++
+To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../logs/data-platform-logs.md).
+
+### Can I edit visuals in the workbook?
+
+Yes. When you select the public template of the workbook:
+
+1. Select **Edit** and make your changes.
+
+ :::image type="content" source="media/usage-overview/workbook-edit-faq.png" alt-text="Screenshot that shows the Edit button in the upper-left corner of the workbook template.":::
+
+1. After you make your changes, select **Done Editing**, and then select the **Save** icon.
+
+ :::image type="content" source="media/usage-overview/workbook-save-faq.png" alt-text="Screenshot that shows the Save icon at the top of the workbook template that becomes available after you make edits.":::
+
+1. To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab. A copy of your customized workbook appears there. You can make any further changes you want in this copy.
+
+ :::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot that shows the Workbooks tab next to the Public Templates tab, where the edited copy of the workbook is located.":::
+
+For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md).
+
+## Next steps
+
+* Check out the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection plug-in.
+* Learn more about the [Google HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf).
+* To learn more about workbooks, see the [Workbooks overview](../visualize/workbooks-overview.md).
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
Previously updated : 05/31/2024 Last updated : 07/24/2024 # Azure Monitor workspace
Data stored in the Azure Monitor Workspace is handled in accordance with all sta
When you create a new Azure Monitor workspace, you provide a region which sets the location in which the data is stored. Currently Azure Monitor Workspace is available in the below regions.
-|Geo|Regions|Geo|Regions|Geo|Regions|Geo|Regions|
-|||||||||
-|Africa|South Africa North|Asia Pacific|East Asia, Southeast Asia|Australia|Australia Central, Australia East, Australia Southeast|Brazil|Brazil South, Brazil Southeast|
-|Canada|Canada Central, Canada East|Europe|North Europe, West Europe|France|France Central, France South|Germany|Germany West Central|
-|India|Central India, South India|Israel|Israel Central|Italy|Italy North|Japan|Japan East, Japan West|
-|Korea|Korea Central, Korea South|Norway|Norway East, Norway West|Spain|Spain Central|Sweden|Sweden South, Sweden Central|
-|Switzerland|Switzerland North, Switzerland West|UAE|UAE North|UK|UK South, UK West|US|Central US, East US, East US 2, South Central US, West Central US, West US, West US 2, West US 3|
-|US Government|USGov Virginia, USGov Texas|||||||
+|Geo|Regions|
+|||
+|Africa|South Africa North|
+|Asia Pacific|East Asia, Southeast Asia|
+|Australia|Australia Central, Australia East, Australia Southeast|
+|Brazil|Brazil South, Brazil Southeast|
+|Canada|Canada Central, Canada East|
+|Europe|North Europe, West Europe|
+|France|France Central, France South|
+|Germany|Germany West Central|
+|India|Central India, South India|
+|Israel|Israel Central|
+|Italy|Italy North|
+|Japan|Japan East, Japan West|
+|Korea|Korea Central, Korea South|
+|Norway|Norway East, Norway West|
+|Spain|Spain Central|
+|Sweden|Sweden South, Sweden Central|
+|Switzerland|Switzerland North, Switzerland West|
+|UAE|UAE North|
+|UK|UK South, UK West|
+|US|Central US, East US, East US 2, South Central US, West Central US, West US, West US 2, West US 3|
+|US Government|USGov Virginia, USGov Texas|
+ If you have clusters in regions where Azure Monitor Workspace is not yet available, you can select another region in the same geography.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
|Containers|[Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](containers/prometheus-metrics-scrape-configuration.md)|[Azure Monitor Managed Prometheus] Docs for pod annotation scraping through configmap| |Essentials|[Custom metrics in Azure Monitor (preview)](essentials/metrics-custom-overview.md)|Article refreshed and updated| |General|[Disable monitoring of your Kubernetes cluster](containers/kubernetes-monitoring-disable.md)|New article to consolidate process for all container configurations and for both Prometheus and Container insights.|
-|Logs|[ Best practices for Azure Monitor Logs](best-practices-logs.md)|Dedicated clusters are now available in all commitment tiers, with a minimum daily ingestion of 100 GB.|
+|Logs|[Best practices for Azure Monitor Logs](best-practices-logs.md)|Dedicated clusters are now available in all commitment tiers, with a minimum daily ingestion of 100 GB.|
|Logs|[Enhance data and service resilience in Azure Monitor Logs with availability zones](logs/availability-zones.md)|Availability zones are now supported in the Israel Central, Poland Central, and Italy North regions.| |Virtual-Machines|[Dependency Agent](vm/vminsights-dependency-agent-maintenance.md)|VM Insights Dependency Agent now supports RHEL 8.6 Linux.| |Visualizations|[Composite bar renderer](visualize/workbooks-composite-bar.md)|We've edited the Workbooks content to make some features and functionality easier to find based on customer feedback. We've also removed legacy content.|
General|[What's new in Azure Monitor documentation](whats-new.md)| Subscribe to
Application-Insights|[Filter and preprocess telemetry in the Application Insights SDK](app/api-filtering-sampling.md)|An Azure Monitor Telemetry Data Types Reference has been added for quick reference.| Application-Insights|[Add and modify OpenTelemetry](app/opentelemetry-add-modify.md)|We've simplified the OpenTelemetry onboarding process by moving instructions to add and modify telemetry in this new document.| Application-Insights|[Application Map: Triage distributed applications](app/app-map.md)|Application Map Intelligent View has reached general availability. Enjoy this powerful tool that harnesses machine learning to aid in service health investigations.|
-Application-Insights|[Usage analysis with Application Insights](app/usage-overview.md)|Code samples have been updated for the latest versions of .NET.|
+Application-Insights|[Usage analysis with Application Insights](app/usage.md)|Code samples have been updated for the latest versions of .NET.|
Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|All JavaScript SDK documentation has been updated and simplified, including documentation for feature and framework extensions.| Autoscale|[Use autoscale actions to send email and webhook alert notifications in Azure Monitor](autoscale/autoscale-webhook-email.md)|Article updated and refreshed| Containers|[Query logs from Container insights](containers/container-insights-log-query.md#container-logs)|New section: Container logs, with sample queries|
Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Log ale
Alerts|[Monitor Azure AD B2C with Azure Monitor](/azure/active-directory-b2c/azure-monitor)|Articles on action groups have been updated.| Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Alert rules that use action groups support custom properties to add custom information to the alert notification payload.| Application-Insights|[Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](app/javascript-feature-extensions.md)|Most of our JavaScript SDK documentation has been updated and overhauled.|
-Application-Insights|[Analyze product usage with HEART](app/usage-heart.md)|Updated and overhauled HEART framework documentation.|
+Application-Insights|[Analyze product usage with HEART](app/usage.md#heartfive-dimensions-of-customer-experience)|Updated and overhauled HEART framework documentation.|
Application-Insights|[Dependency tracking in Application Insights](app/asp-net-dependencies.md)|All new documentation supports the Azure Monitor OpenTelemetry Distro public preview release announced on May 10, 2023. [Public Preview: Azure Monitor OpenTelemetry Distro for ASP.NET Core, JavaScript (Node.js), Python](https://azure.microsoft.com/updates/public-preview-azure-monitor-opentelemetry-distro-for-aspnet-core-javascript-nodejs-python)| Application-Insights|[Application Monitoring for Azure App Service and Java](app/azure-web-apps-java.md)|Added CATALINA_OPTS for Tomcat.| Essentials|[Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity (preview)](essentials/prometheus-remote-write-azure-ad-pod-identity.md)|New article: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Azure Active Directory pod identity|
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
Previously updated : 07/18/2024 Last updated : 07/23/2024 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of snapshots per volume | 255 | No | | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No |
-| Maximum size of a single capacity pool | 2,048 TiB | Yes |
+| Maximum size of a single capacity pool | 2,048 TiB | No |
| Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No |
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
- A volume's capacity consumption counts against its pool's provisioned capacity. - A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes. -- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 and 1 PiB.
+- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
## Large volumes
azure-resource-manager Bicep Core Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md
Title: Bicep warnings and error codes
description: Lists the warnings and error codes. Previously updated : 07/23/2024 Last updated : 07/24/2024
-# Bicep warning and error codes
+# Bicep core diagnostics
-If you need more information about a particular warning or error code, select the **Feedback** button in the upper right corner of the page and specify the code.
+If you need more information about a particular diagnostic code, select the **Feedback** button in the upper right corner of the page and specify the code.
| Code | Description | ||-|
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
If your move requires setting up new dependent resources, you'll experience an i
Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource.
+> [!NOTE]
+> You can't move Azure resources to another resource group or another subscription if there's a read-only lock, whether in the source or in the destination.
+ ## Changed resource ID When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path.
azure-signalr Signalr Howto Configure Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-configure-application-firewall.md
+
+ Title: SignalR Application Firewall (Preview)
+description: An introduction about why and how to set up Application Firewall for Azure SignalR service
++++ Last updated : 07/10/2024++
+# Application Firewall for Azure SignalR Service
+
+The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do:
+
+1. It does not replace authentication. The firewall operates behind the client connection authentication layer.
+2. It is not related to network layer access control.
+
+## What Does the Application Firewall Do?
+
+The Application Firewall consists of various rule lists. Currently, there is a rule list called *Client Connection Count Rules*. Future updates will support more rule lists to control aspects like connection lifetime and message throughput.
+
+This guideline is divided into three parts:
+1. Introduction to different application firewall rules.
+2. Instructions on configuring the rules using the Portal or Bicep on the SignalR service side.
+3. Steps to configure the token on the server side.
+
+## Prerequisites
+
+* An Azure SignalR Service in [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/).
+
+## Client Connection Count Rules
+Client Connection Count Rules restrict concurrent client connections. When a client attempts to establish a new connection, the rules are checked **sequentially**. If any rule is violated, the connection is rejected with a status code 429.
+
+ #### ThrottleByUserIdRule
+ This rule limits the concurrent connections of a user. For example, if a user opens multiple browser tabs or logs in using different devices, you can use this rule to restrict the number of concurrent connections for that user.
+
+ > [!NOTE]
+ > * The **UserId** must exist in the access token for this rule to work. Refer to [Configure access token](#configure-access-token).
+
+
+ #### ThrottleByJwtSignatureRule
+ This rule limits the concurrent connections of the same token to prevent malicious users from reusing tokens to establish infinite connections, which can exhaust connection quota.
+
+ > [!NOTE]
+ > * It's not guaranteed by default that tokens generated by the SDK are different each time. Though each token contains a timestamp, this timestamp might be the same if many tokens are generated within a few seconds. To avoid identical tokens, insert a random claim into the token claims. Refer to [Configure access token](#configure-access-token).
++
+ #### ThrottleByJwtCustomClaimRule
+
+ For more advanced scenarios, connections can be grouped according to a custom claim. Connections with the same claim are aggregated for the check. For example, you could add a **ThrottleByJwtCustomClaimRule** to allow 5 concurrent connections with the custom claim name *freeUser*.
+
+ > [!NOTE]
+ > * The rule applies to all claims with a certain claim name. The connection count aggregation is on the same claim (including claim name and claim value). The *ThrottleByUserIdRule* is a special case of this rule, applying to all connections with the userIdentity claim.
+
+
+> [!WARNING]
+> * **Avoid using too aggressive maxCount**. Client connections may close without completing the TCP handshake. SignalR service can't detect those "half-closed" connections immediately. The connection is taken as active until the heartbeat failure. Therefore, aggressive throttling strategies might unexpectedly throttle clients. A smoother approach is to **leave some buffer** for the connection count, for example: double the *maxCount*.
+++
+## Set up Application Firewall
+
+# [Portal](#tab/Portal)
+To use Application Firewall, navigate to the SignalR **Application Firewall** blade on the Azure portal and click **Add** to add a rule.
+
+![Screenshot of adding application firewall rules for Azure SignalR on Portal.](./media/signalr-howto-config-application-firewall/signalr-add-application-firewall-rule.png "Add rule")
+
+# [Bicep](#tab/Bicep)
+
+Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep:
+
+```bicep
+@description('The name for your SignalR service')
+param resourceName string = 'contoso'
+
+resource signalr 'Microsoft.SignalRService/signalr@2024-04-01-preview' = {
+ name: resourceName
+ properties: {
+ applicationFirewall:{
+ clientConnectionCountRules:[
+ // Add or remove rules as needed
+ {
+ // This rule will be skipped if no userId is set
+ type: 'ThrottleByUserIdRule'
+ maxCount: 5
+ }
+ {
+ type: 'ThrottleByJwtSignatureRule'
+ maxCount: 10
+ }
+ {
+ // This rule will be skipped if no freeUser claim is set
+ type: 'ThrottleByJwtCustomClaimRule'
+ maxCount: 10
+ claimName: 'freeUser'
+ }
+ {
+ // This rule will be skipped if no paidUser claim is set
+ type: 'ThrottleByJwtCustomClaimRule'
+ maxCount: 100
+ claimName: 'paidUser'
+ }
+ ]
+ }
+ }
+}
+
+```
+
+Deploy the Bicep file using Azure CLI
+ ```azurecli
+ az deployment group create --resource-group MyResourceGroup --template-file main.bicep
+ ```
+
+-
+++
+## Configure access token
+The application firewall rules only take effect when the access token contains the corresponding claim. A rule is **skipped** if the connection does not have the corresponding claim.
+
+The following example shows how to add a userId or custom claim to the access token in **Default Mode**:
+
+```cs
+services.AddSignalR().AddAzureSignalR(options =>
+ {
+ // Add necessary claims according to your rules.
+ options.ClaimsProvider = context => new[]
+ {
+ // Add UserId: Used in ThrottleByUserIdRule
+ new Claim(ClaimTypes.NameIdentifier, context.Request.Query["username"]),
+
+ // Add unique claim: Ensure uniqueness when using ThrottleByJwtSignatureRule.
+ // The token name is not important. You could change it as you like.
+ new Claim("uniqueToken", Guid.NewGuid().ToString()),
+
+ // Custom claim: Used in ThrottleByJwtCustomClaimRule
+ new Claim("<Custom Claim Name>", "<Custom Claim Value>"),
+ // Custom claim example
+ new Claim("freeUser", context.Request.Query["username"]),
+ };
+ });
+```
+The logic for **Serverless Mode** is similar.
+
+For more details, refer to [Client negotiation](signalr-concept-client-negotiation.md#what-can-you-do-during-negotiation).
+++++
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
You need an Azure account in an Azure subscription that adheres to one of the fo
- **Service:** All services > Azure VMware Solution - **Resource:** General question - **Summary:** Need capacity
- - **Problem type:** Deployment
- - **Problem subtype:** AVS Quota request
+ - **Problem type:** AVS Quota request
+
+ > [!NOTE]
+ > If the *Problem Type* is not visible in the short list offered, select **None of the Above**. *AVS Quota requests* will be in the offered list of *Problem Types*.
1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
azure-web-pubsub Howto Configure Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-configure-application-firewall.md
+
+ Title: Web PubSub Application Firewall (Preview)
+description: An introduction about why and how to set up Application Firewall for Azure Web PubSub service
++++ Last updated : 07/10/2024++
+# Application Firewall for Azure Web PubSub Service
+
+The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do:
+
+1. It does not replace authentication. The firewall operates behind the client connection authentication layer.
+2. It is not related to network layer access control.
+
+## What Does the Application Firewall Do?
+
+The Application Firewall consists of various rule lists. Currently, there is a rule list called *Client Connection Count Rules*. Future updates will support more rule lists to control aspects like connection lifetime and message throughput.
+
+This guideline is divided into three parts:
+1. Introduction to different application firewall rules.
+2. Instructions on configuring the rules using the Portal or Bicep on the Web PubSub service side.
+3. Steps to configure the token on the server side.
+
+## Prerequisites
+* A Web PubSub resource in [premium tier](https://azure.microsoft.com/pricing/details/web-pubsub/).
+
+## Client Connection Count Rules
+Client Connection Count Rules restrict concurrent client connections. When a client attempts to establish a new connection, the rules are checked **sequentially**. If any rule is violated, the connection is rejected with a status code 429.
+
+ #### ThrottleByUserIdRule
+ This rule limits the concurrent connections of a user. For example, if a user opens multiple browser tabs or logs in using different devices, you can use this rule to restrict the number of concurrent connections for that user.
+
+ > [!NOTE]
+ > * The UserId must exist in the access token for this rule to work. Refer to [Configure access token](#configure-access-token).
+
+
+ #### ThrottleByJwtSignatureRule
+ This rule limits the concurrent connections of the same token to prevent malicious users from reusing tokens to establish infinite connections, which can exhaust connection quota.
+
+ > [!NOTE]
+ > * It's not guaranteed by default that tokens generated by the SDK are different each time. Though each token contains a timestamp, this timestamp might be the same if many tokens are generated within a few seconds. To avoid identical tokens, insert a random claim into the token claims. Refer to [Configure access token](#configure-access-token).
++
+> [!WARNING]
+> * **Avoid using too aggressive maxCount**. Client connections may close without completing the TCP handshake. Web PubSub service can't detect those "half-closed" connections immediately. The connection is taken as active until the heartbeat failure. Therefore, aggressive throttling strategies might unexpectedly throttle clients. A smoother approach is to **leave some buffer** for the connection count, for example: double the *maxCount*.
+++
+## Set up Application Firewall
+
+# [Portal](#tab/Portal)
+To use Application Firewall, navigate to the Web PubSub **Application Firewall** blade on the Azure portal and click **Add** to add a rule.
+
+![Screenshot of adding application firewall rules for Azure Web PubSub on Portal.](./media/howto-config-application-firewall/add-application-firewall-rule.png "Add rule")
+
+# [Bicep](#tab/Bicep)
+
+Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep:
+
+```bicep
+@description('The name for your Web PubSub service')
+param resourceName string = 'contoso'
+
+resource webpubsub 'Microsoft.SignalRService/webpubsub@2024-04-01-preview' = {
+ name: resourceName
+ properties: {
+ applicationFirewall:{
+ clientConnectionCountRules:[
+ // Add or remove rules as needed
+ {
+ // This rule will be skipped if no userId is set
+ type: 'ThrottleByUserIdRule'
+ maxCount: 5
+ }
+ {
+ type: 'ThrottleByJwtSignatureRule'
+ maxCount: 10
+ }
+ ]
+ }
+ }
+}
+
+```
+
+Deploy the Bicep file using Azure CLI
+ ```azurecli
+ az deployment group create --resource-group MyResourceGroup --template-file main.bicep
+ ```
+
+-
+++
+## Configure access token
+
+The application firewall rules only take effect when the access token contains the corresponding claim. A rule is **skipped** if the connection does not have the corresponding claim. *userId* and *roles* are currently supported claims in the SDK.
+
+The following example shows how to add a userId and insert a unique placeholder role in the access token:
+
+```cs
+// The GUID role won't have any effect, but it ensures this token's uniqueness when using the ThrottleByJwtSignatureRule.
+var url = service.GetClientAccessUri(userId: "user1", roles: new string[] { "webpubsub.joinLeaveGroup.group1", Guid.NewGuid().ToString() });
+```
+
+For more details, refer to [Client negotiation](howto-generate-client-access-url.md#generate-from-service-sdk).
+++++
backup Azure Kubernetes Service Cluster Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.
+- If you're trying to restore a backup stored in the Vault tier, you need to provide a storage account as a staging location. Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one vault to the staging storage account across tenants. Ensure that the staging storage account for the restore has the **AllowCrossTenantReplication** property set to **true**.
+ For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md). ## Restore the AKS clusters
To restore the backed-up AKS cluster, follow these steps:
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/start-kubernetes-cluster-restore.png" alt-text="Screenshot shows how to start the restore process.":::
-2. On the next page, click **Select backup instance**, and then select the *instance* that you want to restore.
+2. On the next page, select **Select backup instance**, and then select the *instance* that you want to restore.
If the instance is available in both *Primary* and *Secondary Region*, select the *region to restore* too, and then select **Continue**.
To restore the backed-up AKS cluster, follow these steps:
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/select-resources-to-restore-page.png" alt-text="Screenshot shows the Select Resources to restore page.":::
-6. If you seleted a recovery point for restore from *Vault-standard datastore*, then provide a *snapshot resource group* and *storage account* as the staging location.
+6. If you selected a recovery point for restore from *Vault-standard datastore*, then provide a *snapshot resource group* and *storage account* as the staging location.
:::image type="content" source="./media/azure-kubernetes-service-cluster-restore/restore-parameters.png" alt-text="Screenshot shows the parameters to add for restore from Vault-standard storage.":::
backup Backup Azure Backup Vault Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-vault-troubleshoot.md
+
+ Title: Troubleshoot Azure Backup Vault
+description: Symptoms, causes, and resolutions of Azure Backup Vault related operations.
+ Last updated : 07/18/2024+++++
+# Troubleshoot Azure Backup Vault related operations
+
+This article provides troubleshooting steps that help you resolve Azure Backup Vault management errors.
+
+## Common user errors
+
+#### Error code: UserErrorSystemIdentityNotEnabledWithVault
+
+**Possible Cause:** A Backup Vault is created with System Identity enabled by default. This error appears when the System Identity of the Backup Vault is disabled and a backup-related operation fails as a result.
+
+**Resolution:** To resolve this error, enable the System Identity of the Backup Vault and reassign all the necessary roles to it. Alternatively, use a User Identity in its place with all the required roles assigned, and update the Managed Identity for all the Backup Instances that use the now disabled System Identity.
+
+#### Error code: UserErrorUserIdentityNotFoundOrNotAssociatedWithVault
+
+**Possible Cause:** Backup Instances can be created with a User Identity that has all the required roles assigned to it. In addition, a User Identity can also be used for operations like encryption with a customer-managed key. This error appears when that User Identity is deleted or is no longer associated with the Backup Vault.
+
+**Resolution:** To resolve this error, reassign the same User Identity to the Backup Vault, or assign an alternate User Identity and update the Backup Instance to use it. Otherwise, enable the System Identity of the Backup Vault, update the Backup Instance, and assign all the necessary roles to the identity.
+
+## Next steps
+
+- [About Azure Backup Vault](create-manage-backup-vault.md)
backup Backup Azure Dataprotection Use Rest Api Backup Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-backup-blobs.md
Title: Back up blobs in a storage account using Azure Data Protection REST API. description: In this article, learn how to configure, initiate, and manage backup operations of blobs using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 7c244b94-d736-40a8-b94d-c72077080bbe
The following is the request body to configure backup for all blobs within a sto
} } ```
-To configure backup with vaulted backup (preview) enabled, refer the below request body.
+
+To configure backup with vaulted backup enabled, refer to the following request body.
```json {backupInstanceDataSourceType is Microsoft.Storage/storageAccounts/blobServices
The [request body](#prepare-the-request-to-configure-blob-backup) that you prepa
} } ```
-#### Example request body for vaulted backup (preview)
+
+#### Example request body for vaulted backup
```json {
backup Backup Azure Dataprotection Use Rest Api Create Update Blob Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-create-update-blob-policy.md
Title: Create Azure Backup policies for blobs using data protection REST API description: In this article, you'll learn how to create and manage backup policies for blobs using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 472d6a4f-7914-454b-b8e4-062e8b556de3
The policy says:
} ```
-To configure a backup policy with the vaulted backup (preview), use the following JSON script:
+To configure a backup policy with the vaulted backup, use the following JSON script:
```json {
backup Backup Azure Dataprotection Use Rest Api Restore Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-dataprotection-use-rest-api-restore-blobs.md
Title: Restore blobs in a storage account using Azure Data Protection REST API description: In this article, learn how to restore blobs of a storage account using REST API. Previously updated : 05/30/2024 Last updated : 07/24/2024 ms.assetid: 9b8d21e6-3e23-4345-bb2b-e21040996afd
To illustrate the restoration steps in this article, we'll refer to blobs in a s
## Prepare for Azure Blobs restore
-You can now do the restore operation for *operational backup* and *vaulted backup (preview)* for Azure Blobs.
+You can now perform the restore operation for *operational backup* and *vaulted backup* of Azure Blobs.
**Choose a backup tier**:
The key points to remember in this scenario are:
} ```
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
[!INCLUDE [blob-vaulted-backup-restore-restapi.md](../../includes/blob-vaulted-backup-restore-restapi.md)]
backup Backup Azure Mysql Flexible Server Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mysql-flexible-server-restore.md
This article describes how to restore the Azure Database for MySQL - Flexible Se
Learn more about the [supported scenarios, considerations, and limitations](backup-azure-mysql-flexible-server-support-matrix.md).
+## Prerequisites
+
+Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one storage account to another across tenants. Ensure that the target storage account for the restore has the **AllowCrossTenantReplication** property set to **true**.
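As an illustrative sketch (the storage account and resource group names are placeholders), you can check and set this property with Azure CLI:

```azurecli
# Check whether cross-tenant replication is allowed on the target storage account.
az storage account show \
  --name mytargetstorageaccount \
  --resource-group myResourceGroup \
  --query allowCrossTenantReplication

# Enable it if the property is false or not set.
az storage account update \
  --name mytargetstorageaccount \
  --resource-group myResourceGroup \
  --allow-cross-tenant-replication true
```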
+ ## Restore MySQL - Flexible Server database To restore the database, follow these steps:
To restore the database, follow these steps:
## Next steps -- [Back up the Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server.md)
+- [Back up the Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server.md)
backup Backup Azure Troubleshoot Blob Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-blob-backup.md
Title: Troubleshoot Blob backup and restore issues description: In this article, learn about symptoms, causes, and resolutions of Azure Backup failures related to Blob backup and restore. Previously updated : 11/22/2023 Last updated : 07/24/2024
backup Backup Blobs Storage Account Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-arm-template.md
Title: Quickstart - Back up blobs in a storage account via ARM template using Az
description: Learn how to back up blobs in a storage account with an ARM template. Previously updated : 05/30/2024 Last updated : 07/24/2024
-# Quickstart: Back up a storage account with Blob data using an ARM template (preview)
+# Quickstart: Back up a storage account with Blob data using an ARM template
This quickstart describes how to back up a storage account with Azure Blob data with a vaulted backup policy using an ARM template.
backup Backup Blobs Storage Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-bicep.md
Title: Quickstart - Back up blobs in a storage account
description: Learn how to back up blobs in a storage account with a Bicep template. Previously updated : 05/30/2024 Last updated : 07/24/2024
-# Quickstart: Back up a storage account with Blob data using Azure Backup via a Bicep template (preview)
+# Quickstart: Back up a storage account with Blob data using Azure Backup via a Bicep template
This quickstart describes how to back up a storage account with Azure Blob data with a vaulted backup policy using a Bicep template.
backup Backup Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-cli.md
Title: Back up Azure Blobs using Azure CLI
description: Learn how to back up Azure Blobs using Azure CLI. Previously updated : 05/30/2024 Last updated : 07/24/2024
After creating a vault, let's create a Backup policy to protect Azure Blobs in a
## Create a backup policy
-You can create a backup policy for *operational backup* and *vaulted backup (preview)* for Azure Blobs using Azure CLI.
+You can create a backup policy for *operational backup* and *vaulted backup* for Azure Blobs using Azure CLI.
**Choose a backup tier**:
az dataprotection backup-policy create -g testBkpVaultRG --vault-name TestBkpVau
} ```
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
[!INCLUDE [blob-backup-create-policy-cli.md](../../includes/blob-backup-create-policy-cli.md)]
backup Backup Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md
Title: Back up Azure blobs within a storage account using Azure PowerShell
description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell. Previously updated : 05/30/2024 Last updated : 07/24/2024
blobBkpPolicy Microsoft.DataProtection/backupVaults/backupPolicies
$blobBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "blobBkpPolicy" ```
-# [Vaulted Backup (preview)](#tab/vaulted-backup)
+# [Vaulted Backup](#tab/vaulted-backup)
[!INCLUDE [blob-vaulted-backup-create-policy-ps.md](../../includes/blob-vaulted-backup-create-policy-ps.md)]
blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/ba
> [!IMPORTANT] > Once a storage account is configured for blobs backup, a few capabilities are affected, such as change feed and delete lock. [Learn more](blob-backup-configure-manage.md#effects-on-backed-up-storage-accounts).
-# [Vaulted Backup (preview)](#tab/vaulted-backup)
+# [Vaulted Backup](#tab/vaulted-backup)
[!INCLUDE [blob-vaulted-backup-prepare-request-ps.md](../../includes/blob-vaulted-backup-prepare-request-ps.md)]
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-manage.md
Title: Configure and manage backup for Azure Blobs using Azure Backup description: Learn how to configure and manage operational and vaulted backups for Azure Blobs. Previously updated : 05/02/2023 Last updated : 07/24/2024
Azure Backup allows you to configure operational and vaulted backups to protect
# [Operational backup](#tab/operational-backup) -- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. This solution doesn't maintain an additional copy of data in the vault. This solution allows you to retain your data for restore for up to 360 days. Long retention durations may, however, lead to longer time taken during the restore operation.-- The solution can be used to perform restores to the source storage account only and may result in data being overwritten.-- If you delete a container from the storage account by calling the *Delete Container operation*, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers.
+- Operational backup of blobs is a local backup solution that maintains data for a specified duration in the source storage account itself. It doesn't maintain an additional copy of data in the vault, and it allows you to retain your data for restore for up to 360 days. Long retention durations can, however, increase the time taken for a restore operation.
+- The solution can be used to perform restores to the source storage account only and can result in data being overwritten.
+- If you delete a container from the storage account by calling the *Delete Container operation*, that container can't be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you want to restore them later. Also, Microsoft recommends enabling soft delete for containers, in addition to operational backup, to protect against accidental deletion of containers (a CLI sketch for this follows this list).
- Ensure that the **Microsoft.DataProtection** provider is registered for your subscription. For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md).
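The following Azure CLI sketch shows one way to cover the last two points: enabling container soft delete on the source storage account and registering the **Microsoft.DataProtection** provider. The account, resource group, and retention values are placeholders; adjust them to your environment.

```azurecli
# Placeholder names - replace with your own storage account and resource group.
# Enable soft delete for containers on the source storage account (7-day retention here).
az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group myResourceGroup \
  --enable-container-delete-retention true \
  --container-delete-retention-days 7

# Register the Microsoft.DataProtection resource provider and check its state.
az provider register --namespace Microsoft.DataProtection
az provider show --namespace Microsoft.DataProtection --query registrationState
```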
To assign the required role for storage accounts that you need to protect, follo
>[!NOTE] >You can also assign the roles to the vault at the Subscription or Resource Group levels according to your convenience.
-1. In the storage account that needs to be protected, go to the **Access Control (IAM)** tab on the left navigation pane.
+1. In the storage account that needs to be protected, go to the **Access Control (IAM)** tab on the left navigation blade.
1. Select **Add role assignments** to assign the required role. ![Add role assignments](./media/blob-backup-configure-manage/add-role-assignments.png)
-1. In the Add role assignment pane:
+1. In the Add role assignment blade:
1. Under **Role**, choose **Storage Account Backup Contributor**. 1. Under **Assign access to**, choose **User, group or service principal**.
To assign the required role for storage accounts that you need to protect, follo
>[!NOTE] >The role assignment might take up to 30 minutes to take effect.
-## Create a backup policy
-A backup policy defines the schedule and frequency of the recovery points creation, and its retention duration in the Backup vault. You can use a single backup policy for your vaulted backup, operational backup, or both. You can use the same backup policy to configure backup for multiple storage accounts to a vault.
-To create a backup policy, follow these steps:
-1. Go to **Backup center**, and then select **+ Policy**. This takes you to the create policy experience.
-
- :::image type="content" source="./media/blob-backup-configure-manage/add-policy-inline.png" alt-text="Screenshot shows how to initiate adding backup policy for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/add-policy-expanded.png":::
-
-2. Select the *data source type* as **Azure Blobs (Azure Storage)**, and then select **Continue**.
-
- :::image type="content" source="./media/blob-backup-configure-manage/datasource-type-selection-for-vaulted-blob-backup.png" alt-text="Screenshot shows how to select datasource type for vaulted blob backup.":::
-
-3. On the **Basics** tab, enter a name for the policy and select the vault you want this policy to be associated with.
-
- :::image type="content" source="./media/blob-backup-configure-manage/add-vaulted-backup-policy-name.png" alt-text="Screenshot shows how to add vaulted blob backup policy name.":::
-
- You can view the details of the selected vault in this tab, and then select **continue**.
-
-4. On the **Schedule + retention** tab, enter the *backup details* of the data store, schedule, and retention for these data stores, as applicable.
-
- 1. To use the backup policy for vaulted backups, operational backups, or both, select the corresponding checkboxes.
- 1. For each data store you selected, add or edit the schedule and retention settings:
- - **Vaulted backups**: Choose the frequency of backups between *daily* and *weekly*, specify the schedule when the backup recovery points need to be created, and then edit the default retention rule (selecting **Edit**) or add new rules to specify the retention of recovery points using a *grandparent-parent-child* notation.
- - **Operational backups**: These are continuous and don't require a schedule. Edit the default rule for operational backups to specify the required retention.
-
- :::image type="content" source="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-inline.png" alt-text="Screenshot shows how to configure vaulted blob backup schedule and retention." lightbox="./media/blob-backup-configure-manage/define-vaulted-backup-schedule-and-retention-expanded.png":::
-
-5. Go to **Review and create**.
-6. Once the review is complete, select **Create**.
-
-## Configure backups
-
-You can configure backup for one or more storage accounts in an Azure region if you want them to back up to the same vault using a single backup policy.
-
-To configure backup for storage accounts, follow these steps:
-
-1. Go to **Backup center** > **Overview**, and then select **+ Backup**.
-
- :::image type="content" source="./media/blob-backup-configure-manage/start-vaulted-backup.png" alt-text="Screenshot shows how to initiate vaulted blob backup.":::
-
-2. On the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the **Datasource type**.
-
- :::image type="content" source="./media/blob-backup-configure-manage/choose-datasource-for-vaulted-backup.png" alt-text="Screenshot shows how to initiate configuring vaulted blob backup.":::
-
-3. On the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the **Datasource type**, and then select the *Backup vault* that you want to associate with your storage accounts.
-
- You can view details of the selected vault on this tab, and then select **Next**.
-
- :::image type="content" source="./media/blob-backup-configure-manage/select-datasource-type-for-vaulted-backup.png" alt-text="Screenshot shows how to select datasource type to initiate vaulted blob backup.":::
-
-4. Select the *backup policy* that you want to use for retention.
-
- You can view the details of the selected policy. You can also create a new backup policy, if needed. Once done, select **Next**.
-
- :::image type="content" source="./media/blob-backup-configure-manage/select-policy-for-vaulted-backup.png" alt-text="Screenshot shows how to select policy for vaulted blob backup.":::
-
-5. On the **Datasources** tab, select the *storage accounts* you want to back up.
-
- :::image type="content" source="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png" alt-text="Screenshot shows how to select storage account for vaulted blob backup." lightbox="./media/blob-backup-configure-manage/select-storage-account-for-vaulted-backup.png":::
-
- You can select multiple storage accounts in the region to back up using the selected policy. Search or filter the storage accounts, if required.
-
- If you've chosen the vaulted backup policy in step 4, you can also select specific containers to backup. Click "Change" under the "Selected containers" column. In the context blade, choose "browse containers to backup" and unselect the ones you don't want to backup.
-
-6. When you select the storage accounts and containers to protect, Azure Backup performs the following validations to ensure all prerequisites are met. The **Backup readiness** column shows if the Backup vault has enough permissions to configure backups for each storage account.
-
- 1. Validates that the Backup vault has the required permissions to configure backup (the vault has the **Storage account backup contributor** role on all the selected storage accounts. If validation shows errors, then the selected storage accounts don't have **Storage account backup contributor** role. You can assign the required role, based on your current permissions. The error message helps you understand if you have the required permissions, and take the appropriate action:
-
- - **Role assignment not done**: This indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault.
-
- Select the roles, and then select **Assign missing roles** on the toolbar to automatically assign the required role to the Backup vault, and trigger an autorevalidation.
-
- The role propagation may take some time (up to 10 minutes) causing the revalidation to fail. In this scenario, you need to wait for a few minutes and select **Revalidate** to retry validation.
-
- - **Insufficient permissions for role assignment**: This indicates that the vault doesn't have the required role to configure backups, and you (the user) don't have enough permissions to assign the required role. To make the role assignment easier, Azure Backup allows you to download the role assignment template, which you can share with users with permissions to assign roles for storage accounts.
-
- To do this, select the storage accounts, and then select **Download role assignment template** to download the template. Once the role assignments are complete, select **Revalidate** to validate the permissions again, and then configure backup.
-
- :::image type="content" source="./media/blob-backup-configure-manage/vaulted-backup-role-assignment-success.png" alt-text="Screenshot shows that the role assignment is successful.":::
-
- >[!Note]
- >The template contains details for selected storage accounts only. So, if there are multiple users that need to assign roles for different storage accounts, you can select and download different templates accordingly.
-
- 1. In case of vaulted backups, validates that the number of containers to be backed up is less than *100*. By default, all containers are selected; however, you can exclude containers that shouldn't be backed up. If your storage account has *>100* containers, you must exclude containers to reduce the count to *100 or below*.
-
- >[!Note]
- >In case of vaulted backups, the storage accounts to be backed up must contain at least *1 container*. If the selected storage account doesn't contain any containers or if no containers are selected, you may get an error while configuring backups.
-
-7. Once validation succeeds, open the **Review and configure** tab.
-
-8. Review the details on the **Review + configure** tab and select **Next** to initiate the *configure backup* operation.
-
-You'll receive notifications about the status of configuring protection and its completion.
### Using Data protection settings of the storage account to configure backup You can configure backup for blobs in a storage account directly from the ‘Data Protection’ settings of the storage account.
-1. Go to the storage account for which you want to configure backup for blobs, and then go to **Data Protection** in left pane (under **Data management**).
+1. Go to the storage account for which you want to configure backup for blobs, and then go to **Data Protection** in left blade (under **Data management**).
1. In the available data protection options, the first one allows you to enable operational backup using Azure Backup.
You can configure backup for blobs in a storage account directly from the ‘Dat
![Enable operational backup with Azure Backup](./media/blob-backup-configure-manage/enable-operational-backup-with-azure-backup.png)
- 1. On selecting **Manage identity**, brings you to the Identity pane of the storage account.
+ 1. Selecting **Manage identity** brings you to the Identity blade of the storage account.
1. Select **Add role assignment** to initiate the role assignment.
You can configure backup for blobs in a storage account directly from the ‘Dat
![Finish role assignment](./media/blob-backup-configure-manage/finish-role-assignment.png)
- 1. Select the cancel icon (**x**) on the top right corner to return to the **Data protection** pane of the storage account.<br><br>Once back, continue configuring backup.
+ 1. Select the cancel icon (**x**) in the top-right corner to return to the **Data protection** blade of the storage account.<br><br>Once back, continue configuring backup.
## Effects on backed-up storage accounts # [Vaulted backup](#tab/vaulted-backup) -- In storage accounts (for which you've configured vaulted backups), the object replication rules get created under the **Object replication** item in the left pane.
+- In storage accounts (for which you've configured vaulted backups), the object replication rules get created under the **Object replication** item in the left blade.
- Object replication requires versioning and change-feed capabilities. So, Azure Backup service enables these features on the source storage account. # [Operational backup](#tab/operational-backup)
Once backup is configured, changes taking place on block blobs in the storage ac
## Manage backups
-You can use Backup Center as your single pane of glass for managing all your backups. Regarding backup for Azure Blobs, you can use Backup Center to do the following:
+You can use Backup Center as your single pane of glass for managing all your backups. For Azure Blobs backup, you can use Backup Center to do the following:
- As we've seen above, you can use it for creating Backup vaults and policies. You can also view all vaults and policies under the selected subscriptions. - Backup Center gives you an easy way to monitor the state of protection of protected storage accounts as well as storage accounts for which backup isn't currently configured.
To stop backup for a storage account, follow these steps:
![Stop operational backup](./media/blob-backup-configure-manage/stop-operational-backup.png)
-After stopping backup, you may disable other storage data protection capabilities (enabled for configuring backups) from the data protection pane of the storage account.
+After stopping backup, you can disable other storage data protection capabilities (enabled for configuring backups) from the data protection blade of the storage account.
## Next steps
backup Blob Backup Configure Quick https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-quick.md
+
+ Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure Backup
+description: In this quickstart, learn how to configure vaulted backup for Azure Blobs.
+ Last updated : 07/24/2024+++++
+# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup
+
+This quickstart describes how to create a backup policy and configure vaulted backup for Azure Blobs from the Azure portal.
+++
+## Prerequisites
+
+Before you configure blob vaulted backup, ensure that:
+
+- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-vault).
+- You assign permissions to the Backup vault on the storage account. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#grant-permissions-to-the-backup-vault-on-storage-accounts).
+- You create a backup policy for Azure Blobs vaulted backup. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-policy).
+
+## Before you start
+
+Things to remember before you start configuring blob vaulted backup:
+
+- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains it according to the retention settings configured in the backup policy. You can retain data for a maximum of *10 years*.
+- Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any container names conflict, the restore operation fails.
+- Storage accounts to be backed up need to have *cross-tenant replication* enabled. To verify that this setting is enabled, go to the **storage account** > **Object replication** > **Advanced settings**.
+
+For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md).
++++
+## Next step
+
+[Restore Azure Blobs using Azure Backup](blob-restore.md)
backup Blob Backup Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-configure-tutorial.md
+
+ Title: Tutorial - Configure vaulted backup for Azure Blobs using Azure Backup
+description: In this tutorial, learn how to configure vaulted backup for Azure Blobs.
+ Last updated : 07/24/2024+++++
+# Tutorial: Configure vaulted backup for Azure Blobs using Azure Backup
+
+This tutorial describes how to create a backup policy and configure vaulted backup for Azure Blobs from the Azure portal.
++
+## Prerequisites
+
+Before you configure blob vaulted backup, ensure that:
+
+- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](blob-backup-configure-manage.md?tabs=vaulted-backup#create-a-backup-vault).
+- You assign permissions to the Backup vault on the storage account. [Learn more](blob-backup-configure-manage.md?tabs=vaulted-backup#grant-permissions-to-the-backup-vault-on-storage-accounts).
+
+## Before you start
+
+Things to remember before you start configuring blob vaulted backup:
+
+- Vaulted backup of blobs is a managed offsite backup solution that transfers data to the backup vault and retains it according to the retention settings configured in the backup policy. You can retain data for a maximum of *10 years*.
+- Currently, you can use the vaulted backup solution to restore data to a different storage account only. While performing restores, ensure that the target storage account doesn't contain any *containers* with the same name as those backed up in a recovery point. If any container names conflict, the restore operation fails.
+- Storage accounts to be backed up need to have *cross-tenant replication* enabled. To verify that this setting is enabled, go to the **storage account** > **Object replication** > **Advanced settings**.
+
+For more information about the supported scenarios, limitations, and availability, see the [support matrix](blob-backup-support-matrix.md).
+++++
+## Next step
+
+[Restore Azure Blobs using Azure Backup](blob-restore.md).
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md
Title: Overview of Azure Blobs backup description: Learn about Azure Blobs backup.- Previously updated : 03/21/2024+ Last updated : 07/24/2024
This article gives you an understanding about configuring the following types of
- **Continuous backups**: You can configure operational backup, a managed local data protection solution, to protect your block blobs from accidental deletion or corruption. The data is stored locally within the source storage account and not transferred to the backup vault. You donΓÇÖt need to define any schedule for backups. All changes are retained, and you can restore them from the state at a selected point in time. -- **Periodic backups (preview)**: You can configure vaulted backup, a managed offsite data protection solution, to get protection against any accidental or malicious deletion of blobs or storage account. The backup data using vaulted backups is copied and stored in the Backup vault as per the schedule and frequency you define via the backup policy and retained as per the retention configured in the policy.
+- **Periodic backups**: You can configure vaulted backup, a managed offsite data protection solution, to protect against accidental or malicious deletion of blobs or the storage account. With vaulted backups, the backup data is copied and stored in the Backup vault according to the schedule and frequency you define in the backup policy, and retained according to the retention settings configured in the policy.
You can choose to configure vaulted backups, operational backups, or both on your storage accounts using a single backup policy. The integration with [Backup center](backup-center-overview.md) enables you to govern, monitor, operate, and analyze backups at scale.
Operational backup uses blob platform capabilities to protect your data and allo
For information about the limitations of the current solution, see the [support matrix](blob-backup-support-matrix.md).
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
-Vaulted backup (preview) uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
+Vaulted backup uses the platform capability of object replication to copy data to the Backup vault. Object replication asynchronously copies block blobs between a source storage account and a destination storage account. The contents of the blob, any versions associated with the blob, and the blob's metadata and properties are all copied from the source container to the destination container.
When you configure protection, Azure Backup allocates a destination storage account (Backup vault's storage account managed by Azure Backup) and enables object replication policy at container level on both destination and source storage account. When a backup job is triggered, the Azure Backup service creates a recovery point marker on the source storage account and polls the destination account for the recovery point marker replication. Once the replication point marker is present on the destination, a recovery point is created.
To allow Backup to enable these properties on the storage accounts to be protect
>[!NOTE] >Operational backup supports operations on block blobs only and operations on containers can’t be restored. If you delete a container from the storage account by calling the **Delete Container** operation, that container can’t be restored with a restore operation. It’s suggested you enable soft delete to enhance data protection and recovery.
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
Vaulted backup is configured at the storage account level. However, you can exclude containers that don't need backup. If your storage account has *>100* containers, you need to mandatorily exclude containers to reduce the count to *100* or below. For vaulted backups, the schedule and retention are managed via backup policy. You can set the frequency as *daily* or *weekly*, and specify when the backup recovery points need to be created. You can also configure different retention values for backups taken every day, week, month, or year. The retention rules are evaluated in a predetermined order of priority. The *yearly* rule has the priority compared to *monthly* and *weekly* rule. Default retention settings are applied if other rules don't qualify.
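As a rough, hedged illustration with Azure CLI (vault, resource group, and policy names are placeholders), you can start from the default Azure Blobs policy template and edit it before creating the policy. Depending on your CLI version, you may need to add the vaulted (VaultStore) schedule and retention rules to the template yourself.

```azurecli
# Fetch the default Azure Blobs policy template as a starting point.
az dataprotection backup-policy get-default-policy-template \
  --datasource-type AzureBlob > blobpolicy.json

# Edit blobpolicy.json to set the daily/weekly schedule and retention rules,
# then create the policy in the Backup vault.
az dataprotection backup-policy create \
  --resource-group myResourceGroup \
  --vault-name myBackupVault \
  --name myBlobBackupPolicy \
  --policy blobpolicy.json
```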
You can enable operational backup and vaulted backup (or both) of blobs on a sto
Once you have enabled backup on a storage account, a Backup Instance is created corresponding to the storage account in the Backup vault. You can perform any Backup-related operations for a storage account like initiating restores, monitoring, stopping protection, and so on, through its corresponding Backup Instance.
-Both operational and vaulted backups integrate directly with Backup Center to help you manage the protection of all your storage accounts centrally, along with all other Backup supported workloads. Backup Center is your single pane of glass for all your Backup requirements like monitoring jobs and state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to back up and restore of data.
+Both operational and vaulted backups integrate directly with Backup Center to help you manage the protection of all your storage accounts centrally, along with all other Backup supported workloads. Backup Center is your single pane of glass for all your Backup requirements like monitoring jobs and the state of backups and restores, ensuring compliance and governance, analyzing backup usage, and performing operations pertaining to backup and restore of data.
You won't incur any management charges or instance fee when using operational ba
- Retention of data because of [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](../storage/blobs/storage-blob-change-feed.md), and [Blob versioning](../storage/blobs/versioning-overview.md).
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
-You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](../storage/blobs/object-replication-overview.md#billing), on the backed-up source account.
+You'll incur backup storage charges and instance fees, along with the source-side cost ([associated with Object replication](../storage/blobs/object-replication-overview.md#billing)) on the backed-up source account.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 04/01/2024 Last updated : 07/24/2024
Operational backup for blobs is available in all public cloud regions, except Fr
# [Vaulted backup](#tab/vaulted-backup)
-Vaulted backup (preview) for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North and Malaysia South.
+Vaulted backup for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North and Malaysia South.
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- You can back up storage accounts with *up to 100 containers*. You can also select a subset of containers to back up (up to 100 containers). - If your storage account contains more than 100 containers, you need to select *up to 100 containers* to back up. - To back up any new containers that get created after backup configuration for the storage account, modify the protection of the storage account. These containers aren't backed up automatically.-- The storage accounts to be backed up must contain *a minimum of 1 container*. If the storage account doesn't contain any containers or if no containers are selected, an error may appear when you configure backup.
+- The storage accounts to be backed up must contain *a minimum of one container*. If the storage account doesn't contain any containers or if no containers are selected, an error may appear when you configure backup.
- Currently, you can perform only *one backup* per day (that includes scheduled and on-demand backups). Backup fails if you attempt to perform more than one backup operation a day. - If you stop protection (vaulted backup) on a storage account, it doesn't delete the object replication policy created on the storage account. In these scenarios, you need to manually delete the *OR policies*. - Cool and archived blobs are currently not supported.
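A quick, hedged way to check the container count against the *100 containers* limit is with Azure CLI. The account name is a placeholder, and `--auth-mode login` assumes you have data-plane read access on the account; otherwise, pass an account key.

```azurecli
# Count the containers in the storage account to be backed up.
az storage container list \
  --account-name mystorageaccount \
  --auth-mode login \
  --query "length(@)" --output tsv
```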
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md
Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Previously updated : 03/06/2024 Last updated : 07/24/2024
backup Quick Blob Vaulted Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-blob-vaulted-backup-cli.md
+
+ Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure CLI
+description: In this Quickstart, learn how to configure vaulted backup for Azure Blobs using Azure CLI.
+ms.devlang: azurecli
+ Last updated : 07/24/2024+++++
+# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup via Azure CLI
+
+This quickstart describes how to configure vaulted backup for Azure Blobs using Azure CLI.
++
+## Prerequisites
+
+Before you configure blob vaulted backup, ensure that:
+
+- You review the [support matrix](../backup/blob-backup-support-matrix.md) to learn about the Azure Blob region availability, supported scenarios, and limitations.
+- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](../backup/backup-blobs-storage-account-ps.md#create-a-backup-vault).
+
+## Create a backup policy
++
+## Configure backup
++
+## Prepare the request to configure blob backup
++
+## Next step
+
+[Restore Azure Blobs using Azure CLI](/azure/backup/restore-blobs-storage-account-cli).
+++
backup Quick Blob Vaulted Backup Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-blob-vaulted-backup-powershell.md
+
+ Title: Quickstart - Configure vaulted backup for Azure Blobs using Azure PowerShell
+description: In this Quickstart, learn how to configure vaulted backup for Azure Blobs using Azure PowerShell.
+ms.devlang: azurecli
+ Last updated : 07/24/2024+++++
+# Quickstart: Configure vaulted backup for Azure Blobs using Azure Backup via Azure PowerShell
+
+This quickstart describes how to configure vaulted backup for Azure Blobs using Azure PowerShell.
++
+## Prerequisites
+
+Before you configure blob vaulted backup, ensure that:
+
+- You install the Azure PowerShell version **Az 5.9.0**.
+- You review the [support matrix](../backup/blob-backup-support-matrix.md) to learn about the Azure Blob region availability, supported scenarios, and limitations.
+- You have a Backup vault to configure Azure Blob backup. If you haven't created the Backup vault, [create one](../backup/backup-blobs-storage-account-ps.md#create-a-backup-vault).
+
+## Create a backup policy
+++
+## Configure backup
++
+## Prepare the request to configure blob backup
++
+## Next step
+
+[Restore Azure blobs using Azure PowerShell](/azure/backup/restore-blobs-storage-account-ps).
backup Restore Azure Database Postgresql Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql-flex.md
This article explains how to restore an Azure PostgreSQL -flex server backed up
## Prerequisites
-Before you restore from Azure Database for PostgreSQL Flexible server backups, ensure that you have the required [permissions for the restore operation](backup-azure-database-postgresql-flex-overview.md#permissions-for-backup).
+1. Before you restore from Azure Database for PostgreSQL Flexible server backups, ensure that you have the required [permissions for the restore operation](backup-azure-database-postgresql-flex-overview.md#permissions-for-backup).
+
+2. Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one storage account to another across tenants. Ensure that the target storage account for the restore has the **AllowCrossTenantReplication** property set to **true**.
## Restore Azure PostgreSQL-Flexible database
Follow these steps:
1. Submit the Restore operation and track the triggered job under **Backup jobs**. :::image type="content" source="./media/restore-azure-database-postgresql-flex/validate.png" alt-text="Screenshot showing the validate process page.":::+
+1. Once the job is finished, the backed-up data is restored into the storage account. The following files are recovered in your storage account after the restore:
+
+ - The first file is a marker or timestamp file that records when the backup was taken. The file can't be restored, but if you open it with a text editor, it shows the UTC time when the backup was taken.
+
+ - The second file, **_database**, is an individual database backup (for example, for a database called tempdata2) taken using pg_dump. Each database has a separate file with the format **{backup_name}_database_{db_name}.sql**.
+
+ - The third file, **_roles**, contains the roles backed up using pg_dumpall.
+
+ - The fourth file, **_schemas**, contains the schemas backed up using pg_dumpall.
+
+ - The fifth file, **_tablespaces**, contains the tablespaces backed up using pg_dumpall.
+
+1. After the restore to the target storage account completes, you can use the pg_restore utility to restore an Azure Database for PostgreSQL flexible server database from the target. Use the following command to connect to an existing PostgreSQL flexible server and an existing database:
+
+ `pg_restore -h <hostname> -U <username> -d <db name> -Fd -j <NUM> -C <dump directory>`
+
+ * `-Fd`: The directory format.
+ * `-j`: The number of jobs.
+ * `-C`: Begin the output with a command to create the database itself and then reconnect to it.
+
+ Here's an example of how this syntax might appear:
+
+ `pg_restore -h <hostname> -U <username> -j <Num of parallel jobs> -Fd -C -d <databasename> sampledb_dir_format`
+
+ If you have more than one database to restore, re-run the earlier command for each database.
+
+ Also, by using multiple concurrent jobs `-j`, you can reduce the time it takes to restore a large database on a multi-vCore target server. The number of jobs can be equal to or less than the number of vCPUs that are allocated for the target server.
## Next steps
-[Support matrix for PostgreSQL-Flex database backup by using Azure Backup](backup-azure-database-postgresql-flex-support-matrix.md).
+[Support matrix for PostgreSQL-Flex database backup by using Azure Backup](backup-azure-database-postgresql-flex-support-matrix.md).
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md
Title: Restore Azure Database for PostgreSQL description: Learn about how to restore Azure Database for PostgreSQL backups. Previously updated : 02/01/2024 Last updated : 07/24/2024
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-cli.md
Title: Restore Azure Blobs via Azure CLI
description: Learn how to restore Azure Blobs to any point-in-time using Azure CLI. Previously updated : 05/30/2024 Last updated : 07/24/2024
This article describes how to restore [blobs](blob-backup-overview.md) using Azure Backup.
-You can restore Azure Blobs to point-in-time using *operational backups* and *vaulted backups (preview)* for Azure Blobs via Azure CLI. Here, let's use an existing Backup vault `TestBkpVault`, under the resource group `testBkpVaultRG` in the examples.
+You can restore Azure Blobs to a point in time using *operational backups* and *vaulted backups* via Azure CLI. The examples in this article use an existing Backup vault `TestBkpVault` under the resource group `testBkpVaultRG`.
> [!IMPORTANT] > Before you restore Azure Blobs using Azure Backup, see [important points](blob-restore.md#before-you-start). ## Fetch details to restore a blob backup
-To restore a blob backup, you need to *fetch the valid time range for *operational backup* and *fetch the list of recovery points* for *vaulted backup (preview)*.
+To restore a blob backup, you need to *fetch the valid time range* for *operational backup* and *fetch the list of recovery points* for *vaulted backup*.
**Choose a backup tier**:
az dataprotection restorable-time-range find --start-time 2021-05-30T00:00:00 --
} ```
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
-To fetch the list of recovery points available to restore vaulted backup (preview), use the `az dataprotection recovery-point list` command.
+To fetch the list of recovery points available to restore vaulted backup, use the `az dataprotection recovery-point list` command.
To fetch the name of the backup instance corresponding to your backed-up storage account, use the `az dataprotection backup-instance list` command.
az dataprotection backup-instance restore initialize-for-item-recovery --datasou
az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureBlob --restore-location southeastasia --source-datastore OperationalStore --backup-instance-id "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault/backupInstances/CLITestSA-CLITestSA-c3a2a98c-def8-44db-bd1d-ff6bc86ed036" --point-in-time 2021-06-02T18:53:44.4465407Z --from-prefix-pattern container1/text1 container2/text4 --to-prefix-pattern container1/text4 container2/text41 > restore.json ```
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
-Prepare the request body for the following restore scenarios supported by Azure Blobs vaulted backup (preview).
+Prepare the request body for the following restore scenarios supported by Azure Blobs vaulted backup.
### Restore all containers
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md
Title: Restore Azure Blobs using Azure PowerShell
description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell. Previously updated : 07/01/2024 Last updated : 07/24/2024 # Restore Azure Blobs using Azure PowerShell
-This article describes how to use the PowerShell to perform restores for Azure Blob from [operational](blob-backup-overview.md?tabs=operational-backup) or [vaulted](blob-backup-overview.md?tabs=vaulted-backup) backups. With operational backups, you can restore all block blobs in storage accounts with operational backup configured or a subset of blob content to any point-in-time within the retention range. With vaulted backups (preview), you can perform restores using a recovery point created, based on your backup schedule.
+This article describes how to use PowerShell to perform restores for Azure Blobs from [operational](blob-backup-overview.md?tabs=operational-backup) or [vaulted](blob-backup-overview.md?tabs=vaulted-backup) backups. With operational backups, you can restore all block blobs in storage accounts with operational backup configured, or a subset of blob content, to any point in time within the retention range. With vaulted backups, you can perform restores using a recovery point created based on your backup schedule.
> [!IMPORTANT] > Support for Azure blobs is available from version **Az 5.9.0**.
You can restore a subset of blobs using a prefix match. You can specify up to 10
```azurepowershell-interactive $restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -PointInTime (Get-Date -Date "2021-04-23T02:47:02.9500000Z") -BackupInstance $AllInstances[2] -ItemLevelRecovery -FromPrefixPattern "containerabc/aaa","containerabc/ccc" -ToPrefixPattern "containerabc/bbb","containerabc/ddd" ```
-# [Vaulted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup](#tab/vaulted-backup)
[!INCLUDE [blob-vaulted-backup-restore-ps.md](../../includes/blob-vaulted-backup-restore-ps.md)]
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup
+ Title: What's new in the Azure Backup service
description: Learn about the new features in the Azure Backup service. Previously updated : 07/02/2024 Last updated : 07/24/2024 - ignite-2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - July 2024
+ - [Azure Blob vaulted backup is now generally available](#azure-blob-vaulted-backup-is-now-generally-available)
- [Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available](#backup-and-restore-of-virtual-machines-with-private-endpoint-enabled-disks-is-now-generally-available) - May 2024 - [Migration of Azure VM backups from standard to enhanced policy (preview)](#migration-of-azure-vm-backups-from-standard-to-enhanced-policy-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +
+## Azure Blob vaulted backup is now generally available
+
+Azure Backup now enables you to perform a vaulted backup of block blob data in *general-purpose v2 storage accounts* to protect data against ransomware attacks or source data loss caused by a malicious or rogue admin. You can define the backup schedule to create recovery points and the retention settings that determine how long backups are retained in the vault. You can configure and manage the vaulted and operational backups using a single backup policy.
+
+Under vaulted backups, the data is copied and stored in the Backup vault. So, you get an offsite copy of data that can be retained for up to *10 years*. If any data loss happens on the source account, you can trigger a restore to an alternate account and get access to your data. The vaulted backups can be managed at scale via the Backup center, and monitored via the rich alerting and reporting capabilities offered by the Azure Backup service.
+
+If you're currently using operational backups, we recommend that you switch to vaulted backups for complete protection against different data loss scenarios.
+
+For more information, see [Azure Blob backup overview](blob-backup-overview.md?tabs=vaulted-backup).
+ ## Backup and restore of virtual machines with private endpoint enabled disks is now Generally Available Azure Backup now allows you to back up the Azure Virtual Machines that use disks with private endpoints (disk access). This support is extended for Virtual Machines that are backed up using Enhanced backup policies, along with the existing support for those that were backed up using Standard backup policies. While initiating the restore operation, you can specify the network access settings required for the restored disks. You can choose to keep the network configuration of the restored disks the same as that of the source disks, specify the access from specific networks only, or allow public access from all networks.
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Available sizes for Azure Cloud Services (extended support)
This article describes the available virtual machine sizes for Cloud Services (e
## Configure sizes for Cloud Services (extended support)
-You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity and the local file system size.
+You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity, and the local file system size.
For example, setting the web role instance size to `Standard_D2`:
To change the size of an existing role, change the virtual machine size in the s
## Get a list of available sizes
-To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters:
+To retrieve a list of available sizes, see [Resource Skus - List](/rest/api/compute/resourceskus/list) and apply the following filters:
```powershell # Update the location
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/certificates-and-key-vault.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Use certificates with Azure Cloud Services (extended support)
Key Vault is used to store certificates that are associated to Cloud Services (e
## Upload a certificate to Key Vault
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault. If you do not have a Key Vault set up, you can opt to create one in this same window.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault. If you don't have a Key Vault set up, you can opt to create one in this same window.
2. Select **Access Configuration** :::image type="content" source="media/certs-and-key-vault-1.png" alt-text="Image shows selecting access policies from the key vault blade.":::
-3. Ensure the access configuration include the following property:
+3. Ensure the access configuration includes the following property:
- **Enable access to Azure Virtual Machines for deployment** :::image type="content" source="media/certs-and-key-vault-2.png" alt-text="Image shows access policies window in the Azure portal.":::
cloud-services-extended-support Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/cloud-services-model-and-package.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # What is the Azure Cloud Service model and how do I package it?
-A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and how it's configured; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**.
+A cloud service is created from three components: the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and its configuration. We collectively call these files the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and, among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**.
-Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you cannot alter the definition.
+Once the cloud service is running in Azure, you can reconfigure it through the **ServiceConfig.cscfg** file, but you can't alter the definition.
## What would you like to know more about? * I want to know more about the [ServiceDefinition.csdef](#csdef) and [ServiceConfig.cscfg](#cscfg) files.
The **ServiceDefinition.csdef** file specifies the settings that are used by Azu
</ServiceDefinition> ```
-You can refer to the [Service Definition Schema](schema-csdef-file.md)) for a better understanding of the XML schema used here, however, here is a quick explanation of some of the elements:
+You can refer to the [Service Definition Schema](schema-csdef-file.md) for a better understanding of the XML schema used here. Here's a quick explanation of some of the elements:
**Sites** Contains the definitions for websites or web applications that are hosted in IIS7.
Contains tasks that are run when the role starts. The tasks are defined in a .cm
## ServiceConfiguration.cscfg The configuration of the settings for your cloud service is determined by the values in the **ServiceConfiguration.cscfg** file. You specify the number of instances that you want to deploy for each role in this file. The values for the configuration settings that you defined in the service definition file are added to the service configuration file. The thumbprints for any management certificates that are associated with the cloud service are also added to the file. The [Azure Service Configuration Schema (.cscfg File)](schema-cscfg-file.md) provides the allowable format for a service configuration file.
-The service configuration file is not packaged with the application, but is uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:
+The service configuration file isn't packaged with the application. It's uploaded to Azure as a separate file and is used to configure the cloud service. You can upload a new service configuration file without redeploying your cloud service. The configuration values for the cloud service can be changed while the cloud service is running. The following example shows the configuration settings that can be defined for the Web and Worker roles:
```xml <?xml version="1.0"?>
The service configuration file is not packaged with the application, but is uplo
</ServiceConfiguration> ```
-You can refer to the [Service Configuration Schema](schema-cscfg-file.md) for better understanding the XML schema used here, however, here is a quick explanation of the elements:
+You can refer to the [Service Configuration Schema](schema-cscfg-file.md) for a better understanding of the XML schema used here; however, here's a quick explanation of the elements:
**Instances**
-Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, it is recommended that you deploy more than one instance of your web-facing roles. By deploying more than one instance, you are adhering to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.
+Configures the number of running instances for the role. To prevent your cloud service from potentially becoming unavailable during upgrades, we recommend you deploy more than one instance of your web-facing roles. By deploying more than one instance, you adhere to the guidelines in the [Azure Compute Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/), which guarantees 99.95% external connectivity for Internet-facing roles when two or more role instances are deployed for a service.
**ConfigurationSettings** Configures the settings for the running instances for a role. The name of the `<Setting>` elements must match the setting definitions in the service definition file.
Configures the certificates that are used by the service. The previous code exam
## Defining ports for role instances Azure allows only one entry point to a web role, meaning that all traffic occurs through one IP address. You can configure your websites to share a port by configuring the host header to direct the request to the correct location. You can also configure your applications to listen to well-known ports on the IP address.
-The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header that is called ΓÇ£mail.mysite.cloudapp.netΓÇ¥.
+The following sample shows the configuration for a web role with a website and web application. The website is configured as the default entry location on port 80, and the web applications are configured to receive requests from an alternate host header called `mail.mysite.cloudapp.net`.
```xml <WebRole>
The following sample shows the configuration for a web role with a website and w
## Changing the configuration of a role
-You can update the configuration of your cloud service while it is running in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:
+You can update the configuration of your cloud service while it's running in Azure, without taking the service offline. To change configuration information, you can either upload a new configuration file, or edit the configuration file in place and apply it to your running service. The following changes can be made to the configuration of a service:
* **Changing the values of configuration settings** When a configuration setting changes, a role instance can choose to apply the change while the instance is online, or to recycle the instance gracefully and apply the change while the instance is offline. * **Changing the service topology of role instances**
- Topology changes do not affect running instances, except where an instance is being removed. All remaining instances generally do not need to be recycled; however, you can choose to recycle role instances in response to a topology change.
+ Topology changes don't affect running instances, except where an instance is being removed. All remaining instances generally don't need to be recycled; however, you can choose to recycle role instances in response to a topology change.
* **Changing the certificate thumbprint**
- You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate and bring it back online after the change is complete.
+ You can only update a certificate when a role instance is offline. If a certificate is added, deleted, or changed while a role instance is online, Azure gracefully takes the instance offline to update the certificate. Azure brings it back online after the change completes.
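For example, one way to apply an updated configuration file to a running classic deployment without redeploying is the classic Azure Service Management (ASM) PowerShell module. A minimal sketch; the service name and file path are illustrative:

```powershell
# A sketch, assuming the classic Azure Service Management (ASM) PowerShell module.
# -Config applies only the configuration; the deployment's package is left unchanged.
Set-AzureDeployment -Config -ServiceName "ContosoCS" -Slot "Production" `
    -Configuration ".\ServiceConfiguration.Updated.cscfg"
```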
### Handling configuration changes with Service Runtime Events The Azure Runtime Library includes the Microsoft.WindowsAzure.ServiceRuntime namespace, which provides classes for interacting with the Azure environment from a role. The RoleEnvironment class defines the following events that are raised before and after a configuration change:
Where the variables are defined as follows:
| | | | \[DirectoryName\] |The subdirectory under the root project directory that contains the .csdef file of the Azure project. | | \[ServiceDefinition\] |The name of the service definition file. By default, this file is named ServiceDefinition.csdef. |
-| \[OutputFileName\] |The name for the generated package file. Typically, this is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. |
+| \[OutputFileName\] |The name for the generated package file. Typically, this variable is set to the name of the application. If no file name is specified, the application package is created as \[ApplicationName\].cspkg. |
| \[RoleName\] |The name of the role as defined in the service definition file. | | \[RoleBinariesDirectory] |The location of the binary files for the role. | | \[VirtualPath\] |The physical directories for each virtual path defined in the Sites section of the service definition. |
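Putting those variables together, a hypothetical `cspack` invocation might look like the following sketch. The project, role, and output names are illustrative, and the arguments containing semicolons are quoted so PowerShell doesn't treat them as statement separators:

```powershell
# A sketch of packaging a cloud service with cspack.exe; all names are illustrative.
# Run from a session where the Azure SDK tools (cspack.exe) are on the path.
cspack .\ContosoApp\ServiceDefinition.csdef `
    "/role:ContosoWebRole;.\ContosoWebRole\bin" `
    "/sites:ContosoWebRole;Web;.\ContosoApp\ContosoWebRole" `
    "/out:ContosoApp.cspkg"
```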
cloud-services-extended-support Configure Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/configure-scaling.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Configure scaling options with Azure Cloud Services (extended support)
-Conditions can be configured to enable Cloud Services (extended support) deployments to scale in and out. These conditions can be based on CPU usage, disk load and network load.
+Conditions can be configured to enable Cloud Services (extended support) deployments to scale in and out. These conditions can be based on CPU usage, disk load, and network load.
Consider the following information when configuring scaling of your Cloud Service deployments: - Scaling impacts core usage. Larger role instances consume more cores and you can only scale within the core limit of your subscription. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
Consider the following information when configuring scaling of your Cloud Servic
:::image type="content" source="media/enable-scaling-1.png" alt-text="Image shows selecting the Remote Desktop option in the Azure portal":::
-4. A page will display a list of all the roles in which scaling can be configured. Select the role you want to configure.
+4. A page displays a list of all the roles in which scaling can be configured. Select the role you want to configure.
5. Select the type of scale you want to configure
- - **Manual scale** will set the absolute count of instances.
+ - **Manual scale** sets the absolute count of instances.
1. Select **Manual scale**. 2. Input the number of instances you want to scale up or down to. 3. Select **Save**. :::image type="content" source="media/enable-scaling-2.png" alt-text="Image shows setting up manual scaling in the Azure portal":::
- 4. The scaling operation will begin immediately.
+ 4. The scaling operation begins immediately.
- - **Custom Autoscale** will allow you to set rules that govern how much or how little to scale.
+ - **Custom Autoscale** allows you to set rules that govern how much or how little to scale.
    1. Select **Custom autoscale**. 2. Choose to scale based on a metric or instance count.
Consider the following information when configuring scaling of your Cloud Servic
:::image type="content" source="media/enable-scaling-4.png" alt-text="Image shows setting up custom autoscale rules in the Azure portal"::: 4. Select **Save**.
- 5. The scaling operations will begin as soon as a rule is triggered.
+ 5. The scaling operations begin as soon as a rule is triggered.
6. You can view or adjust existing scaling rules applied to your deployments by selecting the **Scale** tab.
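Instance counts can also be adjusted outside the portal. A minimal sketch with the Az.CloudService PowerShell module follows; resource names are illustrative, and the cmdlet and property names should be verified against your installed module version:

```powershell
# A sketch, assuming the Az.CloudService module; names are illustrative.
$cloudService = Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS"

# Scale the first role out to three instances.
$cloudService.RoleProfile.Role[0].SkuCapacity = 3

# Apply the updated model to the running cloud service.
$cloudService | Update-AzCloudService
```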
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md
Previously updated : 06/18/2024 Last updated : 07/24/2024 # Deploy Cloud Services (extended support) by using the Azure portal
To deploy Cloud Services (extended support) by using the portal:
- If you have IP input endpoints defined in your definition (.csdef) file, create a public IP address for your cloud service. - Cloud Services (extended support) supports only a Basic SKU public IP address. - If your configuration (.cscfg) file contains a reserved IP address, set the allocation type for the public IP address to **Static**.
- - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address that's associated with the cloud service.
- - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it's deployed.
+ - (Optional) You can assign a DNS name for your cloud service endpoint by updating the DNS label property of the public IP address associated with the cloud service.
+ - (Optional) **Start cloud service**: Select the checkbox if you want to start the service immediately after it deploys.
 - **Key vault**: Select a key vault. - A key vault is required when you specify one or more certificates in your configuration (.cscfg) file. When you select a key vault, we attempt to find the selected certificates that are defined in your configuration (.cscfg) file based on the certificate thumbprints. If any certificates are missing from your key vault, you can upload them now, and then select **Refresh**.
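If you prefer to create the Basic SKU public IP address ahead of the portal flow, a minimal sketch with the Az.Network module follows; the resource names, region, and DNS label are illustrative:

```powershell
# A sketch, assuming the Az.Network module; names, region, and label are illustrative.
New-AzPublicIpAddress -Name "ContosoCSIP" -ResourceGroupName "ContosOrg" -Location "East US" `
    -Sku Basic -AllocationMethod Static `
    -DomainNameLabel "contosocs"   # optional DNS label for the cloud service endpoint
```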
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-powershell.md
Previously updated : 06/18/2024 Last updated : 07/24/2024
Complete the following steps as prerequisites to creating your deployment by usi
## Deploy Cloud Services (extended support)
-Use any of the following PowerShell cmdlet options to deploy Cloud Services (extended support):
+To deploy Cloud Services (extended support), use any of the following PowerShell cmdlet options:
- Quick-create a deployment by using a [storage account](#quick-create-a-deployment-by-using-a-storage-account) - This parameter set inputs the package (.cspkg or .zip) file, the configuration (.cscfg) file, and the definition (.csdef) file for the deployment as inputs with the storage account.
- - The Cloud Services (extended support) role profile, network profile, and OS profile are created by the cmdlet with minimal input.
+ - The cmdlet creates the Cloud Services (extended support) role profile, network profile, and OS profile with minimal input.
- To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. - Quick-create a deployment by using a [shared access signature URI](#quick-create-a-deployment-by-using-an-sas-uri) - This parameter set inputs the shared access signature (SAS) URI of the package (.cspkg or .zip) file with the local paths to the configuration (.cscfg) file and definition (.csdef) file. No storage account input is required.
- - The cloud service role profile, network profile, and OS profile are created by the cmdlet with minimal input.
+    - The cmdlet creates the cloud service role profile, network profile, and OS profile with minimal input.
- To input a certificate, you must specify a key vault name. The certificate thumbprints in the key vault are validated against the certificates that you specify in the configuration (.cscfg) file for the deployment. - Create a deployment by using a [role profile, OS profile, network profile, and extension profile with shared access signature URIs](#create-a-deployment-by-using-profile-objects-and-sas-uris)
New-AzCloudService
Add-AzKeyVaultCertificate -VaultName "ContosKeyVault" -Name "ContosCert" -CertificatePolicy $Policy ```
-1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles. This is the certificate that you created in the preceding step.
+1. Create an OS profile in-memory object. An OS profile specifies the certificates that are associated with Cloud Services (extended support) roles; in this case, the certificate that you created in the preceding step.
```azurepowershell-interactive $keyVault = Get-AzKeyVault -ResourceGroupName ContosOrg -VaultName ContosKeyVault
New-AzCloudService
$osProfile = @{secret = @($secretGroup)} ```
-1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration that's defined in the deployment configuration (.cscfg) file and definition (.csdef) file.
+1. Create a role profile in-memory object. A role profile defines a role's SKU-specific properties such as name, capacity, and tier. In this example, two roles are defined: frontendRole and backendRole. Role profile information must match the role configuration defined in the deployment configuration (.cscfg) file and definition (.csdef) file.
```azurepowershell-interactive $frontendRole = New-AzCloudServiceRoleProfilePropertiesObject -Name 'ContosoFrontend' -SkuName 'Standard_D1_v2' -SkuTier 'Standard' -SkuCapacity 2
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md
Previously updated : 06/16/2024 Last updated : 07/24/2024 # Prerequisites for deploying Azure Cloud Services (extended support)
The subscription that contains networking resources must have the [Network Contr
## Key vault creation
-Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificate that's stored as secrets in the key vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
+Azure Key Vault stores certificates that are associated with Cloud Services (extended support). Add the certificates to a key vault, and then reference the certificate thumbprints in the configuration (.cscfg) file for your deployment. You also must enable the key vault access policy (in the portal) for **Azure Virtual Machines for deployment** so that the Cloud Services (extended support) resource can retrieve the certificates stored as secrets in the key vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). You must create the key vault in the same region and subscription as the cloud service. For more information, see [Use certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
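A minimal sketch of creating such a key vault and enabling the deployment access policy with Azure PowerShell; the vault name, resource group, and region are illustrative:

```powershell
# A sketch, assuming the Az.KeyVault module; names and region are illustrative.
New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"

# Allow the Azure Virtual Machines platform to retrieve certificates stored as secrets
# from this vault during deployment.
Set-AzKeyVaultAccessPolicy -VaultName "ContosKeyVault" -ResourceGroupName "ContosOrg" -EnabledForDeployment
```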
## Related content
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-sdk.md
Previously updated : 06/18/2024 Last updated : 07/24/2024 # Deploy Cloud Services (extended support) by using the Azure SDK
-This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole) and the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services that's based on Azure Resource Manager.
+This article shows how to use the [Azure SDK](https://azure.microsoft.com/downloads/) to create an Azure Cloud Services (extended support) deployment that has multiple roles (WebRole and WorkerRole). It also covers how to use the Remote Desktop Protocol (RDP) extension. Cloud Services (extended support) is a deployment model of Azure Cloud Services based on Azure Resource Manager.
## Prerequisites
To deploy Cloud Services (extended support) by using the SDK:
resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup); ```
-1. Create a storage account and container where you'll store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique.
+1. Create a storage account and container where you store the package (.cspkg or .zip) file and configuration (.cscfg) file for the deployment. Install the [Azure Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Common/). This step is optional if you're using an existing storage account. The storage account name must be unique.
```csharp string storageAccountName = "ContosoSAS"
To deploy Cloud Services (extended support) by using the SDK:
1. Create a role profile object. A role profile defines role-specific properties for a SKU, such as name, capacity, and tier.
- This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role that's defined in the configuration (.cscfg) file and definition (.csdef) file.
+ This example defines two roles: ContosoFrontend and ContosoBackend. Role profile information must match the role defined in the configuration (.cscfg) file and definition (.csdef) file.
```csharp CloudServiceRoleProfile cloudServiceRoleProfile = new CloudServiceRoleProfile()
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md
Previously updated : 06/18/2024 Last updated : 07/24/2024 # Deploy Cloud Services (extended support) by using an ARM template
To deploy Cloud Services (extended support) by using a template:
] ```
-1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you are deploying virtual networks or public IP addresses in your template.
+1. Create a Cloud Services (extended support) object. Add relevant `dependsOn` references if you deploy virtual networks or public IP addresses in your template.
```json {
To deploy Cloud Services (extended support) by using a template:
} ```
-1. Deploy the template and parameter file (to define parameters in the template file) to create the Cloud Services (extended support) deployment. You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support).
+1. To create the Cloud Services (extended support) deployment, deploy the template and parameter file (to define parameters in the template file). You can use these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support).
```powershell New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file"
cloud-services-extended-support Enable Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-alerts.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Enable monitoring for Cloud Services (extended support) using the Azure portal
This article explains how to enable alerts on existing Cloud Service (extended s
4. Select the **New Alert** icon. :::image type="content" source="media/enable-alerts-2.png" alt-text="Image shows selecting the add new alert option.":::
-5. Input the desired conditions and required actions based on what metrics you are interested in tracking. You can define the rules based on individual metrics or the activity log.
+5. Input the desired conditions and required actions based on what metrics you want to track. You can define the rules based on individual metrics or the activity log.
:::image type="content" source="media/enable-alerts-3.png" alt-text="Image shows where to add conditions to alerts.":::
This article explains how to enable alerts on existing Cloud Service (extended s
:::image type="content" source="media/enable-alerts-5.png" alt-text="Image shows configuring action group logic.":::
-6. When you have finished setting up alerts, save the changes and based on the metrics configured you will begin to see the **Alerts** blade populate over time.
+6. When you finish setting up alerts, save the changes. Based on the metrics you configured, the **Alerts** blade begins to populate over time.
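Metric alerts can also be created programmatically. The following is a rough sketch using the Az.Monitor module; the metric name, threshold, window, and action group ID are illustrative assumptions and should be replaced with values that match your service:

```powershell
# A sketch, assuming the Az.CloudService and Az.Monitor modules; all values are illustrative.
$cloudServiceId = (Get-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS").Id

# Fire when average CPU stays above 80 percent over the evaluation window.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "HighCpuAlert" -ResourceGroupName "ContosOrg" `
    -TargetResourceId $cloudServiceId -Condition $condition `
    -WindowSize "00:05:00" -Frequency "00:01:00" -Severity 2 `
    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/ContosOrg/providers/microsoft.insights/actionGroups/<action-group-name>"
```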
## Next steps - Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Previously updated : 01/30/2024 Last updated : 07/24/2024
-# Apply the Key Vault VM extension to Azure Cloud Services (extended support)
+# Apply the Key Vault Virtual Machine (VM) extension to Azure Cloud Services (extended support)
This article provides basic information about the Azure Key Vault VM extension for Windows and shows you how to enable it in Azure Cloud Services.
The Key Vault VM extension provides automatic refresh of certificates stored in
The Key Vault VM extension is now supported on the Azure Cloud Services (extended support) platform to enable the management of certificates end to end. The extension can now pull certificates from a configured key vault at a predefined polling interval and install them for the service to use. ## How can I use the Key Vault VM extension?
-The following procedure will show you how to install the Key Vault VM extension on Azure Cloud Services by first creating a bootstrap certificate in your vault to get a token from Microsoft Entra ID. That token will help in the authentication of the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates will be pulled down automatically at regular polling intervals.
+The following procedure shows you how to install the Key Vault VM extension on Azure Cloud Services by first creating a bootstrap certificate in your vault to get a token from Microsoft Entra ID. That token helps authenticate the extension with the vault. After the authentication process is set up and the extension is installed, all the latest certificates are pulled down automatically at regular polling intervals.
> [!NOTE] > The Key Vault VM extension downloads all the certificates in the Windows certificate store to the location provided by the `certificateStoreLocation` property in the VM extension settings. Currently, the Key Vault VM extension grants access to the private key of the certificate only to the local system admin account.
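For reference, a rough sketch of how the extension object might be constructed with the Az.CloudService module before being added to the cloud service's extension profile. The publisher, type, handler version, and settings shown are assumptions based on the Key Vault VM extension for Windows and should be verified for your environment:

```powershell
# A sketch, assuming the Az.CloudService module; settings, version, and vault URL are illustrative.
$kvSettings = @'
{
  "secretsManagementSettings": {
    "pollingIntervalInS": "3600",
    "certificateStoreName": "MY",
    "certificateStoreLocation": "LocalMachine",
    "observedCertificates": [ "https://contoskeyvault.vault.azure.net/secrets/ContosCert" ]
  }
}
'@

# Build the extension object; include it in the extension profile passed to
# New-AzCloudService or Update-AzCloudService.
$kvExtension = New-AzCloudServiceExtensionObject -Name "KVVMExtensionForPaaS" `
    -Publisher "Microsoft.Azure.KeyVault" -Type "KeyVaultForWindows" `
    -TypeHandlerVersion "1.0" -Setting $kvSettings
```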
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-rdp.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Apply the Remote Desktop extension to Azure Cloud Services (extended support)
-The Azure portal uses the remote desktop extension to enable remote desktop even after the application is deployed. The remote desktop settings for your Cloud Service allows you to enable remote desktop, update the local administrator account, select the certificates used in authentication and set the expiration date for those certificates.
+The Azure portal uses the remote desktop extension to enable remote desktop even after the application is deployed. The remote desktop settings for your Cloud Service allow you to enable remote desktop, update the local administrator account, select the certificates used in authentication, and set the expiration date for those certificates.
## Apply Remote Desktop extension 1. Navigate to the Cloud Service you want to enable remote desktop for and select **"Remote Desktop"** in the left navigation pane.
The Azure portal uses the remote desktop extension to enable remote desktop even
2. Select **Add**. 3. Choose the roles to enable remote desktop for.
-4. Fill in the required fields for user name, password and expiration.
+4. Fill in the required fields for user name, password, and expiration.
> [!NOTE] > The password for remote desktop must be between 8 and 123 characters long and must satisfy at least 3 of the following password complexity requirements: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed :::image type="content" source="media/remote-desktop-2.png" alt-text="Image shows inputting the information required to connect to remote desktop.":::
-5. When finished, select **Save**. It will take a few moments before your role instances are ready to receive connections.
+5. When finished, select **Save**. It takes a few moments before your role instances are ready to receive connections.
## Connect to role instances with Remote Desktop enabled Once remote desktop is enabled on the roles, you can initiate a connection directly from the Azure portal.
-1. Click on **Roles and Instances** to open the instance settings.
+1. Select **Roles and Instances** to open the instance settings.
:::image type="content" source="media/remote-desktop-3.png" alt-text="Image shows selecting the roles and instances option in the configuration blade."::: 2. Select a role instance that has remote desktop configured.
-3. Click **Connect** to download an remote desktop connection file.
+3. Select **Connect** to download a remote desktop connection file.
:::image type="content" source="media/remote-desktop-4.png" alt-text="Image shows selecting the worker role instance in the Azure portal."::: 4. Open the file to connect to the role instance. ## Update Remote Desktop Extension using PowerShell
-Follow the below steps to update your cloud service to the latest module with an RDP extension
+Follow these steps to update your cloud service to the latest module with a Remote Desktop Protocol (RDP) extension:
1. Update Az.CloudService module to the [latest version](https://www.powershellgallery.com/packages/Az.CloudService/0.5.0)
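For orientation, the remaining steps build a Remote Desktop extension object and apply it to the cloud service. A minimal sketch, assuming the Az.CloudService module; the credential, expiration, and handler version are illustrative:

```powershell
# A sketch, assuming the Az.CloudService module; credential, expiration, and version are illustrative.
$credential = Get-Credential            # remote desktop user name and password
$expiration = (Get-Date).AddYears(1)    # expiration date for the remote desktop account

$rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name "RDPExtension" `
    -Credential $credential -Expiration $expiration -TypeHandlerVersion "1.2.1"

# Add $rdpExtension to the cloud service's extension profile, then apply the change
# (for example, with Update-AzCloudService).
```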
cloud-services-extended-support Enable Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-wad.md
Title: Apply the Windows Azure diagnostics extension in Cloud Services (extended support)
-description: Apply the Windows Azure diagnostics extension for Cloud Services (extended support)
+ Title: Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support)
+description: Apply the Microsoft Azure diagnostics extension for Cloud Services (extended support)
Previously updated : 10/13/2020 Last updated : 07/24/2024
-# Apply the Windows Azure diagnostics extension in Cloud Services (extended support)
-You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect additional points of data. For more information, see [Extensions Overview](extensions.md)
+# Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support)
+You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect more points of data. For more information, see [Extensions Overview](extensions.md)
-Windows Azure Diagnostics extension can be enabled for Cloud Services (extended support) through [PowerShell](deploy-powershell.md) or [ARM template](deploy-template.md)
+Microsoft Azure Diagnostics extension can be enabled for Cloud Services (extended support) through [PowerShell](deploy-powershell.md) or [ARM template](deploy-template.md)
-## Apply Windows Azure Diagnostics extension using PowerShell
+## Apply Microsoft Azure Diagnostics extension using PowerShell
```powershell # Create WAD extension object
Download the public configuration file schema definition by executing the follow
```powershell (Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PublicConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PublicWadConfig.xsd' ```
-Here is an example of the public configuration XML file
+Here's an example of the public configuration XML file
``` <?xml version="1.0" encoding="utf-8"?> <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
Download the private configuration file schema definition by executing the follo
```powershell (Get-AzureServiceAvailableExtension -ExtensionName 'PaaSDiagnostics' -ProviderNamespace 'Microsoft.Azure.Diagnostics').PrivateConfigurationSchema | Out-File -Encoding utf8 -FilePath 'PrivateWadConfig.xsd' ```
-Here is an example of the private configuration XML file
+Here's an example of the private configuration XML file
``` <?xml version="1.0" encoding="utf-8"?>
Here is an example of the private configuration XML file
</PrivateConfig> ```
-## Apply Windows Azure Diagnostics extension using ARM template
+## Apply Microsoft Azure Diagnostics extension using ARM template
```json "extensionProfile": { "extensions": [
cloud-services-extended-support Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/extensions.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Extensions for Cloud Services (extended support)
Extensions are small applications that provide post-deployment configuration and
## Key Vault Extension
-The Key Vault VM extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and upon detecting a change, retrieves, and installs the corresponding certificates. It also allows cross region/cross subscription reference of certificates for Cloud Service (extended support).
+The Key Vault Virtual Machine (VM) extension provides automatic refresh of certificates stored in an Azure Key Vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and upon detecting a change, retrieves, and installs the corresponding certificates. It also allows cross region/cross subscription reference of certificates for Cloud Service (extended support).
For more information, see [Configure key vault extension for Cloud Service (extended support)](./enable-key-vault-virtual-machine.md) ## Remote Desktop extension
-Remote Desktop enables you to access the desktop of a role running in Azure. You can use a remote desktop connection to troubleshoot and diagnose problems with your application while it is running.
+Remote Desktop enables you to access the desktop of a role running in Azure. You can use a remote desktop connection to troubleshoot and diagnose problems with your application while it's running.
You can enable a remote desktop connection in your role during development by including the remote desktop modules in your service definition or through the remote desktop extension. For more information, see [Configure remote desktop from the Azure portal](enable-rdp.md)
-## Windows Azure Diagnostics extension
+## Microsoft Azure Diagnostics extension
-You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect additional points of data.
+You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the Microsoft.Azure.Diagnostics extension applied to a role, that role can collect more points of data.
-With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data is not stored in your storage account and has no additional cost associated with it.
+With basic monitoring, performance counter data from role instances is sampled and collected at 3-minute intervals. This basic monitoring data isn't stored in your storage account and has no additional cost associated with it.
-With advanced monitoring, additional metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured by role; you can use different storage accounts for different roles.
+With advanced monitoring, more metrics are sampled and collected at intervals of 5 minutes, 1 hour, and 12 hours. The aggregated data is stored in a storage account, in tables, and is purged after 10 days. The storage account used is configured per role; you can use different storage accounts for different roles.
-For more information, see [Apply the Windows Azure diagnostics extension in Cloud Services (extended support)](enable-wad.md)
+For more information, see [Apply the Microsoft Azure diagnostics extension in Cloud Services (extended support)](enable-wad.md)
## Anti Malware Extension
-An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Note that Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, hence these cmdlets can be used for configuring Antimalware.
+An Azure application or service can enable and configure Microsoft Antimalware for Azure Cloud Services using PowerShell cmdlets. Microsoft Antimalware is installed in a disabled state in the Cloud Services platform running Windows Server 2012 R2 and older, which requires an action by an Azure application to enable it. For Windows Server 2016 and above, Windows Defender is enabled by default, and so, these cmdlets can be used for configuring Antimalware.
For more information, see [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)
-To know more about Azure Antimalware, please visit [here](../security/fundamentals/antimalware.md)
+To know more about Azure Antimalware, visit [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](../security/fundamentals/antimalware.md)
cloud-services-extended-support Feature Support Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md
Previously updated : 11/8/2022 Last updated : 07/24/2024 # Feature Analysis: Cloud Services (extended support) and Virtual Machine Scale Sets
-This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, please visit the documentation [here](../virtual-machine-scale-sets/overview.md)
+This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, visit the documentation [here](../virtual-machine-scale-sets/overview.md)
## Basic setup
This article provides a feature analysis of Cloud Services (extended support) an
||||| |Virtual machine type|Basic Azure PaaS VM (Microsoft.compute/cloudServices)|Standard Azure IaaS VM (Microsoft.compute/virtualmachines)|Scale Set specific VMs (Microsoft.compute /virtualmachinescalesets/virtualmachines)| |Maximum Instance Count (with FD guarantees)|1100|1000|3000 (1000 per Availability Zone)|
-|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported|All SKUs|
+|SKUs supported|D, Dv2, Dv3, Dav4 series, Ev3, Eav4 series, G series, H series|D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) aren't supported|All SKUs|
|Full control over VM, NICs, Disks|Limited control over NICs and VM via CS-ES APIs. No support for Disks|Yes|Limited control with virtual machine scale sets VM API| |RBAC Permissions Required|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write, Compute VM Write, Network|Compute Virtual Machine Scale Sets Write| |Accelerated networking|No|Yes|Yes| |Spot instances and pricing|No|Yes, you can have both Spot and Regular priority instances|Yes, instances must either be all Spot or all Regular|
-|Mix operating systems|Extremely limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system|
+|Mix operating systems|Limited Windows support|Yes, Linux and Windows can reside in the same Flexible scale set|No, instances are the same operating system|
|Disk Types|No Disk Support|Managed disks only, all storage types|Managed and unmanaged disks, All Storage Types |Disk Server Side Encryption with Customer Managed Keys|No|Yes| | |Write Accelerator|No|No|Yes|
This article provides a feature analysis of Cloud Services (extended support) an
| Feature | Cloud Services (extended Support) | Virtual Machine Scale Sets (Flex) | Virtual Machine Scale Sets (Uniform) | ||||| |Availability SLA|[SLA](https://azure.microsoft.com/support/legal/sla/cloud-services/v1_5/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|[SLA](https://azure.microsoft.com/support/legal/sla/virtual-machine-scale-sets/v1_1/)|
-|Availability Zones|No|Specify instances land across 1, 2 or 3 availability zones|Specify instances land across 1, 2 or 3 availability zones|
+|Availability Zones|No|Specify instances land across 1, 2, or 3 availability zones|Specify instances land across 1, 2, or 3 availability zones|
|Assign VM to a Specific Availability Zone|No|Yes|No|
-|Fault Domain ΓÇô Max Spreading (Azure will maximally spread instances)|Yes|Yes|Yes|
-|Fault Domain ΓÇô Fixed Spreading|5 update domains|2-3 FDs (depending on regional maximum FD Count); 1 for zonal deployments|2, 3 5 FDs 1, 5 for zonal deployments|
+|Fault Domain – Max Spreading (Azure maximally spreads instances)|Yes|Yes|Yes|
+|Fault Domain – Fixed Spreading|Five update domains|2-3 FDs (depending on regional maximum FD Count); 1 for zonal deployments|2, 3, 5 FDs; 1, 5 for zonal deployments|
|Assign VM to a Specific Fault Domain|No|Yes|No|
-|Update Domains|Yes|Depreciated (platform maintenance performed FD by FD)|5 update domains|
+|Update Domains|Yes|Deprecated (platform maintenance performed FD by FD)|Five update domains|
|Perform Maintenance|No|Trigger maintenance on each instance using VM API|Yes| |VM Deallocation|No|Yes|Yes|
This article provides a feature analysis of Cloud Services (extended support) an
|Infiniband Networking|No|No|Yes, single placement group only| |Azure Load Balancer Basic SKU|Yes|No|Yes| |Network Port Forwarding|Yes (NAT Pool for role instance input endpoints)|Yes (NAT Rules for individual instances)|Yes (NAT Pool)|
-|Edge Sites|No|Yes|Yes|
+|Edge Sites|No|Yes|Yes|
|Ipv6 Support|No|Yes|Yes| |Internal Load Balancer|No |Yes|Yes|
cloud-services-extended-support Generate Template Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/generate-template-portal.md
Previously updated : 03/07/2021 Last updated : 07/24/2024
This article explains how to download the ARM template and parameter file from t
## Get ARM template via portal
- 1. Go to the Azure portal and [create a new cloud service](deploy-portal.md). Add your cloud service configuration, package and definition files.
+ 1. Go to the Azure portal and [create a new cloud service](deploy-portal.md). Add your cloud service configuration, package, and definition files.
:::image type="content" source="media/deploy-portal-4.png" alt-text="Image shows the upload section of the basics tab during creation.":::
- 2. Once all fields have been completed, move to the Review and Create tab to validate your deployment configuration and click on **Download template for automation** your Cloud Service (extended support).
+ 2. Once you complete all fields, move to the Review and Create tab to validate your deployment configuration, and then select **Download template for automation** for your Cloud Service (extended support).
:::image type="content" source="media/download-template-portal-1.png" alt-text="Image shows downloading the template under cloud service (extended support) on the Azure portal."::: 3. Download your template and parameter files.
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-common-errors.md
Previously updated : 2/08/2021 Last updated : 07/24/2024 # Common errors and known issues when migrating to Azure Cloud Services (extended support)
Following issues are known and being addressed.
| Known issues | Mitigation | |||
-| Role Instances restarting UD by UD after successful commit. | Restart operation follows the same method as monthly guest OS rollouts. Do not commit migration of cloud services with single role instance or impacted by restart.|
-| Azure portal cannot read migration state after browser refresh. | Rerun validate and prepare operation to get back to the original migration state. |
+| Role Instances restarting UD by UD after successful commit. | Restart operation follows the same method as monthly guest OS rollouts. Don't commit migration of cloud services that have a single role instance or that are impacted by restart.|
+| Azure portal can't read migration state after browser refresh. | Rerun validate and prepare operation to get back to the original migration state. |
| Certificate displayed as secret resource in key vault. | After migration, reupload the certificate as a certificate resource to simplify update operation on Cloud Services (extended support). | | Deployment labels not getting saved as tags as part of migration. | Manually create the tags after migration to maintain this information.
-| Resource Group name is in all caps. | Non-impacting. Solution not yet available. |
-| Name of the lock on Cloud Services (extended support) lock is incorrect. | Non-impacting. Solution not yet available. |
-| IP address name is incorrect on Cloud Services (extended support) portal blade. | Non-impacting. Solution not yet available. |
-| Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Non-impacting. Solution not yet available. |
-| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable isn't allowed. | Do not link a new cloud service as swappable to a prepared cloud service. |
-| Error messages need to be updated. | Non-impacting. |
+| Resource Group name is in all caps. | Nonimpacting. Solution not yet available. |
+| Name of the lock on Cloud Services (extended support) lock is incorrect. | Nonimpacting. Solution not yet available. |
+| IP address name is incorrect on Cloud Services (extended support) portal blade. | Nonimpacting. Solution not yet available. |
+| Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Nonimpacting. Solution not yet available. |
+| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable isn't allowed. | Don't link a new cloud service as swappable to a prepared cloud service. |
+| Error messages need to be updated. | Nonimpacting. |
## Common migration errors Common migration errors and mitigation steps. | Error message | Details | |||
-| The resource type could not be found in the namespace `Microsoft.Compute` for api version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#setup-access-for-migration) for CloudServices feature flag to access public preview. |
+| The resource type couldn't be found in the namespace `Microsoft.Compute` for API version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#set-up-access-for-migration) for CloudServices feature flag to access public preview. |
| The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. | | The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. |
-| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. |
+| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment isn't located in a virtual network. For more information, see [the Migration of deployments not in a virtual network section of Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network). |
| Migration of deployment deployment-name in cloud service cloud-service-name isn't supported because it is in region region-name. Allowed regions: [list of available regions]. | Region isn't yet supported for migration. |
-| The Deployment deployment-name in cloud service cloud-service-name cannot be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. |
-| The deployment deployment-name in cloud service cloud-service-name cannot be migrated because the deployment requires at least one feature that not registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. Missing feature(s): [list of missing features]. | Contact support to get the feature flags registered. |
-| The deployment cannot be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. Delete the other deployment in the cloud service to proceed with the migration of this deployment. | Refer to the [unsupported scenario](in-place-migration-technical-details.md#unsupported-configurations--migration-scenarios) list for more details. |
-| Deployment deployment-name in HostedService cloud-service-name is in intermediate state: state. Migration not allowed. | Deployment is either being created, deleted or updated. Wait for the operation to complete and retry. |
+| The Deployment deployment-name in cloud service cloud-service-name can't be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. |
+| The deployment deployment-name in cloud service cloud-service-name can't be migrated because the deployment requires at least one feature that isn't registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. | Contact support to get the feature flags registered. |
+| The deployment can't be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. To proceed with the migration of this deployment, delete the other deployment in the cloud service. | For more information, see the [unsupported scenario list](in-place-migration-technical-details.md#unsupported-configurations--migration-scenarios). |
+| Deployment deployment-name in HostedService cloud-service-name is in intermediate state: state. Migration not allowed. | Deployment is either being created, deleted, or updated. Wait for the operation to complete and retry. |
| The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) but no reserved IP name. To resolve this issue, update reserved IP name or contact the Microsoft Azure service desk. | Update cloud service deployment. | | The deployment deployment-name in hosted service cloud-service-name has reserved IP(s) reserved-ip-name but no endpoint on the reserved IP. To resolve this issue, add at least one endpoint to the reserved IP. | Add endpoint to reserved IP. |
-| Migration of Deployment {0} in HostedService {1} is in the process of being committed and cannot be changed until it completes successfully. | Wait or retry operation. |
-| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait or retry operation. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being committed and can't be changed until it completes successfully. | Wait or retry operation. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and can't be changed until it completes successfully. | Wait or retry operation. |
| One or more VMs in Deployment {0} in HostedService {1} is undergoing an update operation. It can't be migrated until the previous operation completes successfully. Retry after sometime. | Wait for operation to complete. |
-| Migration isn't supported for Deployment {0} in HostedService {1} because it uses following features not yet supported for migration: Non-vnet deployment.| Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. |
-| The virtual network name cannot be null or empty. | Provide virtual network name in the REST request body |
-| The Subnet Name cannot be null or empty. | Provide subnet name in the REST request body. |
+| Migration isn't supported for Deployment {0} in HostedService {1} because it uses the following features not yet supported for migration: Nonvnet deployment.| Deployment isn't located in a virtual network. For more information, see [the Migration of deployments not in a virtual network section of Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network). |
+| The virtual network name can't be null or empty. | Provide virtual network name in the REST request body |
+| The Subnet Name can't be null or empty. | Provide subnet name in the REST request body. |
| DestinationVirtualNetwork must be set to one of the following values: Default, New, or Existing. | Provide DestinationVirtualNetwork property in the REST request body. |
-| Default VNet destination option not implemented. | ΓÇ£DefaultΓÇ¥ value isn't supported for DestinationVirtualNetwork property in the REST request body. |
-| The deployment {0} cannot be migrated because the CSPKG isn't available. | Upgrade the deployment and try again. |
-| The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. |
-| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. |
-| Deployment {0} in HostedService {1} has not been prepared for Migration. | Run prepare on the cloud service before running the commit operation. |
+| Default virtual network destination option not implemented. | "Default" value isn't supported for DestinationVirtualNetwork property in the REST request body. |
+| The deployment {0} can't be migrated because the CSPKG isn't available. | Upgrade the deployment and try again. |
+| The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. |
+| Migration of Deployment {0} in HostedService {1} is in the process of being aborted and can't be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. |
+| Deployment {0} in HostedService {1} hasn't been prepared for Migration. | Run prepare on the cloud service before running the commit operation. |
| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). |
-| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. | Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) |
-|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration could not be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Please abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name'in the service definition file does not match the name 'service-name-in-config-file' in the service configuration file|match the service names in both .csdef and .cscfg file|
+| The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration after the quota is raised. | To request a quota increase, follow the appropriate channels: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) |
+|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration couldn't be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name' in the service definition file doesn't match the name 'service-name-in-config-file' in the service configuration file. | Match the service names in both the .csdef and .cscfg files. |
|NetworkingInternalOperationError when deploying Cloud Service (extended support) resource| The issue may occur if the service name is the same as the role name. The recommended remediation is to use different names for the service and the roles.| ## Next steps
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
Previously updated : 2/08/2021 Last updated : 07/24/2024 # Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)
This document provides an overview for migrating Cloud Services (classic) to Clo
[Cloud Services (extended support)](overview.md) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some Azure Resource Manager capabilities such as role-based access control (RBAC), tags, policy, deployment templates, and private link. Both deployment models (extended support and classic) are available with [similar pricing structures](https://azure.microsoft.com/pricing/details/cloud-services/).
-Cloud Services (extended support) supports two paths for customers to migrate from Azure Service Manager to Azure Resource
+Cloud Services (extended support) supports two paths for customers to migrate from Azure Service Manager to Azure Resource
-The below table highlights comparison between these two options.
+The following table highlights a comparison between these two options.
| Redeploy | In-place migration |
The below table highlights comparison between these two options.
| Redeploy allows customers to: <br><br> - Define resource names. <br><br> - Organize or reuse resources as preferred. <br><br> - Reuse service configuration and definition files with minimal changes. | For in-place migration, the platform: <br><br> - Defines resource names. <br><br> - Organizes each deployment and related resources in individual Resource Groups. <br><br> - Modifies existing configuration and definition file for Azure Resource Manager. | | Customers need to orchestrate traffic to the new deployment. | Migration retains IP address and data path remains the same. | | Customers need to delete the old cloud services in Azure Resource Manager. | Platform deletes the Cloud Services (classic) resources after migration. |
-| This is a lift and shift migration which offers more flexibility but requires additional time to migrate. | This is an automated migration which offers quick migration but less flexibility. |
+| This migration is a lift and shift scenario, which offers more flexibility but requires more time to migrate. | This scenario is an automated migration that offers quick migration but less flexibility. |
-When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support) you may want to investigate additional Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/overview-managed-cluster.md). These services will continue to feature additional capabilities, while Cloud Services (extended support) will primarily maintain feature parity with Cloud Services (classic.)
+When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support), you may want to investigate other Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/overview-managed-cluster.md). These services continue to feature other capabilities, while Cloud Services (extended support) maintains feature parity with Cloud Services (classic).
-Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application is not evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements.
+Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application isn't evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements.
## Redeploy Overview
Redeploying your services with [Cloud Services (extended support)](overview.md)
- There are no changes to the design, architecture, or components of web and worker roles. - No changes are required to runtime code as the data plane is the same as cloud services. - Azure GuestOS releases and associated updates are aligned with Cloud Services (classic). -- Underlying update process with respect to update domains, how upgrade proceeds, rollback, and allowed service changes during an update will not change.
+- Underlying update process with respect to update domains, how upgrade proceeds, rollback, and allowed service changes during an update remains unchanged.
A new Cloud Service (extended support) can be deployed directly in Azure Resource Manager using the following client tools:
The platform-supported migration provides the following key benefits:
- Enables seamless platform orchestrated migration with no downtime for most scenarios. Learn more about [supported scenarios](in-place-migration-technical-details.md). - Migrates existing cloud services in three simple steps: validate, prepare, commit (or abort). Learn more about how the [migration tool works](in-place-migration-overview.md#migration-steps).-- Provides the ability to test migrated deployments after successful preparation. Commit and finalize the migration while abort rolls back the migration.
+- Offers the ability to test migrated deployments after successful preparation. Commit finalizes the migration, while abort rolls it back.
The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](../virtual-machines/migration-classic-resource-manager-overview.md).
-## Setup access for migration
+## Set up access for migration
To perform this migration, you must be added as a coadministrator for the subscription and register the needed providers. 1. Sign in to the Azure portal. 2. On the Hub menu, select Subscription. If you don't see it, select All services.
-3. Find the appropriate subscription entry, and then look at the MY ROLE field. For a coadministrator, the value should be Account admin. If you're not able to add a co-administrator, contact a service administrator or co-administrator for the subscription to get yourself added.
+3. Find the appropriate subscription entry, and then look at the MY ROLE field. For a coadministrator, the value should be Account admin. If you're not able to add a coadministrator, contact a service administrator or coadministrator for the subscription to get yourself added.
-4. Register your subscription for Microsoft.ClassicInfrastructureMigrate namespace using [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli)
+4. Register your subscription for the Microsoft.ClassicInfrastructureMigrate namespace using the [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell), or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
The list of supported scenarios differs between Cloud Services (classic) and Vir
Customers can migrate their Cloud Services (classic) deployments using the same four operations used to migrate Virtual Machines (classic).
-1. **Validate Migration** - Validates that the migration will not be prevented by common unsupported scenarios.
-2. **Prepare Migration** ΓÇô Duplicates the resource metadata in Azure Resource Manager. All resources are locked for create/update/delete operations to ensure resource metadata is in sync across Azure Server Manager and Azure Resource Manager. All read operations will work using both Cloud Services (classic) and Cloud Services (extended support) APIs.
+1. **Validate Migration** - Validates that common unsupported scenarios won't prevent migration.
+2. **Prepare Migration** - Duplicates the resource metadata in Azure Resource Manager. All resources are locked for create/update/delete operations to ensure resource metadata is in sync across Azure Service Manager and Azure Resource Manager. All read operations work using both Cloud Services (classic) and Cloud Services (extended support) APIs.
3. **Abort Migration** - Removes resource metadata from Azure Resource Manager. Unlocks all resources for create/update/delete operations.
-4. **Commit Migration** - Removes resource metadata from Azure Service Manager. Unlocks the resource for create/update/delete operations. Abort is no longer allowed after commit has been attempted.
+4. **Commit Migration** - Removes resource metadata from Azure Service Manager. Unlocks the resource for create/update/delete operations. Abort is no longer allowed after a commit is attempted.
>[!NOTE] > Prepare, Abort, and Commit are idempotent; if an operation fails, a retry should fix the issue.
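For orientation, here's a minimal PowerShell sketch of these four operations using the classic (Azure Service Management) module. `$serviceName` and `$deploymentName` are placeholders, and the exact destination virtual network parameters depend on your deployment; see the PowerShell migration article for the full syntax.

```powershell
# Hypothetical placeholder values; replace with your own cloud service and deployment names.
$serviceName    = "contoso-classic"
$deploymentName = "contoso-classic-prod"

# 1. Validate that no unsupported scenario blocks the migration (shown here for a deployment
#    that isn't already in a virtual network).
Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

# 2. Prepare: duplicates resource metadata in Azure Resource Manager and locks write operations.
Move-AzureService -Prepare -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

# 3. Either roll back...
# Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName
# ...or finalize (abort is no longer possible after commit):
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName
```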
For more information, see [Overview of Platform-supported migration of IaaS reso
- Network Traffic Rules ## Supported configurations / migration scenarios
-These are top scenarios involving combinations of resources, features, and Cloud Services. This list is not exhaustive.
+The following list contains top scenarios involving combinations of resources, features, and Cloud Services. This list isn't exhaustive.
| Service | Configuration | Comments | |||| | [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md) | Virtual networks that contain Microsoft Entra Domain Services. | A virtual network containing both a Cloud Service deployment and Microsoft Entra Domain Services is supported. The customer first needs to migrate Microsoft Entra Domain Services separately, and then migrate the virtual network that's left with only the Cloud Service deployment. |
-| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It is not recommended to migrate staging slot as this can result in issues with retaining service FQDN. To migrate staging slot, first promote staging deployment to production and then migrate to ARM. |
-| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customer can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. |
+| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It isn't recommended to migrate the staging slot, as this process can result in issues with retaining the service FQDN. To migrate the staging slot, first promote the staging deployment to production and then migrate to Azure Resource Manager. |
+| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network, or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customers can use the Validate API to tell whether a deployment is inside a default virtual network and thus determine whether it can be migrated. |
|Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All XML extensions are supported for migration. |
-| Virtual Network | Virtual network containing multiple Cloud Services. | Virtual network contain multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it will be migrated together to Azure Resource Manager. |
-| Virtual Network | Migration of virtual networks created via Portal (Requires using "Group Resource-group-name VNet-Name" in .cscfg file) | As part of migration, the virtual network name in cscfg will be changed to use Azure Resource Manager ID of the virtual network. (subscription/subscription-id/resource-group/resource-group-name/resource/vnet-name) <br><br>To manage the deployment after migration, update the local copy of .cscfg file to start using Azure Resource Manager ID instead of virtual network name. <br><br>A .cscfg file that uses the old naming scheme will not pass validation.
+| Virtual Network | Virtual network containing multiple Cloud Services. | A virtual network containing multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it migrate together to Azure Resource Manager. |
+| Virtual Network | Migration of virtual networks created via Portal (Requires using "Group Resource-group-name VNet-Name" in the .cscfg file) | As part of migration, the virtual network name in the .cscfg file changes to use the Azure Resource Manager ID of the virtual network. (subscription/subscription-id/resource-group/resource-group-name/resource/vnet-name) <br><br>To manage the deployment after migration, update the local copy of the .cscfg file to start using the Azure Resource Manager ID instead of the virtual network name. <br><br>A .cscfg file that uses the old naming scheme fails validation.
| Virtual Network | Migration of deployment with roles in different subnet. | A cloud service with different roles in different subnets is supported for migration. | ## Next steps
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-portal.md
Previously updated : 2/08/2021 Last updated : 07/24/2024 # Migrate to Cloud Services (extended support) using the Azure portal
To perform this migration, you must be added as a coadministrator for the subscr
2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**. 3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. For a coadministrator, the value should be *Account admin*.
-If you're not able to add a co-administrator, contact a service administrator or [co-administrator](../role-based-access-control/classic-administrators.md) for the subscription to get yourself added.
+If you're not able to add a coadministrator, contact a service administrator or [coadministrator](../role-based-access-control/classic-administrators.md) for the subscription to get yourself added.
**Sign up for Migration resource provider** 1. Register with the migration resource provider `Microsoft.ClassicInfrastructureMigrate` and preview feature `Cloud Services` under the Microsoft.Compute namespace using the [Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
-1. Wait five minutes for the registration to complete then check the status of the approval.
+1. Wait five minutes for the registration to complete, then check the status of the approval.
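If you prefer to confirm the registration state from PowerShell rather than refreshing the portal, a minimal sketch (assuming the Az module is installed and you're signed in):

```powershell
# Check the classic migration provider and the CloudServices preview feature registration state.
Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate |
    Select-Object ProviderNamespace, RegistrationState

Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
```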
## Migrate your Cloud Service resources
If you're not able to add a co-administrator, contact a service administrator or
:::image type="content" source="media/in-place-migration-portal-1.png" alt-text="Image shows the Migrate to ARM blade in the Azure portal.":::
- If validate fails, a list of unsupported scenarios will be displayed and need to be fixed before migration can continue.
+ If validation fails, a list of unsupported scenarios is displayed. These issues need to be fixed before migration can continue.
:::image type="content" source="media/in-place-migration-portal-3.png" alt-text="Image shows validation error in the Azure portal."::: 5. Prepare for the migration.
- If the prepare is successful, the migration is ready for commit.
+ If the preparation is successful, the migration is ready for commit.
:::image type="content" source="media/in-place-migration-portal-4.png" alt-text="Image shows validation passing in the Azure portal.":::
- If the prepare fails, review the error, address any issues, and retry the prepare.
+ If the preparation fails, review the error, address any issues, and retry the preparation.
:::image type="content" source="media/in-place-migration-portal-5.png" alt-text="Image shows validation failure error.":::
If you're not able to add a co-administrator, contact a service administrator or
>[!IMPORTANT] > Once you commit to the migration, there is no option to roll back.
- Type in "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations".
+ Type in "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations.
## Next steps
-Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation, and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-powershell.md
ms.reviewer: mimckitt Previously updated : 02/06/2020 Last updated : 07/24/2024
These steps show you how to use Azure PowerShell commands to migrate from [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Cloud Services (extended support)](overview.md).
-## 1) Plan for migration
-Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) prior to beginning any migration steps.
+## Plan for migration
+Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) before beginning any migration steps.
-## 2) Install the latest version of PowerShell
+## Install the latest version of PowerShell
There are two main options to install Azure PowerShell: [PowerShell Gallery](https://www.powershellgallery.com/profiles/azure-sdk/) or [Web Platform Installer (WebPI)](https://aka.ms/webpi-azps). WebPI receives monthly updates. PowerShell Gallery receives updates on a continuous basis. This article is based on Azure PowerShell version 2.1.0. For installation instructions, see [How to install and configure Azure PowerShell](/powershell/azure/servicemanagement/install-azure-ps?preserve-view=true&view=azuresmps-4.0.0).
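As a rough sketch of the PowerShell Gallery path (exact versions may differ from the ones this article was written against), the migration commands later in this article rely on both the Az module for Azure Resource Manager operations and the classic Azure module for the Move-Azure* cmdlets:

```powershell
# Install both modules from the PowerShell Gallery for the current user.
# The Az module provides Register-AzResourceProvider, Get-AzProviderFeature, and related cmdlets.
Install-Module -Name Az -Scope CurrentUser -AllowClobber

# The classic (Azure Service Manager) module provides Move-AzureService, Move-AzureVirtualNetwork,
# and Select-AzureSubscription.
Install-Module -Name Azure -Scope CurrentUser -AllowClobber
```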
-## 3) Ensure Admin permissions
+## Ensure Admin permissions
To perform this migration, you must be added as a coadministrator for the subscription in the [Azure portal](https://portal.azure.com). 1. Sign in to the [Azure portal](https://portal.azure.com). 2. On the **Hub** menu, select **Subscription**. If you don't see it, select **All services**. 3. Find the appropriate subscription entry, and then look at the **MY ROLE** field. For a coadministrator, the value should be *Account admin*.
-If you're not able to add a co-administrator, contact a service administrator or co-administrator for the subscription to get yourself added.
+If you're not able to add a coadministrator, contact a service administrator or coadministrator for the subscription to get yourself added.
-## 4) Register the classic provider and CloudService feature
+## Register the classic provider and CloudService feature
First, start a PowerShell prompt. For migration, set up your environment for both classic and Resource Manager. Sign in to your account for the Resource Manager model.
Check the status of the classic provider approval by using the following command
Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate ```
-Check the status of registration using the following:
+Check the status of registration using the following command:
+ ```powershell Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute ```
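If you'd rather wait on the feature registration in a script than recheck manually, a small polling sketch using the same cmdlet (assuming the registration eventually succeeds):

```powershell
# Poll every 30 seconds until the CloudServices feature reports as registered.
while ((Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute).RegistrationState -ne "Registered") {
    Start-Sleep -Seconds 30
}
```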
Select-AzureSubscription -SubscriptionName "My Azure Subscription"
```
-## 5) Migrate your Cloud Services
+## Migrate your Cloud Services
Before starting the migration, understand how the [migration steps](./in-place-migration-overview.md#migration-steps) work and what each step does.
-* [Migrate a Cloud Service not in a virtual network](#51-option-1migrate-a-cloud-service-not-in-a-virtual-network)
-* [Migrate a Cloud Service in a virtual network](#51-option-2migrate-a-cloud-service-in-a-virtual-network)
+* [Migrate a Cloud Service not in a virtual network](#option-1migrate-a-cloud-service-not-in-a-virtual-network)
+* [Migrate a Cloud Service in a virtual network](#option-2migrate-a-cloud-service-in-a-virtual-network)
> [!NOTE] > All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform then tries the action again.
-### 5.1) Option 1 - Migrate a Cloud Service not in a virtual network
+### Option 1 - Migrate a Cloud Service not in a virtual network
Get the list of cloud services by using the following command. Then pick the cloud service that you want to migrate. ```powershell
If you're ready to complete the migration, commit the migration
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName ```
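If the prepared deployment doesn't look right and you decide not to proceed, roll back instead of committing; a minimal sketch using the same variables (abort is no longer available once a commit has been attempted):

```powershell
Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName
```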
-### 5.1) Option 2 - Migrate a Cloud Service in a virtual network
+### Option 2 - Migrate a Cloud Service in a virtual network
To migrate a Cloud Service in a virtual network, you migrate the virtual network. The Cloud Service automatically migrates with the virtual network.
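For reference, a minimal sketch of the validate and prepare calls for this path, assuming `$vnetName` holds the name of the classic virtual network:

```powershell
# Validate the virtual network (and the cloud services inside it) for migration.
Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName

# Prepare the migration; resources stay locked for write operations until you commit or abort.
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName
```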
If the prepared configuration looks good, you can move forward and commit the re
Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName ``` - ## Next steps
-Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation, and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
ms.reviewer: mimckitt Previously updated : 02/06/2020 Last updated : 07/24/2024
This article discusses the technical details regarding the migration tool as per
### Extensions and plugin migration -- All enabled and supported extensions will be migrated.
+- All enabled and supported extensions are migrated.
- Disabled extensions won't be migrated. -- Plugins are a legacy concept and should be removed before migration. They're supported for migration and but after migration, if extension needs to be enabled, plugin needs to be removed first before installing the extension. Remote desktop plugins and extensions are most impacted by this.
+- Plugins are a legacy concept and should be removed before migration. They're supported for migration, but after migration, if an extension needs to be enabled, the plugin needs to be removed before installing the extension. This limitation affects remote desktop plugins and extensions the most.
### Certificate migration - In Cloud Services (extended support), certificates are stored in a Key Vault. As part of migration, we create a Key Vault for the customer, named after the Cloud Service, and transfer all certificates from Azure Service Manager to Key Vault.
This article discusses the technical details regarding the migration tool as per
### Service Configuration and Service Definition files - The .cscfg and .csdef files need to be updated for Cloud Services (extended support) with minor changes. -- The names of resources like virtual network and VM SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration)
+- The names of resources like virtual network and virtual machine (VM) SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration)
- Customers can retrieve their new deployments through [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) and [REST API](/rest/api/compute/cloudservices/get). ### Cloud Service and deployments-- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployment are no longer grouped into a cloud service using slots.
+- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots.
- If you have two slots in your Cloud Service (classic), you need to delete one slot (staging) and use the migration tool to move the other (production) slot to Azure Resource Manager. - The public IP address on the Cloud Service deployment remains the same after migration to Azure Resource Manager and is exposed as a Basic SKU IP (dynamic or static) resource. - The DNS name and domain (cloudapp.net) for the migrated cloud service remains the same.
This article discusses the technical details regarding the migration tool as per
### Migration of deployments not in a virtual network - In late 2018, Azure started automatically creating new deployments (without a customer-specified virtual network) into a platform-created "default" virtual network. These default virtual networks are hidden from customers. -- As part of the migration, this default virtual network will be exposed to customers once in Azure Resource Manager. To manage or update the deployment in Azure Resource Manager, customers need to add this virtual network information in the NetworkConfiguration section of the .cscfg file.
+- As part of the migration, this default virtual network is exposed to customers once in Azure Resource Manager. To manage or update the deployment in Azure Resource Manager, customers need to add this virtual network information in the NetworkConfiguration section of the .cscfg file.
- The default virtual network, when migrated to Azure Resource Manager, is placed in the same resource group as the Cloud Service. - Cloud Services created before this time (before end of 2018) won't be in any virtual network and can't be migrated using the tool. Consider redeploying these Cloud Services directly in Azure Resource Manager. Another approach is to migrate by creating a new staging deployment and performing a VIP swap; check more details [here](./non-vnet-migration.md).-- To check if a deployment is eligible to migrate, run the validate API on the deployment. The result of Validate API will contain error message explicitly mentioning if this deployment is eligible to migrate.
+- To check if a deployment is eligible to migrate, run the validate API on the deployment. The result of the Validate API contains an error message explicitly stating whether the deployment is eligible to migrate.
### Load Balancer - For a Cloud Service using a public endpoint, a platform-created load balancer associated with the Cloud Service is exposed inside the customer's subscription in Azure Resource Manager. The load balancer is a read-only resource, and updates are restricted only through the Service Configuration (.cscfg) and Service Definition (.csdef) files. ### Key Vault-- As part of migration, Azure automatically creates a new Key Vault and migrates all the certificates to it. The tool does not allow you to use an existing Key Vault.
+- As part of migration, Azure automatically creates a new Key Vault and migrates all the certificates to it. The tool doesn't allow you to use an existing Key Vault.
- Cloud Services (extended support) requires a Key Vault located in the same region and subscription. This Key Vault is automatically created as part of the migration. ## Resources and features not available for migration
-These are top scenarios involving combinations of resources, features and Cloud Services. This list isn't exhaustive.
+This list contains the top scenarios involving combinations of resources, features, and Cloud Services. This list isn't exhaustive.
| Resource | Next steps / work-around | |||
These are top scenarios involving combinations of resources, features and Cloud
| Alerts | Migration goes through but alerts are dropped. [Recreate the rules](./enable-alerts.md) after migration on Cloud Services (extended support). | | VPN Gateway | Remove the VPN Gateway before beginning migration and then recreate the VPN Gateway once migration is complete. | | Express Route Gateway (in the same subscription as Virtual Network only) | Remove the Express Route Gateway before beginning migration and then recreate the Gateway once migration is complete. |
-| Quota | Quota is not migrated. [Request new quota](../azure-resource-manager/templates/error-resource-quota.md#solution) on Azure Resource Manager prior to migration for the validation to be successful. |
+| Quota | Quota isn't migrated. [Request new quota](../azure-resource-manager/templates/error-resource-quota.md#solution) on Azure Resource Manager prior to migration for the validation to be successful. |
| Affinity Groups | Not supported. Remove any affinity groups before migration. | | Virtual networks using [virtual network peering](../virtual-network/virtual-network-peering-overview.md)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager, and re-create the peering. This process can cause downtime depending on the architecture. | | Virtual networks that contain App Service environments | Not supported |
These are top scenarios involving combinations of resources, features and Cloud
| Configuration / Scenario | Next steps / work-around | |||
-| Migration of some older deployments not in a virtual network | Some Cloud Service deployments not in a virtual network aren't supported for migration. <br><br> 1. Use the validate API to check if the deployment is eligible to migrate. <br> 2. If eligible, the deployments will be moved to Azure Resource Manager under a virtual network with prefix of "DefaultRdfeVnet" |
+| Migration of some older deployments not in a virtual network | Some Cloud Service deployments not in a virtual network aren't supported for migration. <br><br> 1. Use the validate API to check if the deployment is eligible to migrate. <br> 2. If eligible, the deployments move to Azure Resource Manager under a virtual network with a prefix of "DefaultRdfeVnet". |
| Migration of deployments containing both production and staging slot deployment using dynamic IP addresses | Migration of a two slot Cloud Service requires deletion of the staging slot. Once the staging slot is deleted, migrate the production slot as an independent Cloud Service (extended support) in Azure Resource Manager. Then redeploy the staging environment as a new Cloud Service (extended support) and make it swappable with the first one. | | Migration of deployments containing both production and staging slot deployment using Reserved IP addresses | Not supported. | | Migration of production and staging deployment in different virtual network|Migration of a two slot cloud service requires deleting the staging slot. Once the staging slot is deleted, migrate the production slot as an independent cloud service (extended support) in Azure Resource Manager. A new Cloud Services (extended support) deployment can then be linked to the migrated deployment with swappable property enabled. Deployments files of the old staging slot deployment can be reused to create this new swappable deployment. | | Migration of empty Cloud Service (Cloud Service with no deployment) | Not supported. |
-| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration will then go through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).|
-| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This will cause downtime. |
+| Migration of deployment containing the remote desktop plugin and the remote desktop extensions | Option 1: Remove the remote desktop plugin before migration. This requires changes to deployment files. The migration then goes through. <br><br> Option 2: Remove remote desktop extension and migrate the deployment. Post-migration, remove the plugin and install the extension. This requires changes to deployment files. <br><br> Remove the plugin and extension before migration. [Plugins aren't recommended](./deploy-prerequisite.md#required-definition-file-updates) for use on Cloud Services (extended support).|
+| Virtual networks with both PaaS and IaaS deployment |Not Supported <br><br> Move either the PaaS or IaaS deployments into a different virtual network. This causes downtime. |
| Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge). | The role sizes need to be updated before migration. Update all deployment artifacts to reference these new modern role sizes. For more information, see [Available VM sizes](available-sizes.md)|
-| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. |
+| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This causes downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This causes downtime. |
| Cloud Service in a virtual network but doesn't have an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime) | ## Translation of resources and naming convention post migration
As part of migration, the resource names are changed, and few Cloud Services fea
| Cloud Services (classic) <br><br> Resource name | Cloud Services (classic) <br><br> Syntax| Cloud Services (extended support) <br><br> Resource name| Cloud Services (extended support) <br><br> Syntax | ||||| | Cloud Service | `cloudservicename` | Not associated| Not associated |
-| Deployment (portal created) <br><br> Deployment (non-portal created) | `deploymentname` | Cloud Services (extended support) | `cloudservicename` |
+| Deployment (portal created) <br><br> Deployment (nonportal created) | `deploymentname` | Cloud Services (extended support) | `cloudservicename` |
| Virtual Network | `vnetname` <br><br> `Group resourcegroupname vnetname` <br><br> Not associated | Virtual Network (not portal created) <br><br> Virtual Network (portal created) <br><br> Virtual Networks (Default) | `vnetname` <br><br> `group-resourcegroupname-vnetname` <br><br> `VNet-cloudservicename`| | Not associated | Not associated | Key Vault | `KV-cloudservicename` | | Not associated | Not associated | Resource Group for Cloud Service Deployments | `cloudservicename-migrated` | | Not associated | Not associated | Resource Group for Virtual Network | `vnetname-migrated` <br><br> `group-resourcegroupname-vnetname-migrated`| | Not associated | Not associated | Public IP (Dynamic) | `cloudservicenameContractContract` |
-| Reserved IP Name | `reservedipname` | Reserved IP (non-portal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` |
+| Reserved IP Name | `reservedipname` | Reserved IP (nonportal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` |
| Not associated| Not associated | Load Balancer | `LB-cloudservicename`|
As part of migration, the resource names are changed, and few Cloud Services fea
- Contact support to help migrate or roll back the deployment from the backend. ### Migration failed in an operation. -- If validate failed, it is because the deployment or virtual network contains an unsupported scenario/feature/resource. Use the list of unsupported scenarios to find the work-around in the documents.
+- If validation failed, it's because the deployment or virtual network contains an unsupported scenario, feature, or resource. Use the list of unsupported scenarios to find the workaround in the documentation.
- Prepare operation first does validation including some expensive validations (not covered in validate). Prepare failure could be due to an unsupported scenario. Find the scenario and the work-around in the public documents. Abort needs to be called to go back to the original state and unlock the deployment for updates and delete operations. - If abort failed, retry the operation. If retries fail, then contact support.-- If commit failed, retry the operation. If retry fail, then contact support. Even in commit failure, there should be no data plane issue to your deployment. Your deployment should be able to handle customer traffic without any issue.
+- If the commit failed, retry the operation. If retries fail, contact support. Even in commit failure, there should be no data plane issue with your deployment. Your deployment should be able to handle customer traffic without any issue.
### Portal refreshed after Prepare. Experience restarted and Commit or Abort not visible anymore. - The portal stores the migration information locally; after a refresh, it starts from the validate phase even if the Cloud Service is in the prepare phase.
As part of migration, the resource names are changed, and few Cloud Services fea
- Customers can use PowerShell or REST API to abort or commit. ### How much time can the operations take?<br>
-Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs.
+Validate is designed to be quick. Prepare is the longest running operation, and the time it takes depends on the total number of role instances being migrated. Abort and commit also take time, but less than prepare. All operations time out after 24 hours.
## Next steps For assistance migrating your Cloud Services (classic) deployment to Cloud Services (extended support) see our [Support and troubleshooting](support-help.md) landing page.
cloud-services-extended-support Non Vnet Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/non-vnet-migration.md
Title: Migrate cloud services not in a virtual network to a virtual network
-description: How to migrate non-vnet cloud services to a virtual network
+description: How to migrate cloud services that aren't in a virtual network to a virtual network
Previously updated : 01/24/2024 Last updated : 07/24/2024 # Migrate cloud services not in a virtual network to a virtual network
-Some legacy cloud services are still running without Vnet support. While there's a process for migrating directly through the portal, there are certain considerations that should be made prior to migration. This article walks you through the process of migrating a non Vnet supporting Cloud Service to a Vnet supporting Cloud Service.
+Some legacy cloud services are still running without virtual network support. While there's a process for migrating directly through the portal, there are certain considerations to make before migration. This article walks you through the process of migrating a Cloud Service that doesn't support virtual networks to one that does.
## Advantages of this approach
Some legacy cloud services are still running without Vnet support. While there's
## Migration procedure using the Azure portal
-1. Create a non vnet classic cloud service in the same region as the vnet you want to migrate to. In the Azure portal, select the 'Staging' drop-down.
+1. Create a classic cloud service that isn't in a virtual network, in the same region as the virtual network you want to migrate to. In the Azure portal, select the 'Staging' drop-down.
![Screenshot of the staging drop-down in the Azure portal.](./media/vnet-migrate-staging.png)
-1. Create a deployment with same configuration as existing deployment by selecting 'Upload' next to the staging drop-down. The platform creates a Default Vnet deployment in staging slot.
+1. Create a deployment with the same configuration as the existing deployment by selecting 'Upload' next to the staging drop-down. The platform creates a default virtual network deployment in the staging slot.
![Screenshot of the upload button in the Azure portal.](./media/vnet-migrate-upload.png) 1. Once the staging deployment is created, the URL, IP address, and label populate.
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/override-sku.md
Previously updated : 04/05/2021 Last updated : 07/24/2024
This article describes how to update the role size and instance count in Azure C
## Set the allowModelOverride property You can set the **allowModelOverride** property to `true` or `false`.
-* When **allowModelOverride** is set to `true`, an API call will update the role size and instance count for the cloud service without validating the values with the .csdef and .cscfg files.
+* When **allowModelOverride** is set to `true`, an API call updates the role size and instance count for the cloud service without validating the values with the .csdef and .cscfg files.
> [!Note] > The .cscfg file will be updated to reflect the role instance count. The .csdef file (embedded within the .cspkg) will retain the old values.
The default value is `false`. If the property is reset to `false` after being se
The following samples show how to set the **allowModelOverride** property by using an Azure Resource Manager (ARM) template, PowerShell, or the SDK. ### ARM template
-Setting the **allowModelOverride** property to `true` here will update the cloud service with the role properties defined in the `roleProfile` section:
+Setting the **allowModelOverride** property to `true` here updates the cloud service with the role properties defined in the `roleProfile` section:
```json "properties": { "packageUrl": "[parameters('packageSasUri')]",
Setting the **allowModelOverride** property to `true` here will update the cloud
``` ### PowerShell
-Setting the `AllowModelOverride` switch on the new `New-AzCloudService` cmdlet will update the cloud service with the SKU properties defined in the role profile:
+Setting the `AllowModelOverride` switch on the new `New-AzCloudService` cmdlet updates the cloud service with the SKU properties defined in the role profile:
```powershell New-AzCloudService ` -Name "ContosoCS" `
New-AzCloudService `
-Tag $tag ``` ### SDK
-Setting the `AllowModelOverride` variable to `true` will update the cloud service with the SKU properties defined in the role profile:
+Setting the `AllowModelOverride` variable to `true` updates the cloud service with the SKU properties defined in the role profile:
```csharp CloudService cloudService = new CloudService
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/overview.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # About Azure Cloud Services (extended support)
-Cloud Services (extended support) is a new [Azure Resource Manager](../azure-resource-manager/management/overview.md) based deployment model for [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) product and is now generally available. Cloud Services (extended support) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some ARM capabilities such as role-based access and control (RBAC), tags, policy, and supports deployment templates.
+Cloud Services (extended support) is a new [Azure Resource Manager](../azure-resource-manager/management/overview.md) based deployment model for the [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) product and is now generally available. Cloud Services (extended support) has the primary benefit of providing regional resiliency along with feature parity with Azure Cloud Services deployed using Azure Service Manager. It also offers some Azure Resource Manager capabilities such as role-based access control (RBAC), tags, policy, and deployment templates.
-With this change, the Azure Service Manager based deployment model for Cloud Services will be renamed [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md). You will retain the ability to build and rapidly deploy your web and cloud applications and services. You will be able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs.
+With this change, the Azure Service Manager based deployment model for Cloud Services is renamed to [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md). You retain the ability to build and rapidly deploy your web and cloud applications and services. You're able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs.
:::image type="content" source="media/inside-azure-for-iot.png" alt-text="YouTube video for Cloud Services (extended support)." link="https://youtu.be/H4K9xTUvNdw":::
-## What does not change
+## What doesn't change
- You create the code, define the configurations, and deploy it to Azure. Azure sets up the compute environment, runs your code then monitors and maintains it for you. - Cloud Services (extended support) also supports two types of roles, [web and worker](../cloud-services/cloud-services-choose-me.md). There are no changes to the design, architecture, or components of web and worker roles. -- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there is no change in the [formats](cloud-services-model-and-package.md).
+- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there's no change in the [formats](cloud-services-model-and-package.md).
- No changes are required to runtime code as data plane is the same and control plane is only changing. - Azure GuestOS releases and associated updates are aligned with Cloud Services (classic) - Underlying update process with respect to update domains, how upgrade proceeds, rollback and allowed service changes during an update don't change ## Changes in deployment model
-Minimal changes are required to Service Configuration (.cscfg) and Service Definition (.csdef) files to deploy Cloud Services (extended support). No changes are required to runtime code. However, deployment scripts will need to be updated to call the new Azure Resource Manager based APIs.
+Minimal changes are required to Service Configuration (.cscfg) and Service Definition (.csdef) files to deploy Cloud Services (extended support). No changes are required to runtime code. However, deployment scripts need to be updated to call the new Azure Resource Manager based APIs.
:::image type="content" source="media/overview-image-1.png" alt-text="Image shows classic cloud service configuration with addition of template section. "::: The major differences between Cloud Services (classic) and Cloud Services (extended support) with respect to deployment are: -- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md), which is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. Service Configuration and Service definition file needs to be consistent with the [ARM Template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). This can be achieved either by [manually creating the ARM template](deploy-template.md) or using [PowerShell](deploy-powershell.md), [Portal](deploy-portal.md) and [Visual Studio](deploy-visual-studio.md).
+- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md): JavaScript Object Notation (JSON) files that define the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. The Service Configuration and Service Definition files need to be consistent with the [ARM template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). You can achieve this consistency either by [manually creating the ARM template](deploy-template.md) or by using [PowerShell](deploy-powershell.md), the [Portal](deploy-portal.md), or [Visual Studio](deploy-visual-studio.md); see the deployment sketch after this list.
-- Customers must use [Azure Key Vault](../key-vault/general/overview.md) to [manage certificates in Cloud Services (extended support)](certificates-and-key-vault.md). Azure Key Vault lets you securely store and manage application credentials such as secrets, keys and certificates in a central and secure cloud repository. Your applications can authenticate to Key Vault at run time to retrieve credentials.
+- Customers must use [Azure Key Vault](../key-vault/general/overview.md) to [manage certificates in Cloud Services (extended support)](certificates-and-key-vault.md). Azure Key Vault lets you securely store and manage application credentials such as secrets, keys, and certificates in a central and secure cloud repository. Your applications can authenticate to Key Vault at run time to retrieve credentials.
-- All resources deployed through the [Azure Resource Manager](../azure-resource-manager/templates/overview.md) must be inside a virtual network. Virtual networks and subnets are created in Azure Resource Manager using existing Azure Resource Manager APIs and will need to be referenced within the NetworkConfiguration section of the .cscfg when deploying Cloud Services (extended support).
+- All resources deployed through the [Azure Resource Manager](../azure-resource-manager/templates/overview.md) must be inside a virtual network. Virtual networks and subnets are created in Azure Resource Manager using existing Azure Resource Manager APIs. They need to be referenced within the NetworkConfiguration section of the .cscfg when deploying Cloud Services (extended support).
-- Each cloud service (extended support) is a single independent deployment. Cloud services (extended support) does not support multiple slots within a single cloud service.
+- Each cloud service (extended support) is a single independent deployment. Cloud Services (extended support) doesn't support multiple slots within a single cloud service.
- VIP Swap capability may be used to swap between two cloud services (extended support). To test and stage a new release of a cloud service, deploy a cloud service (extended support) and tag it as VIP swappable with another cloud service (extended support) - Domain Name Service (DNS) label is optional for a cloud service (extended support). In Azure Resource Manager, the DNS label is a property of the Public IP resource associated with the cloud service.
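For illustration, a minimal sketch of deploying a Cloud Services (extended support) ARM template with PowerShell; the resource group name, template file, and parameter file are hypothetical placeholders:

```powershell
# Deploy an ARM template that defines the cloud service (extended support) and references
# its supporting resources (virtual network, public IP, Key Vault certificates).
New-AzResourceGroupDeployment `
    -ResourceGroupName "contoso-cses-rg" `
    -TemplateFile ".\cloudservice-template.json" `
    -TemplateParameterFile ".\cloudservice-parameters.json"
```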
Cloud Services (extended support) provides two paths for you to migrate from [Az
### Additional migration options
-When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support) you may want to investigate additional Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/service-fabric-overview.md). These services will continue to feature additional capabilities, while Cloud Services (extended support) will primarily maintain feature parity with Cloud Services (classic.)
+When evaluating migration plans from Cloud Services (classic) to Cloud Services (extended support), you may want to investigate other Azure services such as: [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md), [App Service](../app-service/overview.md), [Azure Kubernetes Service](../aks/intro-kubernetes.md), and [Azure Service Fabric](../service-fabric/service-fabric-overview.md). These services continue to feature additional capabilities, while Cloud Services (extended support) maintains feature parity with Cloud Services (classic).
-Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application is not evolving, Cloud Services (extended support) is a viable option to consider as it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, do explore other Azure services to better address your current and future requirements.
+Depending on the application, Cloud Services (extended support) may require substantially less effort to move to Azure Resource Manager compared to other options. If your application isn't evolving, Cloud Services (extended support) is a viable option to consider because it provides a quick migration path. Conversely, if your application is continuously evolving and needs a more modern feature set, explore other Azure services to better address your current and future requirements.
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Title: Azure Cloud Services (extended support) post migration changes
+ Title: Azure Cloud Services (extended support) post-migration changes
description: Overview of post migration changes after migrating to Cloud Services (extended support)
Previously updated : 2/08/2021 Last updated : 07/24/2024 # Post-migration changes
The Cloud Services (classic) deployment is converted to a Cloud Services (extend
## Changes to deployment files
-Minor changes are made to customerΓÇÖs .csdef and .cscfg file to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. Post migration retrieves your new deployment files or update the existing files. This will be needed for update/delete operations.
+Minor changes are made to the customer's .csdef and .cscfg files to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. After migration, retrieve your new deployment files or update the existing files; they're needed for update/delete operations.
- Virtual Network uses full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name.
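For illustration, a minimal sketch of how the `NetworkConfiguration` section of the .cscfg might look after migration; the subscription path, virtual network, subnet, and role names are placeholders:

```xml
<NetworkConfiguration>
  <!-- Full Azure Resource Manager resource ID of the virtual network -->
  <VirtualNetworkSite name="/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name" />
  <AddressAssignments>
    <InstanceAddress roleName="ContosoFrontEnd">
      <Subnets>
        <Subnet name="FrontEndSubnet" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
```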
Customers need to update their tooling and automation to start using the new API
- Recreate rules and policies required to manage and scale cloud services - [Auto Scale rules](configure-scaling.md) aren't migrated. After migration, recreate the auto scale rules. - [Alerts](enable-alerts.md) aren't migrated. After migration, recreate the alerts.
- - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates will be visible under settings on the tab called secrets.
+ - The Key Vault is created without any access policies. To view or manage your certificates, [create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault. Certificates are visible under settings on the tab called secrets.
## Changes to Certificate Management Post Migration
-As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, PowerShell or REST API.
+As a standard practice for managing your certificates, add all valid .pfx certificate files to the certificate store in Key Vault; updates then work via any client: the Azure portal, PowerShell, or the REST API.
Currently, the Azure portal validates that all the required certificates are uploaded to the certificate store in Key Vault and warns if a certificate isn't found. However, if you plan to use certificates as secrets, these certificates can't be validated for their thumbprint, and any update operation that involves adding secrets fails via the portal. Use PowerShell or the REST API for updates that involve secrets. ## Changes for Update via Visual Studio
-If you were publishing updates via Visual Studio directly, then you would need to first download the latest CSCFG file from your deployment post migration. Use this file as reference to add Network Configuration details to your current CSCFG file in Visual Studio project. Then build the solution and publish it. You might have to choose the Key Vault and Resource Group for this update.
+If you publish updates directly via Visual Studio, first download the latest .cscfg file from your deployment after migration. Use this file as a reference to add the network configuration details to the current .cscfg file in your Visual Studio project. Then build the solution and publish it. You might have to choose the Key Vault and resource group for this update.
## Next steps
cloud-services-extended-support Sample Create Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-create-cloud-service.md
Previously updated : 10/13/2020 Last updated : 07/24/2024
cloud-services-extended-support Sample Get Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-get-cloud-service.md
Previously updated : 10/13/2020 Last updated : 07/24/2024
cloud-services-extended-support Sample Reset Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-reset-cloud-service.md
Previously updated : 10/13/2020 Last updated : 07/24/2024 # Reset an Azure Cloud Service (extended support)
These samples cover various ways to reset an existing Azure Cloud Service (exten
$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Invoke-AzCloudServiceReimage -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```
-This command reimages 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+This command reimages two role instances, ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1, of the cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Reimage all roles of Cloud Service ```powershell
This command reimages role instance named ContosoFrontEnd_IN_0 of cloud service
$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Invoke-AzCloudServiceRebuild -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```
-This command rebuilds 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+This command rebuilds two role instances, ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1, of the cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Rebuild all roles of cloud service ```powershell
This command rebuilds all role instances of cloud service named ContosoCS that b
$roleInstances = @("ContosoFrontEnd_IN_0", "ContosoBackEnd_IN_1") Restart-AzCloudService -ResourceGroupName "ContosOrg" -CloudServiceName "ContosoCS" -RoleInstance $roleInstances ```
-This command restarts 2 role instances ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1 of cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+This command restarts two role instances, ContosoFrontEnd_IN_0 and ContosoBackEnd_IN_1, of the cloud service named ContosoCS that belongs to the resource group named ContosOrg.
## Restart all roles of cloud service ```powershell
cloud-services-extended-support Sample Update Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-update-cloud-service.md
Previously updated : 10/13/2020 Last updated : 07/24/2024
These samples cover various ways to update an existing Azure Cloud Service (extended support) deployment. ## Add an extension to existing Cloud Service
-Below set of commands adds a RDP extension to already existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+The following set of commands adds a Remote Desktop Protocol (RDP) extension to an existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
```powershell # Create RDP extension object $rdpExtension = New-AzCloudServiceRemoteDesktopExtensionObject -Name "RDPExtension" -Credential $credential -Expiration $expiration -TypeHandlerVersion "1.2.1"
$cloudService | Update-AzCloudService
``` ## Remove all extensions from a Cloud Service
-Below set of commands removes all extensions from existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+The following set of commands removes all extensions from the existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
$cloudService | Update-AzCloudService
``` ## Remove the remote desktop extension from Cloud Service
-Below set of commands removes RDP extension from existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+The following set of commands removes the RDP extension from the existing cloud service named ContosoCS that belongs to the resource group named ContosOrg.
```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
$cloudService | Update-AzCloudService
``` ## Scale-out / scale-in role instances
-Below set of commands shows how to scale-out and scale-in role instance count for cloud service named ContosoCS that belongs to the resource group named ContosOrg.
+The following set of commands shows how to scale out and scale in the role instance count for the cloud service named ContosoCS that belongs to the resource group named ContosOrg.
```powershell # Get existing cloud service $cloudService = Get-AzCloudService -ResourceGroup "ContosOrg" -CloudServiceName "ContosoCS"
cloud-services-extended-support Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-file.md
Title: Azure Cloud Services (extended support) Definition Schema (.cscfg File) |
description: Information related to the definition schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
# Azure Cloud Services (extended support) config schema (cscfg File)
-The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is cscfg.
+The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file and the virtual networking configuration file. The default extension for the service configuration file is cscfg.
-The service model is described by the [Cloud Service (extended support) definition schema](schema-csdef-file.md).
+The [Cloud Service (extended support) definition schema](schema-csdef-file.md) describes the service model.
By default, the Azure Diagnostics configuration schema file is installed to the `C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\<version>\schemas` directory. Replace `<version>` with the installed version of the [Azure SDK](https://azure.microsoft.com/downloads/).
The basic format of the service configuration file is as follows.
``` ## Schema definitions
-The following topics describe the schema for the `ServiceConfiguration` element:
+The following articles describe the schema for the `ServiceConfiguration` element:
- [Role Schema](schema-cscfg-role.md) - [NetworkConfiguration Schema](schema-cscfg-networkconfiguration.md)
The following table describes the attributes of the `ServiceConfiguration` eleme
| Attribute | Description | | | -- | |serviceName|Required. The name of the Cloud Service. The name given here must match the name specified in the service definition file.|
-|osFamily|Optional. Specifies the Guest OS that will run on role instances in the Cloud Service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> If you do not include an `osFamily` value and you have not set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.|
-|osVersion|Optional. Specifies the version of the Guest OS that will run on role instances in the Cloud Service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and will be automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.|
+|osFamily|Optional. Specifies the Guest OS that runs on role instances in the Cloud Service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> If you don't include an `osFamily` value and you have not set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.|
+|osVersion|Optional. Specifies the version of the Guest OS that runs on role instances in the Cloud Service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and are automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current, and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.|
|schemaVersion|Optional. Specifies the version of the Service Configuration schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side. For more information about schema and version compatibility, see [Azure Guest OS Releases and SDK Compatibility Matrix](../cloud-services/cloud-services-guestos-update-matrix.md).| The service configuration file must contain one `ServiceConfiguration` element. The `ServiceConfiguration` element may include any number of `Role` elements and zero or one `NetworkConfiguration` element.
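As a rough illustration, a minimal .cscfg skeleton showing these attributes; the service name, role name, OS family, and version values are hypothetical placeholders:

```xml
<ServiceConfiguration serviceName="ContosoService" osFamily="6" osVersion="*" schemaVersion="2015-04.2.6" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- One Role element per role defined in the service definition file -->
  <Role name="ContosoFrontEnd">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```

The `serviceName` value must match the name specified in the service definition file, as noted in the table above.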
cloud-services-extended-support Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-networkconfiguration.md
Title: Azure Cloud Services (extended support) NetworkConfiguration Schema | Mic
description: Information related to the network configuration schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
# Azure Cloud Services (extended support) config networkConfiguration schema
-The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and DNS values. These settings are optional for Cloud Services (classic).
+The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and Domain Name System (DNS) values. These settings are optional for Cloud Services (classic).
You can use the following resource to learn more about Virtual Networks and the associated schemas:
The following table describes the child elements of the `NetworkConfiguration` e
| Rule | Optional. Specifies the action that should be taken for a specified subnet range of IP addresses. The order of the rule is defined by a string value for the `order` attribute. The lower the rule number, the higher the priority. For example, rules could be specified with order numbers of 100, 200, and 300. The rule with the order number of 100 takes precedence over the rule that has an order of 200.<br /><br /> The action for the rule is defined by a string for the `action` attribute. Possible values are:<br /><br /> - `permit` - Specifies that only packets from the specified subnet range can communicate with the endpoint.<br />- `deny` - Specifies that access is denied to the endpoints in the specified subnet range.<br /><br /> The subnet range of IP addresses that are affected by the rule is defined by a string for the `remoteSubnet` attribute. The description for the rule is defined by a string for the `description` attribute.| | EndpointAcl | Optional. Specifies the assignment of access control rules to an endpoint. The name of the role that contains the endpoint is defined by a string for the `role` attribute. The name of the endpoint is defined by a string for the `endpoint` attribute. The name of the set of `AccessControl` rules that should be applied to the endpoint is defined by a string for the `accessControl` attribute. More than one `EndpointAcl` element can be defined.| | DnsServer | Optional. Specifies the settings for a DNS server. You can specify settings for DNS servers without a Virtual Network. The name of the DNS server is defined by a string for the `name` attribute. The IP address of the DNS server is defined by a string for the `IPAddress` attribute. The IP address must be a valid IPv4 address.|
-| VirtualNetworkSite | Mandatory. Specifies the name of the Virtual Network site in which you want deploy your Cloud Service. This setting does not create a Virtual Network Site. It references a site that has been previously defined in the network file for your Virtual Network. A Cloud Service (extended support) can only be a member of one Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.|
-| InstanceAddress | Mandatory. Specifies the association of a role to a subnet or set of subnets in the Virtual Network. When you associate a role name to an instance address, you can specify the subnets to which you want this role to be associated. The `InstanceAddress` contains a Subnets element. The name of the role that is associated with the subnet or subnets is defined by a string for the `roleName` attribute.You need to specify one instance address for each role defined for your cloud service|
+| VirtualNetworkSite | Mandatory. Specifies the name of the Virtual Network site in which you want to deploy your Cloud Service. This setting doesn't create a Virtual Network Site. It references a site previously defined in the network file for your Virtual Network. A Cloud Service (extended support) can only be a member of one Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.|
+| InstanceAddress | Mandatory. Specifies the association of a role to a subnet or set of subnets in the Virtual Network. When you associate a role name to an instance address, you can specify the subnets to which you want this role to be associated. The `InstanceAddress` contains a Subnets element. The name of the role that is associated with the subnet or subnets is defined by a string for the `roleName` attribute. You need to specify one instance address for each role defined for your cloud service.|
| Subnet | Mandatory. Specifies the subnet that corresponds to the subnet name in the network configuration file. The name of the subnet is defined by a string for the `name` attribute.|
-| ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. The allocation method for a reserved IP needs to be specified as `Static` for template and powershell deployments. Each deployment in a Cloud Service can be associated with only one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.|
+| ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. The allocation method for a reserved IP needs to be specified as `Static` for template and PowerShell deployments. Each deployment in a Cloud Service can be associated with only one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.|
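A compact sketch pulling these elements together; the names, addresses, and subscription path are placeholders rather than values from this article:

```xml
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="aclWeb">
      <!-- Lower order numbers take precedence -->
      <Rule order="100" action="permit" remoteSubnet="10.1.0.0/24" description="Allow corporate subnet" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="ContosoFrontEnd" endpoint="HttpIn" accessControl="aclWeb" />
  </EndpointAcls>
  <Dns>
    <DnsServers>
      <DnsServer name="dns1" IPAddress="10.1.0.4" />
    </DnsServers>
  </Dns>
  <VirtualNetworkSite name="/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name" />
  <AddressAssignments>
    <InstanceAddress roleName="ContosoFrontEnd">
      <Subnets>
        <Subnet name="FrontEndSubnet" />
      </Subnets>
    </InstanceAddress>
    <ReservedIPs>
      <ReservedIP name="ContosoReservedIP" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```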
## See also [Cloud Service (extended support) Configuration Schema](schema-cscfg-file.md).
cloud-services-extended-support Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-role.md
Title: Azure Cloud Services (extended support) Role Schema | Microsoft Docs
description: Information related to the role schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
The `Role` element of the configuration file specifies the number of role instan
For more information about the Azure Service Configuration Schema, see [Cloud Service (extended support) Configuration Schema](schema-cscfg-file.md). For more information about the Azure Service Definition Schema, see [Cloud Service (extended support) Definition Schema](schema-csdef-file.md).
-## <a name="Role"></a> role element
+## <a name="Role"></a> Role element
The following example shows the `Role` element and its child elements. ```xml
cloud-services-extended-support Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-file.md
Title: Azure Cloud Services (extended support) Definition Schema (csdef File) |
description: Information related to the definition schema (csdef) for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
By default, the Azure Diagnostics configuration schema file is installed to the
The default extension for the service definition file is csdef. ## Basic service definition schema
-The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element which restricts which roles can communicate to specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element which contains customer defined health probes of endpoints.
+The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition, and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element, which restricts which roles can communicate with specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element, which contains customer-defined health probes of endpoints.
The basic format of the service definition file is as follows.
The basic format of the service definition file is as follows.
``` ## Schema definitions
-The following topics describe the schema:
+The following articles describe the schema:
- [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md) - [WebRole Schema](schema-csdef-webrole.md)
The following table describes the attributes of the `ServiceDefinition` element.
| Attribute | Description | | -- | -- | | name |Required. The name of the service. The name must be unique within the service account.|
-| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` ΓÇô Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.|
+| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` - Sends the update to each role instance in a sequential manner after the previous instance successfully accepts the update.|
| schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side.| | upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a Cloud Service role or deployment](sample-update-cloud-service.md) and [Manage the availability of virtual machines](../virtual-machines/availability.md). You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
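As a sketch, a bare-bones .csdef showing these attributes on the `ServiceDefinition` element; the role name, VM size, and version values are illustrative assumptions:

```xml
<ServiceDefinition name="ContosoService" upgradeDomainCount="5" schemaVersion="2015-04.2.6" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="ContosoWorker" vmsize="Standard_D1_v2">
    <ConfigurationSettings />
  </WorkerRole>
</ServiceDefinition>
```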
cloud-services-extended-support Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-loadbalancerprobe.md
Title: Azure Cloud Services (extended support) Def. LoadBalancerProbe Schema | M
description: Information related to the load balancer probe schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
# Azure Cloud Services (extended support) definition LoadBalancerProbe schema
-The load balancer probe is a customer defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` is not a standalone element; it is combined with the web role or worker role in a service definition file. A `LoadBalancerProbe` can be used by more than one role.
+The load balancer probe is a customer-defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` isn't a standalone element; it's combined with the web role or worker role in a service definition file. More than one role can use a `LoadBalancerProbe`.
The default extension for the service definition file is csdef. ## The function of a load balancer probe The Azure Load Balancer is responsible for routing incoming traffic to your role instances. The load balancer determines which instances can receive traffic by regularly probing each instance in order to determine the health of that instance. The load balancer probes every instance multiple times per minute. There are two different options for providing instance health to the load balancer: the default load balancer probe, or a custom load balancer probe, which is implemented by defining the LoadBalancerProbe in the csdef file.
-The default load balancer probe utilizes the Guest Agent inside the virtual machine, which listens and responds with an HTTP 200 OK response only when the instance is in the Ready state (like when the instance is not in the Busy, Recycling, Stopping, etc. states). If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role your website code typically runs in w3wp.exe which is not monitored by the Azure fabric or guest agent, which means failures in w3wp.exe (eg. HTTP 500 responses) is not be reported to the guest agent and the load balancer does not know to take that instance out of rotation.
+The default load balancer probe utilizes the Guest Agent inside the virtual machine, which listens and responds with an HTTP 200 OK response only when the instance is in the Ready state (like when the instance isn't in the Busy, Recycling, Stopping, etc. states). If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role, your website code typically runs in w3wp.exe, which isn't monitored by the Azure fabric or guest agent. This means failures in w3wp.exe (for example, HTTP 500 responses) aren't reported to the guest agent, and the load balancer doesn't know to take that instance out of rotation.
-The custom load balancer probe overrides the default guest agent probe and allows you to create your own custom logic to determine the health of the role instance. The load balancer regularly probes your endpoint (every 15 seconds, by default) and the instance is be considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds). This can be useful to implement your own logic to remove instances from load balancer rotation, for example returning a non-200 status if the instance is above 90% CPU. For web roles using w3wp.exe, this also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you do not define a LoadBalancerProbe in the csdef file, then the default load balancer behavior (as previously described) is be used.
+The custom load balancer probe overrides the default guest agent probe and allows you to create your own custom logic to determine the health of the role instance. The load balancer regularly probes your endpoint (every 15 seconds, by default), and the instance is considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds). This can be useful to implement your own logic to remove instances from load balancer rotation, for example returning a non-200 status if the instance is above 90% CPU. For web roles using w3wp.exe, this also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you don't define a LoadBalancerProbe in the csdef file, then the default load balancer behavior (as previously described) is used.
-If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When using the default load balancer probe, the instance is taken out of rotation prior to OnStop being called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you are using the OnStop event to clean up cache, stop service, or otherwise making changes that can affect the runtime behavior of your service, then you need to ensure that your custom load balancer probe logic removes the instance from rotation.
+If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When you use the default load balancer probe, the instance is taken out of rotation before OnStop is called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you use the OnStop event to clean up the cache, stop a service, or otherwise make changes that can affect the runtime behavior of your service, then you need to ensure that your custom load balancer probe logic removes the instance from rotation.
## Basic service definition schema for a load balancer probe The basic format of a service definition file containing a load balancer probe is as follows.
The following table describes the attributes of the `LoadBalancerProbe` element:
| - | -- | --| | `name` | `string` | Required. The name of the load balancer probe. The name must be unique.| | `protocol` | `string` | Required. Specifies the protocol of the endpoint. Possible values are `http` or `tcp`. If `tcp` is specified, a received ACK is required for the probe to be successful. If `http` is specified, a 200 OK response from the specified URI is required for the probe to be successful.|
-| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it is not allowed.<br /><br /> There is no default value.|
-| `port` | `integer` | Optional. The port for communicating the probe. This is optional for any endpoint, as the same port will then be used for the probe. You can configure a different port for their probing, as well. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.|
-| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds) which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15, the minimum value is 5.|
-| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response will result in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31, the minimum value is 11.|
+| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it isn't allowed.<br /><br /> There's no default value.|
+| `port` | `integer` | Optional. The port for communicating the probe. This attribute is optional for any endpoint, as the same port is used for the probe. You can also configure a different port for the probe. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.|
+| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds) which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15. The minimum value is 5.|
+| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response results in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31. The minimum value is 11.|
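To make the wiring concrete, here's a hedged sketch of a custom probe defined in the csdef and referenced from an endpoint; the probe name, path, role name, and VM size are hypothetical:

```xml
<ServiceDefinition name="ContosoService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <LoadBalancerProbes>
    <!-- Custom HTTP probe: the instance stays in rotation while /health returns 200 OK -->
    <LoadBalancerProbe name="HealthProbe" protocol="http" path="/health" intervalInSeconds="15" timeoutInSeconds="31" />
  </LoadBalancerProbes>
  <WebRole name="ContosoFrontEnd" vmsize="Standard_D1_v2">
    <Endpoints>
      <!-- loadBalancerProbe ties the endpoint to the probe defined above -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" loadBalancerProbe="HealthProbe" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```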
## See also [Cloud Service (extended support) Definition Schema](schema-csdef-file.md).
cloud-services-extended-support Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-networktrafficrules.md
Title: Azure Cloud Services (extended support) Def. NetworkTrafficRules Schema |
description: Information related to the network traffic rules associated with Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
# Azure Cloud Services (extended support) definition NetworkTrafficRules schema
-The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` is not a standalone element; it is combined with two or more roles in a service definition file.
+The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` isn't a standalone element; it's combined with two or more roles in a service definition file.
The default extension for the service definition file is csdef.
The basic format of a service definition file containing network traffic definit
``` ## Schema elements
-The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this topic:
+The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this article:
[NetworkTrafficRules Element](#NetworkTrafficRules)
The `NetworkTrafficRules` element specifies which roles can communicate with whi
The `OnlyAllowTrafficTo` element describes a collection of destination endpoints and the roles that can communicate with them. You can specify multiple `OnlyAllowTrafficTo` nodes. ## <a name="Destinations"></a> Destinations element
-The `Destinations` element describes a collection of RoleEndpoints than can be communicated with.
+The `Destinations` element describes a collection of RoleEndpoints that can be communicated with.
## <a name="RoleEndpoint"></a> RoleEndpoint element The `RoleEndpoint` element describes an endpoint on a role to allow communications with. You can specify multiple `RoleEndpoint` elements if there are more than one endpoint on the role.
The `RoleEndpoint` element describes an endpoint on a role to allow communicatio
The `AllowAllTraffic` element is a rule that allows all roles to communicate with the endpoints defined in the `Destinations` node. ## <a name="WhenSource"></a> WhenSource element
-The `WhenSource` element describes a collection of roles than can communicate with the endpoints defined in the `Destinations` node.
+The `WhenSource` element describes a collection of roles that can communicate with the endpoints defined in the `Destinations` node.
| Attribute | Type | Description | | | -- | -- |
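Putting the elements above together, a hedged sketch of a rule that lets only one role reach another role's internal endpoint, assuming the `matches="AnyRule"` form; role and endpoint names are hypothetical:

```xml
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <!-- The internal endpoint being protected -->
      <RoleEndpoint endpointName="InternalTcpIn" roleName="ContosoBackEnd" />
    </Destinations>
    <WhenSource matches="AnyRule">
      <!-- Only this role may send traffic to the destination endpoint -->
      <FromRole roleName="ContosoFrontEnd" />
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
```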
cloud-services-extended-support Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-webrole.md
Title: Azure Cloud Services (extended support) Def. WebRole Schema | Microsoft D
description: Information related to the web role for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
The basic format of a service definition file containing a web role is as follow
``` ## Schema elements
-The service definition file includes these elements, described in detail in subsequent sections in this topic:
+The service definition file includes these elements, described in detail in subsequent sections in this article:
[WebRole](#WebRole)
The name of the directory allocated to the local storage resource corresponds to
## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements.
-Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles.
+Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, allocate 25 input endpoints to a single role, or allocate one input endpoint each to 25 roles.
> [!NOTE] > Each role deployed requires one instance per role. The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than the default provisioning provides, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota.
The following table describes the attributes of the `InputEndpoint` element.
|protocol|string|Required. The transport protocol for the external endpoint. For a web role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.|
-|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to ΓÇ£*ΓÇ¥ to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
-|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.|
+|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate with an internal component on a port that differs from the one exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
+|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored, and the load balancer won't remove the endpoint. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.|
|loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint
-The `InternalEndpoint` element describes an internal endpoint to a web role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. Web roles that do not include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint.
+The `InternalEndpoint` element describes an internal endpoint to a web role. An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. Web roles that don't include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint.
The following table describes the attributes of the `InternalEndpoint` element.
The following table describes the attributes of the `InternalEndpoint` element.
| | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol, any port is allowed.|
-|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
+|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address and the port used on the private IP address. Typically, these are set to the same port, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
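For reference, a hedged sketch of an `Endpoints` collection for a web role using the attributes described above; the names, ports, and certificate reference are illustrative:

```xml
<Endpoints>
  <!-- External HTTP endpoint, mapped to a different internal port -->
  <InputEndpoint name="HttpIn" protocol="http" port="80" localPort="8080" />
  <!-- External HTTPS endpoint; the certificate name must match a Certificate element -->
  <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="ContosoSslCert" />
  <!-- Internal endpoint, reachable only from other role instances in the service -->
  <InternalEndpoint name="InternalHttpIn" protocol="http" port="8081" />
</Endpoints>
```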
## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a web role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer. Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element.
The following table describes the attributes of the `InstanceInputEndpoint` elem
| Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.|
-|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
+|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
|protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom
-The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
+The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher.
The following table describes the attributes of the `FixedPort` element.
| Attribute | Type | Description | | | - | -- |
-|port|int|Required. The port for the internal endpoint. This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
+|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint.
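A brief sketch combining these elements: an instance input endpoint that forwards a fixed public port range to a local port on each instance; the endpoint name and port numbers are illustrative:

```xml
<InstanceInputEndpoint name="InstanceRdpIn" protocol="tcp" localPort="3389">
  <AllocatePublicPortFrom>
    <!-- One public (VIP) port from this range is assigned to each role instance -->
    <FixedPortRange min="10016" max="10020" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
```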
The following table describes the attributes of the `Certificate` element.
| Attribute | Type | Description | | | - | -- |
-|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.|
+|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.|
|storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. The default value is `limitedOrElevated`.|
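As an illustrative sketch, a `Certificates` collection declaring the certificate referenced by the HTTPS endpoint shown earlier; the certificate name is hypothetical:

```xml
<Certificates>
  <!-- Referenced by name from an HTTPS InputEndpoint -->
  <Certificate name="ContosoSslCert" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated" />
</Certificates>
```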
The following table describes the attributes of the `Import` element.
| Attribute | Type | Description | | | - | -- |
-|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.|
+|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.|
## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a web role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block.
The following table describes the attributes of the `NetFxEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (do not specify **\\%ROLEROOT%\Approot** in `commandLine`, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly cannot be found relative to **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** is searched.<br /><br /> This fall back behavior for full IIS is not a recommend best practice and maybe removed in future versions.|
+|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles, the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly can't be found relative to the **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** folder is searched.<br /><br /> This fallback behavior for full IIS isn't a recommended best practice and may be removed in future versions.|
|targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="Sites"></a> Sites
-The `Sites` element describes a collection of websites and web applications that are hosted in a web role. This element is the parent of the `Site` element. If you do not specify a `Sites` element, your web role is hosted as legacy web role and you can only have one website hosted in your web role. This element is optional and a role can have only one sites block.
+The `Sites` element describes a collection of websites and web applications that are hosted in a web role. This element is the parent of the `Site` element. If you don't specify a `Sites` element, your web role is hosted as a legacy web role, and you can only have one website hosted in your web role. This element is optional, and a role can have only one sites block.
The `Sites` element is only available using the Azure SDK version 1.3 or higher.
The following table describes the attributes of the `VirtualApplication` element
| Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual application.|
-|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
+|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. When deployed to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
## <a name="VirtualDirectory"></a> VirtualDirectory The `VirtualDirectory` element specifies a directory name (also referred to as path) that you specify in IIS and map to a physical directory on a local or remote server.
The following table describes the attributes of the `VirtualDirectory` element.
| Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual directory.|
-|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents. In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
+|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents. In the compute emulator, IIS is configured to retrieve content from this location. When deployed to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
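A minimal sketch of a virtual directory mapping, assuming the standard `physicalDirectory` attribute and using placeholder names and paths:

```xml
<Site name="Web">
  <!-- Serves content packaged from a local folder under the /assets path -->
  <VirtualDirectory name="assets" physicalDirectory="..\StaticAssets" />
  <Bindings>
    <Binding name="HttpIn" endpointName="Endpoint1" />
  </Bindings>
</Site>
```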
## <a name="Bindings"></a> Bindings
-The `Bindings` element describes a collection of bindings for a website. It is the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](../cloud-services/cloud-services-enable-communication-role-instances.md).
+The `Bindings` element describes a collection of bindings for a website. It's the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](../cloud-services/cloud-services-enable-communication-role-instances.md).
The `Bindings` element is only available using the Azure SDK version 1.3 or higher.
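To make the relationship concrete, here's a hedged sketch of a `Bindings` collection tying a site to a declared endpoint (names are placeholders):

```xml
<Bindings>
  <!-- endpointName must match an InputEndpoint declared in the Endpoints section -->
  <Binding name="HttpIn" endpointName="Endpoint1" />
</Bindings>
```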
The following table describes the attributes of the `Task` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.|
+|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file don't process properly.|
|executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] – Run with the same privileges as the role hosting the process.<br />- `elevated` – Run with administrator privileges.|
-|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] ΓÇô System waits for the task to exit before any other tasks are launched.<br />- `background` ΓÇô System does not wait for the task to exit.<br />- `foreground` ΓÇô Similar to background, except role is not restarted until all foreground tasks exit.|
+|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] – System waits for the task to exit before any other tasks are launched.<br />- `background` – System doesn't wait for the task to exit.<br />- `foreground` – Similar to background, except role isn't restarted until all foreground tasks exit.|
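Putting those attributes together, a minimal startup task sketch (the script name is a placeholder) might look like this:

```xml
<Startup>
  <!-- Runs Startup.cmd with administrator privileges; later tasks wait for it to exit -->
  <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>
```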
## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a web role. This element is the parent of the `Content` element.
The `Contents` element describes the collection of content for a web role. This
The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content
-The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied.
+The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied.
The `Content` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `SourceDirectory` element.
| Attribute | Type | Description | | | - | -- |
-|path|string|Required. Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
+|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
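As a sketch of how these elements nest (the destination folder and source path are placeholders), a content copy configuration might look like the following:

```xml
<Contents>
  <!-- destination is resolved on the Azure virtual machine; path is a local project folder -->
  <Content destination="ExtraFiles">
    <SourceDirectory path="..\ExtraFiles" />
  </Content>
</Contents>
```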
-## See also
+## Next steps
[Cloud Service (extended support) Definition Schema](schema-csdef-file.md).----
cloud-services-extended-support Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-workerrole.md
Title: Azure Cloud Services (extended support) Def. WorkerRole Schema | Microsof
description: Information related to the worker role schema for Cloud Services (extended support) Previously updated : 10/14/2020 Last updated : 07/24/2024
The basic format of the service definition file containing a worker role is as f
``` ## Schema elements
-The service definition file includes these elements, described in detail in subsequent sections in this topic:
+The service definition file includes these elements, described in detail in subsequent sections in this article:
[WorkerRole](#WorkerRole)
The name of the directory allocated to the local storage resource corresponds to
## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements.
-Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles.
+Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, allocate 25 input endpoints to a single role, or allocate one input endpoint each to 25 roles.
> [!NOTE] > Each role deployed requires one instance per role. The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than the default provisioning provides, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota.
The following table describes the attributes of the `InputEndpoint` element.
|protocol|string|Required. The transport protocol for the external endpoint. For a worker role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.|
-|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to ΓÇ£*ΓÇ¥ to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
-|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.|
+|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate to an internal component on a port that differs from the one exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
+|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint won't be removed by the load balancer. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.|
|loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint
-The `InternalEndpoint` element describes an internal endpoint to a worker role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints.
+The `InternalEndpoint` element describes an internal endpoint to a worker role. An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints.
The following table describes the attributes of the `InternalEndpoint` element.
The following table describes the attributes of the `InternalEndpoint` element.
| | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol, any port is allowed.|
-|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
+|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address and the port used on the private IP address. Typically, these values are set to the same port, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
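For orientation, here's a hedged sketch of an `Endpoints` collection for a worker role that mixes an external endpoint with an internal one (names and ports are placeholders):

```xml
<Endpoints>
  <!-- External endpoint exposed through the load balancer -->
  <InputEndpoint name="HttpIn" protocol="tcp" port="80" localPort="80" />
  <!-- Internal endpoint reachable only by other role instances in the service -->
  <InternalEndpoint name="InternalTcp" protocol="tcp" port="8080" />
</Endpoints>
```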
## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a worker role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer. Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element.
The following table describes the attributes of the `InstanceInputEndpoint` elem
| Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.|
-|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
+|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
|protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom
-The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
+The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher.
The following table describes the attributes of the `FixedPort` element.
| Attribute | Type | Description | | | - | -- |
-|port|int|Required. The port for the internal endpoint. This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
+|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint.
The following table describes the attributes of the `Certificate` element.
| Attribute | Type | Description | | | - | -- |
-|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.|
+|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.|
|storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. The default value is `limitedOrElevated`.|
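For example, a certificate declaration might look like the following sketch (the certificate name is a placeholder):

```xml
<Certificates>
  <!-- Referenced by name from the certificate attribute of an HTTPS InputEndpoint -->
  <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated" />
</Certificates>
```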
The following table describes the attributes of the `Import` element.
| Attribute | Type | Description | | | - | -- |
-|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance|
+|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Extensions](extensions.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.|
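A minimal sketch of an `Imports` block that enables remote desktop connectivity for a role:

```xml
<Imports>
  <!-- RemoteAccess and RemoteForwarder together enable remote desktop connections -->
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
</Imports>
```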
## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a worker role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block.
The following table describes the attributes of the `NetFxEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (do not specify **\\%ROLEROOT%\Approot** in `commandLine`, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.|
+|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.|
|targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="ProgramEntryPoint"></a> ProgramEntryPoint
-The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that is not based on a .NET assembly.
+The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that isn't based on a .NET assembly.
> [!NOTE] > The `ProgramEntryPoint` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `ProgramEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (do not specify **%ROLEROOT%\Approot** in commandLine, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.|
-|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal it is started. This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.|
+|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (don't specify **%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.|
+|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal when it starts. This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.|
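To illustrate, here's a hedged sketch of a program entry point that launches an external executable (the command line is a placeholder):

```xml
<Runtime>
  <EntryPoint>
    <!-- The role recycles if this program exits, so it should run continuously -->
    <ProgramEntryPoint commandLine="node.exe server.js" setReadyOnProcessStart="true" />
  </EntryPoint>
</Runtime>
```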
## <a name="Startup"></a> Startup The `Startup` element describes a collection of tasks that run when the role is started. This element can be the parent of the `Variable` element. For more information about using the role startup tasks, see [How to configure startup tasks](../cloud-services/cloud-services-startup-tasks.md). This element is optional and a role can have only one startup block.
The following table describes the attributes of the `Task` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.|
+|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file don't process properly.|
|executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] – Run with the same privileges as the role hosting the process.<br />- `elevated` – Run with administrator privileges.|
-|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] ΓÇô System waits for the task to exit before any other tasks are launched.<br />- `background` ΓÇô System does not wait for the task to exit.<br />- `foreground` ΓÇô Similar to background, except role is not restarted until all foreground tasks exit.|
+|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] – System waits for the task to exit before any other tasks are launched.<br />- `background` – System doesn't wait for the task to exit.<br />- `foreground` – Similar to background, except role isn't restarted until all foreground tasks exit.|
## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a worker role. This element is the parent of the `Content` element.
The `Contents` element describes the collection of content for a worker role. Th
The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content
-The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied.
+The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied.
The `Content` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `SourceDirectory` element.
| Attribute | Type | Description | | | - | -- |
-|path|string|Required. Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
+|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
## See also [Cloud Service (extended support) Definition Schema](schema-csdef-file.md).
cloud-services-extended-support States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/states.md
Previously updated : 04/05/2022 Last updated : 07/24/2024 # Available Provisioning and Power States for Azure Cloud Services (extended support)
This table lists the different power states for Cloud Services (extended support
|Started|The Role Instance is healthy and is currently running| |Stopping|The Role Instance is in the process of getting stopped| |Stopped|The Role Instance is in the Stopped State|
-|Unknown|The Role Instance is either in the process of creating or is not ready to service the traffic|
+|Unknown|The Role Instance is either in the process of creating or isn't ready to service the traffic|
|Starting|The Role Instance is in the process of moving to healthy/running state|
-|Busy|The Role Instance is not responding|
+|Busy|The Role Instance isn't responding|
|Destroyed|The Role instance is destroyed|
cloud-services-extended-support Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/support-help.md
Previously updated : 4/28/2021 Last updated : 07/24/2024
cloud-services-extended-support Swap Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/swap-cloud-service.md
Previously updated : 04/01/2021 Last updated : 07/24/2024 # Swap or switch deployments in Azure Cloud Services (extended support) You can swap between two independent cloud service deployments in Azure Cloud Services (extended support). Unlike in Azure Cloud Services (classic), the Azure Resource Manager model in Azure Cloud Services (extended support) doesn't use deployment slots. In Azure Cloud Services (extended support), when you deploy a new release of a cloud service, you can make the cloud service "swappable" with an existing cloud service in Azure Cloud Services (extended support).
-After you swap the deployments, you can stage and test your new release by using the new cloud service deployment. In effect, swapping promotes a new cloud service that's staged to production release.
+After you swap the deployments, you can stage and test your new release by using the new cloud service deployment. In effect, swapping promotes a new cloud service that's staged to production release.
> [!NOTE] > You can't swap between an Azure Cloud Services (classic) deployment and an Azure Cloud Services (extended support) deployment.
-You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. Once the second pair of cloud service is deployed, it can not be made swappable with an existing cloud service in subsequent updates.
+You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. Once the second cloud service of the pair is deployed, it can't be made swappable with an existing cloud service in subsequent updates.
You can swap the deployments by using an Azure Resource Manager template (ARM template), the Azure portal, or the REST API.
-Upon deployment of the second cloud service, both the cloud services have their SwappableCloudService property set to point to each other. Any subsequent update to these cloud services will need to specify this property failing which an error will be returned indicating that the SwappableCloudService property cannot be deleted or updated.
+Upon deployment of the second cloud service, both the cloud services have their SwappableCloudService property set to point to each other. Any subsequent update to these cloud services needs to specify this property; otherwise, an error is returned indicating that the SwappableCloudService property can't be deleted or updated.
-Once set, the SwappableCloudService property is treated as readonly. It cannot be deleted or changed to another value. Deleting one of the cloud services (of the swappable pair) will result in the SwappableCloudService property of the remaining cloud service being cleared.
+Once set, the SwappableCloudService property is treated as readonly. It can't be deleted or changed to another value. Deleting one of the cloud services (of the swappable pair) results in the SwappableCloudService property of the remaining cloud service being cleared.
## ARM template
To swap a deployment in the Azure portal:
:::image type="content" source="media/swap-cloud-service-portal-confirm.png" alt-text="Screenshot that shows confirming the deployment swap information.":::
-Deployments swap quickly because the only thing that changes is the virtual IP address for the cloud service that's deployed.
+Deployments swap quickly because the only thing that changes is the virtual IP address for the deployed cloud service.
To save compute costs, you can delete one of the cloud services (designated as a staging environment for your application deployment) after you verify that your swapped cloud service works as expected.
cloud-services Cloud Services Role Config Xpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-config-xpath.md
description: The various XPath settings you can use in the cloud service role co
Previously updated : 02/21/2023 Last updated : 07/23/2024
Retrieves the endpoint port for the instance.
| Code |var port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint.Port; | ## Example
-Here is an example of a worker role that creates a startup task with an environment variable named `TestIsEmulated` set to the [@emulated xpath value](#app-running-in-emulator).
+Here's an example of a worker role that creates a startup task with an environment variable named `TestIsEmulated` set to the [@emulated xpath value](#app-running-in-emulator).
```xml <WorkerRole name="Role1">
cloud-services Cloud Services Role Enable Remote Desktop New Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md
Title: Use the portal to enable Remote Desktop for a Role
-description: How to configure your azure cloud service application to allow remote desktop connections
+description: How to configure your Azure cloud service application to allow remote desktop connections through the Azure portal.
Previously updated : 02/21/2023 Last updated : 07/23/2024
> * [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md) > * [Visual Studio](cloud-services-role-enable-remote-desktop-visual-studio.md)
-Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running.
+Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs.
-You can enable a Remote Desktop connection in your role during development by including the Remote Desktop modules in your service definition or you can choose to enable Remote Desktop through the Remote Desktop Extension. The preferred approach is to use the Remote Desktop extension as you can enable Remote Desktop even after the application is deployed without having to redeploy your application.
+You can enable a Remote Desktop connection in your role during development by including the Remote Desktop modules in your service definition. Alternatively, you can choose to enable Remote Desktop through the Remote Desktop extension. The preferred approach is to use the Remote Desktop extension, as you can enable Remote Desktop even after the application is deployed without having to redeploy your application.
## Configure Remote Desktop from the Azure portal
-The Azure portal uses the Remote Desktop Extension approach so you can enable Remote Desktop even after the application is deployed. The **Remote Desktop** settings for your cloud service allows you to enable Remote Desktop, change the local Administrator account used to connect to the virtual machines, the certificate used in authentication and set the expiration date.
+The Azure portal uses the Remote Desktop Extension approach so you can enable Remote Desktop even after the application is deployed. The **Remote Desktop** setting for your cloud service allows you to enable Remote Desktop, change the local Administrator account used to connect to the virtual machines, change the certificate used in authentication, and set the expiration date.
-1. Click **Cloud Services**, select the name of the cloud service, and then select **Remote Desktop**.
+1. Select **Cloud Services**, select the name of the cloud service, and then select **Remote Desktop**.
![image shows Cloud services remote desktop](./media/cloud-services-role-enable-remote-desktop-new-portal/CloudServices_Remote_Desktop.png)
The Azure portal uses the Remote Desktop Extension approach so you can enable Re
4. In **Roles**, select the role you want to update or select **All** for all roles.
-5. When you finish your configuration updates, select **Save**. It will take a few moments before your role instances are ready to receive connections.
+5. When you finish your configuration updates, select **Save**. It takes a few moments before your role instances are ready to receive connections.
## Remote into role instances Once Remote Desktop is enabled on the roles, you can initiate a connection directly from the Azure portal:
-1. Click **Instances** to open the **Instances** settings.
-2. Select a role instance that has Remote Desktop configured.
-3. Click **Connect** to download an RDP file for the role instance.
+1. Select **Instances** to open the **Instances** settings.
+2. Choose a role instance that has Remote Desktop configured.
+3. Select **Connect** to download a Remote Desktop Protocol (RDP) file for the role instance.
![Cloud services remote desktop image](./media/cloud-services-role-enable-remote-desktop-new-portal/CloudServices_Remote_Desktop_Connect.png)
-4. Click **Open** and then **Connect** to start the Remote Desktop connection.
+4. Choose **Open** and then **Connect** to start the Remote Desktop connection.
>[!NOTE] > If your cloud service is sitting behind an NSG, you may need to create rules that allow traffic on ports **3389** and **20000**. Remote Desktop uses port **3389**. Cloud Service instances are load balanced, so you can't directly control which instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents manage RDP traffic and allow the client to send an RDP cookie and specify an individual instance to connect to. The *RemoteForwarder* and *RemoteAccess* agents require that port **20000** is open, which may be blocked if you have an NSG.
-## Additional resources
+## Next steps
-[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
+* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Title: Use PowerShell to enable Remote Desktop for a Role
-description: How to configure your azure cloud service application using PowerShell to allow remote desktop connections
+description: How to configure your Azure cloud service application to allow remote desktop connections through PowerShell.
Previously updated : 02/21/2023 Last updated : 07/23/2024
> * [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md) > * [Visual Studio](cloud-services-role-enable-remote-desktop-visual-studio.md)
-Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running.
+Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs.
This article describes how to enable remote desktop on your Cloud Service Roles using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. PowerShell utilizes the Remote Desktop Extension so you can enable Remote Desktop after the application is deployed. ## Configure Remote Desktop from PowerShell The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/set-azureserviceremotedesktopextension) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object.
-If you are using PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credentials](/powershell/module/microsoft.powershell.security/get-credential) cmdlet.
+If you use PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credentials](/powershell/module/microsoft.powershell.security/get-credential) cmdlet.
```powershell $remoteusercredentials = Get-Credential
$expiry = $(Get-Date).AddDays(1)
$credential = New-Object System.Management.Automation.PSCredential $username,$securepassword Set-AzureServiceRemoteDesktopExtension -ServiceName $servicename -Credential $credential -Expiration $expiry ```
-You can also optionally specify the deployment slot and roles that you want to enable remote desktop on. If these parameters are not specified, the cmdlet enables remote desktop on all roles in the **Production** deployment slot.
+You can also optionally specify the deployment slot and roles that you want to enable remote desktop on. If these parameters aren't specified, the cmdlet enables remote desktop on all roles in the **Production** deployment slot.
The Remote Desktop extension is associated with a deployment. If you create a new deployment for the service, you have to enable remote desktop on that deployment. If you always want to have remote desktop enabled, then you should consider integrating the PowerShell scripts into your deployment workflow. ## Remote Desktop into a role instance
-The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
+The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the Remote Desktop Protocol (RDP) file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
```powershell Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -Launch
Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -L
## Check if Remote Desktop extension is enabled on a service
-The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet displays that remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, this happens on the deployment slot and you can choose to use the staging slot instead.
+The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/get-azureremotedesktopfile) cmdlet displays that remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, the deployment slot is used, but you can choose to use the staging slot instead.
```powershell Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
## Remove Remote Desktop extension from a service
-If you have already enabled the remote desktop extension on a deployment, and need to update the remote desktop settings, first remove the extension. And enable it again with the new settings. For example, if you want to set a new password for the remote user account, or the account expired. Doing this is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can simply apply the extension directly.
+If you already enabled the remote desktop extension on a deployment and need to update the remote desktop settings, first remove the extension. Then, enable it again with the new settings. For example, you might want to set a new password for the remote user account, or the account might have expired. This step is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can apply the extension directly.
To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure/remove-azureserviceremotedesktopextension) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension.
Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallCo
> > The **UninstallConfiguration** parameter uninstalls any extension configuration that is applied to the service. Every extension configuration is associated with the service configuration. Calling the *remove* cmdlet without **UninstallConfiguration** disassociates the **deployment** from the extension configuration, thus effectively removing the extension. However, the extension configuration remains associated with the service.
-## Additional resources
+## Next steps
-[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
+* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
Title: Using Visual Studio, enable Remote Desktop for a Role (Azure Cloud Services classic)
-description: How to configure your Azure cloud service application to allow remote desktop connections
+description: How to configure your Azure cloud service application to allow remote desktop connections through Visual Studio.
Previously updated : 02/21/2023 Last updated : 07/23/2024
> * [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md) > * [Visual Studio](cloud-services-role-enable-remote-desktop-visual-studio.md)
-Remote Desktop enables you to access the desktop of a role running in Azure. You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it is running.
+Remote Desktop enables you to access the desktop of a role running in Azure, using Remote Desktop Protocol (RDP). You can use a Remote Desktop connection to troubleshoot and diagnose problems with your application while it runs.
The publish wizard that Visual Studio provides for cloud services includes an option to enable Remote Desktop during the publishing process, using credentials that you provide. Using this option is suitable when using Visual Studio 2017 version 15.4 and earlier.
-With Visual Studio 2017 version 15.5 and later, however, it's recommended that you avoid enabling Remote Desktop through the publish wizard unless you're working only as a single developer. For any situation in which the project might be opened by other developers, you instead enable Remote Desktop through the Azure portal, through PowerShell, or from a release pipeline in a continuous deployment workflow. This recommendation is due to a change in how Visual Studio communicates with Remote Desktop on the cloud service VM, as is explained in this article.
+With Visual Studio 2017 version 15.5 and later, we recommend you avoid enabling Remote Desktop through the publish wizard unless you're working as a single developer. For any situation in which multiple developers open the project, you should instead enable Remote Desktop through the Azure portal, through PowerShell, or from a release pipeline in a continuous deployment workflow. This recommendation is due to a change in how Visual Studio communicates with Remote Desktop on the cloud service virtual machine (VM), as is explained in this article.
## Configure Remote Desktop through Visual Studio 2017 version 15.4 and earlier
When using Visual Studio 2017 version 15.4 and earlier, you can use the **Enable
6. Provide a user name and a password. You can't use an existing account. Don't use "Administrator" as the user name for the new account.
-7. Choose a date on which the account will expire and after which Remote Desktop connections will be blocked.
+7. Choose a date on which the account will expire. An expired account automatically blocks further Remote Desktop connections.
-8. After you've provided all the required information, select **OK**. Visual Studio adds the Remote Desktop settings to your project's `.cscfg` and `.csdef` files, including the password that's encrypted using the chosen certificate.
+8. After you provide all the required information, select **OK**. Visual Studio adds the Remote Desktop settings to your project's `.cscfg` and `.csdef` files, including the password that's encrypted using the chosen certificate.
9. Complete any remaining steps using the **Next** button, then select **Publish** when you're ready to publish your cloud service. If you're not ready to publish, select **Cancel** and answer **Yes** when prompted to save changes. You can publish your cloud service later with these settings.
With Visual Studio 2017 version 15.5 and later, you can still use the publish wi
If you're working as part of a team, you should instead enable remote desktop on the Azure cloud service by using either the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md).
-This recommendation is due to a change in how Visual Studio 2017 version 15.5 and later communicates with the cloud service VM. When enabling Remote Desktop through the publish wizard, earlier versions of Visual Studio communicate with the VM through what's called the "RDP plugin." Visual Studio 2017 version 15.5 and later communicates instead using the "RDP extension" that is more secure and more flexible. This change also aligns with the fact that the Azure portal and PowerShell methods to enable Remote Desktop also use the RDP extension.
+This recommendation is due to a change in how Visual Studio 2017 version 15.5 and later communicates with the cloud service VM. When you enable Remote Desktop through the publish wizard, earlier versions of Visual Studio communicate with the VM through the "RDP plugin." Visual Studio 2017 version 15.5 and later communicates instead using the "RDP extension" that is more secure and more flexible. This change also aligns with the fact that the Azure portal and PowerShell methods to enable Remote Desktop also use the RDP extension.
-When Visual Studio communicates with the RDP extension, it transmit a plain text password over TLS. However, the project's configuration files store only an encrypted password, which can be decrypted into plain text only with the local certificate that was originally used to encrypt it.
+When Visual Studio communicates with the RDP extension, it transmits a plain text password over Transport Layer Security (TLS). However, the project's configuration files store only an encrypted password, which can be decrypted into plain text only with the local certificate that was originally used to encrypt it.
If you deploy the cloud service project from the same development computer each time, then that local certificate is available. In this case, you can still use the **Enable Remote Desktop for all roles** option in the publish wizard.
-If you or other developers want to deploy the cloud service project from different computers, however, then those other computers won't have the necessary certificate to decrypt the password. As a result, you see the following error message:
+However, if you or other developers want to deploy the cloud service project from different computers, then those other computers lack the necessary certificate to decrypt the password. As a result, you see the following error message:
```output
-Applying remote desktop protocol (RDP) extension.
+Applying remote desktop protocol extension.
Certificate with thumbprint [thumbprint] doesn't exist. ``` You could change the password every time you deploy the cloud service, but that action becomes inconvenient for everyone who needs to use Remote Desktop.
-If you're sharing the project with a team, then, it's best to clear the option in the publish wizard and instead enable Remote Desktop directly through the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or by using [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md).
+If you're sharing the project with a team, then it's best to clear the option in the publish wizard and instead enable Remote Desktop directly through the [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md) or by using [PowerShell](cloud-services-role-enable-remote-desktop-powershell.md).
### Deploying from a build server with Visual Studio 2017 version 15.5 and later
To use the RDP extension from Azure DevOps Services, include the following detai
1. After the deployment step, add an **Azure PowerShell** step, set its **Display name** property to "Azure Deployment: Enable RDP Extension" (or another suitable name), and select your appropriate Azure subscription.
-1. Set **Script Type** to "Inline" and paste the code below into the **Inline Script** field. (You can also create a `.ps1` file in your project with this script, set **Script Type** to "Script File Path", and set **Script Path** to point to the file.)
+1. Set **Script Type** to "Inline" and paste the following below into the **Inline Script** field. (You can also create a `.ps1` file in your project with this script, set **Script Type** to "Script File Path", and set **Script Path** to point to the file.)
```ps Param(
To use the RDP extension from Azure DevOps Services, include the following detai
## Connect to an Azure Role by using Remote Desktop
-After you publish your cloud service on Azure and have enabled Remote Desktop, you can use Visual Studio Server Explorer to log into the cloud service VM:
+After you publish your cloud service on Azure and enable Remote Desktop, you can use Visual Studio Server Explorer to log into the cloud service VM:
1. In Server Explorer, expand the **Azure** node, and then expand the node for a cloud service and one of its roles to display a list of instances. 2. Right-click an instance node and select **Connect Using Remote Desktop**.
-3. Enter the user name and password that you created previously. You are now logged into your remote session.
+3. Enter the user name and password that you created previously. You're now signed into your remote session.
-## Additional resources
+## Next steps
-[How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
+* [How to Configure Cloud Services](cloud-services-how-to-configure-portal.md)
cloud-services Cloud Services Role Lifecycle Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md
Title: Handle Cloud Service (classic) lifecycle events | Microsoft Docs
description: Learn how to use the lifecycle methods of a Cloud Service role in .NET, including RoleEntryPoint, which provides methods to respond to lifecycle events. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-When you create a worker role, you extend the [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class which provides methods for you to override that let you respond to lifecycle events. For web roles this class is optional, so you must use it to respond to lifecycle events.
+When you create a worker role, you extend the [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class, which provides methods for you to override that let you respond to lifecycle events. For web roles, this class is optional; if you want to respond to lifecycle events, you must use it.
## Extend the RoleEntryPoint class
-The [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class includes methods that are called by Azure when it **starts**, **runs**, or **stops** a web or worker role. You can optionally override these methods to manage role initialization, role shutdown sequences, or the execution thread of the role.
+The [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class includes methods that are called by Azure when it **starts**, **runs**, or **stops** a web or worker role. You can optionally override these methods to manage role initialization, role shutdown sequences, or the execution thread of the role.
When extending **RoleEntryPoint**, you should be aware of the following behaviors of the methods:
-* The [OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method returns a boolean value, so it is possible to return **false** from this method.
+* The [OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method returns a boolean value, so it's possible to return **false** from this method.
If your code returns **false**, the role process is abruptly terminated, without running any shutdown sequence you may have in place. In general, you should avoid returning **false** from the **OnStart** method. * Any uncaught exception within an overload of a **RoleEntryPoint** method is treated as an unhandled exception.
- If an exception occurs within one of the lifecycle methods, Azure will raise the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event, and then the process is terminated. After your role has been taken offline, it will be restarted by Azure. When an unhandled exception occurs, the [Stopping](/previous-versions/azure/reference/ee758136(v=azure.100)) event is not raised, and the **OnStop** method is not called.
+ If an exception occurs within one of the lifecycle methods, Azure raises the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event, and then the process is terminated. After your role goes offline, Azure restarts it. When an unhandled exception occurs, the [Stopping](/previous-versions/azure/reference/ee758136(v=azure.100)) event isn't raised, and the **OnStop** method isn't called.
-If your role does not start, or is recycling between the initializing, busy, and stopping states, your code may be throwing an unhandled exception within one of the lifecycle events each time the role restarts. In this case, use the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event to determine the cause of the exception and handle it appropriately. Your role may also be returning from the [Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method, which causes the role to restart. For more information about deployment states, see [Common Issues Which Cause Roles to Recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md).
+If your role doesn't start, or is recycling between the initializing, busy, and stopping states, your code may be throwing an unhandled exception within one of the lifecycle events each time the role restarts. In this case, use the [UnhandledException](/dotnet/api/system.appdomain.unhandledexception) event to determine the cause of the exception and handle it appropriately. Your role may also be returning from the [Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method, which causes the role to restart. For more information about deployment states, see [Common Issues Which Cause Roles to Recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md).
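For example, here is a minimal sketch, not taken from the article's sample code, that subscribes to the **UnhandledException** event from **OnStart** so the cause of a recycle shows up in your logs. The `WorkerRole` class name is illustrative, and a reference to the Azure SDK's `Microsoft.WindowsAzure.ServiceRuntime` assembly is assumed.

```csharp
using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Log any exception that escapes a lifecycle method so the cause of
        // repeated recycling can be found in the diagnostics logs.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Trace.TraceError("Unhandled exception in role: {0}", e.ExceptionObject);

        return base.OnStart();
    }
}
```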
> [!NOTE] > If you are using the **Azure Tools for Microsoft Visual Studio** to develop your application, the role project templates automatically extend the **RoleEntryPoint** class for you, in the *WebRole.cs* and *WorkerRole.cs* files.
If your role does not start, or is recycling between the initializing, busy, and
> ## OnStart method
-The **OnStart** method is called when your role instance is brought online by Azure. While the OnStart code is executing, the role instance is marked as **Busy** and no external traffic will be directed to it by the load balancer. You can override this method to perform initialization work, such as implementing event handlers and starting [Azure Diagnostics](cloud-services-how-to-monitor.md).
+The **OnStart** method is called when your role instance is brought online by Azure. While the OnStart code is executing, the role instance is marked as **Busy** and the load balancer doesn't direct any external traffic to it. You can override this method to perform initialization work, such as implementing event handlers and starting [Azure Diagnostics](cloud-services-how-to-monitor.md).
If **OnStart** returns **true**, the instance is successfully initialized and Azure calls the **RoleEntryPoint.Run** method. If **OnStart** returns **false**, the role terminates immediately, without executing any planned shutdown sequences.
public override bool OnStart()
``` ## OnStop method
-The **OnStop** method is called after a role instance has been taken offline by Azure and before the process exits. You can override this method to call code required for your role instance to cleanly shut down.
+The **OnStop** method is called after Azure takes a role instance offline and before the process exits. You can override this method to call code required for your role instance to cleanly shut down.
> [!IMPORTANT] > Code running in the **OnStop** method has a limited time to finish when it is called for reasons other than a user-initiated shutdown. After this time elapses, the process is terminated, so you must make sure that code in the **OnStop** method can run quickly or tolerates not running to completion. The **OnStop** method is called after the **Stopping** event is raised.
The **OnStop** method is called after a role instance has been taken offline by
## Run method You can override the **Run** method to implement a long-running thread for your role instance.
-Overriding the **Run** method is not required; the default implementation starts a thread that sleeps forever. If you do override the **Run** method, your code should block indefinitely. If the **Run** method returns, the role is automatically gracefully recycled; in other words, Azure raises the **Stopping** event and calls the **OnStop** method so that your shutdown sequences may be executed before the role is taken offline.
+Overriding the **Run** method isn't required; the default implementation starts a thread that sleeps forever. If you do override the **Run** method, your code should block indefinitely. If the **Run** method returns, the role is automatically recycled; in other words, Azure raises the **Stopping** event and calls the **OnStop** method so that your shutdown sequences may be executed before the role is taken offline.
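A minimal sketch of a **Run** override that blocks indefinitely follows; the class name, logging, and sleep interval are illustrative, and the class is assumed to derive from **RoleEntryPoint**.

```csharp
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Block indefinitely; if Run returns, Azure recycles the role.
        while (true)
        {
            Trace.TraceInformation("Working");
            Thread.Sleep(10000);
        }
    }
}
```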
### Implementing the ASP.NET lifecycle methods for a web role
-You can use the ASP.NET lifecycle methods, in addition to those provided by the **RoleEntryPoint** class, to manage initialization and shutdown sequences for a web role. This may be useful for compatibility purposes if you are porting an existing ASP.NET application to Azure. The ASP.NET lifecycle methods are called from within the **RoleEntryPoint** methods. The **Application\_Start** method is called after the **RoleEntryPoint.OnStart** method finishes. The **Application\_End** method is called before the **RoleEntryPoint.OnStop** method is called.
+You can use the ASP.NET lifecycle methods, in addition to the methods provided by the **RoleEntryPoint** class, to manage initialization and shutdown sequences for a web role. This approach may be useful for compatibility purposes if you're porting an existing ASP.NET application to Azure. The ASP.NET lifecycle methods are called from within the **RoleEntryPoint** methods. The **Application\_Start** method is called after the **RoleEntryPoint.OnStart** method finishes. The **Application\_End** method is called before the **RoleEntryPoint.OnStop** method is called.
## Next steps Learn how to [create a cloud service package](cloud-services-model-and-package.md).
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md
description: Lists the different virtual machine sizes (and IDs) for Azure cloud
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This topic describes the available sizes and options for Cloud Service role instances (web roles and worker roles). It also provides deployment considerations to be aware of when planning to use these resources. Each size has an ID that you put in your [service definition file](cloud-services-model-and-package.md#csdef). Prices for each size are available on the [Cloud Services Pricing](https://azure.microsoft.com/pricing/details/cloud-services/) page.
+This article describes the available sizes and options for Cloud Service role instances (web roles and worker roles). It also provides deployment considerations to be aware of when planning to use these resources. Each size has an ID that you put in your [service definition file](cloud-services-model-and-package.md#csdef). Prices for each size are available on the [Cloud Services Pricing](https://azure.microsoft.com/pricing/details/cloud-services/) page.
> [!NOTE]
-> To see related Azure limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
->
->
+> To see related Azure limits, visit [Azure Subscription and Service Limits, Quotas, and Constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
## Sizes for web and worker role instances There are multiple standard sizes to choose from on Azure. Considerations for some of these sizes include: * D-series VMs are designed to run applications that demand higher compute power and temporary disk performance. D-series VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk. For details, see the announcement on the Azure blog, [New D-Series Virtual Machine Sizes](https://azure.microsoft.com/updates/d-series-virtual-machine-sizes).
-* Dv3-series, Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series.
+* The Dv3-series and Dv2-series, follow-ons to the original D-series, feature a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It's based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series.
* G-series VMs offer the most memory and run on hosts that have Intel Xeon E5 V3 family processors.
-* The A-series VMs can be deployed on various hardware types and processors. The size is throttled, based on the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine.
-* The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may impact the performance of your running workload. The relative performance is outlined below as the expected baseline, subject to an approximate variability of 15 percent.
+* The A-series VMs can be deployed on various hardware types and processors. The size is throttled based on the hardware to offer consistent processor performance for the running instance, regardless of the hardware it's deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine.
+* The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may affect the performance of your running workload. We outline the expected baseline of relative performance, subject to an approximate variability of 15 percent, later in the article.
The size of the virtual machine affects the pricing. The size also affects the processing, memory, and storage capacity of the virtual machine. Storage costs are calculated separately based on used pages in the storage account. For details, see [Cloud Services Pricing Details](https://azure.microsoft.com/pricing/details/cloud-services/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/). The following considerations might help you decide on a size:
-* The A8-A11 and H-series sizes are also known as *compute-intensive instances*. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHZ and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz. For detailed information and considerations about using these sizes, see [High performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
+* The A8-A11 and H-series sizes are also known as *compute-intensive instances*. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHz and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz. For detailed information and considerations about using these sizes, see [High performance compute virtual machine (VM) sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
* Dv3-series, Dv2-series, D-series, G-series, are ideal for applications that demand faster CPUs, better local disk performance, or have higher memory demands. They offer a powerful combination for many enterprise-grade applications. * Some of the physical hosts in Azure data centers may not support larger virtual machine sizes, such as A5 – A11. As a result, you may see the error message **Failed to configure virtual machine {machine name}** or **Failed to create virtual machine {machine name}** when resizing an existing virtual machine to a new size; creating a new virtual machine in a virtual network created before April 16, 2013; or adding a new virtual machine to an existing cloud service. See [Error: "Failed to configure virtual machine"](https://social.msdn.microsoft.com/Forums/9693f56c-fcd3-4d42-850e-5e3b56c7d6be/error-failed-to-configure-virtual-machine-with-a5-a6-or-a7-vm-size?forum=WAVirtualMachinesforWindows) on the support forum for workarounds for each deployment scenario. * Your subscription might also limit the number of cores you can deploy in certain size families. To increase a quota, contact Azure Support. ## Performance considerations
-We have created the concept of the Azure Compute Unit (ACU) to provide a way of comparing compute (CPU) performance across Azure SKUs and to identify which SKU is most likely to satisfy your performance needs. ACU is currently standardized on a Small (Standard_A1) VM being 100 and all other SKUs then represent approximately how much faster that SKU can run a standard benchmark.
+We created the concept of the Azure Compute Unit (ACU) to provide a way of comparing compute (CPU) performance across Azure SKUs and to identify which SKU is most likely to satisfy your performance needs. ACU is currently standardized on a Small (Standard_A1) VM being 100. Following that standard, all other SKUs represent approximately how much faster that SKU can run a standard benchmark.
> [!IMPORTANT] > The ACU is only a guideline. The results for your workload may vary.
->
->
<br>
ACUs marked with a * use Intel® Turbo technology to increase CPU frequency and
## Size tables The following tables show the sizes and the capacities they provide.
-* Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB
+* Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB
* Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec. * Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**.
-* Maximum network bandwidth is the maximum aggregated bandwidth allocated and assigned per VM type. The maximum bandwidth provides guidance for selecting the right VM type to ensure adequate network capacity is available. When moving between Low, Moderate, High and Very High, the throughput increases accordingly. Actual network performance will depend on many factors including network and application loads, and application network settings.
+* Maximum network bandwidth is the maximum aggregated bandwidth allocated and assigned per VM type. The maximum bandwidth provides guidance for selecting the right VM type to ensure adequate network capacity is available. When moving between Low, Moderate, High and Very High, the throughput increases accordingly. Actual network performance depends on many factors including network and application loads, and application network settings.
## A-series | Size | CPU cores | Memory: GiB | Temporary Storage: GiB | Max NICs / Network bandwidth |
In addition to the substantial CPU power, the H-series offers diverse options fo
## Configure sizes for Cloud Services You can specify the Virtual Machine size of a role instance as part of the service model described by the [service definition file](cloud-services-model-and-package.md#csdef). The size of the role determines the number of CPU cores, the memory capacity, and the local file system size that is allocated to a running instance. Choose the role size based on your application's resource requirement.
-Here is an example for setting the role size to be Standard_D2 for a Web Role instance:
+Here's an example for setting the role size to be Standard_D2 for a Web Role instance:
```xml <WorkerRole name="Worker1" vmsize="Standard_D2">
Here is an example for setting the role size to be Standard_D2 for a Web Role in
## Changing the size of an existing role
-As the nature of your workload changes or new VM sizes become available, you may want to change the size of your role. To do so, you must change the VM size in your service definition file (as shown above), repackage your Cloud Service, and deploy it.
+As the nature of your workload changes or new VM sizes become available, you may want to change the size of your role. To do so, you must change the VM size in your service definition file (as previously shown), repackage your Cloud Service, and deploy it.
>[!TIP] > You may want to use different VM sizes for your role in different environments (for example, test vs. production). One way to do this is to create multiple service definition (.csdef) files in your project, then create different cloud service packages per environment during your automated build using the CSPack tool, as sketched after this tip. To learn more about the elements of a cloud services package and how to create them, see [What is the cloud services model and how do I package it?](cloud-services-model-and-package.md)
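As a rough sketch of that approach, you might invoke CSPack once per environment-specific definition file. The file names here are illustrative, and any `/role` arguments that your project layout requires are omitted; see the packaging article for the full CSPack syntax.

```cmd
REM Sketch only: build one package per environment from separate .csdef files.
cspack ServiceDefinition.Test.csdef /out:MyService.Test.cspkg
cspack ServiceDefinition.Prod.csdef /out:MyService.Prod.cspkg
```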
As the nature of your workload changes or new VM sizes become available, you may
> ## Get a list of sizes
-You can use PowerShell or the REST API to get a list of sizes. The REST API is documented [here](/previous-versions/azure/reference/dn469422(v=azure.100)). The following code is a PowerShell command that will list all the sizes available for Cloud Services.
+You can use PowerShell or the REST API to get a list of sizes. The REST API is documented [here](/previous-versions/azure/reference/dn469422(v=azure.100)). The following code is a PowerShell command that lists all the sizes available for Cloud Services.
```powershell Get-AzureRoleSize | where SupportedByWebWorkerRoles -eq $true | select InstanceSize, RoleSizeLabel ``` ## Next steps
-* Learn about [azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
+* Learn about [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
* Learn more [about high performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) for HPC workloads.
cloud-services Cloud Services Startup Tasks Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks-common.md
Title: Common startup tasks for Cloud Services (classic) | Microsoft Docs
description: Provides some examples of common startup tasks you may want to perform in your cloud services web role or worker role. Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This article provides some examples of common startup tasks you may want to perform in your cloud service. You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process.
+This article provides some examples of common startup tasks you may want to perform in your cloud service. You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering Component Object Model (COM) components, setting registry keys, or starting a long running process.
See [this article](cloud-services-startup-tasks.md) to understand how startup tasks work, and specifically how to create the entries that define a startup task.
See [this article](cloud-services-startup-tasks.md) to understand how startup ta
> ## Define environment variables before a role starts+ If you need environment variables defined for a specific task, use the [Environment] element inside the [Task] element. ```xml
Variables can also use a [valid Azure XPath value](cloud-services-role-config-xp
## Configure IIS startup with AppCmd.exe
-The [AppCmd.exe](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj635852(v=ws.11)) command-line tool can be used to manage IIS settings at startup on Azure. *AppCmd.exe* provides convenient, command-line access to configuration settings for use in startup tasks on Azure. Using *AppCmd.exe*, Website settings can be added, modified, or removed for applications and sites.
+
+The [AppCmd.exe](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj635852(v=ws.11)) command-line tool can be used to manage Internet Information Services (IIS) settings at startup on Azure. *AppCmd.exe* provides convenient, command-line access to configuration settings for use in startup tasks on Azure. When you use *AppCmd.exe*, website settings can be added, modified, or removed for applications and sites.
However, there are a few things to watch out for in the use of *AppCmd.exe* as a startup task: * Startup tasks can be run more than once between reboots. For instance, when a role recycles. * If an *AppCmd.exe* action is performed more than once, it may generate an error. For example, attempting to add a section to *Web.config* twice could generate an error.
-* Startup tasks fail if they return a non-zero exit code or **errorlevel**. For example, when *AppCmd.exe* generates an error.
+* Startup tasks fail if they return a nonzero exit code or **errorlevel**. For example, when *AppCmd.exe* generates an error.
-It is a good practice to check the **errorlevel** after calling *AppCmd.exe*, which is easy to do if you wrap the call to *AppCmd.exe* with a *.cmd* file. If you detect a known **errorlevel** response, you can ignore it, or pass it back.
+It's a good practice to check the **errorlevel** after calling *AppCmd.exe*, which is easy to do if you wrap the call to *AppCmd.exe* with a *.cmd* file. If you detect a known **errorlevel** response, you can ignore it, or pass it back.
-The errorlevel returned by *AppCmd.exe* are listed in the winerror.h file, and can also be seen on [MSDN](/windows/desktop/Debug/system-error-codes--0-499-).
+The errorlevel values returned by *AppCmd.exe* are listed in the winerror.h file and can also be seen on the [Microsoft Developer Network (MSDN)](/windows/desktop/Debug/system-error-codes--0-499-).
### Example of managing the error level+ This example adds a compression section and a compression entry for JSON to the *Web.config* file, with error handling and logging. The relevant sections of the [ServiceDefinition.csdef] file are shown here, which include setting the [executionContext](/previous-versions/azure/reference/gg557552(v=azure.100)#task) attribute to `elevated` to give *AppCmd.exe* sufficient permissions to change the settings in the *Web.config* file:
EXIT %ERRORLEVEL%
``` ## Add firewall rules
-In Azure, there are effectively two firewalls. The first firewall controls connections between the virtual machine and the outside world. This firewall is controlled by the [EndPoints] element in the [ServiceDefinition.csdef] file.
-The second firewall controls connections between the virtual machine and the processes within that virtual machine. This firewall can be controlled by the `netsh advfirewall firewall` command-line tool.
+In Azure, there are effectively two firewalls. The first firewall controls connections between the virtual machine and the outside world. The [EndPoints] element in the [ServiceDefinition.csdef] file controls this firewall.
+
+The second firewall controls connections between the virtual machine and the processes within that virtual machine. You can control this firewall from the `netsh advfirewall firewall` command-line tool.
-Azure creates firewall rules for the processes started within your roles. For example, when you start a service or program, Azure automatically creates the necessary firewall rules to allow that service to communicate with the Internet. However, if you create a service that is started by a process outside your role (like a COM+ service or a Windows Scheduled Task), you need to manually create a firewall rule to allow access to that service. These firewall rules can be created by using a startup task.
+Azure creates firewall rules for the processes started within your roles. For example, when you start a service or program, Azure automatically creates the necessary firewall rules to allow that service to communicate with the Internet. However, if you create a service started by a process outside your role (like a COM+ service or a Windows Scheduled Task), you need to manually create a firewall rule to allow access to that service. These firewall rules can be created by using a startup task.
A startup task that creates a firewall rule must have an [executionContext][Task] of **elevated**. Add the following startup task to the [ServiceDefinition.csdef] file.
A startup task that creates a firewall rule must have an [executionContext][Task
</ServiceDefinition> ```
-To add the firewall rule, you must use the appropriate `netsh advfirewall firewall` commands in your startup batch file. In this example, the startup task requires security and encryption for TCP port 80.
+To add the firewall rule, you must use the appropriate `netsh advfirewall firewall` commands in your startup batch file. In this example, the startup task requires security and encryption for Transmission Control Protocol (TCP) port 80.
```cmd REM Add a firewall rule in a startup task.
EXIT /B %errorlevel%
``` ## Block a specific IP address
-You can restrict an Azure web role access to a set of specified IP addresses by modifying your IIS **web.config** file. You also need to use a command file which unlocks the **ipSecurity** section of the **ApplicationHost.config** file.
-To do unlock the **ipSecurity** section of the **ApplicationHost.config** file, create a command file that runs at role start. Create a folder at the root level of your web role called **startup** and, within this folder, create a batch file called **startup.cmd**. Add this file to your Visual Studio project and set the properties to **Copy Always** to ensure that it is included in your package.
+You can restrict which IP addresses can access an Azure web role by modifying your IIS **web.config** file. You also need to use a command file that unlocks the **ipSecurity** section of the **ApplicationHost.config** file.
+
+To unlock the **ipSecurity** section of the **ApplicationHost.config** file, create a command file that runs at role start. Create a folder at the root level of your web role called **startup** and, within this folder, create a batch file called **startup.cmd**. Add this file to your Visual Studio project and set the properties to **Copy Always** to ensure you include it in your package.
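A minimal sketch of what **startup.cmd** might contain follows, assuming the standard *AppCmd.exe* location under `%windir%\system32\inetsrv`; the logging redirect is optional.

```cmd
REM startup.cmd (sketch): unlock the ipSecurity section so web.config can restrict IP addresses.
%windir%\system32\inetsrv\AppCmd.exe unlock config -section:system.webServer/security/ipSecurity >> "%TEMP%\StartupLog.txt" 2>&1
EXIT /B 0
```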
Add the following startup task to the [ServiceDefinition.csdef] file.
This sample config **denies** all IPs from accessing the server except for the t
``` ## Create a PowerShell startup task
-Windows PowerShell scripts cannot be called directly from the [ServiceDefinition.csdef] file, but they can be invoked from within a startup batch file.
-PowerShell (by default) does not run unsigned scripts. Unless you sign your script, you need to configure PowerShell to run unsigned scripts. To run unsigned scripts, the **ExecutionPolicy** must be set to **Unrestricted**. The **ExecutionPolicy** setting that you use is based on the version of Windows PowerShell.
+Windows PowerShell scripts can't be called directly from the [ServiceDefinition.csdef] file, but they can be invoked from within a startup batch file.
+
+PowerShell (by default) doesn't run unsigned scripts. Unless you sign your script, you need to configure PowerShell to run unsigned scripts. To run unsigned scripts, the **ExecutionPolicy** must be set to **Unrestricted**. The **ExecutionPolicy** setting that you use is based on the version of Windows PowerShell.
```cmd REM Run an unsigned PowerShell script and log the output
EXIT /B %errorlevel%
``` ## Create files in local storage from a startup task
-You can use a local storage resource to store files created by your startup task that is accessed later by your application.
+
+You can use a local storage resource to store files created by your startup task that your application later accesses.
To create the local storage resource, add a [LocalResources] section to the [ServiceDefinition.csdef] file and then add the [LocalStorage] child element. Give the local storage resource a unique name and an appropriate size for your startup task.
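A minimal sketch of such a definition follows; the resource name and size are illustrative, and the **LocalResources** element goes inside your role's element in the [ServiceDefinition.csdef] file.

```xml
<LocalResources>
  <!-- A writable folder that both the startup task and the role can resolve by name. -->
  <LocalStorage name="StartupLocalStorage" sizeInMB="5" cleanOnRoleRecycle="false" />
</LocalResources>
```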
string fileContent = System.IO.File.ReadAllText(System.IO.Path.Combine(localStor
``` ## Run in the emulator or cloud
-You can have your startup task perform different steps when it is operating in the cloud compared to when it is in the compute emulator. For example, you may want to use a fresh copy of your SQL data only when running in the emulator. Or you may want to do some performance optimizations for the cloud that you don't need to do when running in the emulator.
+
+You can have your startup task perform different steps when it's operating in the cloud compared to when it is in the compute emulator. For example, you may want to use a fresh copy of your SQL data only when running in the emulator. Or you may want to do some performance optimizations for the cloud that you don't need to do when running in the emulator.
This ability to perform different actions on the compute emulator and the cloud can be accomplished by creating an environment variable in the [ServiceDefinition.csdef] file. You then test that environment variable for a value in your startup task.
To create the environment variable, add the [Variable]/[RoleInstanceValue] eleme
</ServiceDefinition> ```
-The task can now check the **%ComputeEmulatorRunning%** environment variable to perform different actions based on whether the role is running in the cloud or the emulator. Here is a .cmd shell script that checks for that environment variable.
+The task can now check the **%ComputeEmulatorRunning%** environment variable to perform different actions based on whether the role is running in the cloud or the emulator. Here's a .cmd shell script that checks for that environment variable.
```cmd REM Check if this task is running on the compute emulator.
IF "%ComputeEmulatorRunning%" == "true" (
## Detect that your task has already run
-The role may recycle without a reboot causing your startup tasks to run again. There is no flag to indicate that a task has already run on the hosting VM. You may have some tasks where it doesn't matter that they run multiple times. However, you may run into a situation where you need to prevent a task from running more than once.
-The simplest way to detect that a task has already run is to create a file in the **%TEMP%** folder when the task is successful and look for it at the start of the task. Here is a sample cmd shell script that does that for you.
+The role may recycle without a reboot, causing your startup tasks to run again. There's no flag to indicate that a task already ran on the host virtual machine (VM). You may have some tasks where it doesn't matter that they run multiple times. However, you may run into a situation where you need to prevent a task from running more than once.
+
+The simplest way to detect that a task has already run is to create a file in the **%TEMP%** folder when the task is successful and look for it at the start of the task. Here's a sample cmd shell script that does that for you.
```cmd REM If Task1_Success.txt exists, then Application 1 is already installed.
EXIT /B 0
``` ## Task best practices+ Here are some best practices you should follow when configuring tasks for your web or worker role. ### Always log startup activities
-Visual Studio does not provide a debugger to step through batch files, so it's good to get as much data on the operation of batch files as possible. Logging the output of batch files, both **stdout** and **stderr**, can give you important information when trying to debug and fix batch files. To log both **stdout** and **stderr** to the StartupLog.txt file in the directory pointed to by the **%TEMP%** environment variable, add the text `>> "%TEMP%\\StartupLog.txt" 2>&1` to the end of specific lines you want to log. For example, to execute setup.exe in the **%PathToApp1Install%** directory: `"%PathToApp1Install%\setup.exe" >> "%TEMP%\StartupLog.txt" 2>&1`
+
+Visual Studio doesn't provide a debugger to step through batch files, so it's good to get as much data on the operation of batch files as possible. Logging the output of batch files, both **stdout** and **stderr**, can give you important information when trying to debug and fix batch files. To log both **stdout** and **stderr** to the StartupLog.txt file in the directory pointed to by the **%TEMP%** environment variable, add the text `>> "%TEMP%\\StartupLog.txt" 2>&1` to the end of specific lines you want to log. For example, to execute setup.exe in the **%PathToApp1Install%** directory: `"%PathToApp1Install%\setup.exe" >> "%TEMP%\StartupLog.txt" 2>&1`
To simplify your xml, you can create a wrapper *cmd* file that calls all of your startup tasks along with logging and ensures each child-task shares the same environment variables.
-You may find it annoying though to use `>> "%TEMP%\StartupLog.txt" 2>&1` on the end of each startup task. You can enforce task logging by creating a wrapper that handles logging for you. This wrapper calls the real batch file you want to run. Any output from the target batch file will be redirected to the *Startuplog.txt* file.
+Appending `>> "%TEMP%\StartupLog.txt" 2>&1` to the end of each startup task can be tedious, though. You can enforce task logging by creating a wrapper that handles logging for you. This wrapper calls the real batch file you want to run. Any output from the target batch file redirects to the *StartupLog.txt* file.
The following example shows how to redirect all output from a startup batch file. In this example, the ServerDefinition.csdef file creates a startup task that calls *logwrap.cmd*. *logwrap.cmd* calls *Startup2.cmd*, redirecting all output to **%TEMP%\\StartupLog.txt**.
Sample output in the **StartupLog.txt** file:
> ### Set executionContext appropriately for startup tasks+ Set privileges appropriately for the startup task. Sometimes startup tasks must run with elevated privileges even though the role runs with normal privileges. The [executionContext][Task] attribute sets the privilege level of the startup task. Using `executionContext="limited"` means the startup task has the same privilege level as the role. Using `executionContext="elevated"` means the startup task has administrator privileges, which allows the startup task to perform administrator tasks without giving administrator privileges to your role.
The [executionContext][Task] attribute sets the privilege level of the startup t
An example of a startup task that requires elevated privileges is a startup task that uses **AppCmd.exe** to configure IIS. **AppCmd.exe** requires `executionContext="elevated"`. ### Use the appropriate taskType+ The [taskType][Task] attribute determines the way the startup task is executed. There are three values: **simple**, **background**, and **foreground**. The background and foreground tasks are started asynchronously, and then the simple tasks are executed synchronously one at a time.
-With **simple** startup tasks, you can set the order in which the tasks run by the order in which the tasks are listed in the ServiceDefinition.csdef file. If a **simple** task ends with a non-zero exit code, then the startup procedure stops and the role does not start.
+With **simple** startup tasks, you can set the order in which the tasks run by the order in which the tasks are listed in the ServiceDefinition.csdef file. If a **simple** task ends with a nonzero exit code, then the startup procedure stops and the role doesn't start.
-The difference between **background** startup tasks and **foreground** startup tasks is that **foreground** tasks keep the role running until the **foreground** task ends. This also means that if the **foreground** task hangs or crashes, the role will not recycle until the **foreground** task is forced closed. For this reason, **background** tasks are recommended for asynchronous startup tasks unless you need that feature of the **foreground** task.
+The difference between **background** startup tasks and **foreground** startup tasks is that **foreground** tasks keep the role running until the **foreground** task ends. This structure means that if the **foreground** task hangs or crashes, the role doesn't recycle until the **foreground** task is forced to close. For this reason, **background** tasks are recommended for asynchronous startup tasks unless you need that feature of the **foreground** task.
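For example, here is a sketch of a long-running helper started as a **background** task so it doesn't block role startup; the batch file name is illustrative, and the **Startup** element belongs inside the role's element in the ServiceDefinition.csdef file.

```xml
<Startup>
  <!-- Started asynchronously; the role can still recycle while this task runs. -->
  <Task commandLine="StartMonitor.cmd" executionContext="limited" taskType="background" />
</Startup>
```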
### End batch files with EXIT /B 0
-The role will only start if the **errorlevel** from each of your simple startup task is zero. Not all programs set the **errorlevel** (exit code) correctly, so the batch file should end with an `EXIT /B 0` if everything ran correctly.
-A missing `EXIT /B 0` at the end of a startup batch file is a common cause of roles that do not start.
+The role only starts if the **errorlevel** from each of your simple startup tasks is zero. Not all programs set the **errorlevel** (exit code) correctly, so the batch file should end with an `EXIT /B 0` if everything ran correctly.
+
+A missing `EXIT /B 0` at the end of a startup batch file is a common cause of roles that don't start.
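A minimal sketch of that pattern, reusing the **%PathToApp1Install%** variable from the earlier logging example:

```cmd
REM Install the application and propagate a failing exit code if setup fails.
"%PathToApp1Install%\setup.exe" >> "%TEMP%\StartupLog.txt" 2>&1
IF %ERRORLEVEL% NEQ 0 EXIT /B %ERRORLEVEL%

REM Everything succeeded; report success explicitly so the role starts.
EXIT /B 0
```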
> [!NOTE] > I've noticed that nested batch files sometimes stop responding when using the `/B` parameter. You may want to make sure that this problem does not happen if another batch file calls your current batch file, like if you use the [log wrapper](#always-log-startup-activities). You can omit the `/B` parameter in this case.
A missing `EXIT /B 0` at the end of a startup batch file is a common cause of ro
> ### Expect startup tasks to run more than once
-Not all role recycles include a reboot, but all role recycles include running all startup tasks. This means that startup tasks must be able to run multiple times between reboots without any problems. This is discussed in the [preceding section](#detect-that-your-task-has-already-run).
+
+Not all role recycles include a reboot, but all role recycles include running all startup tasks. This design means that startup tasks must be able to run multiple times between reboots without any problems, which is discussed in the [preceding section](#detect-that-your-task-has-already-run).
### Use local storage to store files that must be accessed in the role+ If you want to copy or create a file during your startup task that is then accessible to your role, then that file must be placed in local storage. See the [preceding section](#create-files-in-local-storage-from-a-startup-task). ## Next steps
cloud-services Cloud Services Startup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks.md
Title: Run Startup Tasks in Azure Cloud Services (classic) | Microsoft Docs
-description: Startup tasks help prepare your cloud service environment for your app. This teaches you how startup tasks work and how to make them
+description: Startup tasks help prepare your cloud service environment for your app. This article teaches you how startup tasks work and how to make them
Previously updated : 02/21/2023 Last updated : 07/23/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process.
+You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering Component Object Model (COM) components, setting registry keys, or starting a long running process.
> [!NOTE] > Startup tasks are not applicable to Virtual Machines, only to Cloud Service Web and Worker roles.
You can use startup tasks to perform operations before a role starts. Operations
> ## How startup tasks work
-Startup tasks are actions that are taken before your roles begin and are defined in the [ServiceDefinition.csdef] file by using the [Task] element within the [Startup] element. Frequently startup tasks are batch files, but they can also be console applications, or batch files that start PowerShell scripts.
-Environment variables pass information into a startup task, and local storage can be used to pass information out of a startup task. For example, an environment variable can specify the path to a program you want to install, and files can be written to local storage that can then be read later by your roles.
+Startup tasks are actions taken before your roles begin. The [ServiceDefinition.csdef] file defines startup tasks by using the [Task] element within the [Startup] element. Frequently startup tasks are batch files, but they can also be console applications, or batch files that start PowerShell scripts.
+
+Environment variables pass information into a startup task, and local storage can be used to pass information out of a startup task. For example, an environment variable can specify the path to a program you want to install, and files can be written to local storage. From there, your roles can read the files.
Your startup task can log information and errors to the directory specified by the **TEMP** environment variable. During the startup task, the **TEMP** environment variable resolves to the *C:\\Resources\\temp\\[guid].[rolename]\\RoleTemp* directory when running on the cloud.
-Startup tasks can also be executed several times between reboots. For example, the startup task will be run each time the role recycles, and role recycles may not always include a reboot. Startup tasks should be written in a way that allows them to run several times without problems.
+Startup tasks can also be executed several times between reboots. For example, the startup task runs each time the role recycles, and role recycles may not always include a reboot. Startup tasks should be written in a way that allows them to run several times without problems.
-Startup tasks must end with an **errorlevel** (or exit code) of zero for the startup process to complete. If a startup task ends with a non-zero **errorlevel**, the role will not start.
+Startup tasks must end with an **errorlevel** (or exit code) of zero for the startup process to complete. If a startup task ends with a nonzero **errorlevel**, the role fails to start.
## Role startup order+ The following lists the role startup procedure in Azure:
-1. The instance is marked as **Starting** and does not receive traffic.
+1. The instance is marked as **Starting** and doesn't receive traffic.
2. All startup tasks are executed according to their **taskType** attribute. * The **simple** tasks are executed synchronously, one at a time.
The following lists the role startup procedure in Azure:
> IIS may not be fully configured during the startup task stage in the startup process, so role-specific data may not be available. Startup tasks that require role-specific data should use [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)). > >
-3. The role host process is started and the site is created in IIS.
+3. The role host process is started and the site is created in Internet Information Services (IIS).
4. The [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.OnStart](/previous-versions/azure/reference/ee772851(v=azure.100)) method is called. 5. The instance is marked as **Ready** and traffic is routed to the instance. 6. The [Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint.Run](/previous-versions/azure/reference/ee772746(v=azure.100)) method is called. ## Example of a startup task
-Startup tasks are defined in the [ServiceDefinition.csdef] file, in the **Task** element. The **commandLine** attribute specifies the name and parameters of the startup batch file or console command, the **executionContext** attribute specifies the privilege level of the startup task, and the **taskType** attribute specifies how the task will be executed.
+
+Startup tasks are defined in the [ServiceDefinition.csdef] file, in the **Task** element. The **commandLine** attribute specifies the name and parameters of the startup batch file or console command, the **executionContext** attribute specifies the privilege level of the startup task, and the **taskType** attribute specifies how the task executes.
In this example, an environment variable, **MyVersionNumber**, is created for the startup task and set to the value "**1.0.0.0**".
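A minimal sketch of such a **Task** definition follows; the batch file name *Startup.cmd* is illustrative, and the **Startup** element belongs inside the role's element in the [ServiceDefinition.csdef] file.

```xml
<Startup>
  <Task commandLine="Startup.cmd" executionContext="limited" taskType="simple">
    <Environment>
      <!-- The startup batch file can read this as %MyVersionNumber%. -->
      <Variable name="MyVersionNumber" value="1.0.0.0" />
    </Environment>
  </Task>
</Startup>
```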
EXIT /B 0
> ## Description of task attributes+ The following describes the attributes of the **Task** element in the [ServiceDefinition.csdef] file: **commandLine** - Specifies the command line for the startup task: * The command, with optional command line parameters, which begins the startup task.
-* Frequently this is the filename of a .cmd or .bat batch file.
-* The task is relative to the AppRoot\\Bin folder for the deployment. Environment variables are not expanded in determining the path and file of the task. If environment expansion is required, you can create a small .cmd script that calls your startup task.
+* Frequently, this value is the filename of a .cmd or .bat batch file.
+* The task is relative to the AppRoot\\Bin folder for the deployment. Environment variables aren't expanded in determining the path and file of the task. If environment expansion is required, you can create a small .cmd script that calls your startup task.
* Can be a console application or a batch file that starts a [PowerShell script](cloud-services-startup-tasks-common.md#create-a-powershell-startup-task). **executionContext** - Specifies the privilege level for the startup task. The privilege level can be limited or elevated:
The following describes the attributes of the **Task** element in the [ServiceDe
* **limited** The startup task runs with the same privileges as the role. When the **executionContext** attribute for the [Runtime] element is also **limited**, then user privileges are used. * **elevated**
- The startup task runs with administrator privileges. This allows startup tasks to install programs, make IIS configuration changes, perform registry changes, and other administrator level tasks, without increasing the privilege level of the role itself.
+ The startup task runs with administrator privileges. These privileges allow startup tasks to install programs, make IIS configuration changes, perform registry changes, and complete other administrator-level tasks, without increasing the privilege level of the role itself.
> [!NOTE] > The privilege level of a startup task does not need to be the same as the role itself.
The following describes the attributes of the **Task** element in the [ServiceDe
**taskType** - Specifies the way a startup task is executed. * **simple**
- Tasks are executed synchronously, one at a time, in the order specified in the [ServiceDefinition.csdef] file. When one **simple** startup task ends with an **errorlevel** of zero, the next **simple** startup task is executed. If there are no more **simple** startup tasks to execute, then the role itself will be started.
+ Tasks are executed synchronously, one at a time, in the order specified in the [ServiceDefinition.csdef] file. When one **simple** startup task ends with an **errorlevel** of zero, the next **simple** startup task is executed. If there are no more **simple** startup tasks to execute, then the role itself starts.
> [!NOTE] > If the **simple** task ends with a non-zero **errorlevel**, the instance will be blocked. Subsequent **simple** startup tasks, and the role itself, will not start.
The following describes the attributes of the **Task** element in the [ServiceDe
* **background** Tasks are executed asynchronously, in parallel with the startup of the role. * **foreground**
- Tasks are executed asynchronously, in parallel with the startup of the role. The key difference between a **foreground** and a **background** task is that a **foreground** task prevents the role from recycling or shutting down until the task has ended. The **background** tasks do not have this restriction.
+ Tasks are executed asynchronously, in parallel with the startup of the role. The key difference between a **foreground** and a **background** task is that a **foreground** task prevents the role from recycling or shutting down until the task ends. The **background** tasks don't have this restriction.
## Environment variables
-Environment variables are a way to pass information to a startup task. For example, you can put the path to a blob that contains a program to install, or port numbers that your role will use, or settings to control features of your startup task.
+
+Environment variables are a way to pass information to a startup task. For example, you can put the path to a blob that contains a program to install, or port numbers that your role uses, or settings to control features of your startup task.
There are two kinds of environment variables for startup tasks: static environment variables and environment variables based on members of the [RoleEnvironment] class. Both are in the [Environment] section of the [ServiceDefinition.csdef] file, and both use the [Variable] element and **name** attribute.
-Static environment variables uses the **value** attribute of the [Variable] element. The example above creates the environment variable **MyVersionNumber** which has a static value of "**1.0.0.0**". Another example would be to create a **StagingOrProduction** environment variable which you can manually set to values of "**staging**" or "**production**" to perform different startup actions based on the value of the **StagingOrProduction** environment variable.
+Static environment variables use the **value** attribute of the [Variable] element. The preceding example creates the environment variable **MyVersionNumber**, which has a static value of "**1.0.0.0**". Another example would be to create a **StagingOrProduction** environment variable, which you can manually set to values of "**staging**" or "**production**" to perform different startup actions based on the value of the **StagingOrProduction** environment variable.
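A sketch of such a static variable follows; the value shown is illustrative, and the **Environment** element goes inside the **Task** element as in the earlier example.

```xml
<Environment>
  <!-- The startup task can branch on %StagingOrProduction%. -->
  <Variable name="StagingOrProduction" value="staging" />
</Environment>
```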
-Environment variables based on members of the RoleEnvironment class do not use the **value** attribute of the [Variable] element. Instead, the [RoleInstanceValue] child element, with the appropriate **XPath** attribute value, are used to create an environment variable based on a specific member of the [RoleEnvironment] class. Values for the **XPath** attribute to access various [RoleEnvironment] values can be found [here](cloud-services-role-config-xpath.md).
+Environment variables based on members of the RoleEnvironment class don't use the **value** attribute of the [Variable] element. Instead, the [RoleInstanceValue] child element, with the appropriate **XPath** attribute value, is used to create an environment variable based on a specific member of the [RoleEnvironment] class. Values for the **XPath** attribute to access various [RoleEnvironment] values can be found [here](cloud-services-role-config-xpath.md).
For example, to create an environment variable that is "**true**" when the instance is running in the compute emulator, and "**false**" when running in the cloud, use the following [Variable] and [RoleInstanceValue] elements:
For example, to create an environment variable that is "**true**" when the insta
``` ## Next steps+ Learn how to perform some [common startup tasks](cloud-services-startup-tasks-common.md) with your Cloud Service. [Package](cloud-services-model-and-package.md) your Cloud Service.
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
Title: Common causes of Cloud Service (classic) roles recycling | Microsoft Docs
description: A cloud service role that suddenly recycles can cause significant downtime. Here are some common issues that cause roles to be recycled, which may help you reduce downtime. Previously updated : 02/21/2023 Last updated : 07/23/2024
This article discusses some of the common causes of deployment problems and prov
[!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)] ## Missing runtime dependencies
-If a role in your application relies on any assembly that is not part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks are not available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package.
-Before you build and package your application, verify the following:
+If a role in your application relies on any assembly that isn't part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks aren't available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package.
-* If using Visual studio, make sure the **Copy Local** property is set to **True** for each referenced assembly in your project that is not part of the Azure SDK or the .NET Framework.
-* Make sure the web.config file does not reference any unused assemblies in the compilation element.
-* The **Build Action** of every .cshtml file is set to **Content**. This ensures that the files will appear correctly in the package and enables other referenced files to appear in the package.
+Before you build and package your application, verify the following statements are true:
+
+* If using Visual Studio, make sure the **Copy Local** property is set to **True** for each referenced assembly in your project that isn't part of the Azure SDK or the .NET Framework.
+* Make sure the web.config file doesn't reference any unused assemblies in the compilation element.
+* The **Build Action** of every .cshtml file is set to **Content**. This setting ensures that the files appear correctly in the package and enables other referenced files to appear in the package. (See the project-file sketch after this list.)
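As a rough sketch of where those settings land in an old-style *.csproj* file (the assembly and file names are illustrative), **Copy Local** corresponds to the `<Private>` metadata on a reference, and the **Content** build action corresponds to a `<Content>` item:

```xml
<ItemGroup>
  <Reference Include="Some.ThirdParty.Assembly">
    <!-- Copy Local = True, so the assembly ships in the package. -->
    <Private>True</Private>
  </Reference>
  <!-- Build Action = Content, so the view is included in the package. -->
  <Content Include="Views\Home\Index.cshtml" />
</ItemGroup>
```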
## Assembly targets wrong platform
-Azure is a 64-bit environment. Therefore, .NET assemblies compiled for a 32-bit target won't work on Azure.
+
+Azure is a 64-bit environment. Therefore, .NET assemblies compiled for a 32-bit target aren't compatible with Azure.
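A sketch of the corresponding build setting in a project file follows; the property names are standard MSBuild, and whether you need them depends on your project.

```xml
<PropertyGroup>
  <!-- Avoid a 32-bit-only target; Azure role hosts are 64-bit. -->
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
```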
## Role throws unhandled exceptions while initializing or stopping
-Any exceptions that are thrown by the methods of the [RoleEntryPoint] class, which includes the [OnStart], [OnStop], and [Run] methods, are unhandled exceptions. If an unhandled exception occurs in one of these methods, the role will recycle. If the role is recycling repeatedly, it may be throwing an unhandled exception each time it tries to start.
+
+Any exceptions thrown by the methods of the [RoleEntryPoint] class, which includes the [OnStart], [OnStop], and [Run] methods, are unhandled exceptions. If an unhandled exception occurs in one of these methods, the role recycles. If the role is recycling repeatedly, it may be throwing an unhandled exception each time it tries to start.
## Role returns from Run method+ The [Run] method is intended to run indefinitely. If your code overrides the [Run] method, it should sleep indefinitely. If the [Run] method returns, the role recycles. ## Incorrect DiagnosticsConnectionString setting+ If your application uses Azure Diagnostics, your service configuration file must specify the `DiagnosticsConnectionString` configuration setting. This setting should specify an HTTPS connection to your storage account in Azure.
-To ensure that your `DiagnosticsConnectionString` setting is correct before you deploy your application package to Azure, verify the following:
+To ensure that your `DiagnosticsConnectionString` setting is correct before you deploy your application package to Azure, verify the following statements are true:
* The `DiagnosticsConnectionString` setting points to a valid storage account in Azure.
- By default, this setting points to the emulated storage account, so you must explicitly change this setting before you deploy your application package. If you do not change this setting, an exception is thrown when the role instance attempts to start the diagnostic monitor. This may cause the role instance to recycle indefinitely.
+ By default, this setting points to the emulated storage account, so you must explicitly change this setting before you deploy your application package. If you don't change this setting, an exception is thrown when the role instance attempts to start the diagnostic monitor. This event may cause the role instance to recycle indefinitely.
* The connection string is specified in the following [format](../storage/common/storage-configure-connection-string.md). (The protocol must be specified as HTTPS.) Replace *MyAccountName* with the name of your storage account, and *MyAccountKey* with your access key:

  ```console
  DefaultEndpointsProtocol=https;AccountName=MyAccountName;AccountKey=MyAccountKey
  ```
- If you are developing your application by using Azure Tools for Microsoft Visual Studio, you can use the property pages to set this value.
+ If you're developing your application by using Azure Tools for Microsoft Visual Studio, you can use the property pages to set this value.
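As an optional local sanity check (a sketch that isn't part of the original guidance and assumes the classic Microsoft.WindowsAzure.Storage client library is referenced), you can confirm that the value you plan to use for `DiagnosticsConnectionString` parses as a storage connection string before you deploy:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;

// Substitute your real account name and key; the values below are placeholders.
string connectionString =
    "DefaultEndpointsProtocol=https;AccountName=MyAccountName;AccountKey=MyAccountKey";

// Fail fast locally if the string is malformed instead of letting the
// diagnostic monitor throw when the role instance starts.
if (!CloudStorageAccount.TryParse(connectionString, out CloudStorageAccount account))
{
    throw new InvalidOperationException("The storage connection string is not valid.");
}

Console.WriteLine(account.BlobEndpoint);
```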
+
+## Exported certificate doesn't include private key
-## Exported certificate does not include private key
-To run a web role under TLS, you must ensure that your exported management certificate includes the private key. If you use the *Windows Certificate Manager* to export the certificate, be sure to select **Yes** for the **Export the private key** option. The certificate must be exported in the PFX format, which is the only format currently supported.
+To run a web role under Transport Layer Security (TLS), you must ensure that your exported management certificate includes the private key. If you use the *Windows Certificate Manager* to export the certificate, be sure to select **Yes** for the **Export the private key** option. The certificate must be exported in the .pfx format, which is the only format currently supported.
## Next steps
+
View more [troubleshooting articles](../index.yml?product=cloud-services&tag=top-support-issue) for cloud services.

View more role recycling scenarios at [Kevin Williamson's blog series](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data).
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-In this article, you'll troubleshoot allocation failures where Azure Cloud services (classic) can't deploy because of allocation constraints.
+In this article, you troubleshoot allocation failures where Azure Cloud services (classic) can't deploy because of allocation constraints.
When you deploy instances to a Cloud service (classic) or add new web or worker role instances, Microsoft Azure allocates compute resources.
In Azure portal, navigate to your Cloud service (classic) and in the sidebar sel
![Image shows the Operation log (classic) blade.](./media/cloud-services-troubleshoot-constrained-allocation-failed/cloud-services-troubleshoot-allocation-logs.png)
-When you're inspecting the logs of your Cloud service (classic), you'll see the following exception:
+When you inspect the logs of your Cloud service (classic), you see the following exception:
|Exception Type |Error Message |
|||
-|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.|
+|ConstrainedAllocationFailed |Azure operation '`{Operation ID}`' failed with code Compute.ConstrainedAllocationFailed. Details: Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there's an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Retry later or try reducing the virtual machine (VM) size or number of role instances. Alternatively, if possible, remove the constraints or try deploying to a different region.|
## Cause

When the first instance is deployed to a Cloud service (in either staging or production), that Cloud service gets pinned to a cluster.
-Over time, the resources in this cluster may become fully utilized. If a Cloud service (classic) makes an allocation request for more resources when insufficient resources are available in the pinned cluster, the request will result in an allocation failure. For more information, see the [allocation failure common issues](cloud-services-allocation-failures.md#common-issues).
+Over time, the resources in this cluster may become fully utilized. If a Cloud service (classic) makes an allocation request for more resources when insufficient resources are available in the pinned cluster, the request results in an allocation failure. For more information, see the [allocation failure common issues](cloud-services-allocation-failures.md#common-issues).
## Solution
-Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster.
+Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster.
When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*).
For more allocation failure solutions and background information:
> [!div class="nextstepaction"]
> [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
-If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
+If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
Title: Default TEMP folder size is too small for a role | Microsoft Docs
description: A cloud service role has a limited amount of space for the TEMP folder. This article provides some suggestions on how to avoid running out of space. Previously updated : 02/21/2023 Last updated : 07/24/2024
The default temporary directory of a cloud service worker or web role has a maxi
[!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)]

## Why do I run out of space?
-The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data that is stored in this directory is not persisted across the lifecycle of the cloud service; if the role instances in a cloud service are recycled, the directory is cleaned.
+The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data stored in this directory isn't persisted across the lifecycle of the cloud service. If the role instances in a cloud service are recycled, the directory is cleaned.
## Suggestion to fix the problem

Implement one of the following alternatives:
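One of those alternatives is to declare a larger LocalStorage resource and point TEMP and TMP at it when the role starts. The following is a sketch only, assuming a hypothetical local resource named `CustomTempPath` is declared in the service definition (.csdef) and that the classic Microsoft.WindowsAzure.ServiceRuntime library is referenced:

```csharp
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // "CustomTempPath" is a hypothetical LocalStorage resource declared in
        // the .csdef file with a size larger than the default 100 MB.
        LocalResource customTemp = RoleEnvironment.GetLocalResource("CustomTempPath");

        // Point the process-level TEMP and TMP variables at the larger resource.
        Environment.SetEnvironmentVariable("TMP", customTemp.RootPath);
        Environment.SetEnvironmentVariable("TEMP", customTemp.RootPath);

        return base.OnStart();
    }
}
```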
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
Title: Troubleshoot cloud service (classic) deployment problems | Microsoft Docs
description: There are a few common problems you may run into when deploying a cloud service to Azure. This article provides solutions to some of them. Previously updated : 02/21/2023 Last updated : 07/24/2024
When you deploy a cloud service application package to Azure, you can obtain inf
You can find the **Properties** pane as follows:
-* In the Azure portal, click the deployment of your cloud service, click **All settings**, and then click **Properties**.
+* In the Azure portal, choose the deployment of your cloud service, select **All settings**, and then select **Properties**.
> [!NOTE]
> You can copy the contents of the **Properties** pane to the clipboard by clicking the icon in the upper-right corner of the pane.
You can find the **Properties** pane as follows:
[!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)]
-## Problem: I cannot access my website, but my deployment is started and all role instances are ready
-The website URL link shown in the portal does not include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website.
+## Problem: I can't access my website, but my deployment is started and all role instances are ready
+The website URL link shown in the portal doesn't include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website.
-1. In the Azure portal, click the deployment of your cloud service.
+1. In the Azure portal, choose the deployment of your cloud service.
2. In the **Properties** pane of the Azure portal, check the ports for the role instances (under **Input Endpoints**).
-3. If the port is not 80, add the correct port value to the URL when you access the application. To specify a non-default port, type the URL, followed by a colon (:), followed by the port number, with no spaces.
+3. If the port isn't 80, add the correct port value to the URL when you access the application. To specify a nondefault port, type the URL, followed by a colon (:), followed by the port number, with no spaces.
## Problem: My role instances recycled without me doing anything
-Service healing occurs automatically when Azure detects problem nodes and therefore moves role instances to new nodes. When this occurs, you might see your role instances recycling automatically. To find out if service healing occurred:
+Service healing occurs automatically when Azure detects problem nodes and therefore moves role instances to new nodes. When these moves occur, you might see your role instances recycling automatically. To find out if service healing occurred:
-1. In the Azure portal, click the deployment of your cloud service.
+1. In the Azure portal, choose the deployment of your cloud service.
2. In the **Properties** pane of the Azure portal, review the information and determine whether service healing occurred during the time that you observed the roles recycling.
-Roles will also recycle roughly once per month during host-OS and guest-OS updates.
+Roles recycle roughly once per month during host-OS and guest-OS updates.
For more information, see the blog post [Role Instance Restarts Due to OS Upgrades](/archive/blogs/kwill/role-instance-restarts-due-to-os-upgrades)
-## Problem: I cannot do a VIP swap and receive an error
-A VIP swap is not allowed if a deployment update is in progress. Deployment updates can occur automatically when:
+## Problem: I can't do a VIP swap and receive an error
+A VIP swap isn't allowed if a deployment update is in progress. Deployment updates can occur automatically when:
-* A new guest operating system is available and you are configured for automatic updates.
+* A new guest operating system is available and you're configured for automatic updates.
* Service healing occurs.

To find out if an automatic update is preventing you from doing a VIP swap:
-1. In the Azure portal, click the deployment of your cloud service.
-2. In the **Properties** pane of the Azure portal, look at the value of **Status**. If it is **Ready**, then check **Last operation** to see if one recently happened that might prevent the VIP swap.
+1. In the Azure portal, choose the deployment of your cloud service.
+2. In the **Properties** pane of the Azure portal, look at the value of **Status**. If it's **Ready**, then check **Last operation** to see if one recently happened that might prevent the VIP swap.
3. Repeat steps 1 and 2 for the production deployment.
4. If an automatic update is in process, wait for it to finish before trying to do the VIP swap.

## Problem: A role instance is looping between Started, Initializing, Busy, and Stopped
-This condition could indicate a problem with your application code, package, or configuration file. In that case, you should be able to see the status changing every few minutes and the Azure portal may say something like **Recycling**, **Busy**, or **Initializing**. This indicates that there is something wrong with the application that is keeping the role instance from running.
+This condition could indicate a problem with your application code, package, or configuration file. In that case, you should be able to see the status changing every few minutes and the Azure portal may say something like **Recycling**, **Busy**, or **Initializing**. This fluctuation of status indicates that there's something wrong with the application that is keeping the role instance from running.
For more information on how to troubleshoot this problem, see the blog post [Azure PaaS Compute Diagnostics Data](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data) and [Common issues that cause roles to recycle](cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md).

## Problem: My application stopped working
-1. In the Azure portal, click the role instance.
+1. In the Azure portal, choose the role instance.
2. In the **Properties** pane of the Azure portal, consider the following conditions to resolve your problem:
- * If the role instance has recently stopped (you can check the value of **Abort count**), the deployment could be updating. Wait to see if the role instance resumes functioning on its own.
+ * If the role instance recently stopped (you can check the value of **Abort count**), the deployment could be updating. Wait to see if the role instance resumes functioning on its own.
* If the role instance is **Busy**, check your application code to see if the [StatusCheck](/previous-versions/azure/reference/ee758135(v=azure.100)) event is handled. You might need to add or fix some code that handles this event, as sketched after this list.
* Go through the diagnostic data and troubleshooting scenarios in the blog post [Azure PaaS Compute Diagnostics Data](/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data).
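As a sketch only (not from the original article, and assuming the classic Microsoft.WindowsAzure.ServiceRuntime library), handling the StatusCheck event so the instance reports **Busy** only while it genuinely can't serve traffic might look like this:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Hypothetical flag that your own startup and shutdown code would maintain.
    private volatile bool isReady;

    public override bool OnStart()
    {
        // Respond to the runtime's periodic status checks.
        RoleEnvironment.StatusCheck += (sender, e) =>
        {
            if (!isReady)
            {
                // Report Busy so the load balancer stops sending traffic,
                // without recycling the instance.
                e.SetBusy();
            }
        };

        isReady = true;
        return base.OnStart();
    }
}
```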
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-In this article, you'll troubleshoot allocation failures where the fabric controller cannot allocate when deploying an Azure Cloud service (classic).
+In this article, you troubleshoot allocation failures where the fabric controller can't allocate when deploying an Azure Cloud service (classic).
When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources.
In Azure portal, navigate to your Cloud service (classic) and in the sidebar sel
![Image shows the Operation log (classic) blade.](./media/cloud-services-troubleshoot-fabric-internal-server-error/cloud-services-troubleshoot-allocation-logs.png)
-When you're inspecting the logs of your Cloud service (classic), you'll see the following exception:
+When you inspect the logs of your Cloud service (classic), you see the following exception:
|Exception |Error Message |
|||
Follow the guidance for allocation failures in the following scenarios.
### Not pinned to a cluster
-The first time you deploy a Cloud service (classic), the cluster hasn't been selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because:
+The first time you deploy a Cloud service (classic), no cluster is selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because:
-- You've selected a particular size that isn't available in the region.
+- You selected a particular size that isn't available in the region.
- The combination of sizes that are needed across different roles isn't available in the region.

When you experience an allocation error in this scenario, the recommended course of action is to check the available sizes in the region and change the size you previously specified.
When you experience an allocation error in this scenario, the recommended course
### Pinned to a cluster
-Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster.
+Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster.
When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*).
For more allocation failure solutions and background information:
> [!div class="nextstepaction"]
> [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
-If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
+If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
Previously updated : 02/21/2023 Last updated : 07/24/2024
In the [Azure portal](https://portal.azure.com/), navigate to your Cloud service
:::image type="content" source="./media/cloud-services-troubleshoot-location-not-found-for-role-size/cloud-services-troubleshoot-allocation-logs.png" alt-text="Screenshot shows the Operation log (classic) pane.":::
-When you inspect the logs of your Cloud service (classic), you'll see the following exception:
+When you inspect the logs of your Cloud service (classic), you see the following exception:
|Exception Type |Error Message |
|||
When you inspect the logs of your Cloud service (classic), you'll see the follow
## Cause
-There's a capacity issue with the region or cluster that you're deploying to. The `LocationNotFoundForRoleSize` exception occurs when the resource SKU you've selected, the virtual machine size, isn't available for the region specified.
+There's a capacity issue with the region or cluster that you're deploying to. The `LocationNotFoundForRoleSize` exception occurs when the resource SKU you selected, the virtual machine size, isn't available for the region specified.
## Find SKUs in a region
-In this scenario, you should select a different region or SKU for your Cloud service (classic) deployment. Before you deploy or upgrade your Cloud service (classic), determine which SKUs are available in a region or availability zone. Follow the [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes below.
+In this scenario, you should select a different region or SKU for your Cloud service (classic) deployment. Before you deploy or upgrade your Cloud service (classic), determine which SKUs are available in a region or availability zone. Use the following [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes.
### List SKUs in region using Azure CLI
You can use the [Resource Skus - List](/rest/api/compute/resourceskus/list) oper
## Next steps
-For more allocation failure solutions and to better understand how they're generated:
+For more allocation failure solutions and to better understand how allocation failures occur:
> [!div class="nextstepaction"]
> [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-In this article, you'll troubleshoot over constrained allocation failures that prevent deployment of Azure Cloud Services (classic).
+In this article, you troubleshoot overconstrained allocation failures that prevent deployment of Azure Cloud Services (classic).
When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources.
You may occasionally receive errors during these operations even before you reac
|Exception Type |Error Message |
|||
-|OverconstrainedAllocationRequest |The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.|
+|OverconstrainedAllocationRequest |The virtual machine (VM) size (or combination of VM sizes) required by this deployment can't be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings. Also try deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group. You can try deploying to a different region altogether.|
## Cause
Follow the guidance for allocation failures in the following scenarios.
### Not pinned to a cluster
-The first time you deploy a Cloud service (classic), the cluster hasn't been selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because:
+The first time you deploy a Cloud service (classic), no cluster is selected yet, so the cloud service isn't *pinned*. Azure may have a deployment failure because:
-- You've selected a particular size that isn't available in the region.
+- You selected a particular size that isn't available in the region.
- The combination of sizes that are needed across different roles isn't available in the region.

When you experience an allocation error in this scenario, the recommended course of action is to check the available sizes in the region and change the size you previously specified.
When you experience an allocation error in this scenario, the recommended course
### Pinned to a cluster
-Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) will happen in the same cluster.
+Existing cloud services are *pinned* to a cluster. Any further deployments for the Cloud service (classic) happen in the same cluster.
When you experience an allocation error in this scenario, the recommended course of action is to redeploy to a new Cloud service (classic) (and update the *CNAME*).
For more allocation failure solutions and background information:
> [!div class="nextstepaction"]
> [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
-If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
+If your Azure issue isn't addressed in this article, visit the Azure forums on [the Microsoft Developer Network (MSDN) and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
Title: Troubleshoot roles that fail to start | Microsoft Docs
description: Here are some common reasons why a Cloud Service role may fail to start. Solutions to these problems are also provided. Previously updated : 02/21/2023 Last updated : 07/24/2024
Here are some common problems and solutions related to Azure Cloud Services role
[!INCLUDE [support-disclaimer](~/reusable-content/ce-skilling/azure/includes/support-disclaimer.md)]

## Missing DLLs or dependencies
-Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing DLLs or assemblies.
+Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing dynamic link libraries (DLLs) or assemblies.
Symptoms of missing DLLs or assemblies can be:

* Your role instance is cycling through **Initializing**, **Busy**, and **Stopping** states.
-* Your role instance has moved to **Ready** but if you navigate to your web application, the page does not appear.
+* Your role instance moved to **Ready** but if you navigate to your web application, the page doesn't appear.
There are several recommended methods for investigating these issues.

## Diagnose missing DLL issues in a web role
-When you navigate to a website that is deployed in a web role, and the browser displays a server error similar to the following, it may indicate that a DLL is missing.
+When you navigate to a website deployed in a web role, and the browser displays a server error similar to the following, it may indicate a DLL is missing.
![Server Error in '/' Application.](./media/cloud-services-troubleshoot-roles-that-fail-start/ic503388.png)
To view more complete errors without using Remote Desktop:
4. Save the file.
5. Repackage and redeploy the service.
-Once the service is redeployed, you will see an error message with the name of the missing assembly or DLL.
+Once the service redeploys, you see an error message with the name of the missing assembly or DLL.
## Diagnose issues by viewing the error remotely

You can use Remote Desktop to access the role and view more complete error information remotely. Use the following steps to view the errors by using Remote Desktop:
You can use Remote Desktop to access the role and view more complete error infor
9. Open Internet Explorer.
10. Type the address and the name of the web application. For example, `http://<IPV4 Address>/default.aspx`.
-Navigating to the website will now return more explicit error messages:
+Navigating to the website now returns more explicit error messages:
* Server Error in '/' Application.
* Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
For best results in using this method of diagnosis, you should use a computer or
1. Install the standalone version of the [Azure SDK](https://azure.microsoft.com/downloads/).
2. On the development machine, build the cloud service project.
3. In Windows Explorer, navigate to the bin\debug folder of the cloud service project.
-4. Copy the .csx folder and .cscfg file to the computer that you are using to debug the issues.
+4. Copy the .csx folder and .cscfg file to the computer you're using to debug the issues.
5. On the clean machine, open an Azure SDK Command Prompt window and type `csrun.exe /devstore:start`.
6. At the command prompt, type `run csrun <path to .csx folder> <path to .cscfg file> /launchBrowser`.
-7. When the role starts, you will see detailed error information in Internet Explorer. You can also use standard Windows troubleshooting tools to further diagnose the problem.
+7. When the role starts, you see detailed error information in Internet Explorer. You can also use standard Windows troubleshooting tools to further diagnose the problem.
## Diagnose issues by using IntelliTrace

For worker and web roles that use .NET Framework 4, you can use [IntelliTrace](/visualstudio/debugger/intellitrace), which is available in Microsoft Visual Studio Enterprise.
Follow these steps to deploy the service with IntelliTrace enabled:
3. Once the instance starts, open the **Server Explorer**.
4. Expand the **Azure\\Cloud Services** node and locate the deployment.
5. Expand the deployment until you see the role instances. Right-click on one of the instances.
-6. Choose **View IntelliTrace logs**. The **IntelliTrace Summary** will open.
-7. Locate the exceptions section of the summary. If there are exceptions, the section will be labeled **Exception Data**.
+6. Choose **View IntelliTrace logs**. The **IntelliTrace Summary** opens.
+7. Locate the exceptions section of the summary. If there are exceptions, the section is labeled **Exception Data**.
8. Expand the **Exception Data** and look for **System.IO.FileNotFoundException** errors similar to the following:

   ![Exception data, missing file, or assembly](./media/cloud-services-troubleshoot-roles-that-fail-start/ic503390.png)
To address missing DLL and assembly errors, follow these steps:
1. Open the solution in Visual Studio.
2. In **Solution Explorer**, open the **References** folder.
-3. Click the assembly identified in the error.
+3. Select the assembly identified in the error.
4. In the **Properties** pane, locate **Copy Local property** and set the value to **True**.
5. Redeploy the cloud service.
-Once you have verified that all errors have been corrected, you can deploy the service without checking the **Enable IntelliTrace for .NET 4 roles** check box.
+Once you verify all errors are corrected, you can deploy the service without checking the **Enable IntelliTrace for .NET 4 roles** check box.
## Next steps

View more [troubleshooting articles](../index.yml?product=cloud-services&tag=top-support-issue) for cloud services.
cloud-services Cloud Services Update Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-update-azure-service.md
Title: How to update a cloud service (classic) | Microsoft Docs
description: Learn how to update cloud services in Azure. Learn how an update on a cloud service proceeds to ensure availability. Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Updating a cloud service, including both its roles and guest OS, is a three step process. First, the binaries and configuration files for the new cloud service or OS version must be uploaded. Next, Azure reserves compute and network resources for the cloud service based on the requirements of the new cloud service version. Finally, Azure performs a rolling upgrade to incrementally update the tenant to the new version or guest OS, while preserving your availability. This article discusses the details of this last step ΓÇô the rolling upgrade.
+The process to update a cloud service, including both its roles and guest OS, takes three steps. First, the binaries and configuration files for the new cloud service or OS version must be uploaded. Next, Azure reserves compute and network resources for the cloud service based on the requirements of the new cloud service version. Finally, Azure performs a rolling upgrade to incrementally update the tenant to the new version or guest OS, while preserving your availability. This article discusses the details of this last step – the rolling upgrade.
## Update an Azure Service
-Azure organizes your role instances into logical groupings called upgrade domains (UD). Upgrade domains (UD) are logical sets of role instances that are updated as a group. Azure updates a cloud service one UD at a time, which allows instances in other UDs to continue serving traffic.
+Azure organizes your role instances into logical groupings called upgrade domains (UD). Upgrade domains (UD) are logical sets of role instances that are updated as a group. Azure updates a cloud service one UD at a time, which allows instances in other UDs to continue serving traffic.
The default number of upgrade domains is 5. You can specify a different number of upgrade domains by including the upgradeDomainCount attribute in the service's definition file (.csdef). For more information about the upgradeDomainCount attribute, see [Azure Cloud Services Definition Schema (.csdef File)](./schema-csdef-file.md).
When you perform an in-place update of one or more roles in your service, Azure
> [!NOTE]
> While the terms **update** and **upgrade** have slightly different meanings in the context of Azure, they can be used interchangeably for the processes and descriptions of the features in this document.
->
->
-Your service must define at least two instances of a role for that role to be updated in-place without downtime. If the service consists of only one instance of one role, your service will be unavailable until the in-place update has finished.
+Your service must define at least two instances of a role for that role to be updated in-place without downtime. If the service consists of only one instance of one role, your service is unavailable until the in-place update finishes.
-This topic covers the following information about Azure updates:
+This article covers the following information about Azure updates:
* [Allowed service changes during an update](#AllowedChanges)
* [How an upgrade proceeds](#howanupgradeproceeds)
This topic covers the following information about Azure updates:
## Allowed service changes during an update

The following table shows the allowed changes to a service during an update:
-| Changes permitted to hosting, services, and roles | In-place update | Staged (VIP swap) | Delete and re-deploy |
+| Changes permitted to hosting, services, and roles | In-place update | Staged (VIP swap) | Delete and redeploy |
| | | | |
| Operating system version |Yes |Yes |Yes |
| .NET trust level |Yes |Yes |Yes |
The following table shows the allowed changes to a service during an update:
> >
-The following items are not supported during an update:
+The following items aren't supported during an update:
* Changing the name of a role. Remove and then add the role with the new name.
* Changing the Upgrade Domain count.
* Decreasing the size of the local resources.
-If you are making other updates to your service's definition, such as decreasing the size of local resource, you must perform a VIP swap update instead. For more information, see [Swap Deployment](/previous-versions/azure/reference/ee460814(v=azure.100)).
+If you make other updates to your service's definition, such as decreasing the size of local resource, you must perform a VIP swap update instead. For more information, see [Swap Deployment](/previous-versions/azure/reference/ee460814(v=azure.100)).
<a name="howanupgradeproceeds"></a>

## How an upgrade proceeds
-You can decide whether you want to update all of the roles in your service or a single role in the service. In either case, all instances of each role that is being upgraded and belong to the first upgrade domain are stopped, upgraded, and brought back online. Once they are back online, the instances in the second upgrade domain are stopped, upgraded, and brought back online. A cloud service can have at most one upgrade active at a time. The upgrade is always performed against the latest version of the cloud service.
+You can decide whether you want to update all of the roles in your service or a single role in the service. In either case, all instances of each role being upgraded that belong to the first upgrade domain are stopped, upgraded, and brought back online. Once they're back online, the instances in the second upgrade domain are stopped, upgraded, and brought back online. A cloud service can have at most one upgrade active at a time. The upgrade is always performed against the latest version of the cloud service.
-The following diagram illustrates how the upgrade proceeds if you are upgrading all of the roles in the service:
+The following diagram illustrates how the upgrade proceeds if you upgrade all of the roles in the service:
![Upgrade service](media/cloud-services-update-azure-service/IC345879.png "Upgrade service")
-This next diagram illustrates how the update proceeds if you are upgrading only a single role:
+This next diagram illustrates how the update proceeds if you upgrade only a single role:
![Upgrade role](media/cloud-services-update-azure-service/IC345880.png "Upgrade role")
-During an automatic update, the Azure Fabric Controller periodically evaluates the health of the cloud service to determine when itΓÇÖs safe to walk the next UD. This health evaluation is performed on a per-role basis and considers only instances in the latest version (i.e. instances from UDs that have already been walked). It verifies that a minimum number of role instances, for each role, have achieved a satisfactory terminal state.
+During an automatic update, the Azure Fabric Controller periodically evaluates the health of the cloud service to determine when it's safe to walk the next UD. This health evaluation is performed on a per-role basis and considers only instances in the latest version (that is, instances from UDs that were already walked). It verifies that, for each role, a minimum number of role instances achieved a satisfactory terminal state.
### Role Instance Start Timeout
-The Fabric Controller will wait 30 minutes for each role instance to reach a Started state. If the timeout duration elapses, the Fabric Controller will continue walking to the next role instance.
+The Fabric Controller waits 30 minutes for each role instance to reach a Started state. If the timeout duration elapses, the Fabric Controller continues walking to the next role instance.
### Impact to drive data during Cloud Service upgrades
-When upgrading a service from a single instance to multiple instances your service will be brought down while the upgrade is performed due to the way Azure upgrades services. The service level agreement guaranteeing service availability only applies to services that are deployed with more than one instance. The following list describes how the data on each drive is affected by each Azure service upgrade scenario:
+When you upgrade a service from a single instance to multiple instances, Azure brings your services down while the upgrade is performed. The service level agreement guaranteeing service availability only applies to services that are deployed with more than one instance. The following list describes how each Azure service upgrade scenario affects the data on each drive:
|Scenario|C Drive|D Drive|E Drive|
|--|-|-|-|
-|VM reboot|Preserved|Preserved|Preserved|
+|Virtual machine (VM) reboot|Preserved|Preserved|Preserved|
|Portal reboot|Preserved|Preserved|Destroyed|
|Portal reimage|Preserved|Destroyed|Destroyed|
|In-Place Upgrade|Preserved|Preserved|Destroyed|
|Node migration|Destroyed|Destroyed|Destroyed|
-Note that, in the above list, the E: drive represents the roleΓÇÖs root drive, and should not be hard-coded. Instead, use the **%RoleRoot%** environment variable to represent the drive.
+In the preceding table, the E: drive represents the role's root drive, and shouldn't be hard-coded. Instead, use the **%RoleRoot%** environment variable to represent the drive.
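For instance, a role can resolve its root drive at run time from that environment variable instead of hard-coding `E:`. This is a minimal sketch, not part of the original article:

```csharp
using System;
using System.IO;

// Resolve the role root at run time instead of hard-coding a drive letter.
string roleRoot = Environment.GetEnvironmentVariable("RoleRoot") ?? string.Empty;

// The variable typically resolves to a bare drive letter such as "E:",
// so append a separator before combining it with a relative path.
string approotPath = Path.Combine(roleRoot + Path.DirectorySeparatorChar, "approot");

Console.WriteLine(approotPath);
```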
To minimize the downtime when upgrading a single-instance service, deploy a new multi-instance service to the staging server and perform a VIP swap.

<a name="RollbackofanUpdate"></a>

## Rollback of an update
-Azure provides flexibility in managing services during an update by letting you initiate additional operations on a service, after the initial update request is accepted by the Azure Fabric Controller. A rollback can only be performed when an update (configuration change) or upgrade is in the **in progress** state on the deployment. An update or upgrade is considered to be in-progress as long as there is at least one instance of the service which has not yet been updated to the new version. To test whether a rollback is allowed, check the value of the RollbackAllowed flag, returned by [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations, is set to true.
+Azure provides flexibility in managing services during an update by letting you initiate more operations on a service, after the Azure Fabric Controller accepts the initial update request. A rollback can only be performed when an update (configuration change) or upgrade is in the **in progress** state on the deployment. An update or upgrade is considered to be in-progress as long as there is at least one instance of the service that remains unupdated to the new version. To test whether a rollback is allowed, check that the RollbackAllowed flag is set to true. The [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations return the RollbackAllowed flag for your reference.
> [!NOTE]
> It only makes sense to call Rollback on an **in-place** update or upgrade because VIP swap upgrades involve replacing one entire running instance of your service with another.
->
->
Rollback of an in-progress update has the following effects on the deployment:
-* Any role instances which had not yet been updated or upgraded to the new version are not updated or upgraded, because those instances are already running the target version of the service.
-* Any role instances which had already been updated or upgraded to the new version of the service package (\*.cspkg) file or the service configuration (\*.cscfg) file (or both files) are reverted to the pre-upgrade version of these files.
+* Any role instances that remain unupdated or unupgraded to the new version aren't updated or upgraded, because those instances are already running the target version of the service.
+* Any role instances that already updated or upgraded to the new version of the service package (\*.cspkg) file or the service configuration (\*.cscfg) file (or both files) are reverted to the preupgrade version of these files.
-This functionally is provided by the following features:
+The following features provide this functionality:
-* The [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation, which can be called on a configuration update (triggered by calling [Change Deployment Configuration](/previous-versions/azure/reference/ee460809(v=azure.100))) or an upgrade (triggered by calling [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100))) as long as there is at least one instance in the service which has not yet been updated to the new version.
+* The [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation, which can be called on a configuration update (triggered by calling [Change Deployment Configuration](/previous-versions/azure/reference/ee460809(v=azure.100))) or an upgrade (triggered by calling [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100))) as long as there is at least one instance in the service that remains unupdated to the new version.
* The Locked element and the RollbackAllowed element, which are returned as part of the response body of the [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)) operations:
  1. The Locked element allows you to detect when a mutating operation can be invoked on a given deployment.
  2. The RollbackAllowed element allows you to detect when the [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation can be called on a given deployment.
- In order to perform a rollback, you do not have to check both the Locked and the RollbackAllowed elements. It suffices to confirm that RollbackAllowed is set to true. These elements are only returned if these methods are invoked by using the request header set to ΓÇ£x-ms-version: 2011-10-01ΓÇ¥ or a later version. For more information about versioning headers, see [Service Management Versioning](/previous-versions/azure/gg592580(v=azure.100)).
+ In order to perform a rollback, you don't have to check both the Locked and the RollbackAllowed elements. It suffices to confirm that RollbackAllowed is set to true. These elements are only returned if these methods are invoked by using the request header set to "x-ms-version: 2011-10-01" or a later version. For more information about versioning headers, see [the versioning of the classic deployment model](/previous-versions/azure/gg592580(v=azure.100)).
-There are some situations where a rollback of an update or upgrade is not supported, these are as follows:
+There are some situations where a rollback of an update or upgrade isn't supported. These situations are as follows:
-* Reduction in local resources - If the update increases the local resources for a role the Azure platform does not allow rolling back.
-* Quota limitations - If the update was a scale down operation you may no longer have sufficient compute quota to complete the rollback operation. Each Azure subscription has a quota associated with it that specifies the maximum number of cores which can be consumed by all hosted services that belong to that subscription. If performing a rollback of a given update would put your subscription over quota then that a rollback will not be enabled.
-* Race condition - If the initial update has completed, a rollback is not possible.
+* Reduction in local resources - If the update increases the local resources for a role, the Azure platform doesn't allow rolling back.
+* Quota limitations - If the update was a scale down operation you may no longer have sufficient compute quota to complete the rollback operation. Each Azure subscription has a quota associated with it. The quota specifies the maximum number of cores that all hosted services belonging to that subscription can consume. If performing a rollback of a given update would put your subscription over quota, then that rollback won't be enabled.
+* Race condition - If the initial update completes, a rollback isn't possible.
-An example of when the rollback of an update might be useful is if you are using the [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) operation in manual mode to control the rate at which a major in-place upgrade to your Azure hosted service is rolled out.
+An example of when the rollback of an update might be useful is if you use the [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) operation in manual mode to control the rate at which a major in-place upgrade rolls out to your Azure hosted service.
-During the rollout of the upgrade you call [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) in manual mode and begin to walk upgrade domains. If at some point, as you monitor the upgrade, you note some role instances in the first upgrade domains that you examine have become unresponsive, you can call the [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation on the deployment, which will leave untouched the instances which had not yet been upgraded and rollback instances which had been upgraded to the previous service package and configuration.
+During the rollout of the upgrade, you call [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)) in manual mode and begin to walk upgrade domains. If at some point, as you monitor the upgrade, you note some role instances in the first upgrade domains are unresponsive, you can call the [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)) operation on the deployment. This operation leaves untouched the instances that remain unupgraded and rolls back upgraded instances to the previous service package and configuration.
<a name="multiplemutatingoperations"></a>

## Initiating multiple mutating operations on an ongoing deployment
-In some cases you may want to initiate multiple simultaneous mutating operations on an ongoing deployment. For example, you may perform a service update and, while that update is being rolled out across your service, you want to make some change, e.g. to roll the update back, apply a different update, or even delete the deployment. A case in which this might be necessary is if a service upgrade contains buggy code which causes an upgraded role instance to repeatedly crash. In this case, the Azure Fabric Controller will not be able to make progress in applying that upgrade because an insufficient number of instances in the upgraded domain are healthy. This state is referred to as a *stuck deployment*. You can unstick the deployment by rolling back the update or applying a fresh update over top of the failing one.
+In some cases, you may want to initiate multiple simultaneous mutating operations on an ongoing deployment. For example, you may perform a service update and, while the update rolls out across your service, you want to make some change, like rolling back the update, applying a different update, or even deleting the deployment. A case in which this scenario might arise is if a service upgrade contains buggy code that causes an upgraded role instance to repeatedly crash. In this case, the Azure Fabric Controller is unable to make progress in applying that upgrade because an insufficient number of instances in the upgraded domain are healthy. This state is referred to as a *stuck deployment*. You can unstick the deployment by rolling back the update or applying a fresh update over top of the failing one.
-Once the initial request to update or upgrade the service has been received by the Azure Fabric Controller, you can start subsequent mutating operations. That is, you do not have to wait for the initial operation to complete before you can start another mutating operation.
+Once the Azure Fabric Controller receives the initial request to update or upgrade the service, you can start subsequent mutating operations. That is, you don't have to wait for the initial operation to complete before you can start another mutating operation.
-Initiating a second update operation while the first update is ongoing will perform similar to the rollback operation. If the second update is in automatic mode, the first upgrade domain will be upgraded immediately, possibly leading to instances from multiple upgrade domains being offline at the same point in time.
+Initiating a second update operation while the first update is ongoing plays out similarly to the rollback operation. If the second update is in automatic mode, the first upgrade domain upgrades immediately, possibly leading to instances from multiple upgrade domains being offline at the same time.
The mutating operations are as follows: [Change Deployment Configuration](/previous-versions/azure/reference/ee460809(v=azure.100)), [Upgrade Deployment](/previous-versions/azure/reference/ee460793(v=azure.100)), [Update Deployment Status](/previous-versions/azure/reference/ee460808(v=azure.100)), [Delete Deployment](/previous-versions/azure/reference/ee460815(v=azure.100)), and [Rollback Update Or Upgrade](/previous-versions/azure/reference/hh403977(v=azure.100)).
-Two operations, [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)), return the Locked flag which can be examined to determine whether a mutating operation can be invoked on a given deployment.
+Two operations, [Get Deployment](/previous-versions/azure/reference/ee460804(v=azure.100)) and [Get Cloud Service Properties](/previous-versions/azure/reference/ee460806(v=azure.100)), return the Locked flag. You can examine the Locked flag to determine whether you can invoke a mutating operation on a given deployment.
-In order to call the version of these methods which returns the Locked flag, you must set request header to ΓÇ£x-ms-version: 2011-10-01ΓÇ¥ or a later. For more information about versioning headers, see [Service Management Versioning](/previous-versions/azure/gg592580(v=azure.100)).
+In order to call the version of these methods that returns the Locked flag, you must set the request header to "x-ms-version: 2011-10-01" or a later version. For more information about versioning headers, see [the versioning of the classic deployment model](/previous-versions/azure/gg592580(v=azure.100)).
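Purely as an illustration (a sketch that isn't part of the original article, and that assumes the message handler is already configured with the management certificate for the classic Service Management endpoint), setting the versioning header and reading the RollbackAllowed element might look like this:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

static class DeploymentChecks
{
    // deploymentUri points at the Get Deployment URI for your subscription,
    // cloud service, and slot; certificateHandler is assumed to carry the
    // management certificate.
    public static async Task<bool> IsRollbackAllowedAsync(
        HttpMessageHandler certificateHandler, Uri deploymentUri)
    {
        using var client = new HttpClient(certificateHandler);
        client.DefaultRequestHeaders.Add("x-ms-version", "2011-10-01"); // or a later version

        string xml = await client.GetStringAsync(deploymentUri);

        // The classic management API returns XML; read the RollbackAllowed element.
        XNamespace ns = "http://schemas.microsoft.com/windowsazure";
        string rollbackAllowed = (string)XDocument.Parse(xml).Root?.Element(ns + "RollbackAllowed");
        return string.Equals(rollbackAllowed, "true", StringComparison.OrdinalIgnoreCase);
    }
}
```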
<a name="distributiondfroles"></a>

## Distribution of roles across upgrade domains

Azure distributes instances of a role evenly across a set number of upgrade domains, which can be configured as part of the service definition (.csdef) file. The max number of upgrade domains is 20 and the default is 5. For more information about how to modify the service definition file, see [Azure Service Definition Schema (.csdef File)](cloud-services-model-and-package.md#csdef).
-For example, if your role has ten instances, by default each upgrade domain contains two instances. If your role has 14 instances, then four of the upgrade domains contain three instances, and a fifth domain contains two.
+For example, if your role has 10 instances, by default each upgrade domain contains two instances. If your role has 14 instances, then four of the upgrade domains contain three instances, and a fifth domain contains two.
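As a quick illustration of that arithmetic (not part of the original article), each upgrade domain receives either the floor or the ceiling of the instance count divided by the domain count:

```csharp
using System;
using System.Linq;

// Illustration only: spread instanceCount role instances across domainCount
// upgrade domains the way the example above describes.
static int[] DistributeInstances(int instanceCount, int domainCount)
{
    return Enumerable.Range(0, domainCount)
        // Every domain gets the base share; the first (instanceCount % domainCount)
        // domains get one extra instance.
        .Select(ud => instanceCount / domainCount + (ud < instanceCount % domainCount ? 1 : 0))
        .ToArray();
}

// 14 instances across 5 upgrade domains -> 3, 3, 3, 3, 2
Console.WriteLine(string.Join(", ", DistributeInstances(14, 5)));
```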
Upgrade domains are identified with a zero-based index: the first upgrade domain has an ID of 0, and the second upgrade domain has an ID of 1, and so on.
-The following diagram illustrates how a service than contains two roles are distributed when the service defines two upgrade domains. The service is running eight instances of the web role and nine instances of the worker role.
+The following diagram illustrates how roles are distributed when a service that contains two roles defines two upgrade domains. The service is running eight instances of the web role and nine instances of the worker role.
![Distribution of Upgrade Domains](media/cloud-services-update-azure-service/IC345533.png "Distribution of Upgrade Domains")
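As a hedged illustration of the .csdef setting mentioned earlier, a service definition that raises the upgrade domain count might open like this; the service and role names are hypothetical:

```xml
<!-- Hypothetical .csdef opening: raise the upgrade domain count from the default of 5 to 10. -->
<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
                   upgradeDomainCount="10">
  <WebRole name="WebRole1">
    <!-- Sites, endpoints, and configuration settings omitted. -->
  </WebRole>
</ServiceDefinition>
```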
cloud-services Cloud Services Workflow Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-workflow-process.md
Title: Workflow of Windows Azure VM Architecture | Microsoft Docs
+ Title: Workflow of Microsoft Azure Virtual Machine (VM) Architecture | Microsoft Docs
description: This article provides an overview of the workflow processes when you deploy a service. Previously updated : 02/21/2023 Last updated : 07/24/2024
-# Workflow of Windows Azure classic VM Architecture
+# Workflow of Microsoft Azure classic Virtual Machine (VM) Architecture
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
The following diagram presents the architecture of Azure resources.
## Workflow basics
-**A**. RDFE / FFE is the communication path from the user to the fabric. RDFE (RedDog Front End) is the publicly exposed API that is the front end to the Management Portal and the Service Management API such as Visual Studio, Azure MMC, and so on. All requests from the user go through RDFE. FFE (Fabric Front End) is the layer that translates requests from RDFE into fabric commands. All requests from RDFE go through the FFE to reach the fabric controllers.
+**A**. RDFE / FFE is the communication path from the user to the fabric. RDFE (RedDog Front End) is the publicly exposed API that is the front end to the Management Portal and the classic deployment model API, such as Visual Studio, Azure MMC, and so on. All requests from the user go through RDFE. FFE (Fabric Front End) is the layer that translates requests from RDFE into fabric commands. All requests from RDFE go through the FFE to reach the fabric controllers.
**B**. The fabric controller is responsible for maintaining and monitoring all the resources in the data center. It communicates with fabric host agents on the fabric OS sending information such as the Guest OS version, service package, service configuration, and service state.
-**C**. The Host Agent lives on the Host OS and is responsible for setting up Guest OS and communicating with Guest Agent (WindowsAzureGuestAgent) in order to update the role toward an intended goal state and do heartbeat checks with the Guest agent. If Host Agent does not receive heartbeat response for 10 minutes, Host Agent restarts the Guest OS.
+**C**. The Host Agent lives on the Host OS and is responsible for setting up the Guest OS. It also handles communicating with the Guest Agent (WindowsAzureGuestAgent) to update the role toward an intended goal state and do heartbeat checks with the Guest Agent. If the Host Agent doesn't receive a heartbeat response for 10 minutes, the Host Agent restarts the Guest OS.
**C2**. WaAppAgent is responsible for installing, configuring, and updating WindowsAzureGuestAgent.exe.
-**D**. WindowsAzureGuestAgent is responsible for the following:
+**D**. WindowsAzureGuestAgent is responsible for the following tasks:
-1. Configuring the Guest OS including firewall, ACLs, LocalStorage resources, service package and configuration, and certificates.
-2. Setting up the SID for the user account that the role will run under.
-3. Communicating the role status to the fabric.
-4. Starting WaHostBootstrapper and monitoring it to make sure that the role is in goal state.
+* Configuring the Guest OS including firewall, ACLs, LocalStorage resources, service package and configuration, and certificates.
+* Setting up the SID for the user account that the role runs under.
+* Communicating the role status to the fabric.
+* Starting WaHostBootstrapper and monitoring it to make sure that the role is in goal state.
**E**. WaHostBootstrapper is responsible for:
-1. Reading the role configuration, and starting all the appropriate tasks and processes to configure and run the role.
-2. Monitoring all its child processes.
-3. Raising the StatusCheck event on the role host process.
+* Reading the role configuration, and starting all the appropriate tasks and processes to configure and run the role.
+* Monitoring all its child processes.
+* Raising the StatusCheck event on the role host process.
-**F**. IISConfigurator runs if the role is configured as a Full IIS web role. It is responsible for:
+**F**. IISConfigurator runs if the role is configured as a Full IIS web role. It's responsible for:
-1. Starting the standard IIS services
-2. Configuring the rewrite module in the web configuration
-3. Setting up the AppPool for the configured role in the service model
-4. Setting up IIS logging to point to the DiagnosticStore LocalStorage folder
-5. Configuring permissions and ACLs
-6. The website resides in %roleroot%:\sitesroot\0, and the AppPool points to this location to run IIS.
+* Starting the standard IIS services
+* Configuring the rewrite module in the web configuration
+* Setting up the AppPool for the configured role in the service model
+* Setting up IIS logging to point to the DiagnosticStore LocalStorage folder
+* Configuring permissions and ACLs
+* The website resides in %roleroot%:\sitesroot\0, and the AppPool points to this location to run IIS.
-**G**. Startup tasks are defined by the role model and started by WaHostBootstrapper. Startup tasks can be configured to run in the background asynchronously, and the host bootstrapper will start the startup task and then continue on to other startup tasks. Startup tasks can also be configured to run in Simple (default) mode in which the host bootstrapper will wait for the startup task to finish running and return a success (0) exit code before continuing to the next startup task.
+**G**. The role model defines startup tasks, and WaHostBootstrapper starts them. Startup tasks can be configured to run in the background asynchronously, in which case the host bootstrapper starts the startup task and then continues on to other startup tasks. Startup tasks can also be configured to run in Simple (default) mode. In Simple mode, the host bootstrapper waits for the startup task to finish running and return a success (0) exit code before continuing to the next startup task.
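As a hedged illustration of the two modes, a .csdef startup section might declare one background task and one Simple task; the script names here are hypothetical:

```xml
<!-- Illustrative .csdef startup section: one asynchronous task and one Simple (blocking) task. -->
<Startup>
  <!-- Background: WaHostBootstrapper starts this task and moves on without waiting. -->
  <Task commandLine="startup\configure-monitoring.cmd" executionContext="limited" taskType="background" />
  <!-- Simple (default): WaHostBootstrapper waits for a 0 exit code before continuing. -->
  <Task commandLine="startup\install-prereqs.cmd" executionContext="elevated" taskType="simple" />
</Startup>
```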
-**H**. These tasks are part of the SDK and are defined as plugins in the roleΓÇÖs service definition (.csdef). When expanded into startup tasks, the **DiagnosticsAgent** and **RemoteAccessAgent** are unique in that they each define two startup tasks, one regular and one that has a **/blockStartup** parameter. The normal startup task is defined as a Background startup task so that it can run in the background while the role itself is running. The **/blockStartup** startup task is defined as a Simple startup task so that WaHostBootstrapper waits for it to exit before continuing. The **/blockStartup** task waits for the regular task to finish initializing, and then it exits and allow the host bootstrapper to continue. This is done so that diagnostics and RDP access can be configured before the role processes start (this is done through the /blockStartup task). This also allows diagnostics and RDP access to continue running after the host bootstrapper has finished the startup tasks (this is done through the Normal task).
+**H**. These tasks are part of the SDK and are defined as plugins in the role's service definition (.csdef). When expanded into startup tasks, the **DiagnosticsAgent** and **RemoteAccessAgent** are unique in that they each define two startup tasks, one regular and one that has a **/blockStartup** parameter. The normal startup task is defined as a Background startup task so that it can run in the background while the role itself is running. The **/blockStartup** startup task is defined as a Simple startup task so that WaHostBootstrapper waits for it to exit before continuing. The **/blockStartup** task waits for the regular task to finish initializing, and then it exits and allows the host bootstrapper to continue. This process is done so that diagnostics and RDP access can be configured before the role processes start, which is done through the /blockStartup task. This process also allows diagnostics and RDP access to continue running after the host bootstrapper finishes the startup tasks, which is done through the Normal task.
-**I**. WaWorkerHost is the standard host process for normal worker roles. This host process hosts all the roleΓÇÖs DLLs and entry point code, such as OnStart and Run.
+**I**. WaWorkerHost is the standard host process for normal worker roles. This host process hosts all the role's DLLs and entry point code, such as OnStart and Run.
**J**. WaIISHost is the host process for role entry point code for web roles that use Full IIS. This process loads the first DLL that is found that uses the **RoleEntryPoint** class and executes the code from this class (OnStart, Run, OnStop). Any **RoleEnvironment** events (such as StatusCheck and Changed) that are created in the RoleEntryPoint class are raised in this process.
-**K**. W3WP is the standard IIS worker process that is used if the role is configured to use Full IIS. This runs the AppPool that is configured from IISConfigurator. Any RoleEnvironment events (such as StatusCheck and Changed) that are created here are raised in this process. Note that RoleEnvironment events will fire in both locations (WaIISHost and w3wp.exe) if you subscribe to events in both processes.
+**K**. W3WP is the standard IIS worker process used if the role is configured to use Full IIS. This process runs the AppPool configured from IISConfigurator. Any RoleEnvironment events (such as StatusCheck and Changed) that are created here are raised in this process. RoleEnvironment events fire in both locations (WaIISHost and w3wp.exe) if you subscribe to events in both processes.
## Workflow processes
-1. A user makes a request, such as uploading ".cspkg" and ".cscfg" files, telling a resource to stop or making a configuration change, and so on. This can be done through the Azure portal or a tool that uses the Service Management API, such as the Visual Studio Publish feature. This request goes to RDFE to do all the subscription-related work and then communicate the request to FFE. The rest of these workflow steps are to deploy a new package and start it.
+1. A user makes a request, such as uploading ".cspkg" and ".cscfg" files, telling a resource to stop or making a configuration change, and so on. Requests can be made through the Azure portal or tools that use the classic deployment model API, such as the Visual Studio Publish feature. This request goes to RDFE to do all the subscription-related work and then communicate the request to FFE. The rest of these workflow steps are to deploy a new package and start it.
2. FFE finds the correct machine pool (based on customer input, such as affinity group or geographical location, plus input from the fabric, such as machine availability) and communicates with the master fabric controller in that machine pool.
3. The fabric controller finds a host that has available CPU cores (or spins up a new host). The service package and configuration are copied to the host, and the fabric controller communicates with the host agent on the host OS to deploy the package (configure DIPs, ports, guest OS, and so on).
4. The host agent starts the Guest OS and communicates with the guest agent (WindowsAzureGuestAgent). The host sends heartbeats to the guest to make sure that the role is working towards its goal state.
5. WindowsAzureGuestAgent sets up the guest OS (firewall, ACLs, LocalStorage, and so on), copies a new XML configuration file to c:\Config, and then starts the WaHostBootstrapper process.
6. For Full IIS web roles, WaHostBootstrapper starts IISConfigurator and tells it to delete any existing AppPools for the web role from IIS.
-7. WaHostBootstrapper reads the **Startup** tasks from E:\RoleModel.xml and begins executing startup tasks. WaHostBootstrapper waits until all Simple startup tasks have finished and returned a ΓÇ£successΓÇ¥ message.
+7. WaHostBootstrapper reads the **Startup** tasks from E:\RoleModel.xml and begins executing startup tasks. WaHostBootstrapper waits until all Simple startup tasks finish and return a success message.
8. For Full IIS web roles, WaHostBootstrapper tells IISConfigurator to configure the IIS AppPool and points the site to `E:\Sitesroot\<index>`, where `<index>` is a zero-based index into the number of `<Sites>` elements defined for the service.
-9. WaHostBootstrapper will start the host process depending on the role type:
- 1. **Worker Role**: WaWorkerHost.exe is started. WaHostBootstrapper executes the OnStart() method. After it returns, WaHostBootstrapper starts to execute the Run() method, and then simultaneously marks the role as Ready and puts it into the load balancer rotation (if InputEndpoints are defined). WaHostBootsrapper then goes into a loop of checking the role status.
+9. WaHostBootstrapper starts the host process depending on the role type:
+ 1. **Worker Role**: WaWorkerHost.exe is started. WaHostBootstrapper executes the OnStart() method. After it returns, WaHostBootstrapper starts to execute the Run() method, and then simultaneously marks the role as Ready and puts it into the load balancer rotation (if InputEndpoints are defined). WaHostBootstrapper then goes into a loop of checking the role status.
 2. **Full IIS Web Role**: WaIISHost is started. WaHostBootstrapper executes the OnStart() method. After it returns, it starts to execute the Run() method, and then simultaneously marks the role as Ready and puts it into the load balancer rotation. WaHostBootstrapper then goes into a loop of checking the role status.
-10. Incoming web requests to a Full IIS web role triggers IIS to start the W3WP process and serve the request, the same as it would in an on-premises IIS environment.
+10. Incoming web requests to a Full IIS web role trigger IIS to start the W3WP process and serve the request, the same as it would in an on-premises IIS environment.
## Log File locations
**WindowsAzureGuestAgent**
- C:\Logs\AppAgentRuntime.Log.
-This log contains changes to the service including starts, stops, and new configurations. If the service does not change, you can expect to see large gaps of time in this log file.
+This log contains changes to the service including starts, stops, and new configurations. If the service doesn't change, you can expect to see large gaps of time in this log file.
- C:\Logs\WaAppAgent.Log.
-This log contains status updates and heartbeat notifications and is updated every 2-3 seconds. This log contains a historic view of the status of the instance and will tell you when the instance was not in the Ready state.
+This log contains status updates and heartbeat notifications and is updated every 2-3 seconds. This log contains a historic view of the status of the instance and tells you when the instance wasn't in the Ready state.
**WaHostBootstrapper**
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-extension-to-storage.md
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-Diagnostic data is not permanently stored unless you transfer it to the Microsoft Azure Storage Emulator or to Azure Storage. Once in storage, it can be viewed with one of several available tools.
+Diagnostic data isn't permanently stored unless you transfer it to the Microsoft Azure Storage Emulator or to Azure Storage. Once in storage, it can be viewed with one of several available tools.
## Specify a storage account
You specify the storage account that you want to use in the ServiceConfiguration.cscfg file. The account information is defined as a connection string in a configuration setting. The following example shows the default connection string created for a new Cloud Service project in Visual Studio:
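A representative sketch of that setting, assuming the default development storage value and a hypothetical role name:

```xml
<!-- Illustrative .cscfg fragment: diagnostics connection string pointing at the storage emulator. -->
<Role name="WebRole1">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
             value="UseDevelopmentStorage=true" />
  </ConfigurationSettings>
</Role>
```

For a deployed service, the value is typically a full storage account connection string of the form `DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>`.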
Depending on the type of diagnostic data that is being collected, Azure Diagnost
## Transfer diagnostic data For SDK 2.5 and later, the request to transfer diagnostic data can occur through the configuration file. You can transfer diagnostic data at scheduled intervals as specified in the configuration.
-For SDK 2.4 and previous you can request to transfer the diagnostic data through the configuration file as well as programmatically. The programmatic approach also allows you to do on-demand transfers.
+For SDK 2.4 and earlier, you can request to transfer the diagnostic data programmatically and through the configuration file. The programmatic approach also allows you to do on-demand transfers.
> [!IMPORTANT]
> When you transfer diagnostic data to an Azure storage account, you incur costs for the storage resources that your diagnostic data uses.
Log data is stored in either Blob or Table storage with the following names:
* **WadLogsTable** - Logs written in code using the trace listener.
* **WADDiagnosticInfrastructureLogsTable** - Diagnostic monitor and configuration changes.
-* **WADDirectoriesTable** ΓÇô Directories that the diagnostic monitor is monitoring. This includes IIS logs, IIS failed request logs, and custom directories. The location of the blob log file is specified in the Container field and the name of the blob is in the RelativePath field. The AbsolutePath field indicates the location and name of the file as it existed on the Azure virtual machine.
+* **WADDirectoriesTable** – Directories that the diagnostic monitor is monitoring. These directories include IIS logs, IIS failed request logs, and custom directories. The location of the blob log file is specified in the Container field and the name of the blob is in the RelativePath field. The AbsolutePath field indicates the location and name of the file as it existed on the Azure virtual machine.
* **WADPerformanceCountersTable** – Performance counters.
* **WADWindowsEventLogsTable** – Windows Event logs.
**Blobs**
-* **wad-control-container** ΓÇô (Only for SDK 2.4 and previous) Contains the XML configuration files that controls the Azure diagnostics .
+* **wad-control-container** – (Only for SDK 2.4 and earlier) Contains the XML configuration files that control the Azure diagnostics.
* **wad-iis-failedreqlogfiles** – Contains information from IIS Failed Request logs.
* **wad-iis-logfiles** – Contains information about IIS logs.
-* **"custom"** ΓÇô A custom container based on configuring directories that are monitored by the diagnostic monitor. The name of this blob container will be specified in WADDirectoriesTable.
+* **"custom"** ΓÇô A custom container based on configuring directories that are monitored by the diagnostic monitor. WADDirectoriesTable specifies the name of this blob container.
## Tools to view diagnostic data
-Several tools are available to view the data after it is transferred to storage. For example:
+Several tools are available to view the data after it transfers to storage. For example:
-* Server Explorer in Visual Studio - If you have installed the Azure Tools for Microsoft Visual Studio, you can use the Azure Storage node in Server Explorer to view read-only blob and table data from your Azure storage accounts. You can display data from your local storage emulator account and also from storage accounts you have created for Azure. For more information, see [Browsing and Managing Storage Resources with Server Explorer](/visualstudio/azure/vs-azure-tools-storage-resources-server-explorer-browse-manage).
+* Server Explorer in Visual Studio - If you installed the Azure Tools for Microsoft Visual Studio, you can use the Azure Storage node in Server Explorer to view read-only blob and table data from your Azure storage accounts. You can display data from your local storage emulator account and also from storage accounts you created for Azure. For more information, see [Browsing and Managing Storage Resources with Server Explorer](/visualstudio/azure/vs-azure-tools-storage-resources-server-explorer-browse-manage).
* [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) is a standalone app that enables you to easily work with Azure Storage data on Windows, OSX, and Linux.
-* [Azure Management Studio](https://cerebrata.com/blog/introducing-azure-management-studio-and-azure-explorer) includes Azure Diagnostics Manager which allows you to view, download and manage the diagnostics data collected by the applications running on Azure.
+* [Azure Management Studio](https://cerebrata.com/blog/introducing-azure-management-studio-and-azure-explorer) includes Azure Diagnostics Manager, which allows you to view, download, and manage the diagnostics data collected by the applications running on Azure.
## Next Steps
[Trace the flow in a Cloud Services application with Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
cloud-services Diagnostics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-performance-counters.md
Title: Collect on Performance Counters in Azure Cloud Services (classic) | Micro
description: Learn how to discover, use, and create performance counters in Cloud Services with Azure Diagnostics and Application Insights. Previously updated : 02/21/2023 Last updated : 07/24/2024
A performance counter can be added to your cloud service for either Azure Diagno
Azure Application Insights for Cloud Services allows you to specify what performance counters you want to collect. After you [add Application Insights to your project](../azure-monitor/app/azure-web-apps-net-core.md), a config file named **ApplicationInsights.config** is added to your Visual Studio project. This config file defines what type of information Application Insights collects and sends to Azure.
-Open the **ApplicationInsights.config** file and find the **ApplicationInsights** > **TelemetryModules** element. Each `<Add>` child-element defines a type of telemetry to collect, along with its configuration. The performance counter telemetry module type is `Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector`. If this element is already defined, do not add it a second time. Each performance counter to collect is defined under a node named `<Counters>`. Here is an example that collects drive performance counters:
+Open the **ApplicationInsights.config** file and find the **ApplicationInsights** > **TelemetryModules** element. Each `<Add>` child-element defines a type of telemetry to collect, along with its configuration. The performance counter telemetry module type is `Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector`. If this element is already defined, don't add it a second time. Each performance counter to collect is defined under a node named `<Counters>`. Here's an example that collects drive performance counters:
```xml <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
Open the **ApplicationInsights.config** file and find the **ApplicationInsights*
<!-- ... cut to save space ... --> ```
-Each performance counter is represented as an `<Add>` element under `<Counters>`. The `PerformanceCounter` attribute defines which performance counter to collect. The `ReportAs` attribute is the title to display in the Azure portal for the performance counter. Any performance counter you collect is put into a category named **Custom** in the portal. Unlike Azure Diagnostics, you cannot set the interval these performance counters are collected and sent to Azure. With Application Insights, performance counters are collected and sent every minute.
+Each performance counter is represented as an `<Add>` element under `<Counters>`. The `PerformanceCounter` attribute defines which performance counter to collect. The `ReportAs` attribute is the title to display in the Azure portal for the performance counter. Any performance counter you collect is put into a category named **Custom** in the portal. Unlike Azure Diagnostics, you can't set the interval these performance counters are collected and sent to Azure. With Application Insights, performance counters are collected and sent every minute.
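For example, a single counter entry under `<Counters>` might look like the following sketch; the counter path and display title are illustrative:

```xml
<!-- Illustrative entry: PerformanceCounter selects the counter; ReportAs is the title shown in the portal. -->
<Counters>
  <Add PerformanceCounter="\LogicalDisk(C:)\% Disk Read Time" ReportAs="Disk read time (C:)" />
</Counters>
```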
Application Insights automatically collects the following performance counters:
For more information, see [System performance counters in Application Insights](
The Azure Diagnostics extension for Cloud Services allows you to specify what performance counters you want to collect. To set up Azure Diagnostics, see [Cloud Service Monitoring Overview](cloud-services-how-to-monitor.md#setup-diagnostics-extension).
-The performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file (it is defined per role) in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. This element has two attributes: `counterSpecifier` and `sampleRate`. The `counterSpecifier` attribute defines which system performance counter set (outlined in the previous section) to collect. The `sampleRate` value indicates how often that value is polled. As a whole, all performance counters are transferred to Azure according to the parent `PerformanceCounters` element's `scheduledTransferPeriod` attribute value.
+The performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. This element has two attributes: `counterSpecifier` and `sampleRate`. The `counterSpecifier` attribute defines which system performance counter set (outlined in the previous section) to collect. The `sampleRate` value indicates how often that value is polled. As a whole, all performance counters are transferred to Azure according to the parent `PerformanceCounters` element's `scheduledTransferPeriod` attribute value.
For more information about the `PerformanceCounters` schema element, see the [Azure Diagnostics Schema](../azure-monitor/agents/diagnostics-extension-schema-windows.md#performancecounters-element).
-The period defined by the `sampleRate` attribute uses the XML duration data type to indicate how often the performance counter is polled. In the example below, the rate is set to `PT3M`, which means `[P]eriod[T]ime[3][M]inutes`: every three minutes.
+The period defined by the `sampleRate` attribute uses the XML duration data type to indicate how often the performance counter is polled. In the following example, the rate is set to `PT3M`, which means `[P]eriod[T]ime[3][M]inutes`: every three minutes.
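A hedged sketch of such a configuration, with an illustrative processor counter, might look like this:

```xml
<!-- Illustrative diagnostics.wadcfgx fragment: sample the counter every three minutes, transfer all counters every minute. -->
<PerformanceCounters scheduledTransferPeriod="PT1M">
  <PerformanceCounterConfiguration
      counterSpecifier="\Processor(_Total)\% Processor Time"
      sampleRate="PT3M" />
</PerformanceCounters>
```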
For more information about how the `sampleRate` and `scheduledTransferPeriod` are defined, see the **Duration Data Type** section in the [W3 XML Date and Time Date Types](https://www.w3schools.com/XML/schema_dtypes_date.asp) tutorial.
For more information about how the `sampleRate` and `scheduledTransferPeriod` ar
## Create a new perf counter
-A new performance counter can be created and used by your code. Your code that creates a new performance counter must be running elevated, otherwise it will fail. Your cloud service `OnStart` startup code can create the performance counter, requiring you to run the role in an elevated context. Or you can create a startup task that runs elevated and creates the performance counter. For more information about startup tasks, see [How to configure and run startup tasks for a cloud service](cloud-services-startup-tasks.md).
+A new performance counter can be created and used by your code. Your code that creates a new performance counter must run elevated; otherwise, it fails. Your cloud service `OnStart` startup code can create the performance counter, requiring you to run the role in an elevated context. Or you can create a startup task that runs elevated and creates the performance counter. For more information about startup tasks, see [How to configure and run startup tasks for a cloud service](cloud-services-startup-tasks.md).
To configure your role to run elevated, add a `<Runtime>` element to the [.csdef](cloud-services-model-and-package.md#servicedefinitioncsdef) file.
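A minimal sketch of that element follows, assuming a hypothetical web role; `executionContext="elevated"` is what lets the role's startup code create the counter:

```xml
<!-- Illustrative .csdef fragment: run the role host process elevated so OnStart can create the counter. -->
<WebRole name="WebRole1">
  <Runtime executionContext="elevated" />
  <!-- Sites, endpoints, and other role settings omitted. -->
</WebRole>
```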
As previously stated, the performance counters for Application Insights are defi
### Azure Diagnostics
-As previously stated, the performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file (it is defined per role) in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. Set the `counterSpecifier` attribute to the category and name of the performance counter you created in your code.
+As previously stated, the performance counters you want to collect are defined in the **diagnostics.wadcfgx** file. Open this file in Visual Studio and find the **DiagnosticsConfiguration** > **PublicConfig** > **WadCfg** > **DiagnosticMonitorConfiguration** > **PerformanceCounters** element. Add a new **PerformanceCounterConfiguration** element as a child. Set the `counterSpecifier` attribute to the category and name of the performance counter you created in your code.
```xml <?xml version="1.0" encoding="utf-8"?>
As previously stated, the performance counters you want to collect are defined i
</DiagnosticsConfiguration> ```
-## More information
+## Next steps
- [Application Insights for Azure Cloud Services](../azure-monitor/app/azure-web-apps-net-core.md)
- [System performance counters in Application Insights](../azure-monitor/app/performance-counters.md)
cloud-services Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md
keywords: spectre,meltdown,specter
vm-windows Previously updated : 02/21/2023 Last updated : 07/24/2024
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md
description: This article talks about Resource Health Check (RHC) Support for Mi
Previously updated : 02/21/2023 Last updated : 07/24/2024
This article talks about Resource Health Check (RHC) Support for [Microsoft Azu
[Azure Resource Health](../service-health/resource-health-overview.md) for cloud services helps you diagnose and get support for service problems that affect your Cloud Service deployment, Roles & Role Instances. It reports on the current and past health of your cloud services at Deployment, Role & Role Instance level.
-Azure status reports on problems that affect a broad set of Azure customers. Resource Health gives you a personalized dashboard of the health of your resources. Resource Health shows all the times that your resources have been unavailable because of Azure service problems. This data makes it easy for you to see if an SLA was violated.
+Azure status reports on problems that affect a broad set of Azure customers. Resource Health gives you a personalized dashboard of the health of your resources. Resource Health shows all the times that your resources were unavailable because of Azure service problems. This data makes it easy for you to see if a Service Level Agreement (SLA) was violated.
:::image type="content" source="media/cloud-services-allocation-failure/rhc-blade-cloud-services.png" alt-text="Image shows the resource health check blade in the Azure portal."::: ## How health is checked and reported?
-Resource health is reported at a deployment or role level. The health check happens at role instance level, we aggregate the status and report it on Role level. E.g. If all role instances are available, then the role status is available. Similarly, we aggregate the health status of all roles and report it on deployment level. E.g. If all roles are available then deployment status becomes available.
+Resource health is reported at a deployment or role level. The health check happens at the role instance level. We aggregate the status and report it at the role level. For example, if all role instances are available, then the role status is available. Similarly, we aggregate the health status of all roles and report it at the deployment level. For example, if all roles are available, then the deployment status becomes available.
-## Why I cannot see health status for my staging slot deployment?
-Resource health checks only work for production slot deployment. Staging slot deployment is not yet supported.
+## Why can't I see health status for my staging slot deployment?
+Resource health checks only work for production slot deployment. Staging slot deployment isn't yet supported.
## Does Resource Health Check also check the health of the application?
-No, health check only happens for role instances and it does not monitor Application health. E.g. Even if 1 out of 3 role instances are unhealthy, the application can still be available. RHC does not use [load balancer probes](../load-balancer/load-balancer-custom-probe-overview.md) or Guest agent probe. Therefore,
+No, the health check only happens for role instances and it doesn't monitor application health. For example, even if one out of three role instances is unhealthy, the application can still be available. RHC doesn't use [load balancer probes](../load-balancer/load-balancer-custom-probe-overview.md) or the Guest agent probe. Therefore,
customers should continue to use load balancer probes to monitor the health of their application.
## What are the annotations for Cloud Services?
Annotations are the health status of the deployment or roles. There are different annotations based on health status, reason for status change, etc.
## What does it mean when a Role Instance is "unavailable"?
-This means the role instance is not emitting a healthy signal to the platform. Please check the role instance status for detailed explanation of why healthy signal is not being emitted.
+Unavailable means the role instance isn't emitting a healthy signal to the platform. Check the role instance status for a detailed explanation of why a healthy signal isn't being emitted.
## What does it mean when a deployment is "unknown"?
-Unknown means the aggregated health of the Cloud Service deployment cannot be determined. Usually this indicates either there is no production deployment created for the Cloud Service, the deployment was newly created (and that Azure is starting to collect health events), or platform is having issues collecting health events for this deployment.
+Unknown means the aggregated health of the Cloud Service deployment can't be determined. Usually, unknown indicates one of the following scenarios:
+* There's no production deployment created for the Cloud Service
+* The deployment was newly created (and Azure is starting to collect health events)
+* The platform is having issues collecting health events for this deployment
-## Why does Role Instance Annotations mentions VMs instead of Role Instances?
-Since Role Instances are basically VMs and the health check for VMs is reused for Role Instances, the VM term is used to represent Role Instances.
+## Why does Role Instance Annotations mention VMs instead of Role Instances?
+Since Role Instances are, in essence, virtual machines (VMs), and the health check for VMs is reused for Role Instances, the VM term is used to represent Role Instances.
## Cloud Services (Deployment Level) Annotations & their meanings
| Annotation | Description |
| | |
| Available| There aren't any known Azure platform problems affecting this Cloud Service deployment |
-| Unknown | We are currently unable to determine the health of this Cloud Service deployment |
-| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that have impacted them|
-| Degraded | Your Cloud Service deployment is degraded. We're working to automatically recover your Cloud Service deployment and to determine the source of the problem. No additional action is required from you at this time |
+| Unknown | We're currently unable to determine the health of this Cloud Service deployment |
+| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that affected them|
+| Degraded | Your Cloud Service deployment is degraded. We're working to automatically recover your Cloud Service deployment and to determine the source of the problem. No further action is required from you at this time |
| Unhealthy | Your Cloud Service deployment is unhealthy because {0} out of {1} role instances are unavailable |
| Degraded | Your Cloud Service deployment is degraded because {0} out of {1} role instances are unavailable |
-| Available and maybe impacted | Your Cloud Service deployment is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity will be restored once the outage is resolved |
-| Unavailable and maybe impacted | The health of this Cloud Service deployment may be impacted by an Azure service outage. Your Cloud Service deployment will automatically recover when the outage is resolved |
-| Unknown and maybe impacted | We are currently unable to determine the health of this Cloud Service deployment. This could be caused by an ongoing Azure service outage that may be impacting this virtual machine, which will automatically recover when the outage is resolved |
+| Available and maybe impacted | Your Cloud Service deployment is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity restores once the outage is resolved |
+| Unavailable and maybe impacted | An Azure service outage possibly affected the health of this Cloud Service deployment. Your Cloud Service deployment recovers automatically when the outage is resolved |
+| Unknown and maybe impacted | We're currently unable to determine the health of this Cloud Service deployment. This status could be a result of an ongoing Azure service outage that may be impacting this virtual machine, which recovers automatically when the outage is resolved |
## Cloud Services (Role Instance Level) Annotations & their meanings
| Annotation | Description |
| | |
| Available | There aren't any known Azure platform problems affecting this virtual machine |
-| Unknown | We are currently unable to determine the health of this virtual machine |
+| Unknown | We're currently unable to determine the health of this virtual machine |
| Stopped and deallocating | This virtual machine is stopping and deallocating as requested by an authorized user or process |
-| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that have impacted them |
-| Unavailable | Your virtual machine is unavailable. We're working to automatically recover your virtual machine and to determine the source of the problem. No additional action is required from you at this time |
-| Degraded | Your virtual machine is degraded. We're working to automatically recover your virtual machine and to determine the source of the problem. No additional action is required from you at this time |
-| Host server hardware failure | This virtual machine is impacted by a fatal {HardwareCategory} failure on the host server. Azure will redeploy your virtual machine to a healthy host server |
-| Migration scheduled due to degraded hardware | Azure has identified that the host server has a degraded {0} that is predicted to fail soon. If feasible, we will Live Migrate your virtual machine as soon as possible, or otherwise redeploy it after {1} UTC time. To minimize risk to your service, and in case the hardware fails before the system initiated migration occurs, we recommend that you self-redeploy your virtual machine as soon as possible |
-| Available and maybe impacted | Your virtual machine is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity will be restored once the outage is resolved |
-| Unavailable and maybe impacted | The health of this virtual machine may be impacted by an Azure service outage. Your virtual machine will automatically recover when the outage is resolved |
-| Unknown and maybe impacted | We are currently unable to determine the health of this virtual machine. This could be caused by an ongoing Azure service outage that may be impacting this virtual machine, which will automatically recover when the outage is resolved |
-| Hardware resources allocated | Hardware resources have been assigned to the virtual machine and it will be online shortly |
+| Setting up Resource Health | Setting up Resource health for this resource. Resource health watches your Azure resources to provide details about ongoing and past events that affected them |
+| Unavailable | Your virtual machine is unavailable. We're working to automatically recover your virtual machine and to determine the source of the problem. No further action is required from you at this time |
+| Degraded | Your virtual machine is degraded. We're working to automatically recover your virtual machine and to determine the source of the problem. No further action is required from you at this time |
+| Host server hardware failure | A fatal {HardwareCategory} failure on the host server affected this virtual machine. Azure redeploys your virtual machine to a healthy host server |
+| Migration scheduled due to degraded hardware | Azure identified that the host server has a degraded {0} that is predicted to fail soon. If feasible, we Live Migrate your virtual machine as soon as possible, or otherwise redeploy it after {1} UTC time. To minimize risk to your service, and in case the hardware fails before the system initiated migration occurs, we recommend you self-redeploy your virtual machine as soon as possible |
+| Available and maybe impacted | Your virtual machine is running, however an ongoing Azure service outage may prevent you from connecting to it. Connectivity restores once the outage is resolved |
+| Unavailable and maybe impacted | An Azure service outage possibly affected the health of this virtual machine. Your virtual machine recovers automatically when the outage is resolved |
+| Unknown and maybe impacted | We're currently unable to determine the health of this virtual machine. An ongoing Azure service outage possibly affects this virtual machine. This virtual machine recovers automatically when the outage is resolved |
+| Hardware resources allocated | Hardware resources are assigned to the virtual machine. Expect the virtual machine to be online shortly |
| Stopping and deallocating | This virtual machine is stopping and deallocating as requested by an authorized user or process |
| Configuration being updated | The configuration of this virtual machine is being updated as requested by an authorized user or process |
| Rebooted by user | This virtual machine is rebooting as requested by an authorized user or process. It will be back online after the reboot completes |
| Redeploying to different host | This virtual machine is being redeployed to a different host as requested by an authorized user or process. It will be back online after the redeployment completes |
| Stopped by user | This virtual machine is stopping as requested by an authorized user or a process |
| Stopped by user or process | This virtual machine is stopping as requested by an authorized user or by a process running inside the virtual machine |
-| Started by user | This virtual machine is starting as requested by an authorized user or process. It will be online shortly |
+| Started by user | This virtual machine is starting as requested by an authorized user or process. Expect the virtual machine to be online shortly |
| Maintenance redeploy to different host | This virtual machine is being redeployed to a different host server as part of a planned maintenance activity. It will be back online after the redeployment completes |
-| Reboot initiated from inside the machine | A reboot was triggered from inside the virtual machine. This could be due to a virtual machine operating system failure or as requested by an authorized user or process. The virtual machine will be back online after the reboot completes |
+| Reboot initiated from inside the machine | A reboot was triggered from inside the virtual machine. This event could be due to a virtual machine operating system failure or as requested by an authorized user or process. The virtual machine will be back online after the reboot completes |
| Resized by user | This virtual machine is being resized as requested by an authorized user or process. It will be back online after the resizing completes |
-| Machine crashed | A reboot was triggered from inside the virtual machine. This could be due to a virtual machine operating system failure or as requested by an authorized user or process. The virtual machine will be back online after the reboot completes |
-| Maintenance rebooting for host update | Maintenance updates are being applied to the host server running this virtual machine. No additional action is required from you at this time. It will be back online after the maintenance completes |
-| Maintenance redeploy to new hardware | This virtual machine is unavailable because it is being redeployed to newer hardware as part of a planned maintenance event. No additional action is required from you at this time. It will be back online after the planned maintenance completes |
-| Low priority machine preempted | This virtual machine has been preempted. This low-priority virtual machine is being stopped and deallocated |
-| Host server reboot | We're sorry, your virtual machine isn't available because of an unexpected host server reboot. The host server is currently rebooting. The virtual machine will be back online after the reboot completes. No additional action is required from you at this time |
-| Redeploying due to host failure | We're sorry, your virtual machine isn't available and it is being redeployed due to an unexpected failure on the host server. Azure has begun the auto-recovery process and is currently starting the virtual machine on a different host. No additional action is required from you at this time |
-| Unexpected host failure | We're sorry, your virtual machine isn't available because an unexpected failure on the host server. Azure has begun the auto-recovery process and is currently rebooting the host server. No additional action is required from you at this time. The virtual machine will be back online after the reboot completes |
-| Redeploying due to unplanned host maintenance | We're sorry, your virtual machine isn't available and it is being redeployed due to an unexpected failure on the host server. Azure has begun the auto-recovery process and is currently starting the virtual machine on a different host server. No additional action is required from you at this time |
-| Provisioning failure | We're sorry, your virtual machine isn't available due to unexpected provisioning problems. The provisioning of your virtual machine has failed due to an unexpected error |
-| Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No additional action is required from you at this time |
-| Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No additional action is required from you at this time |
-| Remote disk disconnected | We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. We're working to reestablish disk connectivity. No additional action is required from you at this time |
-| Azure service issue | Your virtual machine is impacted by Azure service issue |
-| Network issue | This virtual machine is impacted by a top-of-rack network device |
-| Unavailable | Your virtual machine is unavailable. We are currently unable to determine the reason for this downtime |
+| Machine crashed | A reboot was triggered from inside the virtual machine. This event could be due to a virtual machine operating system failure or as requested by an authorized user or process. The virtual machine will be back online after the reboot completes |
+| Maintenance rebooting for host update | Maintenance updates are being applied to the host server running this virtual machine. No further action is required from you at this time. It will be back online after the maintenance completes |
+| Maintenance redeploy to new hardware | This virtual machine is unavailable because it's being redeployed to newer hardware as part of a planned maintenance event. No further action is required from you at this time. It will be back online after the planned maintenance completes |
+| Low priority machine preempted | This virtual machine was preempted. This low-priority virtual machine is being stopped and deallocated |
+| Host server reboot | We're sorry, your virtual machine isn't available because of an unexpected host server reboot. The host server is currently rebooting. The virtual machine will be back online after the reboot completes. No further action is required from you at this time |
+| Redeploying due to host failure | We're sorry, your virtual machine isn't available and it's being redeployed due to an unexpected failure on the host server. Azure began the autorecovery process and is currently starting the virtual machine on a different host. No further action is required from you at this time |
+| Unexpected host failure | We're sorry, your virtual machine isn't available because of an unexpected failure on the host server. Azure began the autorecovery process and is currently rebooting the host server. No further action is required from you at this time. The virtual machine will be back online after the reboot completes |
+| Redeploying due to unplanned host maintenance | We're sorry, your virtual machine isn't available and it's being redeployed due to an unexpected failure on the host server. Azure began the autorecovery process and is currently starting the virtual machine on a different host server. No further action is required from you at this time |
+| Provisioning failure | We're sorry, your virtual machine isn't available due to unexpected provisioning problems. The provisioning of your virtual machine failed due to an unexpected error |
+| Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No further action is required from you at this time |
+| Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No further action is required from you at this time |
+| Remote disk disconnected | We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. We're working to reestablish disk connectivity. No further action is required from you at this time |
+| Azure service issue | An Azure service issue affects your virtual machine |
+| Network issue | An issue with a top-of-rack network device affected this virtual machine |
+| Unavailable | Your virtual machine is unavailable. We're currently unable to determine the reason for this downtime |
| Host server reboot | We're sorry, your virtual machine isn't available because of an unexpected host server reboot. An unexpected problem with the host server is preventing us from automatically recovering your virtual machine |
| Redeploying due to host failure | We're sorry, your virtual machine isn't available because of an unexpected failure on the host server. An unexpected problem with the host is preventing us from automatically recovering your virtual machine |
| Unexpected host failure | We're sorry, your virtual machine isn't available because of an unexpected failure on the host server. An unexpected problem with the host is preventing us from automatically recovering your virtual machine |
| Redeploying due to unplanned host maintenance | We're sorry, your virtual machine isn't available because of an unexpected failure on the host server. An unexpected problem with the host is preventing us from automatically recovering your virtual machine |
-| Provisioning failure | We're sorry, your virtual machine isn't available due to unexpected provisioning problems. The provisioning of your virtual machine has failed due to an unexpected error |
+| Provisioning failure | We're sorry, your virtual machine isn't available due to unexpected provisioning problems. The provisioning of your virtual machine failed due to an unexpected error |
| Remote disk disconnected | We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine |
-| Reboot due to Guest OS update | A reboot was initiated by the Azure platform to apply a new Guest OS update. The virtual machine will be back online after the reboot completes |
+| Reboot due to Guest OS update | The Azure platform initiated a reboot to apply a new Guest OS update. The virtual machine will be back online after the reboot completes |
cloud-services Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-file.md
description: A service configuration (.cscfg) file specifies how many role insta
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is .cscfg.
+The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file and the virtual networking configuration file. The default extension for the service configuration file is .cscfg.
-The service model is described by the [Cloud Service (classic) Definition Schema](schema-csdef-file.md).
+The [Cloud Service (classic) Definition Schema](schema-csdef-file.md) describes the service model.
By default, the Azure Diagnostics configuration schema file is installed to the `C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\<version>\schemas` directory. Replace `<version>` with the installed version of the [Azure SDK](https://azure.microsoft.com/downloads/).
The following table describes the attributes of the `ServiceConfiguration` eleme
| Attribute | Description |
| | -- |
|serviceName|Required. The name of the cloud service. The name given here must match the name specified in the service definition file.|
-|osFamily|Optional. Specifies the Guest OS that will run on role instances in the cloud service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> If you do not include an `osFamily` value and you have not set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.|
-|osVersion|Optional. Specifies the version of the Guest OS that will run on role instances in the cloud service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and will be automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.|
+|osFamily|Optional. Specifies the Guest OS that runs on role instances in the cloud service. For information about supported Guest OS releases, see [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> If you don't include an `osFamily` value and you have not set the `osVersion` attribute to a specific Guest OS version, a default value of 1 is used.|
+|osVersion|Optional. Specifies the version of the Guest OS that runs on role instances in the cloud service. For more information about Guest OS versions, see [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> You can specify that the Guest OS should be automatically upgraded to the latest version. To do this, set the value of the `osVersion` attribute to `*`. When set to `*`, the role instances are deployed using the latest version of the Guest OS for the specified OS family and are automatically upgraded when new versions of the Guest OS are released.<br /><br /> To specify a specific version manually, use the `Configuration String` from the table in the **Future, Current, and Transitional Guest OS Versions** section of [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md).<br /><br /> The default value for the `osVersion` attribute is `*`.|
|schemaVersion|Optional. Specifies the version of the Service Configuration schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side. For more information about schema and version compatibility, see [Azure Guest OS Releases and SDK Compatibility Matrix](cloud-services-guestos-update-matrix.md)| The service configuration file must contain one `ServiceConfiguration` element. The `ServiceConfiguration` element may include any number of `Role` elements and zero or one `NetworkConfiguration` element.
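For illustration, a minimal .cscfg skeleton that combines these attributes might look like the following sketch. The service name, role name, and `osFamily` value are placeholders rather than values taken from this article, and the namespace shown is the one commonly used for service configuration files.

```
<ServiceConfiguration serviceName="MyService" osFamily="6" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- One Role element per role defined in the .csdef file. -->
  <Role name="MyWebRole">
    <Instances count="2" />
  </Role>
  <!-- Zero or one NetworkConfiguration element can follow the roles. -->
</ServiceConfiguration>
```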
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-networkconfiguration.md
Title: Azure Cloud Services (classic) NetworkConfiguration Schema | Microsoft Docs
-description: Learn about the child elements of the NetworkConfiguration element of the service configuration file, which specifies Virtual Network and DNS values.
+description: Learn about the child elements of the NetworkConfiguration element of the service configuration file, which specifies Virtual Network and Domain Name System (DNS) values.
Previously updated : 02/21/2023 Last updated : 07/24/2024
The following table describes the child elements of the `NetworkConfiguration` e
| Rule | Optional. Specifies the action that should be taken for a specified subnet range of IP addresses. The order of the rule is defined by a string value for the `order` attribute. The lower the rule number, the higher the priority. For example, rules could be specified with order numbers of 100, 200, and 300. The rule with the order number of 100 takes precedence over the rule that has an order of 200.<br /><br /> The action for the rule is defined by a string for the `action` attribute. Possible values are:<br /><br /> - `permit` – Specifies that only packets from the specified subnet range can communicate with the endpoint.<br />- `deny` – Specifies that access is denied to the endpoints in the specified subnet range.<br /><br /> The subnet range of IP addresses that are affected by the rule is defined by a string for the `remoteSubnet` attribute. The description for the rule is defined by a string for the `description` attribute.| | EndpointAcl | Optional. Specifies the assignment of access control rules to an endpoint. The name of the role that contains the endpoint is defined by a string for the `role` attribute. The name of the endpoint is defined by a string for the `endpoint` attribute. The name of the set of `AccessControl` rules that should be applied to the endpoint is defined in a string for the `accessControl` attribute. More than one `EndpointAcl` element can be defined.| | DnsServer | Optional. Specifies the settings for a DNS server. You can specify settings for DNS servers without a Virtual Network. The name of the DNS server is defined by a string for the `name` attribute. The IP address of the DNS server is defined by a string for the `IPAddress` attribute. The IP address must be a valid IPv4 address.|
-| VirtualNetworkSite | Optional. Specifies the name of the Virtual Network site in which you want deploy your cloud service. This setting does not create a Virtual Network Site. It references a site that has been previously defined in the network file for your Virtual Network. A cloud service can only be a member of one Virtual Network. If you do not specify this setting, the cloud service will not be deployed to a Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.|
+| VirtualNetworkSite | Optional. Specifies the name of the Virtual Network site in which you want to deploy your cloud service. This setting doesn't create a Virtual Network Site. It references a site that was previously defined in the network file for your Virtual Network. A cloud service can only be a member of one Virtual Network. If you don't specify this setting, the cloud service doesn't deploy to a Virtual Network. The name of the Virtual Network site is defined by a string for the `name` attribute.|
| InstanceAddress | Optional. Specifies the association of a role to a subnet or set of subnets in the Virtual Network. When you associate a role name to an instance address, you can specify the subnets to which you want this role to be associated. The `InstanceAddress` contains a Subnets element. The name of the role that is associated with the subnet or subnets is defined by a string for the `roleName` attribute.| | Subnet | Optional. Specifies the subnet that corresponds to the subnet name in the network configuration file. The name of the subnet is defined by a string for the `name` attribute.| | ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. You must use Create Reserved IP Address to create the reserved IP address. Each deployment in a cloud service can be associated with one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.|
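As a sketch of how these child elements fit together, a `NetworkConfiguration` section might look like the following. The role, subnet, DNS, and ACL names are placeholders, and the nesting is simplified for illustration.

```
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="aclWebTraffic">
      <Rule action="permit" description="Allow corp subnet" order="100" remoteSubnet="10.0.0.0/24" />
      <Rule action="deny" description="Deny everything else" order="200" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="MyWebRole" endpoint="HttpIn" accessControl="aclWebTraffic" />
  </EndpointAcls>
  <Dns>
    <DnsServers>
      <DnsServer name="dns1" IPAddress="10.0.0.4" />
    </DnsServers>
  </Dns>
  <VirtualNetworkSite name="MyVNet" />
  <AddressAssignments>
    <InstanceAddress roleName="MyWebRole">
      <Subnets>
        <Subnet name="FrontEndSubnet" />
      </Subnets>
    </InstanceAddress>
    <ReservedIPs>
      <ReservedIP name="MyReservedIP" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```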
cloud-services Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-role.md
description: The Role element of a service configuration file specifies how many
Previously updated : 02/21/2023 Last updated : 07/24/2024
The following table describes the attributes for the `Role` element.
| Attribute | Description | | | -- | | name | Required. Specifies the name of the role. The name must match the name provided for the role in the service definition file.|
-| vmName | Optional. Specifies the DNS name for a Virtual Machine. The name must be 10 characters or less.|
+| vmName | Optional. Specifies the Domain Name System (DNS) name for a Virtual Machine. The name must be 10 characters or less.|
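For example, a `Role` entry that uses both attributes together with typical child elements might look like the following sketch. The names, values, and certificate thumbprint are placeholders.

```
<Role name="MyWebRole" vmName="webvm">
  <Instances count="2" />
  <ConfigurationSettings>
    <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="SslCert" thumbprint="0000000000000000000000000000000000000000" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
```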
The following table describes the child elements of the `Role` element.
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-file.md
description: A service definition (.csdef) file defines a service model for an a
Previously updated : 02/21/2023 Last updated : 07/24/2024
By default, the Azure Diagnostics configuration schema file is installed to the
The default extension for the service definition file is .csdef. ## Basic service definition schema
-The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element which restricts which roles can communicate to specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element which contains customer defined health probes of endpoints.
+The service definition file must contain one `ServiceDefinition` element. The service definition must contain at least one role (`WebRole` or `WorkerRole`) element. It can contain up to 25 roles defined in a single definition and you can mix role types. The service definition also contains the optional `NetworkTrafficRules` element, which restricts which roles can communicate to specified internal endpoints. The service definition also contains the optional `LoadBalancerProbes` element, which contains customer defined health probes of endpoints.
The basic format of the service definition file is as follows.
The following table describes the attributes of the `ServiceDefinition` element.
| Attribute | Description | | -- | -- | | name |Required. The name of the service. The name must be unique within the service account.|
-| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` ΓÇô Sends the update to each role instance in a sequential manner after the previous instance has successfully accepted the update.|
+| topologyChangeDiscovery | Optional. Specifies the type of topology change notification. Possible values are:<br /><br /> - `Blast` - Sends the update as soon as possible to all role instances. If you choose this option, the role should be able to handle the topology update without being restarted.<br />- `UpgradeDomainWalk` – Sends the update to each role instance in a sequential manner after the previous instance successfully accepts the update.|
| schemaVersion | Optional. Specifies the version of the service definition schema. The schema version allows Visual Studio to select the correct SDK tools to use for schema validation if more than one version of the SDK is installed side-by-side.| | upgradeDomainCount | Optional. Specifies the number of upgrade domains across which roles in this service are allocated. Role instances are allocated to an upgrade domain when the service is deployed. For more information, see [Update a cloud service role or deployment](cloud-services-how-to-manage-portal.md#update-a-cloud-service-role-or-deployment), [Manage the availability of virtual machines](../virtual-machines/availability.md) and [What is a Cloud Service Model](./cloud-services-model-and-package.md).<br /><br /> You can specify up to 20 upgrade domains. If not specified, the default number of upgrade domains is 5.|
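To make the attribute descriptions concrete, a .csdef skeleton might combine them as in the following sketch. The service and role names are placeholders, and the optional elements are only hinted at in comments.

```
<ServiceDefinition name="MyService" topologyChangeDiscovery="UpgradeDomainWalk" upgradeDomainCount="5"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <!-- Sites, Endpoints, ConfigurationSettings, Certificates, Imports, Startup, and so on. -->
  </WebRole>
  <WorkerRole name="MyWorkerRole">
    <!-- Endpoints, ConfigurationSettings, Imports, Startup, and so on. -->
  </WorkerRole>
  <!-- Optional NetworkTrafficRules and LoadBalancerProbes elements can also appear. -->
</ServiceDefinition>
```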
cloud-services Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-loadbalancerprobe.md
description: The customer defined LoadBalancerProbe is a health probe of endpoin
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-The load balancer probe is a customer defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` is not a standalone element; it is combined with the web role or worker role in a service definition file. A `LoadBalancerProbe` can be used by more than one role.
+The load balancer probe is a customer defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` isn't a standalone element; it's combined with the web role or worker role in a service definition file. More than one role can use a `LoadBalancerProbe`.
The default extension for the service definition file is .csdef. ## The function of a load balancer probe The Azure Load Balancer is responsible for routing incoming traffic to your role instances. The load balancer determines which instances can receive traffic by regularly probing each instance in order to determine the health of that instance. The load balancer probes every instance multiple times per minute. There are two different options for providing instance health to the load balancer – the default load balancer probe, or a custom load balancer probe, which is implemented by defining the LoadBalancerProbe in the .csdef file.
-The default load balancer probe utilizes the Guest Agent inside the virtual machine, which listens and responds with an HTTP 200 OK response only when the instance is in the Ready state (like when the instance is not in the Busy, Recycling, Stopping, etc. states). If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role your website code typically runs in w3wp.exe which is not monitored by the Azure fabric or guest agent, which means failures in w3wp.exe (eg. HTTP 500 responses) is not be reported to the guest agent and the load balancer does not know to take that instance out of rotation.
+The default load balancer probe utilizes the Guest Agent inside the virtual machine, which listens and responds with an HTTP 200 OK response only when the instance is in the Ready state (that is, when the instance isn't in the Busy, Recycling, Stopping, or similar states). If the Guest Agent fails to respond with HTTP 200 OK, the Azure Load Balancer marks the instance as unresponsive and stops sending traffic to that instance. The Azure Load Balancer continues to ping the instance, and if the Guest Agent responds with an HTTP 200, the Azure Load Balancer sends traffic to that instance again. When using a web role, your website code typically runs in w3wp.exe, which the Azure fabric and guest agent don't monitor. Failures in w3wp.exe (for example, HTTP 500 responses) aren't reported to the guest agent, and the load balancer doesn't know to take that instance out of rotation.
-The custom load balancer probe overrides the default guest agent probe and allows you to create your own custom logic to determine the health of the role instance. The load balancer regularly probes your endpoint (every 15 seconds, by default) and the instance is be considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds). This can be useful to implement your own logic to remove instances from load balancer rotation, for example returning a non-200 status if the instance is above 90% CPU. For web roles using w3wp.exe, this also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you do not define a LoadBalancerProbe in the .csdef file, then the default load balancer behavior (as previously described) is be used.
+The custom load balancer probe overrides the default guest agent probe and allows you to create your own custom logic to determine the health of the role instance. The load balancer regularly probes your endpoint (every 15 seconds, by default). The instance is considered in rotation if it responds with a TCP ACK or HTTP 200 within the timeout period (default of 31 seconds). This process can be useful to implement your own logic to remove instances from load balancer rotation (for example, returning a non-200 status if the instance is above 90% CPU). For web roles using w3wp.exe, this setup also means you get automatic monitoring of your website, since failures in your website code return a non-200 status to the load balancer probe. If you don't define a LoadBalancerProbe in the .csdef file, then the default load balancer behavior (as previously described) is used.
-If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When using the default load balancer probe, the instance is taken out of rotation prior to OnStop being called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you are using the OnStop event to clean up cache, stop service, or otherwise making changes that can affect the runtime behavior of your service, then you need to ensure that your custom load balancer probe logic removes the instance from rotation.
+If you use a custom load balancer probe, you must ensure that your logic takes into consideration the RoleEnvironment.OnStop method. When you use the default load balancer probe, the instance is taken out of rotation before OnStop is called, but a custom load balancer probe can continue to return a 200 OK during the OnStop event. If you're using the OnStop event to clean up the cache, stop the service, or otherwise make changes that can affect the runtime behavior of your service, then you need to ensure your custom load balancer probe logic removes the instance from rotation.
## Basic service definition schema for a load balancer probe The basic format of a service definition file containing a load balancer probe is as follows.
The following table describes the attributes of the `LoadBalancerProbe` element:
| - | -- | --| | `name` | `string` | Required. The name of the load balancer probe. The name must be unique.| | `protocol` | `string` | Required. Specifies the protocol of the end point. Possible values are `http` or `tcp`. If `tcp` is specified, a received ACK is required for the probe to be successful. If `http` is specified, a 200 OK response from the specified URI is required for the probe to be successful.|
-| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it is not allowed.<br /><br /> There is no default value.|
-| `port` | `integer` | Optional. The port for communicating the probe. This is optional for any endpoint, as the same port will then be used for the probe. You can configure a different port for their probing, as well. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.|
-| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds) which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15, the minimum value is 5.|
-| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response will result in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31, the minimum value is 11.|
+| `path` | `string` | The URI used for requesting health status from the VM. `path` is required if `protocol` is set to `http`. Otherwise, it isn't allowed.<br /><br /> There's no default value.|
+| `port` | `integer` | Optional. The port used for the probe. This attribute is optional for any endpoint, because the endpoint's own port is used for the probe by default. You can also configure a different port for probing. Possible values range from 1 to 65535, inclusive.<br /><br /> The default value is set by the endpoint.|
+| `intervalInSeconds` | `integer` | Optional. The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds), which allows two full probes before taking the instance out of rotation.<br /><br /> The default value is 15. The minimum value is 5.|
+| `timeoutInSeconds` | `integer` | Optional. The timeout period, in seconds, applied to the probe where no response results in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure (which are the defaults).<br /><br /> The default value is 31. The minimum value is 11.|
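Putting these attributes together, a custom HTTP probe and the endpoint that references it might be declared as in the following sketch. The names, path, and port are placeholders.

```
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <LoadBalancerProbes>
    <!-- Probed every 15 seconds; the instance leaves rotation if no 200 OK arrives within 31 seconds. -->
    <LoadBalancerProbe name="HealthProbe" protocol="http" path="/healthcheck"
        intervalInSeconds="15" timeoutInSeconds="31" />
  </LoadBalancerProbes>
  <WebRole name="MyWebRole">
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" loadBalancerProbe="HealthProbe" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```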
## See Also [Cloud Service (classic) Definition Schema](schema-csdef-file.md)
cloud-services Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-networktrafficrules.md
description: Learn about NetworkTrafficRules, which limits the roles that can ac
Previously updated : 02/21/2023 Last updated : 07/24/2024
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` is not a standalone element; it is combined with two or more roles in a service definition file.
+The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` isn't a standalone element; it's combined with two or more roles in a service definition file.
The default extension for the service definition file is .csdef.
The basic format of a service definition file containing network traffic definit
``` ## Schema Elements
-The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this topic:
+The `NetworkTrafficRules` node of the service definition file includes these elements, described in detail in subsequent sections in this article:
[NetworkTrafficRules Element](#NetworkTrafficRules)
The `NetworkTrafficRules` element specifies which roles can communicate with whi
The `OnlyAllowTrafficTo` element describes a collection of destination endpoints and the roles that can communicate with them. You can specify multiple `OnlyAllowTrafficTo` nodes. ## <a name="Destinations"></a> Destinations Element
-The `Destinations` element describes a collection of RoleEndpoints than can be communicated with.
+The `Destinations` element describes a collection of RoleEndpoints that can be communicated with.
## <a name="RoleEndpoint"></a> RoleEndpoint Element The `RoleEndpoint` element describes an endpoint on a role with which communication is allowed. You can specify multiple `RoleEndpoint` elements if there's more than one endpoint on the role.
The `RoleEndpoint` element describes an endpoint on a role to allow communicatio
The `AllowAllTraffic` element is a rule that allows all roles to communicate with the endpoints defined in the `Destinations` node. ## <a name="WhenSource"></a> WhenSource Element
-The `WhenSource` element describes a collection of roles than can communicate with the endpoints defined in the `Destinations` node.
+The `WhenSource` element describes a collection of roles that can communicate with the endpoints defined in the `Destinations` node.
| Attribute | Type | Description | | | -- | -- |
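As an end-to-end sketch, the elements described in this article might be combined as follows to let only a front-end role reach a back-end role's internal endpoint. The role and endpoint names are placeholders.

```
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="InternalTcpIn" roleName="BackendRole" />
    </Destinations>
    <WhenSource matches="AnyOf">
      <FromRole roleName="FrontendRole" />
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
```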
cloud-services Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-webrole.md
description: Azure web role is customized for web application programming suppor
Previously updated : 02/21/2023 Last updated : 07/24/2024
The basic format of a service definition file containing a web role is as follow
``` ## Schema elements
-The service definition file includes these elements, described in detail in subsequent sections in this topic:
+The service definition file includes these elements, described in detail in subsequent sections in this article:
[WebRole](#WebRole)
The name of the directory allocated to the local storage resource corresponds to
## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements.
-Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles.
+Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, you can allocate 25 input endpoints to a single role, or you can allocate one input endpoint each to 25 roles.
> [!NOTE] > Each role deployed requires one instance per role. The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than the default provisioning provides, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota.
The following table describes the attributes of the `InputEndpoint` element.
|protocol|string|Required. The transport protocol for the external endpoint. For a web role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.|
-|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to ΓÇ£*ΓÇ¥ to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
-|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.|
+|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate with an internal component on a port that differs from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
+|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored, and the load balancer doesn't remove the endpoint. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.|
|loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint
-The `InternalEndpoint` element describes an internal endpoint to a web role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. Web roles that do not include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint.
+The `InternalEndpoint` element describes an internal endpoint to a web role. An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. Web roles that don't include the `Sites` element can only have a single HTTP, UDP, or TCP internal endpoint.
The following table describes the attributes of the `InternalEndpoint` element.
| | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol, any port is allowed.|
-|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
+|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address, and the port used on the private IP address. Typically, these values are set to the same port, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
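For illustration, an `Endpoints` block that declares one external HTTPS endpoint and one internal TCP endpoint might look like the following sketch. The names, ports, and certificate reference are placeholders.

```
<WebRole name="MyWebRole">
  <Endpoints>
    <!-- External endpoint: port 443 is public; traffic is forwarded to local port 8443. -->
    <InputEndpoint name="HttpsIn" protocol="https" port="443" localPort="8443" certificate="SslCert" />
    <!-- Internal endpoint: reachable only by other role instances in the service. -->
    <InternalEndpoint name="InternalTcpIn" protocol="tcp" port="8080" />
  </Endpoints>
</WebRole>
```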
## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a web role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer. Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element.
The following table describes the attributes of the `InstanceInputEndpoint` elem
| Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.|
-|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
+|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
|protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom
-The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
+The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher.
The following table describes the attributes of the `FixedPort` element.
| Attribute | Type | Description | | | - | -- |
-|port|int|Required. The port for the internal endpoint. This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
+|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint.
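A sketch that ties `InstanceInputEndpoint`, `AllocatePublicPortFrom`, and `FixedPortRange` together might look like this; the endpoint name and port numbers are placeholders.

```
<InstanceInputEndpoint name="InstanceRdpIn" protocol="tcp" localPort="3389">
  <AllocatePublicPortFrom>
    <!-- Each role instance gets one public (VIP) port from this range, forwarded to local port 3389. -->
    <FixedPortRange min="10016" max="10020" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
```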
The following table describes the attributes of the `Certificate` element.
| Attribute | Type | Description | | | - | -- |
-|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.|
+|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.|
|storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. The default value is `limitedOrElevated`.|
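For example, a `Certificates` block that declares a certificate for use by an HTTPS `InputEndpoint` might look like the following sketch. The certificate name is a placeholder.

```
<Certificates>
  <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" permissionLevel="limitedOrElevated" />
</Certificates>
```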
The following table describes the attributes of the `Import` element.
| Attribute | Type | Description | | | - | -- |
-|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Enable Remote Desktop Connection](cloud-services-role-enable-remote-desktop-new-portal.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.|
+|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Enable Remote Desktop Connection](cloud-services-role-enable-remote-desktop-new-portal.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance.|
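An `Imports` block that enables remote desktop and diagnostics for the role might look like the following sketch.

```
<Imports>
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
  <Import moduleName="Diagnostics" />
</Imports>
```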
## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a web role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block.
The following table describes the attributes of the `NetFxEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (do not specify **\\%ROLEROOT%\Approot** in `commandLine`, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly cannot be found relative to **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** is searched.<br /><br /> This fall back behavior for full IIS is not a recommend best practice and maybe removed in future versions.|
+|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in `commandLine`; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> For HWC roles, the path is always relative to the **\\%ROLEROOT%\Approot\bin** folder.<br /><br /> For full IIS and IIS Express web roles, if the assembly can't be found relative to **\\%ROLEROOT%\Approot** folder, the **\\%ROLEROOT%\Approot\bin** is searched.<br /><br /> This fallback behavior for full IIS isn't a recommended best practice and may be removed in future versions.|
|targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="Sites"></a> Sites
-The `Sites` element describes a collection of websites and web applications that are hosted in a web role. This element is the parent of the `Site` element. If you do not specify a `Sites` element, your web role is hosted as legacy web role and you can only have one website hosted in your web role. This element is optional and a role can have only one sites block.
+The `Sites` element describes a collection of websites and web applications that are hosted in a web role. This element is the parent of the `Site` element. If you don't specify a `Sites` element, your web role is hosted as a legacy web role, and you can only have one website hosted in your web role. This element is optional and a role can have only one sites block.
The `Sites` element is only available using the Azure SDK version 1.3 or higher.
The following table describes the attributes of the `VirtualApplication` element
| Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual application.|
-|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
+|physicalDirectory|string|Required. Specifies the path on the development machine that contains the virtual application. In the compute emulator, IIS is configured to retrieve content from this location. During deployment to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
## <a name="VirtualDirectory"></a> VirtualDirectory The `VirtualDirectory` element specifies a directory name (also referred to as path) that you specify in IIS and map to a physical directory on a local or remote server.
The following table describes the attributes of the `VirtualDirectory` element.
| Attribute | Type | Description | | | - | -- | |name|string|Required. Specifies a name to identify the virtual directory.|
-|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents. In the compute emulator, IIS is configured to retrieve content from this location. When deploying to the Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
+|value|physicalDirectory|Required. Specifies the path on the development machine that contains the website or Virtual directory contents. In the compute emulator, IIS is configured to retrieve content from this location. During deployment to Azure, the contents of the physical directory are packaged along with the rest of the service. When the service package is deployed to Azure, IIS is configured with the location of the unpacked contents.|
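A sketch of a `Sites` block that combines a site, a virtual application, a virtual directory, and the `Bindings` element described in the next section might look like this. The names and paths are placeholders.

```
<Sites>
  <Site name="Web">
    <VirtualApplication name="app1" physicalDirectory="..\MyApp" />
    <VirtualDirectory name="assets" physicalDirectory="..\SharedAssets" />
    <Bindings>
      <Binding name="HttpBinding" endpointName="HttpIn" />
    </Bindings>
  </Site>
</Sites>
```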
## <a name="Bindings"></a> Bindings
-The `Bindings` element describes a collection of bindings for a website. It is the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](cloud-services-enable-communication-role-instances.md).
+The `Bindings` element describes a collection of bindings for a website. It's the parent element of the `Binding` element. The element is required for every `Site` element. For more information about configuring endpoints, see [Enable Communication for Role Instances](cloud-services-enable-communication-role-instances.md).
The `Bindings` element is only available using the Azure SDK version 1.3 or higher.
The following table describes the attributes of the `Task` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.|
+|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file don't process properly.|
|executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] – Run with the same privileges as the role hosting the process.<br />- `elevated` – Run with administrator privileges.|
-|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] ΓÇô System waits for the task to exit before any other tasks are launched.<br />- `background` ΓÇô System does not wait for the task to exit.<br />- `foreground` ΓÇô Similar to background, except role is not restarted until all foreground tasks exit.|
+|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] – System waits for the task to exit before any other tasks are launched.<br />- `background` – System doesn't wait for the task to exit.<br />- `foreground` – Similar to background, except the role isn't restarted until all foreground tasks exit.|
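For illustration, a `Startup` block that runs an installation script with administrator privileges before the role starts might look like the following sketch. The script name is a placeholder.

```
<Startup>
  <!-- Runs install.cmd as an elevated, blocking (simple) startup task. -->
  <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
</Startup>
```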
## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a web role. This element is the parent of the `Content` element.
The `Contents` element describes the collection of content for a web role. This
The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content
-The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied.
+The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied.
The `Content` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `SourceDirectory` element.
| Attribute | Type | Description | | | - | -- |
-|path|string|Required. Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
+|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
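A sketch of a `Contents` block that copies a local folder to the role instance might look like this; the folder names are placeholders.

```
<Contents>
  <Content destination="StaticAssets">
    <SourceDirectory path="..\SharedAssets" />
  </Content>
</Contents>
```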
## See Also [Cloud Service (classic) Definition Schema](schema-csdef-file.md)
cloud-services Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-csdef-workerrole.md
description: The Azure worker role is used for generalized development and may p
Previously updated : 02/21/2023 Last updated : 07/24/2024
The basic format of the service definition file containing a worker role is as f
``` ## Schema Elements
-The service definition file includes these elements, described in detail in subsequent sections in this topic:
+The service definition file includes these elements, described in detail in subsequent sections in this article:
[WorkerRole](#WorkerRole)
The name of the directory allocated to the local storage resource corresponds to
## <a name="Endpoints"></a> Endpoints The `Endpoints` element describes the collection of input (external), internal, and instance input endpoints for a role. This element is the parent of the `InputEndpoint`, `InternalEndpoint`, and `InstanceInputEndpoint` elements.
-Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints which can be allocated across the 25 roles allowed in a service. For example, if have 5 roles you can allocate 5 input endpoints per role or you can allocate 25 input endpoints to a single role or you can allocate 1 input endpoint each to 25 roles.
+Input and Internal endpoints are allocated separately. A service can have a total of 25 input, internal, and instance input endpoints, which can be allocated across the 25 roles allowed in a service. For example, if you have five roles, you can allocate five input endpoints per role, you can allocate 25 input endpoints to a single role, or you can allocate one input endpoint each to 25 roles.
> [!NOTE] > Each role deployed requires one instance per role. The default provisioning for a subscription is limited to 20 cores and thus is limited to 20 instances of a role. If your application requires more instances than the default provisioning provides, see [Billing, Subscription Management and Quota Support](https://azure.microsoft.com/support/options/) for more information on increasing your quota.
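As a sketch, a worker role that stays within these limits might declare its endpoints as follows. The names and ports are placeholders.

```
<WorkerRole name="MyWorkerRole">
  <Endpoints>
    <!-- One external TCP endpoint and one internal HTTP endpoint. -->
    <InputEndpoint name="TcpIn" protocol="tcp" port="10100" />
    <InternalEndpoint name="InternalHttpIn" protocol="http" port="8080" />
  </Endpoints>
</WorkerRole>
```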
The following table describes the attributes of the `InputEndpoint` element.
|protocol|string|Required. The transport protocol for the external endpoint. For a worker role, possible values are `HTTP`, `HTTPS`, `UDP`, or `TCP`.| |port|int|Required. The port for the external endpoint. You can specify any port number you choose, but the port numbers specified for each role in the service must be unique.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).| |certificate|string|Required for an HTTPS endpoint. The name of a certificate defined by a `Certificate` element.|
-|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This is useful in scenarios where a role must communicate to an internal component on a port that different from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to ΓÇ£*ΓÇ¥ to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
-|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored and the endpoint will not be removed by the load balancer. Setting this value to `true` useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role is not in a Ready state.|
+|localPort|int|Optional. Specifies a port used for internal connections on the endpoint. The `localPort` attribute maps the external port on the endpoint to an internal port on a role. This attribute is useful in scenarios where a role must communicate with an internal component on a port that differs from the one that is exposed externally.<br /><br /> If not specified, the value of `localPort` is the same as the `port` attribute. Set the value of `localPort` to "*" to automatically assign an unallocated port that is discoverable using the runtime API.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `localPort` attribute is only available using the Azure SDK version 1.3 or higher.|
+|ignoreRoleInstanceStatus|boolean|Optional. When the value of this attribute is set to `true`, the status of a service is ignored, and the load balancer doesn't remove the endpoint. Setting this value to `true` is useful for debugging busy instances of a service. The default value is `false`. **Note:** An endpoint can still receive traffic even when the role isn't in a Ready state.|
|loadBalancerProbe|string|Optional. The name of the load balancer probe associated with the input endpoint. For more information, see [LoadBalancerProbe Schema](schema-csdef-loadbalancerprobe.md).| ## <a name="InternalEndpoint"></a> InternalEndpoint
-The `InternalEndpoint` element describes an internal endpoint to a worker role. An internal endpoint is available only to other role instances running within the service; it is not available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints.
+The `InternalEndpoint` element describes an internal endpoint to a worker role. An internal endpoint is available only to other role instances running within the service; it isn't available to clients outside the service. A worker role may have up to five HTTP, UDP, or TCP internal endpoints.
The following table describes the attributes of the `InternalEndpoint` element.
| | - | -- | |name|string|Required. A unique name for the internal endpoint.| |protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `HTTP`, `TCP`, `UDP`, or `ANY`.<br /><br /> A value of `ANY` specifies that any protocol, any port is allowed.|
-|port|int|Optional. The port used for internal load balanced connections on the endpoint. A Load balanced endpoint uses two ports. The port used for the public IP address, and the port used on the private IP address. Typically these are these are set to the same, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
+|port|int|Optional. The port used for internal load-balanced connections on the endpoint. A load-balanced endpoint uses two ports: the port used for the public IP address, and the port used on the private IP address. Typically, these values are set to the same port, but you can choose to use different ports.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).<br /><br /> The `Port` attribute is only available using the Azure SDK version 1.3 or higher.|
## <a name="InstanceInputEndpoint"></a> InstanceInputEndpoint The `InstanceInputEndpoint` element describes an instance input endpoint to a worker role. An instance input endpoint is associated with a specific role instance by using port forwarding in the load balancer. Each instance input endpoint is mapped to a specific port from a range of possible ports. This element is the parent of the `AllocatePublicPortFrom` element.
The following table describes the attributes of the `InstanceInputEndpoint` elem
| Attribute | Type | Description | | | - | -- | |name|string|Required. A unique name for the endpoint.|
-|localPort|int|Required. Specifies the internal port that all role instances will listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
+|localPort|int|Required. Specifies the internal port that all role instances listen to in order to receive incoming traffic forwarded from the load balancer. Possible values range between 1 and 65535, inclusive.|
|protocol|string|Required. The transport protocol for the internal endpoint. Possible values are `udp` or `tcp`. Use `tcp` for http/https based traffic.| ## <a name="AllocatePublicPortFrom"></a> AllocatePublicPortFrom
-The `AllocatePublicPortFrom` element describes the public port range that can be used by external customers to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
+The `AllocatePublicPortFrom` element describes the public port range that external customers can use to access each instance input endpoint. The public (VIP) port number is allocated from this range and assigned to each individual role instance endpoint during tenant deployment and update. This element is the parent of the `FixedPortRange` element.
The `AllocatePublicPortFrom` element is only available using the Azure SDK version 1.7 or higher.
The following table describes the attributes of the `FixedPort` element.
| Attribute | Type | Description | | | - | -- |
-|port|int|Required. The port for the internal endpoint. This has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
+|port|int|Required. The port for the internal endpoint. This attribute has the same effect as setting the `FixedPortRange` min and max to the same port.<br /><br /> Possible values range between 1 and 65535, inclusive (Azure SDK version 1.7 or higher).|
## <a name="FixedPortRange"></a> FixedPortRange The `FixedPortRange` element specifies the range of ports that are assigned to the internal endpoint or instance input endpoint, and sets the port used for load balanced connections on the endpoint.
The following table describes the attributes of the `Certificate` element.
| Attribute | Type | Description | | | - | -- |
-|name|string|Required. A name for this certificate, which is used to refer to it when it is associated with an HTTPS `InputEndpoint` element.|
+|name|string|Required. A name for this certificate, which is used to refer to it when it's associated with an HTTPS `InputEndpoint` element.|
|storeLocation|string|Required. The location of the certificate store where this certificate may be found on the local machine. Possible values are `CurrentUser` and `LocalMachine`.| |storeName|string|Required. The name of the certificate store where this certificate resides on the local machine. Possible values include the built-in store names `My`, `Root`, `CA`, `Trust`, `Disallowed`, `TrustedPeople`, `TrustedPublisher`, `AuthRoot`, `AddressBook`, or any custom store name. If a custom store name is specified, the store is automatically created.| |permissionLevel|string|Optional. Specifies the access permissions given to the role processes. If you want only elevated processes to be able to access the private key, then specify `elevated` permission. `limitedOrElevated` permission allows all role processes to access the private key. Possible values are `limitedOrElevated` or `elevated`. The default value is `limitedOrElevated`.|
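Based on the attributes above, a `Certificate` entry might look like the following sketch (the certificate name is hypothetical):

```xml
<Certificates>
  <!-- Hypothetical certificate; reference its name from an HTTPS InputEndpoint. -->
  <Certificate name="SampleCertificate"
               storeLocation="LocalMachine"
               storeName="My"
               permissionLevel="limitedOrElevated" />
</Certificates>
```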
The following table describes the attributes of the `Import` element.
| Attribute | Type | Description | | | - | -- |
-|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information see [Enable Remote Desktop Connection](cloud-services-role-enable-remote-desktop-new-portal.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance|
+|moduleName|string|Required. The name of the module to import. Valid import modules are:<br /><br /> - RemoteAccess<br />- RemoteForwarder<br />- Diagnostics<br /><br /> The RemoteAccess and RemoteForwarder modules allow you to configure your role instance for remote desktop connections. For more information, see [Enable Remote Desktop Connection](cloud-services-role-enable-remote-desktop-new-portal.md).<br /><br /> The Diagnostics module allows you to collect diagnostic data for a role instance|
## <a name="Runtime"></a> Runtime The `Runtime` element describes a collection of environment variable settings for a worker role that control the runtime environment of the Azure host process. This element is the parent of the `Environment` element. This element is optional and a role can have only one runtime block.
The following table describes the attributes of the `NetFxEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (do not specify **\\%ROLEROOT%\Approot** in `commandLine`, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.|
+|assemblyName|string|Required. The path and file name of the assembly containing the entry point. The path is relative to the folder **\\%ROLEROOT%\Approot** (don't specify **\\%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **\\%ROLEROOT%\Approot** folder represents the application folder for your role.|
|targetFrameworkVersion|string|Required. The version of the .NET framework on which the assembly was built. For example, `targetFrameworkVersion="v4.0"`.| ## <a name="ProgramEntryPoint"></a> ProgramEntryPoint
-The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that is not based on a .NET assembly.
+The `ProgramEntryPoint` element specifies the program to run for a role. The `ProgramEntryPoint` element allows you to specify a program entry point that isn't based on a .NET assembly.
> [!NOTE] > The `ProgramEntryPoint` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `ProgramEntryPoint` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (do not specify **%ROLEROOT%\Approot** in commandLine, it is assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.|
-|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal it is started. This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.|
+|commandLine|string|Required. The path, file name, and any command line arguments of the program to execute. The path is relative to the folder **%ROLEROOT%\Approot** (don't specify **%ROLEROOT%\Approot** in the command line; it's assumed). **%ROLEROOT%** is an environment variable maintained by Azure and it represents the root folder location for your role. The **%ROLEROOT%\Approot** folder represents the application folder for your role.<br /><br /> If the program ends, the role is recycled, so generally set the program to continue to run, instead of being a program that just starts up and runs a finite task.|
+|setReadyOnProcessStart|boolean|Required. Specifies whether the role instance waits for the command line program to signal that it has started. This value must be set to `true` at this time. Setting the value to `false` is reserved for future use.|
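Putting the two attributes together, a `ProgramEntryPoint` might be declared as in the following sketch; the command line is hypothetical, and the enclosing `EntryPoint` element comes from the full schema rather than this excerpt:

```xml
<Runtime>
  <EntryPoint>
    <!-- Hypothetical non-.NET entry point; if the program exits, the role is recycled. -->
    <ProgramEntryPoint commandLine="node.exe server.js" setReadyOnProcessStart="true" />
  </EntryPoint>
</Runtime>
```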
## <a name="Startup"></a> Startup The `Startup` element describes a collection of tasks that run when the role is started. This element can be the parent of the `Variable` element. For more information about using the role startup tasks, see [How to configure startup tasks](cloud-services-startup-tasks.md). This element is optional and a role can have only one startup block.
The following table describes the attributes of the `Task` element.
| Attribute | Type | Description | | | - | -- |
-|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file will not process properly.|
+|commandLine|string|Required. A script, such as a CMD file, containing the commands to run. Startup command and batch files must be saved in ANSI format. File formats that set a byte-order marker at the start of the file don't process properly.|
|executionContext|string|Specifies the context in which the script is run.<br /><br /> - `limited` [Default] ΓÇô Run with the same privileges as the role hosting the process.<br />- `elevated` ΓÇô Run with administrator privileges.|
-|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] ΓÇô System waits for the task to exit before any other tasks are launched.<br />- `background` ΓÇô System does not wait for the task to exit.<br />- `foreground` ΓÇô Similar to background, except role is not restarted until all foreground tasks exit.|
+|taskType|string|Specifies the execution behavior of the command.<br /><br /> - `simple` [Default] ΓÇô System waits for the task to exit before any other tasks are launched.<br />- `background` ΓÇô System doesn't wait for the task to exit.<br />- `foreground` ΓÇô Similar to background, except role isn't restarted until all foreground tasks exit.|
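Combining the attributes above, a startup task might be declared as in this sketch (the script name is hypothetical):

```xml
<Startup>
  <!-- Hypothetical elevated startup task; Startup.cmd must be saved in ANSI format. -->
  <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>
```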
## <a name="Contents"></a> Contents The `Contents` element describes the collection of content for a worker role. This element is the parent of the `Content` element.
The `Contents` element describes the collection of content for a worker role. Th
The `Contents` element is only available using the Azure SDK version 1.5 or higher. ## <a name="Content"></a> Content
-The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it is copied.
+The `Content` element defines the source location of content to be copied to the Azure virtual machine and the destination path to which it's copied.
The `Content` element is only available using the Azure SDK version 1.5 or higher.
The following table describes the attributes of the `SourceDirectory` element.
| Attribute | Type | Description | | | - | -- |
-|path|string|Required. Relative or absolute path of a local directory whose contents will be copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
+|path|string|Required. Relative or absolute path of a local directory whose contents are copied to the Azure virtual machine. Expansion of environment variables in the directory path is supported.|
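A hedged sketch of a `Contents` block; the paths are hypothetical, and the `destination` attribute on `Content` follows the full schema rather than this excerpt:

```xml
<Contents>
  <!-- Copies .\LocalContent from the package to %ROLEROOT%\Approot\MyContent on the VM. -->
  <Content destination="MyContent">
    <SourceDirectory path=".\LocalContent" />
  </Content>
</Contents>
```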
## See Also [Cloud Service (classic) Definition Schema](schema-csdef-file.md)
communication-services Handle Email Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/handle-email-events.md
To generate and receive Email events, take the steps in the following sections.
To view event triggers, we need to generate some events. To trigger an event, [send email](../email/send-email.md) using the Email domain resource attached to the Communication Services resource. -- `Email Delivery Report Received` events are generated when the Email status is in terminal state, i.e. Delivered, Failed, FilteredSpam, Quarantined.
+- `Email Delivery Report Received` events are generated when the Email status is in a terminal state, such as Delivered, Failed, FilteredSpam, or Quarantined.
+ - `Email Engagement Tracking Report Received` events are generated when the sent email is opened or when a link within the email is clicked. To trigger an event, you need to turn on the `User Interaction Tracking` option on the Email domain resource. Check out the full list of [events that Communication Services supports](../../../event-grid/event-schema-communication-services.md).
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Previously updated : 02/23/2024 Last updated : 07/12/2024 # Comparing Container Apps with other Azure container options
-There are many options for teams to build and deploy cloud native and containerized applications on Azure. This article will help you understand which scenarios and use cases are best suited for Azure Container Apps and how it compares to other container options on Azure including:
+There are many options for teams to build and deploy cloud-native and containerized applications on Azure. This article helps you understand which scenarios and use cases are best suited for Azure Container Apps and how it compares to other container options on Azure, including:
- [Azure Container Apps](#azure-container-apps) - [Azure App Service](#azure-app-service) - [Azure Container Instances](#azure-container-instances)
There's no perfect solution for every use case and every team. The following exp
## Container option comparisons ### Azure Container Apps
-Azure Container Apps enables you to build serverless microservices and jobs based on containers. Distinctive features of Container Apps include:
+[Azure Container Apps](../container-apps/index.yml) enables you to build serverless microservices and jobs based on containers. Distinctive features of Container Apps include:
-* Optimized for running general purpose containers, especially for applications that span many microservices deployed in containers.
+* Optimized to run general purpose containers, especially for applications that span many microservices deployed in containers.
* Powered by Kubernetes and open-source technologies like [Dapr](https://dapr.io/), [KEDA](https://keda.sh/), and [envoy](https://www.envoyproxy.io/). * Supports Kubernetes-style apps and microservices with features like [service discovery](connect-apps.md) and [traffic splitting](revisions.md). * Enables event-driven application architectures by supporting scale based on traffic and pulling from [event sources like queues](scale-app.md), including [scale to zero](scale-app.md). * Supports running on demand, scheduled, and event-driven [jobs](jobs.md).
-Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use [Azure Kubernetes Service](../aks/intro-kubernetes.md). However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams may prefer to start building container microservices with Azure Container Apps.
+Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use [Azure Kubernetes Service](../aks/intro-kubernetes.md). However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams prefer to start building container microservices with Azure Container Apps.
You can get started building your first container app [using the quickstarts](get-started.md). ### Azure App Service
-[Azure App Service](../app-service/index.yml) provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
+[Azure App Service](../app-service/index.yml) provides fully managed hosting for web applications including websites and web APIs. You can deploy these web applications using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
### Azure Container Instances
-[Azure Container Instances (ACI)](../container-instances/index.yml) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates are not provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
+[Azure Container Instances (ACI)](../container-instances/index.yml) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates aren't provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
### Azure Kubernetes Service [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) provides a fully managed Kubernetes option in Azure. It supports direct access to the Kubernetes API and runs any Kubernetes workload. The full cluster resides in your subscription, with the cluster configurations and operations within your control and responsibility. Teams looking for a fully managed version of Kubernetes in Azure, Azure Kubernetes Service is an ideal option.
You can get started building your first container app [using the quickstarts](ge
[Azure Spring Apps](../spring-apps/enterprise/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. ### Azure Red Hat OpenShift
-[Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
+[Azure Red Hat OpenShift](../openshift/intro-openshift.md) is a product that Red Hat and Microsoft jointly engineer, operate, and support. This collaboration provides an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions. Alternatively, they can use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
## Next steps
container-apps Containerapp Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md
- devx-track-azurecli - ignite-2023 Previously updated : 11/08/2022 Last updated : 07/24/2024
Run the following command to deploy a container app from local source code:
When the Dockerfile includes the EXPOSE instruction, the `up` command configures the container app's ingress and target port using the information in the Dockerfile.
-If you've configured ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
+If you configure ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
The output of the command includes the URL for the container app.
az containerapp up \
--ingress external ```
-If you've configured ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
+If you configure ingress through your Dockerfile or your app doesn't require ingress, you can omit the `ingress` option.
Because the `up` command creates a GitHub Actions workflow, rerunning it to deploy changes to your app's image has the unwanted effect of creating multiple workflows. Instead, push changes to your GitHub repository, and the GitHub workflow automatically builds and deploys your app. To change the workflow, edit the workflow file in GitHub.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Azure Container Apps manages the details of Kubernetes and container orchestrati
Azure Container Apps supports: -- Any Linux-based x86-64 (`linux/amd64`) container image with no required base image
+- Any Linux-based x86-64 (`linux/amd64`) container image
- Containers from any public or private container registry - [Sidecar](#sidecar-containers) and [init](#init-containers) containers
Jobs features include:
## Configuration
-The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
+Most container apps have a single container. In advanced scenarios, an app may also have sidecar and init containers. In a container app definition, the main app and its sidecar containers are listed in the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section, and init containers are listed in the `initContainers` array. The following excerpt shows the available configuration options when setting up an app's containers.
```json {
The following code is an example of the `containers` array in the [`properties.t
| `name` | Friendly name of the container. | Used for reporting and identification. | | `command` | The container's startup command. | Equivalent to Docker's [entrypoint](https://docs.docker.com/engine/reference/builder/) field. | | `args` | Start up command arguments. | Entries in the array are joined together to create a parameter list to pass to the startup command. |
-| `env` | An array of key/value pairs that define environment variables. | Use `secretRef` instead of the `value` field to refer to a secret. |
-| `resources.cpu` | The number of CPUs allocated to the container. | With the [Consumption plan](plans.md), values must adhere to the following rules:<br><br>ΓÇó greater than zero<br>ΓÇó less than or equal to 2<br>ΓÇó can be any decimal number (with a max of two decimal places)<br><br> For example, `1.25` is valid, but `1.555` is invalid.<br> The default is 0.25 CPUs per container.<br><br>When you use the Consumption workload profile on the Dedicated plan, the same rules apply, except CPUs must be less than or equal to 4.<br><br>When you use the [Dedicated plan](plans.md), the maximum CPUs must be less than or equal to the number of cores available in the profile where the container app is running. |
-| `resources.memory` | The amount of RAM allocated to the container. | With the [Consumption plan](plans.md), values must adhere to the following rules:<br><br>ΓÇó greater than zero<br>ΓÇó less than or equal to `4Gi`<br>ΓÇó can be any decimal number (with a max of two decimal places)<br><br>For example, `1.25Gi` is valid, but `1.555Gi` is invalid.<br>The default is `0.5Gi` per container.<br><br>When you use the Consumption workload on the [Dedicated plan](plans.md), the same rules apply except memory must be less than or equal to `8Gi`.<br><br>When you use the Dedicated plan, the maximum memory must be less than or equal to the amount of memory available in the profile where the container app is running. |
-| `volumeMounts` | An array of volume mount definitions. | You can define a temporary volume or multiple permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md).|
-| `probes`| An array of health probes enabled in the container. | This feature is based on Kubernetes health probes. For more information about probes settings, see [Health probes in Azure Container Apps](health-probes.md).|
-
-<a id="allocations"></a>
-
-When you use either the Consumption plan or a Consumption workload on the Dedicated plan, the total CPU and memory allocations requested for all the containers in a container app must add up to one of the following combinations.
-
-| vCPUs (cores) | Memory | Consumption plan | Consumption workload profile |
-|||||
-| `0.25` | `0.5Gi` | Γ£ö | Γ£ö |
-| `0.5` | `1.0Gi` | Γ£ö | Γ£ö |
-| `0.75` | `1.5Gi` | Γ£ö | Γ£ö |
-| `1.0` | `2.0Gi` | Γ£ö | Γ£ö |
-| `1.25` | `2.5Gi` | Γ£ö | Γ£ö |
-| `1.5` | `3.0Gi` | Γ£ö | Γ£ö |
-| `1.75` | `3.5Gi` | Γ£ö | Γ£ö |
-| `2.0` | `4.0Gi` | Γ£ö | Γ£ö |
-| `2.25` | `4.5Gi` | | Γ£ö |
-| `2.5` | `5.0Gi` | | Γ£ö |
-| `2.75` | `5.5Gi` | | Γ£ö |
-| `3.0` | `6.0Gi` | | Γ£ö |
-| `3.25` | `6.5Gi` | | Γ£ö |
-| `3.5` | `7.0Gi` | | Γ£ö |
-| `3.75` | `7.5Gi` | | Γ£ö |
-| `4.0` | `8.0Gi` | | Γ£ö |
--- The total of the CPU requests in all of your containers must match one of the values in the *vCPUs* column.--- The total of the memory requests in all your containers must match the memory value in the memory column in the same row of the CPU column.-
-When you use the Consumption profile on the Dedicated plan, the total CPU and memory allocations requested for all the containers in a container app must be less than or equal to the cores and memory available in the profile.
+| `env` | An array of name/value pairs that define environment variables. | Use `secretRef` instead of the `value` field to refer to a secret. |
+| `resources.cpu` | The number of CPUs allocated to the container. | See [vCPU and memory allocation requirements](#allocations) |
+| `resources.memory` | The amount of RAM allocated to the container. | See [vCPU and memory allocation requirements](#allocations) |
+| `volumeMounts` | An array of volume mount definitions. | You can define temporary or permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md).|
+| `probes`| An array of health probes enabled in the container. | For more information about probes settings, see [Health probes in Azure Container Apps](health-probes.md).|
+
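As a hedged illustration of the fields in this table (the container name, image, variable names, and secret name are hypothetical), a container entry might combine a literal environment variable, a secret reference, and one of the supported CPU/memory combinations:

```json
{
  "name": "main-app",
  "image": "myregistry.azurecr.io/my-app:latest",
  "env": [
    {
      "name": "HTTP_PORT",
      "value": "8080"
    },
    {
      "name": "QUEUE_CONNECTION_STRING",
      "secretRef": "queue-connection-string"
    }
  ],
  "resources": {
    "cpu": 0.5,
    "memory": "1.0Gi"
  }
}
```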
+### <a name="allocations"></a>vCPU and memory allocation requirements
+
+When you use the Consumption plan, the total CPU and memory allocated to all the containers in a container app must add up to one of the following combinations.
+
+| vCPUs (cores) | Memory |
+|||
+| `0.25` | `0.5Gi` |
+| `0.5` | `1.0Gi` |
+| `0.75` | `1.5Gi` |
+| `1.0` | `2.0Gi` |
+| `1.25` | `2.5Gi` |
+| `1.5` | `3.0Gi` |
+| `1.75` | `3.5Gi` |
+| `2.0` | `4.0Gi` |
+| `2.25` | `4.5Gi` |
+| `2.5` | `5.0Gi` |
+| `2.75` | `5.5Gi` |
+| `3.0` | `6.0Gi` |
+| `3.25` | `6.5Gi` |
+| `3.5` | `7.0Gi` |
+| `3.75` | `7.5Gi` |
+| `4.0` | `8.0Gi` |
+
+> [!NOTE]
+> Apps using the Consumption plan in a *Consumption only* environment are limited to a maximum of 2 cores and 4Gi of memory.
## Multiple containers
In advanced scenarios, you can run multiple containers in a single container app
For most microservice scenarios, the best practice is to deploy each service as a separate container app.
-The multiple containers in the same container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
+Multiple containers in the same container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
-There are two ways to run multiple containers in a container app: [sidecar containers](#sidecar-containers) and [init containers](#init-containers).
+There are two ways to run additional containers in a container app: [sidecar containers](#sidecar-containers) and [init containers](#init-containers).
### Sidecar containers
You can define one or more [init containers](https://kubernetes.io/docs/concepts
Init containers are defined in the `initContainers` array of the container app template. The containers run in the order they're defined in the array and must complete successfully before the primary app container starts. > [!NOTE]
-> Init containers support [image pulls using managed identities](#managed-identity-with-azure-container-registry), but processes running in init containers don't have access to managed identities.
+> Init containers in apps using the Dedicated plan or running in a *Consumption only* environment can't access managed identity at run time.
## Container registries You can deploy images hosted on private registries by providing credentials in the Container Apps configuration.
-To use a container registry, you define the required fields in `registries` array in the [`properties.configuration`](azure-resource-manager-api-spec.md) section of the container app resource template. The `passwordSecretRef` field identifies the name of the secret in the `secrets` array name where you defined the password.
+To use a container registry, you define the registry in the `registries` array in the [`properties.configuration`](azure-resource-manager-api-spec.md) section of the container app resource template. The `passwordSecretRef` field identifies the name of the secret in the `secrets` array where you defined the password.
```json {
The following example shows how to configure Azure Container Registry credential
"configuration": { "secrets": [ {
- "name": "acr-password",
- "value": "my-acr-password"
+ "name": "docker-hub-password",
+ "value": "my-docker-hub-password"
} ], ... "registries": [ {
- "server": "myacr.azurecr.io",
+ "server": "docker.io",
"username": "someuser",
- "passwordSecretRef": "acr-password"
+ "passwordSecretRef": "docker-hub-password"
} ] }
The following example shows how to configure Azure Container Registry credential
You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. For more information, see [Managed identities in Azure Container Apps](managed-identity.md).
-When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or `system` for the system-assigned identity.
+To use managed identity with a registry, the identity must be enabled in the app and assigned the `acrPull` role on the registry. To configure the registry, use the managed identity resource ID for a user-assigned identity, or `system` for the system-assigned identity, in the `identity` property of the registry. Don't configure a username and password when using managed identity.
```json {
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Previously updated : 01/10/2024 Last updated : 07/23/2024
In this quickstart, you create a secure Container Apps environment and deploy yo
<!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
-7. Select the **Container** tab.
+3. Select the **Container** tab.
-8. Check the box next to the *Use quickstart image* box.
-
-9. Select the **Create** button at the bottom of the *Create Container Apps Environment* page.
+4. Select *Use quickstart image*.
<!-- Deploy the container app --> [!INCLUDE [container-apps-create-portal-deploy.md](../../includes/container-apps-create-portal-deploy.md)]
Select the link next to *Application URL* to view your application. The followin
## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
+If you're not going to continue to use this application, you can delete the container app and all the associated services by removing the resource group.
1. Select the **my-container-apps** resource group from the *Overview* section. 1. Select the **Delete resource group** button at the top of the resource group *Overview*.
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
You can [request a quota increase in the Azure portal](/azure/quotas/quickstart-
| Environments | Region | 15 | Unlimited | Up to 15 environments per subscription, per region. Quota name: Managed Environment Count | | Environments | Global | 20 | Unlimited | Up to 20 environments per subscription, across all regions. Adjusted through Managed Environment Count quota (usually 20% more than Managed Environment Count) | | Container Apps | Environment | Unlimited | Unlimited | |
-| Revisions | Container app | Up to 100 | Unlimited | |
+| Revisions | Container app | Unlimited | Unlimited | |
| Replicas | Revision | Unlimited | Unlimited | Maximum replicas configurable are 300 in Azure portal and 1000 in Azure CLI. There must also be enough cores quota available. | | Session pools | Global | Up to 6 | 10,000 | Maximum number of dynamic session pools per subscription. No official Azure quota yet, please raise support case. |
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
KEDA scalers can use secrets in a [TriggerAuthentication](https://keda.sh/docs/l
1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification.
-1. From the KEDA specification, find each `secretTargetRef` of the `TriggerAuthentication` object and its associated secret.
+1. In the `TriggerAuthentication` object, find each `secretTargetRef` and its associated secret.
:::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-auth.yml" highlight="8,16,17,18":::
-1. In the ARM template, add all entries to the `auth` array of the scale rule.
+1. In the ARM template, for each secret:
- 1. Add a [secret](./manage-secrets.md) to the container app's `secrets` array containing the secret value.
+ 1. Add a [secret](./manage-secrets.md) to the container app's `secrets` array containing the secret name and value.
- 1. Set the value of the `triggerParameter` property to the value of the `TriggerAuthentication`'s `key` property.
+ 1. Add an entry to the `auth` array of the scale rule.
- 1. Set the value of the `secretRef` property to the name of the Container Apps secret.
+ 1. Set the value of the `triggerParameter` property to the value of the `secretTargetRef`'s `parameter` property.
+
+ 1. Set the value of the `secretRef` property to the name of the `secretTargetRef`'s `key` property.
:::code language="json" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-rule-1.json" highlight="10,11,12,13,32,33,34,35":::
container-apps Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md
Previously updated : 06/16/2022 Last updated : 07/24/2024 # Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
View your existing service connections using the Azure portal or the CLI.
### [Portal](#tab/azure-portal)
-1. In **Service Connector**, select **Refresh** and you'll see a Container Apps connection displayed.
+1. In **Service Connector**, select **Refresh** and you see a Container Apps connection displayed.
1. Select **>** to expand the list. You can see the environment variables required by your application code.
View your existing service connections using the Azure portal or the CLI.
### [Azure CLI](#tab/azure-cli)
-Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Provide the following information:
+Use the Azure CLI command `az containerapp connection list` to list all your container app connections. Provide the following information:
- **Source compute service resource group name**: the resource group name of the container app. - **Container app name**: the name of your container app.
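For example, a call might look like the following sketch, where the resource group and container app names are placeholders:

```azurecli
az containerapp connection list \
    --resource-group my-resource-group \
    --name my-container-app \
    --output table
```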
container-apps Sessions Code Interpreter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-code-interpreter.md
After you create a session pool, your application can interact with sessions in
### Session identifiers
+> [!IMPORTANT]
+> The session identifier is sensitive information that requires you to use a secure process to manage its value. Part of this process requires that your application ensures each user or tenant only has access to their own sessions.
+> Failure to secure access to sessions may result in misuse or unauthorized access to data stored in your users' sessions. For more information, see [Session identifiers](sessions.md#session-identifiers).
+ When you interact with sessions in a pool, you use a session identifier to reference each session. A session identifier is a string that you define that is unique within the session pool. If you're building a web application, you can use the user's ID. If you're building a chatbot, you can use the conversation ID. If there's a running session with the identifier, the session is reused. If there's no running session with the identifier, a new session is automatically created.
container-apps Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions.md
Azure Container Apps dynamic sessions is currently in preview. The following lim
|--||| | East Asia | Γ£ö | Γ£ö | | East US | Γ£ö | Γ£ö |
- | West US 2 | Γ£ö | Γ£ö |
+ | Germany West Central | Γ£ö | Γ£ö |
+ | Italy North | Γ£ö | Γ£ö |
+ | Poland Central | Γ£ö | Γ£ö |
| North Central US | Γ£ö | - |
- | North Europe | Γ£ö | - |
+ | North Europe | Γ£ö | Γ£ö |
+ | West US 2 | Γ£ö | Γ£ö |
* Logging isn't supported. Your application can log requests to the session pool management API and its responses.
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
When you choose each level of your hierarchical partition key, it's important to
- **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values.
- - Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and only have 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be colloacted on the same physical partition to avoid full-fanout queries.
+ - Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first level key is on `TenantId` and you have only 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be collocated on the same physical partition to avoid full-fanout queries.
- While this may be okay for workloads where we do a one-time ingest of all our tenants' data and the following operations are primarily read-heavy afterwards, this can be unideal for workloads where your business requirements involve ingestion of data within a specific time. For example, if you have strict business requirements to avoid latencies, the maximum throughput your workload can theoretically achieve to ingest data is number of physical partitions * 10k. If your top-level key has low cardinality, your number of physical partitions will likely be 1, unless there is sufficient data for the level 1 key for it to be spread across multiple partitions after splits which can take between 4-6 hours to complete. - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions.
cosmos-db Scalability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/scalability-overview.md
+
+ Title: Scalability overview
+
+description: Cost and performance advantages of scalability for Azure Cosmos DB for MongoDB vCore
+++++ Last updated : 07/22/2024++
+# Scalability in Azure Cosmos DB for MongoDB (vCore)
+
+The vCore based service for Azure Cosmos DB for MongoDB offers the ability to scale clusters both vertically and horizontally. While the Compute cluster tier and Storage disk functionally depend on each other, the scalability and cost of compute and storage are decoupled.
+
+## Vertical scaling
+Vertical scaling offers the following benefits:
+- Application teams may not always have a clear path to logically shard their data. Moreover, logical sharding is defined per collection. In a dataset with several unsharded collections, data modeling to partition the data can quickly become tedious. Simply scaling up the cluster can circumvent the need for logical sharding while meeting the growing storage and compute needs of the application.
+- Vertical scaling does not require data rebalancing. The number of physical shards remains the same and only the capacity of the cluster is increased with no impact to the application.
+- Scaling up and down are zero down-time operations with no disruptions to the service. No application changes are needed and steady state operations can continue unperturbed.
+- Compute and Storage resources can also be scaled down during known time windows of low activity. Once again, scaling down avoids the need to rebalance data across fewer physical shards and is a zero down-time operation with no disruption to the service. Here too, no application changes are needed after scaling down the cluster.
+- Most importantly, Compute and Storage can be scaled independently. If more cores and memory are needed, the disk SKU can be left as is and the cluster tier can be scaled up. Equally, if more storage and IOPS are needed, the cluster tier can be left as is and the Storage SKU can be scaled up independently. If needed, both Compute and Storage can be scaled independently to optimize for each component's requirements individually, without either component's elasticity requirements affecting the other.
++
+## Horizontal scaling
+Eventually, the application grows to a point where scaling vertically is not sufficient. Workload requirements can grow beyond the capacity of the largest cluster tier and eventually more shards are needed. Horizontal scaling in the vCore based offering for Azure Cosmos DB for MongoDB offers the following benefits:
+- Logically sharded datasets do not require user intervention to balance data across the underlying physical shards. The service automatically maps logical shards to physical shards. When nodes are added or removed, data is automatically rebalanced across the database under the covers.
+- Requests are automatically routed to the relevant physical shard that owns the hash range for the data being queried.
+- Geo-distributed clusters have a homogeneous multi-node configuration. Thus logical to physical shard mappings are consistent across the primary and replica regions of a cluster.
++
+## Compute and storage scaling
+Compute and memory resources influence read operations in the vCore based service for Azure Cosmos DB for MongoDB more than disk IOPS.
+- Read operations first consult the cache in the compute layer and fall back to the disk when data can't be retrieved from the cache. For workloads with a higher rate of read operations per second, scaling up the cluster tier to get more CPU and memory resources leads to higher throughput.
+- In addition to read throughput, workloads with a high volume of data per read operation also benefit from scaling the compute resources of the cluster. For instance, cluster tiers with more memory facilitate larger payload sizes per document and a larger number of smaller documents per response.
+
+Disk IOPS influences write operations in the vCore based service for Azure Cosmos DB for MongoDB more than the CPU and memory capacities of the compute resources.
+- Write operations always persist data to disk (in addition to persisting data in memory to optimize reads). Larger disks with more IOPS provide higher write throughput, particularly when running at scale.
+- The service supports up to 32 TB disks per shard, with more IOPS per shard to benefit write heavy workloads, particularly when running at scale.
++
+## Storage heavy workloads and large disks
+### No minimum storage requirements per cluster tier
+As mentioned earlier, storage and compute resources are decoupled for billing and provisioning. While they function as a cohesive unit, they can be scaled independently. The M30 cluster tier can have 32 TB disks provisioned. Similarly, the M200 cluster tier can have 32 GB disks provisioned to optimize for both storage and compute costs.
+
+### Lower TCO with large disks (32 TB and beyond)
+Typically, NoSQL databases with a vCore based model limit the storage per physical shard to 4 TB. The vCore based service for Azure Cosmos DB for MongoDB provides up to 8x that capacity with 32 TB disks. For storage heavy workloads, a 4 TB storage capacity per physical shard requires a massive fleet of compute resources just to sustain the storage requirements of the workload. Compute is more expensive than storage, and overprovisioning compute due to capacity limits in a service can inflate costs rapidly.
+
+Let's consider a storage heavy workload with 200 TB of data.
+
+| Storage size per shard | Min shards needed to sustain 200 TB |
+||-|
+| 4 TB | 50 |
| 32 TB | 7 |
+
+Compute requirements drop sharply with larger disks. While more than the minimum number of physical shards might be needed to sustain the throughput requirements of the workload, even doubling or tripling the number of shards is more cost effective than a 50-shard cluster with smaller disks.
+
+### Skip storage tiering with large disks
+An immediate response to compute costs in storage heavy scenarios is to "tier" the data. Data in the transactional database is limited to the most frequently accessed "hot" data while the larger volume of "cold" data is detached to a cold store. This approach adds operational complexity. Performance is also unpredictable and dependent upon the data tier that is accessed. Furthermore, the availability of the entire system is dependent on the resiliency of both the hot and cold data stores combined. With large disks in the vCore service, there's no need for tiered storage because the cost of storage heavy workloads is minimized.
+
+## Next steps
+- [Learn how to scale Azure Cosmos DB for MongoDB vCore cluster](./how-to-scale-cluster.md)
+- [Check out indexing best practices](./how-to-create-indexes.md)
+
+> [!div class="nextstepaction"]
+> [Migration options for Azure Cosmos DB for MongoDB vCore](migration-options.md)
+
defender-for-cloud Container Image Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/container-image-mapping.md
The following is an example of an advanced query that utilizes container image m
## Map your container image from GitHub workflows to the container registry
-1. Add the container image mapping tool to your MSDO workflow:
+1. Ensure you have onboarded a [GitHub connector](quickstart-onboard-github.md) to Defender for Cloud.
+
+1. Run the following MSDO workflow:
```yml name: Build and Map Container Image
jobs:
- uses: actions/setup-python@v4 with: python-version: '3.8'
- # Set Authentication to Container Registry of Choice
+ # Set Authentication to Container Registry of Choice.
+ # The example below is for Azure Container Registry. Amazon Elastic Container Registry and Google Artifact Registry are also supported.
- name: Azure Container Registry Login uses: Azure/docker-login@v1 with:
jobs:
- name: Run Microsoft Security DevOps Analysis uses: microsoft/security-devops-action@latest id: msdo
- with:
- include-tools: container-mapping
``` After building a container image in a GitHub workflow and pushing it to a registry, see the mapping by using the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
For example:
:::image type="content" source="media/device-inventory/azure-device-inventory.png" alt-text="Screenshot of the Defender for IoT Device inventory page in the Azure portal." lightbox="media/device-inventory/azure-device-inventory.png"::: - ## Supported devices Defender for IoT's device inventory supports the following device classes:
Defender for IoT device inventory is available in the following locations:
| **Microsoft Defender XDR** | Enterprise IoT devices detected by Microsoft Defender for Endpoint agents | Correlate devices across Microsoft Defender XDR in purpose-built alerts, vulnerabilities, and recommendations. | |**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br><br>- View related events on the **Event timeline** | |**An on-premises management console** | Devices detected across all connected OT sensors | Enhance device data by importing data manually or via script |
-|
For more information, see:
The following table lists the columns available in the Defender for IoT device i
> [!NOTE] > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-|Name |Description
+|Name |Description |
||| |**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value might need to change as the device security changes. | |**Business Function** | Editable. Describes the device's business function. |
defender-for-iot Update Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/update-device-inventory.md
# Verify and update your detected device inventory
-This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to review your device inventory and enhance security monitoring with fine-tuned device details.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for Operational Technology (OT) monitoring with Microsoft Defender for IoT, and describes how to review your device inventory and enhance security monitoring with fine-tuned device details.
:::image type="content" source="../media/deployment-paths/progress-fine-tuning-ot-monitoring.png" alt-text="Diagram of a progress bar with Fine-tune OT monitoring highlighted." border="false" lightbox="../media/deployment-paths/progress-fine-tuning-ot-monitoring.png":::
Before performing the procedures in this article, make sure that you have:
This step is performed by your deployment teams.
-## View your device inventory on the Azure portal
+## View the device inventory on your OT sensor
1. Sign into your OT sensor and select the **Device inventory** page.
-1. Select **Edit Columns** to view additional information in the grid so that you can review the data detected for each device.
+1. Select **Edit Columns** to make changes to the grid layout and display more data fields for reviewing the data detected for each device.
- We especially recommend reviewing data for the **Name**, **Class**, **Type**, and **Subtype**, **Authorization**, **Scanner device**, and **Programming device** columns.
+ We especially recommend reviewing data for the **Name**, **Type**, **Authorization**, **Scanner device**, and **Programming device** columns.
-1. Understand the devices that the OT sensor's detected, and identify any sensors where you'll need to identify device properties.
+1. Review the devices listed in the device inventory, and identify the devices whose properties need to be edited.
## Edit device properties per device
For each device where you need to edit device properties:
## Merge duplicate devices
-As you review the devices detected on your device inventory, note whether multiple entries have been detected for the same device on your network.
+As you review the devices detected on your device inventory, note whether multiple entries were detected for the same device on your network.
For example, this might occur when you have a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
The devices and all their properties are merged in the device inventory. For exa
## Enhance device data (optional)
-You may want to increase device visibility and enhance device data with more details than the default data detected.
+You might want to increase device visibility and enhance device data with more details than the default data detected.
- To increase device visibility to Windows-based devices, use the Defender for IoT [Windows Management Instrumentation (WMI) tool](../detect-windows-endpoints-script.md).
dns Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/cli-samples.md
- Title: Azure CLI samples for DNS - Azure DNS
-description: With this sample, use Azure CLI to create DNS zones and records in Azure DNS.
---- Previously updated : 11/30/2023----
-# Azure CLI examples for Azure DNS
-
-The following table includes links to Azure CLI examples for Azure DNS.
-
-| Article | Description |
-|-|-|
-| [Create a DNS zone and record](./scripts/dns-cli-create-dns-zone-record.md) | Creates a DNS zone and record for a domain name. |
-| | |
--
dns Dns Cli Create Dns Zone Record https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/dns-cli-create-dns-zone-record.md
- Title: Create a DNS zone and record for a domain name - Azure CLI - Azure DNS
-description: This Azure CLI script example shows how to create a DNS zone and record for a domain name
---- Previously updated : 11/30/2023----
-# Azure CLI script example: Create a DNS zone and record
-
-This Azure CLI script example creates a DNS zone and record for a domain name.
---
-## Sample script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/dns/create-dns-zone-and-record/create-dns-zone-and-record.sh "Create DNS zone and record")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, DNS zone, and all related resources.
-
-```azurecli
-az group delete -n myResourceGroup
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az network dns zone create](/cli/azure/network/dns/zone#az-network-dns-zone-create) | Creates an Azure DNS zone. |
-| [az network dns record-set a add-record](/cli/azure/network/dns/record-set) | Adds an *A* record to a DNS zone. |
-| [az network dns record-set list](/cli/azure/network/dns/record-set) | List all *A* record sets in a DNS zone. |
-| [az group delete](/cli/azure/vm/extension#az-vm-extension-set) | Deletes a resource group including all nested resources. |
-
-## Next steps
-
-For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
- Title: Find unhealthy DNS records in Azure DNS - PowerShell script sample
-description: In this article, learn how to use an Azure PowerShell script to find unhealthy DNS records.
-- Previously updated : 10/04/2022----
-# Find unhealthy DNS records in Azure DNS - PowerShell script sample
-
-The following Azure PowerShell script finds unhealthy DNS records in Azure DNS public zones.
--
-```azurepowershell-interactive
-<#
- 1. Install the prerequisite Az PowerShell modules (/powershell/azure/install-az-ps)
- 2. Sign in to your Azure account using Login-AzAccount or Connect-AzAccount.
- 3. From an elevated PowerShell prompt, navigate to the folder where the script is saved and run the following command:
-    .\Get-AzDNSUnhealthyRecords.ps1 -SubscriptionId <subscription id> -ZoneName <zonename>
- Replace subscription id with the subscription id of interest.
- Replace ZoneName with the actual zone name.
-#>
-param(
- # subscription ID to fetch DNS records from
- [String]$SubscriptionId = "All",
-
- #filtering zone name
- [String]$ZoneName = "All"
-)
-
-if ($SubscriptionId -eq "All") {
- Write-Host -ForegroundColor Yellow "No subscription ID passed; will process all subscriptions"
-}
-
-if ($ZoneName -eq "All") {
- Write-Host -ForegroundColor Yellow "No zone name passed; will process all zones in the subscription"
-}
-
-$ErrorActionPreference = "Stop"
-
-$AZModules = @('Az.Accounts', 'Az.Dns')
-$AzLibrariesLoadStart = Get-Date
-$progressItr = 1;
-$ProgessActivity = "Loading required Modules";
-$StoreWarningPreference = $WarningPreference
-$WarningPreference = 'SilentlyContinue'
-Foreach ($module in $AZModules) {
- $progressValue = $progressItr / $AZModules.Length
- Write-Progress -Activity $ProgessActivity -Status "$module $($progressValue.ToString('P')) Complete:" -PercentComplete ($progressValue * 100)
-
- If (Get-Module -Name $module) {
- continue
- }
- elseif (Get-Module -ListAvailable -Name $module) {
- Import-Module -name $module -Scope Local -Force
- }
- else {
- Install-module -name $module -AllowClobber -Force -Scope CurrentUser
- Import-Module -name $module -Scope Local -Force
- }
-
- $progressItr = $progressItr + 1;
- If (!$(Get-Module -Name $module)) {
- Write-Error "Could not load dependent module: $module"
- throw
- }
-}
-$WarningPreference = $StoreWarningPreference
-Write-Progress -Activity $ProgessActivity -Completed
-
-$context = Get-AzAccessToken;
-if ($context.Token -eq $null) {
- Write-host -ForegroundColor Yellow "Please sign in to your Azure Account using Login-AzAccount or Connect-AzAccount before running the script."
- exit
-}
-$subscriptions = Get-AzSubscription
-
-if ($SubscriptionId -ne "All") {
- $subscriptions = $subscriptions | Where-Object { $_.Id -eq $SubscriptionId }
- if ($subscriptions.Count -eq 0) {
- Write-host -ForegroundColor Yellow "Provided subscription ID not found; exiting."
- exit
- }
-}
-
-$scount = $subscriptions | Measure-Object
-Write-Host "Subscriptions found $($scount.Count)"
-if ($scount.Count -lt 1) {
- exit
-}
-$InvalidItems = @()
-$TotalRecCount = 0;
-$ProgessActivity = "Processing Subscriptions";
-$progressItr = 1;
-$subscriptions | ForEach-Object {
- $progressValue = $progressItr / $scount.Count
-
- Select-AzSubscription -Subscription $_ | Out-Null
- Write-Progress -Activity $ProgessActivity -Status "current subscription $_ $($progressValue.ToString('P')) Complete:" -PercentComplete ($progressValue * 100)
- $progressItr = $progressItr + 1;
- $subscription = $_
- try {
- $dnsZones = Get-AzDnsZone -ErrorAction Continue
- }
- catch {
- Write-Host "Error retrieving DNS Zones for subscription $_"
- return;
- }
-
- if ($ZoneName -ne "All") {
- $dnsZones = $dnsZones | Where-Object { $_.Name -eq $ZoneName }
- if ($dnsZones.Count -eq 0) {
- Write-host -ForegroundColor Yellow "Provided ZoneName $ZoneName not found in Subscription $_."
- return;
- }
- }
-
- $dnsZones | ForEach-Object {
- $allrecs = Get-AzDnsRecordSet -Zone $_
- $sZoneName = $_.Name
- $nsrecords = $allrecs | Where-Object { $_.RecordType -eq "NS" }
- $records = $allrecs | Where-Object { ($_.RecordType -ne 'NS' ) -or ($_.Name -ne '@' ) }
- $records | ForEach-Object {
- $rec = $_
- $Invalid = $false
- $endsWith = "*$($rec.Name)"
- $nsrecords | ForEach-Object { if ($endsWith -like "*.$($_.Name)") { $Invalid = $true } }
- $TotalRecCount++
- if ($Invalid) {
- Write-Host -ForegroundColor Yellow "$($rec.Name) recordType $($rec.RecordType) zoneName $sZoneName subscription $subscription"
- $hash = @{
- Name = $rec.Name
- RecordType = $rec.RecordType
- ZoneName = $sZoneName
- subscriptionId = $subscription
- }
- $item = New-Object PSObject -Property $hash
- $InvalidItems += $item
- }
- else {
- # Write-Host -ForegroundColor Green "$($rec.Name) recordType $($rec.RecordType) zoneName $ZoneName subscription $subscription "
- }
- }
- }
-}
-Write-Progress -Activity $ProgessActivity -Completed
-
-Write-Host "Total records processed $TotalRecCount"
-$invalidMeasure = $InvalidItems | Measure-Object
-Write-Host "Invalid Count $($invalidMeasure.Count)"
-
-Write-Host "Invalid Records "
-Write-Host "==============="
-
-$InvalidItems | Format-Table
-
-```
-
-## Script explanation
-
-This script uses the following commands to retrieve the DNS zones and record sets that it evaluates. Each item in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [Get-AzDnsZone](/powershell/module/az.dns/get-azdnszone) | Gets an Azure public DNS zone. |
-| [Get-AzDnsRecordSet](/powershell/module/az.dns/get-azdnsrecordset) | Gets a DNS record set. |
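If you only want to spot-check a single zone rather than run the full script, you can pull the raw record sets and look for child NS delegations yourself. The following is a hedged, manual equivalent using the Azure CLI; the resource group and zone name are examples.

```azurecli
# List every record set in the zone. Record sets whose names fall under a
# child NS delegation are the ones the script reports as unhealthy.
az network dns record-set list \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --output table
```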
-
-## Next steps
-
-For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
MQTT is a publish-subscribe messaging transport protocol that was designed for c
- **Clean start and session expiry** enable your clients to optimize the reliability and security of the session by preserving the client's subscription information and messages for a configurable time interval. - **Negative acknowledgments** allow your clients to efficiently react to different error codes. - **Server-sent disconnect packets** allow your clients to efficiently handle disconnects.
- - **Last Will and Testament (LWT)** notifies your MQTT clients of the abrupt disconnection of other MQTT clients.
- MQTT broker will add more MQTT v5 features over time to align more closely with the MQTT specifications. The following MQTT v5 features aren't currently supported: Will message, Retain flag, Message ordering, and QoS 2. - MQTT v3.1.1 features:
expressroute About Public Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-public-peering.md
- Title: Create and manage Azure ExpressRoute public peering
-description: Learn about and manage Azure public peering
----- Previously updated : 06/30/2023--
-# Create and manage ExpressRoute public peering
-
-> [!div class="op_single_selector"]
-> * [Article - Public peering](about-public-peering.md)
-> * [Video - Public peering](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-set-up-azure-public-peering-for-your-expressroute-circuit)
-> * [Article - Microsoft peering](expressroute-circuit-peerings.md#microsoftpeering)
->
-
-This article helps you create and manage public peering routing configuration for an ExpressRoute circuit. You can also check the status, update, or delete and deprovision peerings. This article applies to Resource Manager circuits that were created before public peering was deprecated. If you have a previously existing circuit (created prior to public peering being deprecated), you can manage/configure public peering using [Azure PowerShell](#powershell), [Azure CLI](#cli), and the [Azure portal](#portal).
-
->[!NOTE]
->Public peering is deprecated. You cannot create public peering on new ExpressRoute circuits. If you have a new ExpressRoute circuit, instead, use [Microsoft peering](expressroute-circuit-peerings.md#microsoftpeering) for your Azure services.
->
-
-## Connectivity
-
-Connectivity is always initiated from your WAN to Microsoft Azure services. Microsoft Azure services can't initiate connections into your network through this routing domain. If your ExpressRoute circuit is enabled for Azure public peering, you can access the [public IP ranges used in Azure](../virtual-network/ip-services/public-ip-addresses.md#public-ip-addresses) over the circuit.
-
-Once public peering is enabled, you can connect to most Azure services. You can't selectively pick the services for which routes are advertised.
-
-* Services such as Azure Storage, SQL Databases, and Websites are offered on public IP addresses.
-* Through the public peering routing domain, you can privately connect to services hosted on public IP addresses, including VIPs of your cloud services.
-* You can connect the public peering domain to your DMZ and connect to all Azure services on their public IP addresses from your WAN without having to connect through the internet.
-
-## <a name="services"></a>Services
-
-This section shows the services available over public peering. Because public peering is deprecated, there's no plan to add new services to it. If you use public peering and the service you want to use is supported only over Microsoft peering, you must switch to Microsoft peering. See [Microsoft peering](expressroute-faqs.md#microsoft-peering) for a list of supported services.
-
-**Supported:**
-
-* Power BI
-* Most of the Azure services are supported. Check directly with the service that you want to use to verify support.
-
-**Not supported:**
- * CDN
- * Azure Front Door
- * Multi-factor Authentication Server (legacy)
- * Traffic Manager
-
-To validate availability for a specific service, check the documentation for that service to see whether it publishes a reserved IP range. You can then look up the IP ranges of the target service and compare them with the ranges listed in the [Azure IP Ranges and Service Tags – Public Cloud XML file](https://www.microsoft.com/download/details.aspx?id=56519). Alternatively, you can open a support ticket for the service in question for clarification.
-
-## <a name="compare"></a>Peering comparison
--
-> [!NOTE]
-> Azure public peering has 1 NAT IP address associated to each BGP session. For greater than 2 NAT IP addresses, move to Microsoft peering. Microsoft peering allows you to configure your own NAT allocations, as well as use route filters for selective prefix advertisements. For more information, see [Move to Microsoft peering](./how-to-move-peering.md).
->
-
-## Custom route filters
-
-You can define custom route filters within your network to consume only the routes you need. Refer to the [Routing](expressroute-routing.md) page for detailed information on routing configuration.
-
-## <a name="powershell"></a>Azure PowerShell steps
---
-Because public peering is deprecated, you can't configure public peering on a new ExpressRoute circuit.
-
-1. Verify that you have an ExpressRoute circuit that is provisioned and also enabled. Use the following example:
-
- ```azurepowershell-interactive
- Get-AzExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup"
- ```
-
- The response is similar to the following example:
-
- ```
- Name : ExpressRouteARMCircuit
- ResourceGroupName : ExpressRouteResourceGroup
- Location : westus
- Id : /subscriptions/***************************/resourceGroups/ExpressRouteResourceGroup/providers/Microsoft.Network/expressRouteCircuits/ExpressRouteARMCircuit
- Etag : W/"################################"
- ProvisioningState : Succeeded
- Sku : {
- "Name": "Standard_MeteredData",
- "Tier": "Standard",
- "Family": "MeteredData"
- }
- CircuitProvisioningState : Enabled
- ServiceProviderProvisioningState : Provisioned
- ServiceProviderNotes :
- ServiceProviderProperties : {
- "ServiceProviderName": "Equinix",
- "PeeringLocation": "Silicon Valley",
- "BandwidthInMbps": 200
- }
- ServiceKey : **************************************
- Peerings : []
- ```
-2. Configure Azure public peering for the circuit. Make sure that you have the following information before you proceed further.
-
- * A /30 subnet for the primary link. This IP must be a valid public IPv4 prefix.
- * A /30 subnet for the secondary link. This IP must be a valid public IPv4 prefix.
- * A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID.
- * AS number for peering. You can use both 2-byte and 4-byte AS numbers.
- * Optional:
- * An MD5 hash if you choose to use one.
-
- Run the following example to configure Azure public peering for your circuit
-
- ```azurepowershell-interactive
- Add-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -ExpressRouteCircuit $ckt -PeeringType AzurePublicPeering -PeerASN 100 -PrimaryPeerAddressPrefix "12.0.0.0/30" -SecondaryPeerAddressPrefix "12.0.0.4/30" -VlanId 100
-
- Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
- ```
-
- If you choose to use an MD5 hash, use the following example:
-
- ```azurepowershell-interactive
- Add-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -ExpressRouteCircuit $ckt -PeeringType AzurePublicPeering -PeerASN 100 -PrimaryPeerAddressPrefix "12.0.0.0/30" -SecondaryPeerAddressPrefix "12.0.0.4/30" -VlanId 100 -SharedKey "A1B2C3D4"
-
- Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
- ```
-
- > [!IMPORTANT]
- > Ensure that you specify your AS number as peering ASN, not customer ASN.
- >
- >
-
-### <a name="getpublic"></a>To get Azure public peering details
-
-You can get configuration details using the following cmdlet:
-
-```azurepowershell-interactive
- $ckt = Get-AzExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup"
-
- Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -Circuit $ckt
- ```
-
-### <a name="updatepublic"></a>To update Azure public peering configuration
-
-You can update any part of the configuration using the following example. In this example, the VLAN ID of the circuit is being updated from 200 to 600.
-
-```azurepowershell-interactive
-Set-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -ExpressRouteCircuit $ckt -PeeringType AzurePublicPeering -PeerASN 100 -PrimaryPeerAddressPrefix "123.0.0.0/30" -SecondaryPeerAddressPrefix "123.0.0.4/30" -VlanId 600
-
-Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
-```
-
-### <a name="deletepublic"></a>To delete Azure public peering
-
-You can remove your peering configuration by running the following example:
-
-```azurepowershell-interactive
-Remove-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering" -ExpressRouteCircuit $ckt
-Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
-```
-
-## <a name="cli"></a>Azure CLI steps
---
-1. Check the ExpressRoute circuit to ensure it's provisioned and also enabled. Use the following example:
-
- ```azurecli-interactive
- az network express-route list
- ```
-
- The response is similar to the following example:
-
- ```output
- "allowClassicOperations": false,
- "authorizations": [],
- "circuitProvisioningState": "Enabled",
- "etag": "W/\"1262c492-ffef-4a63-95a8-a6002736b8c4\"",
- "gatewayManagerEtag": null,
- "id": "/subscriptions/81ab786c-56eb-4a4d-bb5f-f60329772466/resourceGroups/ExpressRouteResourceGroup/providers/Microsoft.Network/expressRouteCircuits/MyCircuit",
- "location": "westus",
- "name": "MyCircuit",
- "peerings": [],
- "provisioningState": "Succeeded",
- "resourceGroup": "ExpressRouteResourceGroup",
- "serviceKey": "1d05cf70-1db5-419f-ad86-1ca62c3c125b",
- "serviceProviderNotes": null,
- "serviceProviderProperties": {
- "bandwidthInMbps": 200,
- "peeringLocation": "Silicon Valley",
- "serviceProviderName": "Equinix"
- },
- "serviceProviderProvisioningState": "Provisioned",
- "sku": {
- "family": "UnlimitedData",
- "name": "Standard_MeteredData",
- "tier": "Standard"
- },
- "tags": null,
- "type": "Microsoft.Network/expressRouteCircuits"
- ```
-
-2. Configure Azure public peering for the circuit. Make sure that you have the following information before you proceed further.
-
- * A /30 subnet for the primary link. This IP must be a valid public IPv4 prefix.
- * A /30 subnet for the secondary link. This IP must be a valid public IPv4 prefix.
- * A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID.
- * AS number for peering. You can use both 2-byte and 4-byte AS numbers.
- * **Optional -** An MD5 hash if you choose to use one.
-
- Run the following example to configure Azure public peering for your circuit:
-
- ```azurecli-interactive
- az network express-route peering create --circuit-name MyCircuit --peer-asn 100 --primary-peer-subnet 12.0.0.0/30 -g ExpressRouteResourceGroup --secondary-peer-subnet 12.0.0.4/30 --vlan-id 200 --peering-type AzurePublicPeering
- ```
-
- If you choose to use an MD5 hash, use the following example:
-
- ```azurecli-interactive
- az network express-route peering create --circuit-name MyCircuit --peer-asn 100 --primary-peer-subnet 12.0.0.0/30 -g ExpressRouteResourceGroup --secondary-peer-subnet 12.0.0.4/30 --vlan-id 200 --peering-type AzurePublicPeering --shared-key "A1B2C3D4"
- ```
-
- > [!IMPORTANT]
- > Ensure that you specify your AS number as peering ASN, not customer ASN.
-
-### <a name="getpublic"></a>To view Azure public peering details
-
-You can get configuration details using the following example:
-
-```azurecli
-az network express-route peering show -g ExpressRouteResourceGroup --circuit-name MyCircuit --name AzurePublicPeering
-```
-
-The output is similar to the following example:
-
-```output
-{
- "azureAsn": 12076,
- "etag": "W/\"2e97be83-a684-4f29-bf3c-96191e270666\"",
- "gatewayManagerEtag": "18",
- "id": "/subscriptions/9a0c2943-e0c2-4608-876c-e0ddffd1211b/resourceGroups/ExpressRouteResourceGroup/providers/Microsoft.Network/expressRouteCircuits/MyCircuit/peerings/AzurePublicPeering",
- "lastModifiedBy": "Customer",
- "microsoftPeeringConfig": null,
- "name": "AzurePublicPeering",
- "peerAsn": 7671,
- "peeringType": "AzurePublicPeering",
- "primaryAzurePort": "",
- "primaryPeerAddressPrefix": "",
- "provisioningState": "Succeeded",
- "resourceGroup": "ExpressRouteResourceGroup",
- "routeFilter": null,
- "secondaryAzurePort": "",
- "secondaryPeerAddressPrefix": "",
- "sharedKey": null,
- "state": "Enabled",
- "stats": null,
- "vlanId": 100
-}
-```
-
-### <a name="updatepublic"></a>To update Azure public peering configuration
-
-You can update any part of the configuration using the following example. In this example, the VLAN ID of the circuit is being updated from 200 to 600.
-
-```azurecli-interactive
-az network express-route peering update --vlan-id 600 -g ExpressRouteResourceGroup --circuit-name MyCircuit --name AzurePublicPeering
-```
-
-### <a name="deletepublic"></a>To delete Azure public peering
-
-You can remove your peering configuration by running the following example:
-
-```azurecli-interactive
-az network express-route peering delete -g ExpressRouteResourceGroup --circuit-name MyCircuit --name AzurePublicPeering
-```
-
-## <a name="portal"></a>Azure portal steps
-
-To configure peering, use the PowerShell or CLI steps contained in this article. To manage a peering, you can use the following sections. For reference, these steps look similar to managing a [Microsoft peering in the portal](expressroute-howto-routing-portal-resource-manager.md#msft).
-
-### <a name="get"></a>To view Azure public peering details
-
-View the properties of Azure public peering by selecting the peering in the portal.
-
-### <a name="update"></a>To update Azure public peering configuration
-
-Select the row for peering, then modify the peering properties.
-
-### <a name="delete"></a>To delete Azure public peering
-
-Remove your peering configuration by selecting the delete icon.
-
-## Next steps
-
-Next step, [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md).
-
-* For more information about ExpressRoute workflows, see [ExpressRoute workflows](expressroute-workflows.md).
-* For more information about circuit peering, see [ExpressRoute circuits and routing domains](expressroute-circuit-peerings.md).
-* For more information about working with virtual networks, see [Virtual network overview](../virtual-network/virtual-networks-overview.md).
expressroute How To Move Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-move-peering.md
- Title: 'Azure ExpressRoute: Move a public peering to Microsoft peering'
-description: This article shows you the steps to move your public peering to Microsoft peering on ExpressRoute.
---- Previously updated : 03/11/2024---
-# Move a public peering to Microsoft peering
-
-This article helps you move a public peering configuration to Microsoft peering with no downtime. ExpressRoute supports using Microsoft peering with route filters for Azure PaaS services, such as Azure storage and Azure SQL Database. You now need only one routing domain to access Microsoft PaaS and SaaS services. You can use route filters to selectively advertise the PaaS service prefixes for Azure regions you want to consume.
-
-> [!IMPORTANT]
-> Public peering for ExpressRoute is being retired on **March 31, 2024**. For more information, see [**retirement notice**](https://azure.microsoft.com/updates/retirement-notice-migrate-from-public-peering-by-march-31-2024/).
-
-Azure public peering has one NAT IP address associated with each BGP session. Microsoft peering allows you to configure your own NAT allocations and use route filters for selective prefix advertisements. Public peering is a unidirectional service: connectivity is always initiated from your WAN to Microsoft Azure services. Microsoft Azure services can't initiate connections into your network through this routing domain.
-
-## Peering comparison
-
-| Aspect | Public peering | Microsoft peering |
-| | | |
-| Number of NAT IP addresses | 1 (not scalable) | Per scale*** |
-| Call initiation direction | Unidirectional: on-premises to Microsoft | Bidirectional |
-| Prefix advertisement | Nonselectable | Advertisement of Microsoft prefixes controlled by route filters |
-| Support | No new public peering deployments. Public peering will be retired on March 31, 2024. | Fully supported |
-
-*** BYOIP: you can scale the number of NAT IP addresses assigned depending on your call volume. To get NAT IP addresses, work with your service provider.
-
-Once public peering is enabled, you can connect to all Azure services; you can't selectively pick the services for which routes are advertised. Microsoft peering, by contrast, provides bidirectional connectivity, where connections can be initiated from Microsoft Azure services as well as from your WAN. For more information about routing domains and peering, see [ExpressRoute circuits and routing domains](expressroute-circuit-peerings.md).
-
-## <a name="before"></a>Before you begin
-
-To connect to Microsoft peering, you need to set up and manage NAT. Your connectivity provider may set up and manage the NAT as a managed service. If you're planning to access the Azure PaaS and Azure SaaS services on Microsoft peering, it's important to size the NAT IP pool correctly. For more information about NAT for ExpressRoute, see the [NAT requirements for Microsoft peering](expressroute-nat.md#nat-requirements-for-microsoft-peering). When you connect to Microsoft through Azure ExpressRoute (Microsoft peering), you have multiple links to Microsoft. One link is your existing Internet connection, and the other is via ExpressRoute. Some traffic to Microsoft might go through the Internet but come back via ExpressRoute, or vice versa.
-
-![Bidirectional connectivity](./media/how-to-move-peering/bidirectional-connectivity.jpg)
-
-> [!Warning]
-> The NAT IP pool advertised to Microsoft must not be advertised to the Internet. This will break connectivity to other Microsoft services.
-
-Refer to [Asymmetric routing with multiple network paths](./expressroute-asymmetric-routing.md) for caveats of asymmetric routing before configuring Microsoft peering.
-
-* If you're using public peering and currently have IP Network rules for public IP addresses that are used to access [Azure Storage](../storage/common/storage-network-security.md) or [Azure SQL Database](/azure/azure-sql/database/vnet-service-endpoint-rule-overview), you need to make sure that the NAT IP pool configured with Microsoft peering gets included in the list of public IP addresses for the Azure storage account or the Azure SQL account.
-* Legacy Public peering makes use of Source Network Address Translation (SNAT) to a Microsoft-registered public IP, while Microsoft peering doesn't.
-* In order to move to Microsoft peering with no downtime, use the steps in this article in the order that they're presented.
-
-## <a name="create"></a>1. Create Microsoft peering
-
-If Microsoft peering hasn't been created, use any of the following articles to create Microsoft peering. If your connectivity provider offers managed layer 3 services, you can ask the connectivity provider to enable Microsoft peering for your circuit.
-
-If you manage layer 3, the following information is required before you can proceed:
-
-* A /30 subnet for the primary link. The prefix must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet, you assign the first usable IP address to your router; Microsoft uses the second usable IP for its router.<br>
-* A /30 subnet for the secondary link. The prefix must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet, you assign the first usable IP address to your router; Microsoft uses the second usable IP for its router.<br>
-* A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID. For both Primary and Secondary links you must use the same VLAN ID.<br>
-* AS number for peering. You can use both 2-byte and 4-byte AS numbers.<br>
-* Advertised prefixes: You must provide a list of all prefixes you plan to advertise over the BGP session. Only public IP address prefixes are accepted. If you plan to send a set of prefixes, you can send a comma-separated list. These prefixes must be registered to you in an RIR / IRR.<br>
-* Routing Registry Name: You can specify the RIR / IRR against which the AS number and prefixes are registered.
-
-* **Optional** - Customer ASN: If you're advertising prefixes that aren't registered to the peering AS number, you can specify the AS number to which they're registered.<br>
-* **Optional** - An MD5 hash if you choose to use one.
-
-Detailed instructions to enable Microsoft peering can be found in the following articles:
-
-* [Create Microsoft peering using Azure portal](expressroute-howto-routing-portal-resource-manager.md#msft)<br>
-* [Create Microsoft peering using Azure PowerShell](expressroute-howto-routing-arm.md#msft)<br>
-* [Create Microsoft peering using Azure CLI](howto-routing-cli.md#msft)
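If you manage layer 3 yourself and want a feel for what the configuration in those articles looks like from the command line, the following Azure CLI sketch creates Microsoft peering on an existing circuit. Every value shown (circuit name, ASN, VLAN ID, subnets, advertised prefix, routing registry) is a placeholder for your own registered values, not a value from this article.

```azurecli
az network express-route peering create \
  --resource-group ExpressRouteResourceGroup \
  --circuit-name MyCircuit \
  --peering-type MicrosoftPeering \
  --peer-asn 65010 \
  --vlan-id 300 \
  --primary-peer-subnet 203.0.113.0/30 \
  --secondary-peer-subnet 203.0.113.4/30 \
  --advertised-public-prefixes 203.0.113.128/27 \
  --routing-registry-name ARIN
```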
-
-## <a name="validate"></a>2. Validate Microsoft peering is enabled
-
-Verify that the Microsoft peering is enabled and the advertised public prefixes are in the configured state.
-
-* [Azure portal](expressroute-howto-routing-portal-resource-manager.md#getmsft)<br>
-* [Azure PowerShell](expressroute-howto-routing-arm.md#getmsft)<br>
-* [Azure CLI](howto-routing-cli.md#getmsft)
-
-## <a name="routefilter"></a>3. Configure and attach a route filter to the circuit
-
-By default, new Microsoft peerings don't advertise any prefixes until a route filter is attached to the circuit. When you create a route filter rule, you can specify the list of service communities for Azure regions that you want to consume for Azure PaaS services. This feature provides you with the flexibility to filter the routes as per your requirements, as shown in the following screenshot:
-
-![Merge public peering](./media/how-to-move-peering/routefilter.jpg)
-
-> [!NOTE]
-> Public peering advertises prefixes for all Azure regions by default. With Microsoft peering, you can select regions in the route filter associated with the peering to limit the number of routes advertised to your on-premises network. To get the same routing behavior as public peering, select all Azure regions and service prefixes.
-
-Configure route filters using any of the following articles:
-
-* [Configure route filters for Microsoft peering using Azure portal](how-to-routefilter-portal.md)<br>
-* [Configure route filters for Microsoft peering using Azure PowerShell](how-to-routefilter-powershell.md)<br>
-* [Configure route filters for Microsoft peering using Azure CLI](how-to-routefilter-cli.md)
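For reference, the workflow in those articles amounts to creating a route filter, adding a rule that allows the BGP service communities you want, and attaching the filter to the Microsoft peering. The following Azure CLI sketch is hedged: the resource names are examples, the community value is only a placeholder (look up the real values in the ExpressRoute routing requirements), and the `--route-filter` parameter on the peering update is assumed to accept the filter name or resource ID.

```azurecli
# Create an empty route filter
az network route-filter create --resource-group ExpressRouteResourceGroup --name MyRouteFilter

# Allow the BGP community for the region or service you want to consume
az network route-filter rule create \
  --resource-group ExpressRouteResourceGroup \
  --filter-name MyRouteFilter \
  --name AllowSelectedRegion \
  --access Allow \
  --communities 12076:51004

# Attach the route filter to the circuit's Microsoft peering
az network express-route peering update \
  --resource-group ExpressRouteResourceGroup \
  --circuit-name MyCircuit \
  --name MicrosoftPeering \
  --route-filter MyRouteFilter
```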
-
-## <a name="delete"></a>4. Delete the public peering
-
-After verifying Microsoft peering is configured and the prefixes you want to use are correctly advertised through Microsoft peering, you can then delete the public peering. To delete public peering, you can use Azure PowerShell or Azure CLI. For more information, see the following articles:
-
-* [Delete Azure public peering using Azure PowerShell](about-public-peering.md#powershell)
-* [Delete Azure public peering using CLI](about-public-peering.md#cli)
-
-## <a name="view"></a>5. View peerings
-
-You can see a list of all ExpressRoute circuits and peerings in the Azure portal. For more information, see [View Microsoft peering details](expressroute-howto-routing-portal-resource-manager.md#getmsft).
-
-## Next steps
-
-For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
expressroute How To Npm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-npm.md
- Title: 'Azure ExpressRoute: Configure NPM for circuits'
-description: Configure cloud-based network monitoring (NPM) for Azure ExpressRoute circuits. This covers monitoring over ExpressRoute private peering and Microsoft peering.
---- Previously updated : 06/30/2023---
-# Configure Network Performance Monitor for ExpressRoute (deprecated)
-
-This article helps you configure a Network Performance Monitor extension to monitor ExpressRoute. Network Performance Monitor (NPM) is a cloud-based network monitoring solution that monitors connectivity between Azure cloud deployments and on-premises locations (branch offices, and so on). NPM is part of Azure Monitor logs. NPM offers an extension for ExpressRoute that lets you monitor network performance over ExpressRoute circuits that are configured to use private peering or Microsoft peering. When you configure NPM for ExpressRoute, you can detect network issues so that you can identify and eliminate them. This service is also available for Azure Government Cloud.
-
-> [!IMPORTANT]
-> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](../network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](../network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
--
-You can:
-
-* Monitor loss and latency across various VNets and set alerts
-
-* Monitor all paths (including redundant paths) on the network
-
-* Troubleshoot transient and point-in-time network issues that are difficult to replicate
-
-* Help determine a specific segment on the network that is responsible for degraded performance
-
-* Get throughput per virtual network (If you have agents installed in each VNet)
-
-* See the ExpressRoute system state from a previous point in time
-
-## <a name="workflow"></a>Workflow
-
-Monitoring agents are installed on multiple servers, both on-premises and in Azure. The agents communicate with each other, but they don't send data; they send TCP handshake packets. This communication between the agents allows Azure to map the network topology and the paths the traffic can take.
-
-1. Create an NPM Workspace. This workspace is the same as a Log Analytics workspace.
-2. Install and configure software agents. (If you only want to monitor over Microsoft Peering, you don't need to install and configure software agents.):
- * Install monitoring agents on the on-premises servers and the Azure VMs (for private peering).
- * Configure settings on the monitoring agent servers to allow the monitoring agents to communicate. (Open firewall ports, etc.)
-3. Configure network security group (NSG) rules to allow the monitoring agent installed on Azure VMs to communicate with on-premises monitoring agents.
-4. Set up monitoring: Auto-Discover and manage which networks are visible in NPM.
-
-If you're already using Network Performance Monitor to monitor other objects or services, and you already have Workspace in one of the supported regions, you can skip Step 1 and Step 2, and begin your configuration with Step 3.
-
-## <a name="configure"></a>Step 1: Create a Workspace
-
-Create a workspace in the subscription that has the VNets linked to the ExpressRoute circuit(s).
-
-1. In the [Azure portal](https://portal.azure.com), select the subscription that has the VNETs peered to your ExpressRoute circuit. Then, search the list of services in the **Marketplace** for 'Network Performance Monitor'. In the results, select **Network Performance Monitor** to open its page.
-
- >[!NOTE]
- >You can create a new workspace, or use an existing workspace. If you want to use an existing workspace, you must make sure that the workspace has been migrated to the new query language. [More information...](../azure-monitor/logs/log-query-overview.md)
- >
-
- ![portal](./media/how-to-npm/3.png)<br><br>
-2. At the bottom of the main **Network Performance Monitor** page, select **Create** to open **Network Performance Monitor - Create new solution** page. Select **Log Analytics Workspace - select a workspace** to open the Workspaces page. Select **+ Create New Workspace** to open the Workspace page.
-3. On the **Log Analytics workspace** page, select **Create New**, then configure the following settings:
-
- * Log Analytics Workspace - Type a name for your Workspace.
- * Subscription - If you have multiple subscriptions, choose the one you want to associate with the new Workspace.
- * Resource group - Create a resource group, or use an existing one.
- * Location - This location is used to specify the location of the storage account that is used for the agent connection logs.
- * Pricing tier - Select the pricing tier.
-
- >[!NOTE]
- >The ExpressRoute circuit can be anywhere in the world. It doesn't have to be in the same region as the Workspace.
- >
-
- ![workspace](./media/how-to-npm/4.png)<br><br>
-4. Select **OK** to save and deploy the settings template. Once the template validates, select **Create** to deploy the Workspace.
-5. After the Workspace has been deployed, navigate to the **NetworkMonitoring(name)** resource that you created. Validate the settings then select **Solution requires additional configuration**.
-
- ![additional configuration](./media/how-to-npm/5.png)
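If you prefer to script the workspace creation rather than click through the portal, a minimal Azure CLI sketch looks like the following; the resource group, workspace name, and region are examples only.

```azurecli
# Create a resource group for the workspace
az group create --name NPMResourceGroup --location eastus

# Create the Log Analytics workspace that NPM uses
az monitor log-analytics workspace create \
  --resource-group NPMResourceGroup \
  --workspace-name NPMWorkspace \
  --location eastus
```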
-
-## <a name="agents"></a>Step 2: Install and configure agents
-
-### <a name="download"></a>2.1: Download the agent setup file
-
-1. Go to the **Common Settings** tab of the **Network Performance Monitor Configuration** page for your resource. Select the agent that corresponds to your server's processor from the **Install Log Analytics Agents** section, and download the setup file.
-2. Next, copy the **Workspace ID** and **Primary Key** to Notepad.
-3. From the **Configure Log Analytics Agents for monitoring using TCP protocol** section, download the PowerShell Script. The PowerShell script helps you open the relevant firewall port for the TCP transactions.
-
- ![PowerShell script](./media/how-to-npm/7.png)
-
-### <a name="installagent"></a>2.2: Install a monitoring agent on each monitoring server (on each VNET that you want to monitor)
-
-We recommend that you install at least two agents on each side of the ExpressRoute connection for redundancy (for example, on-premises, Azure VNETs). The agent must be installed on a Windows Server (2008 SP1 or later). Monitoring ExpressRoute circuits using Windows Desktop OS and Linux OS isn't supported. Use the following steps to install agents:
-
- >[!NOTE]
- >Agents pushed by SCOM (includes [MMA](/previous-versions/system-center/system-center-2012-R2/dn465154(v=sc.12))) may not be able to consistently detect their location if they are hosted in Azure. We recommend that you do not use these agents in Azure VNETs to monitor ExpressRoute.
- >
-
-1. Run **Setup** to install the agent on each server that you want to use for monitoring ExpressRoute. The server you use for monitoring can either be a VM, or on-premises, and must have Internet access. You need to install at least one agent on-premises, and one agent on each network segment that you want to monitor in Azure.
-2. On the **Welcome** page, select **Next**.
-3. On the **License Terms** page, read the license, and then select **I Agree**.
-4. On the **Destination Folder** page, change or keep the default installation folder, and then select **Next**.
-5. On the **Agent Setup Options** page, you can choose to connect the agent to Azure Monitor logs or Operations Manager. Or, you can leave the choices blank if you want to configure the agent later. After making your selection(s), select **Next**.
-
- * If you chose to connect to **Azure Log Analytics**, paste the **Workspace ID** and **Workspace Key** (Primary Key) that you copied into Notepad in the previous section. Then, select **Next**.
-
- ![ID and Key](./media/how-to-npm/8.png)
- * If you chose to connect to **Operations Manager**, on the **Management Group Configuration** page, type the **Management Group Name**, **Management Server**, and the **Management Server Port**. Then, select **Next**.
-
- ![Operations Manager](./media/how-to-npm/9.png)
- * On the **Agent Action Account** page, choose either the **Local System** account, or **Domain or Local Computer Account**. Then, select **Next**.
-
- ![Account](./media/how-to-npm/10.png)
-6. On the **Ready to Install** page, review your choices, and then select **Install**.
-7. On the **Configuration completed successfully** page, select **Finish**.
-8. When complete, the Microsoft Monitoring Agent appears in the Control Panel. You can review your configuration there, and verify that the agent is connected to Azure Monitor logs. When connected, the agent displays a message stating: **The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service**.
-
-9. Repeat this procedure for each VNET that you need to be monitored.
-
-### <a name="proxy"></a>2.3: Configure proxy settings (optional)
-
-If you're using a web proxy to access the Internet, use the following steps to configure proxy settings for the Microsoft Monitoring Agent. Perform these steps for each server. If you have many servers that you need to configure, you might find it easier to use a script to automate this process. If so, see [To configure proxy settings for the Microsoft Monitoring Agent using a script](../azure-monitor/agents/agent-windows.md).
-
-To configure proxy settings for the Microsoft Monitoring Agent using the Control Panel:
-
-1. Open the **Control Panel**.
-2. Open **Microsoft Monitoring Agent**.
-3. Select the **Proxy Settings** tab.
-4. Select **Use a proxy server** and type the URL and port number, if one is needed. If your proxy server requires authentication, type the username and password to access the proxy server.
-
- ![proxy](./media/how-to-npm/11.png)
-
-### <a name="verifyagent"></a>2.4: Verify agent connectivity
-
-You can easily verify whether your agents are communicating.
-
-1. On a server with the monitoring agent, open the **Control Panel**.
-2. Open the **Microsoft Monitoring Agent**.
-3. Select the **Azure Log Analytics** tab.
-4. In the **Status** column, you should see that the agent connected successfully to Azure Monitor logs.
-
- ![status](./media/how-to-npm/12.png)
-
-### <a name="firewall"></a>2.5: Open the firewall ports on the monitoring agent servers
-
-To use the TCP protocol, you must open firewall ports to ensure that the monitoring agents can communicate.
-
-You can run a PowerShell script to create the registry keys that are required by Network Performance Monitor. This script also creates the Windows Firewall rules that allow the monitoring agents to create TCP connections with each other. The registry keys created by the script specify whether to write debug logs and the path for the log file. They also define the agent TCP port used for communication. The values for these keys are set automatically by the script. You shouldn't manually change these keys.
-
-Port 8084 is opened by default. You can use a custom port by providing the parameter 'portNumber' to the script. However, if you do so, you must specify the same port for all the servers on which you run the script.
-
->[!NOTE]
->The 'EnableRules' PowerShell script configures Windows Firewall rules only on the server where the script is run. If you have a network firewall, you should make sure that it allows traffic destined for the TCP port being used by Network Performance Monitor.
->
->
-
-On the agent servers, open a PowerShell window with administrative privileges. Run the [EnableRules](https://aka.ms/npmpowershellscript) PowerShell script (which you downloaded earlier). Don't use any parameters.
-
-![PowerShell_Script](./media/how-to-npm/script.png)
-
-## <a name="opennsg"></a>Step 3: Configure network security group rules
-
-To monitor agent servers that are in Azure, you must configure network security group (NSG) rules to allow TCP traffic on a port used by NPM for synthetic transactions. The default port is 8084, allowing a monitoring agent installed on an Azure VM to communicate with an on-premises monitoring agent.
-
-For more information about NSG, see [Network Security Groups](../virtual-network/tutorial-filter-network-traffic.md).
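As a sketch, an inbound NSG rule that allows the default port 8084 could be created with the Azure CLI as follows; the resource group, NSG name, rule name, and priority are example values rather than values required by NPM.

```azurecli
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MySpokeNsg \
  --name Allow-NPM-TCP-8084 \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 8084
```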
-
->[!NOTE]
->Make sure that you have installed the agents (both the on-premises server agent and the Azure server agent), and have run the PowerShell script before proceeding with this step.
->
-
-## <a name="setupmonitor"></a>Step 4: Discover peering connections
-
-1. Navigate to the Network Performance Monitor overview tile by going to the **All Resources** page, then select the allowlisted NPM Workspace.
-
- ![npm workspace](./media/how-to-npm/npm.png)
-2. Select the **Network Performance Monitor** overview tile to bring up the dashboard. The dashboard contains an ExpressRoute page, which shows that ExpressRoute is in an *unconfigured state*. Select **Feature Setup** to open the Network Performance Monitor configuration page.
-
- ![feature setup](./media/how-to-npm/npm2.png)
-3. On the configuration page, navigate to the 'ExpressRoute Peerings' tab, located on the left side panel. Next, select **Discover Now**.
-
- ![discover](./media/how-to-npm/13.png)
-4. When discovery completes, you see a list containing the following items:
- * All of the Microsoft peering connections in the ExpressRoute circuit(s) that are associated with this subscription.
- * All of the private peering connections that connect to the VNets associated with this subscription.
-
-## <a name="configmonitor"></a>Step 5: Configure monitors
-
-In this section, you configure the monitors. Follow the steps for the type of peering that you want to monitor: **private peering**, or **Microsoft peering**.
-
-### Private peering
-
-For private peering, when discovery completes, you see rules for each unique **Circuit Name** and **VNet Name** combination. Initially, these rules are disabled.
-
-![rules](./media/how-to-npm/14.png)
-
-1. Check the **Monitor this peering** checkbox.
-2. Select the checkbox **Enable Health Monitoring for this peering**.
-3. Choose the monitoring conditions. You can set custom thresholds to generate health events by typing threshold values. Whenever the value of the condition goes above its selected threshold for the selected network/subnetwork pair, a health event is generated.
-4. Select the ON-PREM AGENTS **Add Agents** button to add the on-premises servers from which you want to monitor the private peering connection. Make sure that you only choose agents that have connectivity to the Microsoft service endpoint that you specified in the section for Step 2. The on-premises agents must be able to reach the endpoint using the ExpressRoute connection.
-5. Save the settings.
-6. After enabling the rules and selecting the values and agents you want to monitor, there's a wait of approximately 30-60 minutes for the values to begin populating and the **ExpressRoute Monitoring** tiles to become available.
-
-### Microsoft peering
-
-For Microsoft peering, select the Microsoft peering connection(s) that you want to monitor, and configure the settings.
-
-1. Check the **Monitor this peering** checkbox.
-2. (Optional) You can change the target Microsoft service endpoint. By default, NPM chooses a Microsoft service endpoint as the target. NPM monitors connectivity from your on-premises servers to this target endpoint through ExpressRoute.
- * To change this target endpoint, select the **(edit)** link under **Target:**, and select another Microsoft service target endpoint from the list of URLs.
- ![edit target](./media/how-to-npm/edit_target.png)<br>
-
- * You can use a custom URL or IP Address. This option is relevant if you're using Microsoft peering to establish a connection to Azure PaaS services, such as Azure Storage, SQL databases, and Websites that are offered on public IP addresses. Select the link **(Use custom URL or IP Address instead)** at the bottom of the URL list, then enter the public endpoint of your Azure PaaS service that is connected through the ExpressRoute Microsoft peering.
- ![custom URL](./media/how-to-npm/custom_url.png)<br>
-
- * If you're using these optional settings, make sure that only the Microsoft service endpoint is selected here. The endpoint must be connected to ExpressRoute and reachable by the on-premises agents.
-3. Select the checkbox **Enable Health Monitoring for this peering**.
-4. Choose the monitoring conditions. You can set custom thresholds to generate health events by typing threshold values. Whenever the value of the condition goes above its selected threshold for the selected network/subnetwork pair, a health event is generated.
-5. Select the ON-PREM AGENTS **Add Agents** button to add the on-premises servers from which you want to monitor the Microsoft peering connection. Make sure that you only choose agents that have connectivity to the Microsoft service endpoints that you specified in the section for Step 2. The on-premises agents must be able to reach the endpoint using the ExpressRoute connection.
-6. Save the settings.
-7. After enabling the rules and selecting the values and agents you want to monitor, there's a wait of approximately 30-60 minutes for the values to begin populating and the **ExpressRoute Monitoring** tiles to become available.
-
-## <a name="explore"></a>Step 6: View monitoring tiles
-
-Once you see the monitoring tiles, your ExpressRoute circuits and connection resources are being monitored by NPM. You can select the Microsoft Peering tile to drill down into the health of Microsoft peering connections.
-
-![monitoring tiles](./media/how-to-npm/15.png)
-
-### <a name="dashboard"></a>Network Performance Monitor page
-
-The NPM page contains a page for ExpressRoute that shows an overview of the health of ExpressRoute circuits and peerings.
-
-![Screenshot shows a dashboard with an overview of the health of the ExpressRoute circuits and peerings.](./media/how-to-npm/dashboard.png)
-
-### <a name="circuits"></a>List of circuits
-
-To view a list of all monitored ExpressRoute circuits, select the **ExpressRoute circuits** tile. You can select a circuit and view its health state, trend charts for packet loss, bandwidth utilization, and latency. The charts are interactive. You can select a custom time window for plotting the charts. You can drag the mouse over an area on the chart to zoom in and see fine-grained data points.
-
-![circuit_list](./media/how-to-npm/circuits.png)
-
-#### <a name="trend"></a>Trend of Loss, Latency, and Throughput
-
-The bandwidth, latency, and loss charts are interactive. You can zoom into any section of these charts, using mouse controls. You can also see the bandwidth, latency, and loss data for other intervals by clicking **Date/Time**, located below the Actions button on the upper left.
-
-![trend](./media/how-to-npm/16.png)
-
-### <a name="peerings"></a>Peerings list
-
-To view a list of all connections to virtual networks over private peering, select the **Private Peerings** tile on the dashboard. Here, you can select a virtual network connection and view its health state, trend charts for packet loss, bandwidth utilization, and latency.
-
-![circuit list](./media/how-to-npm/peerings.png)
-
-### <a name="nodes"></a>Nodes view
-
-To view a list of all the links between the on-premises nodes and Azure VMs/Microsoft service endpoints for the chosen ExpressRoute peering connection, select **View node links**. You can view the health status of each link, and the trend of loss and latency associated with them.
-
-![nodes view](./media/how-to-npm/nodes.png)
-
-### <a name="topology"></a>Circuit topology
-
-To view circuit topology, select the **Topology** tile. The topology diagram provides the latency for each segment on the network. Each layer 3 hop gets represented by a node of the diagram. Clicking on a hop reveals more details about the hop.
-
-You can increase the level of visibility to include on-premises hops by moving the slider bar below **Filters**. Moving the slider bar left or right increases or decreases the number of hops in the topology graph. The latency across each segment is visible, which allows for faster isolation of high latency segments on your network.
-
-![filters](./media/how-to-npm/topology.png)
-
-#### Detailed Topology view of a circuit
-
-This view shows VNet connections.
-![detailed topology](./media/how-to-npm/17.png)
expressroute Monitor Expressroute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute-reference.md
Aggregation type: *Avg*
You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (primary and secondary ExpressRoute routers). This dashboard shows that the private peering ARP session status is up across both peers, but down for Microsoft peering on both peers. The default aggregation (Average) was used across both peers. #### <a name = "bgp"></a>BGP Availability - Split by Peer
Aggregation type: *Avg*
You can view near real-time availability of BGP (Layer-3 connectivity) across peerings and peers (primary and secondary ExpressRoute routers). This dashboard shows that the primary BGP session status is up for private peering and the secondary BGP session status is down for private peering. >[!NOTE] >During maintenance between the Microsoft edge and core network, BGP availability will appear down even if the BGP session between the customer edge and Microsoft edge remains up. For information about maintenance between the Microsoft edge and core network, make sure to have your [maintenance alerts turned on and configured](./maintenance-alerts.md).
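To pull the same availability data programmatically, you can query the circuit's platform metrics with the Azure CLI. The metric names `BgpAvailability` and `ArpAvailability` are my assumption of the platform metric IDs; confirm them against the supported metrics list for your circuit before relying on this sketch.

```azurecli
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/expressRouteCircuits/<circuit-name>" \
  --metric BgpAvailability \
  --aggregation Average \
  --interval PT5M
```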
Aggregation type: *Avg*
You can view metrics across all peerings on a given ExpressRoute circuit. #### Bits In and Out - Metrics per peering
Aggregation type: *Avg*
You can view metrics for private, public, and Microsoft peering in bits/second. #### FastPath routes count (at circuit level)
Aggregation type: *Avg*
You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links. #### <a name = "directout"></a>Bits Out Per Second - Split by link
Aggregation type: *Avg*
You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links. #### <a name = "admin"></a>Admin State - Split by link
Aggregation type: *Avg*
You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state represents if the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection. #### <a name = "line"></a>Line Protocol - Split by link
Aggregation type: *Avg*
You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection goes down. #### <a name = "rxlight"></a>Rx Light Level - Split by link
Aggregation type: *Avg*
You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range. >[!NOTE] > ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Rx light levels by lane. However, this is not supported on all deployments.
Aggregation type: *Avg*
You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range. >[!NOTE] > ExpressRoute Direct connectivity is hosted across different device platforms. Some ExpressRoute Direct connections will support a split view for Tx light levels by lane. However, this is not supported on all deployments.
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
Previously updated : 09/26/2023 Last updated : 07/24/2024
For this tutorial, you create three virtual networks:
- **VNet-Spoke** - the spoke virtual network represents the workload located on Azure. - **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected using either a VPN or ExpressRoute connection. For simplicity, this tutorial uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
In this tutorial, you learn how to:
A hybrid network uses the hub-and-spoke architecture model to route traffic between Azure VNets and on-premises networks. The hub-and-spoke architecture has the following requirements: -- Set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.-
- Additionally, routes to the gateway-connected virtual networks or on-premises networks are automatically propagated to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
--- Set **UseRemoteGateways** when you peer VNet-Spoke to VNet-Hub. If **UseRemoteGateways** is set and **AllowGatewayTransit** on remote peering is also set, the spoke virtual network uses gateways of the remote virtual network for transit. - To route the spoke subnet traffic through the hub firewall, you need a User Defined route (UDR) that points to the firewall with the **Virtual network gateway route propagation** setting disabled. This option prevents route distribution to the spoke subnets. This prevents learned routes from conflicting with your UDR. - Configure a UDR on the hub gateway subnet that points to the firewall IP address as the next hop to the spoke networks. No UDR is required on the Azure Firewall subnet, as it learns routes from BGP.
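As a hedged sketch, the spoke UDR requirement described above could be scripted like this with the Azure CLI; the route table name and the firewall private IP address (which you note from the firewall's Overview page later in this tutorial) are placeholders.

```azurecli
# Route table for the spoke subnet. Disabling BGP route propagation keeps
# gateway-learned routes from conflicting with the default route below.
az network route-table create \
  --resource-group FW-Hybrid-Test \
  --name RT-Spoke \
  --disable-bgp-route-propagation true

# Send all outbound traffic from the spoke to the hub firewall's private IP
az network route-table route create \
  --resource-group FW-Hybrid-Test \
  --route-table-name RT-Spoke \
  --name ToFirewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.5.0.4
```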
If you don't have an Azure subscription, create a [free account](https://azure.m
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the Azure portal search bar, type **Firewall Manager** and press **Enter**.
-3. On the Azure Firewall Manager page, under **Security**, select **Azure firewall policies**.
+3. On the Azure Firewall Manager page, under **Security**, select **Azure Firewall Policies**.
:::image type="content" source="media/secure-hybrid-network/firewall-manager-policy.png" alt-text="Screenshot showing Firewall Manager main page."lightbox="media/secure-hybrid-network/firewall-manager-policy.png":::
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **IPv4 address space**, type **10.5.0.0/16**. 1. Under **Subnets**, select **default**.
-1. For Subnet template, select **Azure Firewall**.
+1. For **Subnet purpose**, select **Azure Firewall**.
1. For **Starting address**, type **10.5.0.0/26**. 1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
-Add another subnet named **GatewaySubnet** with an address space of 10.5.1.0/27. This subnet is used for the VPN gateway.
+Add another subnet with a subnet purpose set to **Virtual Network Gateway** with a starting address of **10.5.1.0/27**. This subnet is used for the VPN gateway.
## Create the spoke virtual network
Add another subnet named **GatewaySubnet** with an address space of 10.5.1.0/27.
1. For **Starting address**, type **192.168.1.0/24**. 1. Accept the other default settings, and then select **Save**. 2. Select **Add a subnet**.
-1. For **Subnet template**, select **Virtual Network Gateway**.
+1. For **Subnet purpose**, select **Virtual Network Gateway**.
1. For **Starting address** type **192.168.2.0/27**. 1. Select **Add**. 1. Select **Review + create**.
Convert the **VNet-Hub** virtual network into a *hub virtual network* and secure
This takes a few minutes to deploy. 7. After deployment completes, go to the **FW-Hybrid-Test** resource group, and select the firewall.
-9. Note the **Firewall private IP** address on the **Overview** page. You use it later when you create the default route.
+9. Note the firewall **Private IP** address on the **Overview** page. You use it later when you create the default route.
## Create and connect the VPN gateways
Now create the VPN gateway for the hub virtual network. Network-to-network confi
4. For **Name**, type **GW-hub**. 5. For **Region**, select **(US) East US**. 6. For **Gateway type**, select **VPN**.
-7. For **VPN type**, select **Route-based**.
8. For **SKU**, select **VpnGw2**. 1. For **Generation**, select **Generation2**. 1. For **Virtual network**, select **VNet-hub**.
Now create the VPN gateway for the on-premises virtual network. Network-to-netwo
4. For **Name**, type **GW-Onprem**. 5. For **Region**, select **(US) East US**. 6. For **Gateway type**, select **VPN**.
-7. For **VPN type**, select **Route-based**.
8. For **SKU**, select **VpnGw2**. 1. For **Generation**, select **Generation2**. 1. For **Virtual network**, select **VNet-Onprem**.
Now you can create the VPN connections between the hub and on-premises gateways.
In this step, you create the connection from the hub virtual network to the on-premises virtual network. A shared key is referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. It takes some time to create the connection. 1. Open the **FW-Hybrid-Test** resource group and select the **GW-hub** gateway.
-2. Select **Connections** in the left column.
+2. In the left column, under **Settings**, select **Connections**.
3. Select **Add**. 4. For the connection name, type **Hub-to-Onprem**. 5. Select **VNet-to-VNet** for **Connection type**.
Create the on-premises to hub virtual network connection. This step is similar t
3. Select **Add**. 4. For the connection name, type **Onprem-to-Hub**. 5. Select **VNet-to-VNet** for **Connection type**.
-6. For the **Second virtual network gateway**, select **GW-hub**.
-7. For **Shared key (PSK)**, type **AzureA1b2C3**.
-8. Select **OK**.
+1. Select **Next : Settings**.
+1. For the **First virtual network gateway**, select **GW-Onprem**.
+1. For the **Second virtual network gateway**, select **GW-hub**.
+1. For **Shared key (PSK)**, type **AzureA1b2C3**.
+1. Select **OK**.
#### Verify the connection
-After about five minutes or so, the status of both connections should be **Connected**.
+About five minutes after the second network connection is deployed, the status of both connections should be **Connected**.
## Peer the hub and spoke virtual networks
Now peer the hub and spoke virtual networks.
1. Open the **FW-Hybrid-Test** resource group and select the **VNet-hub** virtual network. 2. In the left column, select **Peerings**. 3. Select **Add**.
-4. Under **This virtual network**:
-
+1. Under **Remote virtual network summary**:
|Setting name |Value | |||
- |Peering link name| HubtoSpoke|
- |Allow traffic to remote virtual network| selected |
- |Allow traffic forwarded from the remote virtual network (allow gateway transit) | selected |
- |Use remote Virtual network gateway or route server | not selected |
+ |Peering link name | SpoketoHub|
+ |Virtual network deployment model| Resource Manager|
+ |Subscription|\<your subscription\>|
+ |Virtual network| VNet-Spoke|
+ |Allow 'VNet-Spoke' to access 'VNet-hub'|selected|
+ |Allow 'VNet-Spoke' to receive forwarded traffic from 'VNet-Hub'|selected|
+ |Allow gateway or route server in 'VNet-Spoke' to forward traffic to 'VNet-Hub'| not selected|
+ |Enable 'VNet-Spoke' to use 'VNet-hub's' remote gateway or route server|selected|
+
+1. Under **Local virtual network summary**:
-5. Under **Remote virtual network**:
|Setting name |Value | |||
- |Peering link name | SpoketoHub|
- |Virtual network deployment model| Resource Manager|
- |Subscription|\<your subscription\>|
- |Virtual network| VNet-Spoke
- |Allow traffic to current virtual network | selected |
- |Allow traffic forwarded from current virtual network (allow gateway transit) | selected |
- |Use current virtual network gateway or route server | selected |
+ |Peering link name| HubtoSpoke|
+ |Allow 'VNet-hub' to access 'VNet-Spoke'|selected|
+ |Allow 'VNet-hub' to receive forwarded traffic from 'VNet-Spoke'|selected|
+ |Allow gateway or route server in 'VNet-Hub' to forward traffic to 'VNet-Spoke'|selected|
+ |Enable 'VNet-hub' to use 'VNet-Spoke's' remote gateway or route server| not selected|
+ 5. Select **Add**.
- :::image type="content" source="media/secure-hybrid-network/firewall-peering.png" alt-text="Screenshot showing Vnet peering.":::
+ :::image type="content" source="media/secure-hybrid-network/firewall-peering.png" lightbox="media/secure-hybrid-network/firewall-peering.png" alt-text="Screenshot showing Vnet peering.":::
## Create the routes
This is a virtual machine that you use to connect using Remote Desktop to the pu
Your connection should succeed, and you should be able to sign in.
-So now you've verified that the firewall rules are working:
+You've now verified that the firewall rules are working:
<!-- You can ping the server on the spoke VNet. --> - You can browse to the web server on the spoke virtual network.
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
# Configure Azure Firewall rules
-You can configure NAT rules, network rules, and applications rules on Azure Firewall using either classic rules or Firewall Policy. Azure Firewall denies all traffic by default, until rules are manually configured to allow traffic. The rules are terminating, so rule proceisng stops on a match.
+You can configure NAT rules, network rules, and application rules on Azure Firewall using either classic rules or Firewall Policy. Azure Firewall denies all traffic by default until rules are manually configured to allow traffic. The rules are terminating, so rule processing stops on a match.
## Rule processing using classic rules
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Last updated 07/23/2024
HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of what's new in Azure HDInsight 4.0.
-| # | OSS component | HDInsight 4.0 Version | HDInsight 3.6 Version |
+| # | OSS component | HDInsight 4.0 version | HDInsight 3.6 version |
| | | | | | 1 | Apache Hadoop | 3.1.1 | 2.7.3 | | 2 | Apache HBase | 2.1.6 | 1.1.2 | | 3 | Apache Hive | 3.1.0 | 1.2.1, 2.1 (LLAP) |
-| 4 | Apache Kafka | 2.1.1, 2.4(GA) | 1.1 |
+| 4 | Apache Kafka | 2.1.1, 2.4 (GA) | 1.1 |
| 5 | Apache Phoenix | 5 | 4.7.0 |
-| 6 | Apache Spark | 2.4.4, 3.0.0(Preview) | 2.2 |
+| 6 | Apache Spark | 2.4.4, 3.0.0 (Preview) | 2.2 |
| 7 | Apache TEZ | 0.9.1 | 0.7.0 | | 8 | Apache ZooKeeper | 3.4.6 | 3.4.6 |
-| 9 | Apache Kafka | 2.1.1, 2.4.1(Preview) | 1.1 |
+| 9 | Apache Kafka | 2.1.1, 2.4.1 (Preview) | 1.1 |
| 10 | Apache Ranger | 1.1.0 | 0.7.0 |
-## Workloads and Features
+## Workloads and features
-**Hive**
-- Advanced features
- - LLAP workload management
- - LLAP Support JDBC, Druid, and Kafka connectors
- - Better SQL features – Constraints and default values
- - Surrogate Keys
+### Hive
+
+- Advanced features:
+ - Low-latency analytical processing (LLAP) workload management.
+ - LLAP support for Java Database Connectivity (JDBC), Druid, and Kafka connectors.
+ - Better SQL features (constraints and default values).
+ - Surrogate keys.
- Information schema.-- Performance advantage
- - Result caching - Caching query results allow a previously computed query result to be reused
- - Dynamic materialized views - Precomputation of summaries
- - ACID V2 performance improvements in both storage format and execution engine
-- Security
- - GDPR compliance enabled on Apache Hive transactions
- - Hive UDF execution authorization in ranger
-
- **HBase**
-- Advanced features
- - Procedure 2. Procedure V2, or procv2, is an updated framework for executing multistep HBase administrative operations.
+- Performance advantage:
+ - Result caching. Caching query results allows a previously computed query result to be reused.
+ - Dynamic materialized views and precomputation of summaries.
+ - Atomicity, consistency, isolation, and durability (ACID) V2 performance improvements in both storage format and execution engine.
+- Security:
+ - GDPR compliance enabled on Apache Hive transactions.
+ - Hive user-defined function (UDF) execution authorization in Ranger.
+
+### HBase
+
+- Advanced features:
+ - Procedure V2 (procv2), an updated framework for executing multistep HBase administrative operations.
- Fully off-heap read/write path.
- - In-memory compactions
- - HBase cluster supports Premium ADLS Gen2
-- Performance advantage
- - Accelerated Writes uses Azure premium SSD managed disks to improve performance of the Apache HBase Write Ahead Log (WAL).
-- Security
- - Hardening of both secondary indexes, which include Local and Global
- -
-**Kafka**
-- Advanced features
- - Kafka partition distribution on Azure fault domains
- - Zstd compression support
- - Kafka Consumer Incremental Rebalance
- - Support MirrorMaker 2.0
-- Performance advantage
- - Improved windowed aggregation performance in Kafka Streams
- - Improved broker resiliency by reducing memory footprint of message conversion
- - Replication protocol improvements for fast leader failover
-- Security
- - Access control for topic creation for specific topics/topic prefix
- - Hostname verification to prevent SSL configuration man-in-the- middle attacks
- - Improved encryption support with faster Transport Layer Security (TLS) and CRC32C implementation
-
-**Spark**
-- Advanced features
- - Structured streaming support for ORC
- - Capability to integrate with new Metastore Catalog feature
- - Structured Streaming support for Hive Streaming library
- - Transparent write to Hive warehouse
- - Spark Cruise - an automatic computation reuse system for Spark.
-- Performance advantage
- - Result caching - Caching query results allow a previously computed query result to be reused
- - Dynamic materialized views - Precomputation of summaries
-- Security
- - GDPR compliance enabled for Spark transactions
-
-## Hive Partition Discovery and Repair
-
-Hive automatically discovers and synchronizes the metadata of the partition in Hive Metastore.
-The `discover.partitions` table property enables and disables synchronization of the file system with partitions. In external partitioned tables, this property is enabled (true) by default.
-When Hive Metastore Service (HMS) is started in remote service mode, a background thread `(PartitionManagementTask)` gets scheduled periodically every 300 s (configurable via `metastore.partition.management.task.frequency config`) that looks for tables with `discover.partitions` table property set to true and performs `msck` repair in sync mode.
-
-If the table is a transactional table, then Exclusive Lock is obtained for that table before performing `msck repair`. With this table property, `MSCK REPAIR TABLE table_name SYNC PARTITIONS` is no longer required to be run manually.
-Assuming you have an external table created using a version of Hive that doesn't support partition discovery, enable partition discovery for the table.
+ - In-memory compactions.
+ - HBase cluster support of the Azure Data Lake Storage Gen2 Premium tier.
+- Performance advantage:
+ - Accelerated writes that use Azure Premium SSD managed disks to improve performance of the Apache HBase write-ahead log (WAL).
+- Security:
+ - Hardening of both secondary indexes, which include local and global.
+
+### Kafka
+
+- Advanced features:
+ - Kafka partition distribution on Azure fault domains.
+ - Zstandard (zstd) compression support.
+ - Kafka Consumer Incremental Rebalance.
+ - Support for MirrorMaker 2.0.
+- Performance advantage:
+ - Improved windowed aggregation performance in Kafka Streams.
+ - Improved broker resiliency by reducing the memory footprint of message conversion.
+ - Replication protocol improvements for fast leader failover.
+- Security:
+ - Access control for creation of specific topics or topic prefixes.
+ - Host-name verification to help prevent Secure Sockets Layer (SSL) configuration man-in-the-middle attacks.
+ - Improved encryption support with faster Transport Layer Security (TLS) and CRC32C implementation.
+
+### Spark
+
+- Advanced features:
+ - Structured Streaming support for ORC.
+ - Capability to integrate with the new metastore catalog feature.
+ - Structured Streaming support for the Hive Streaming library.
+ - Transparent writes to Hive warehouses.
+ - SparkCruise, an automatic computation reuse system for Spark.
+- Performance advantage:
+ - Result caching. Caching query results allows a previously computed query result to be reused.
+ - Dynamic materialized views and precomputation of summaries.
+- Security:
+ - GDPR compliance enabled for Spark transactions.
+
+## Hive partition discovery and repair
+
+Hive automatically discovers and synchronizes the metadata of the partition in Hive Metastore (HMS).
+
+The `discover.partitions` table property enables and disables synchronization of the file system with partitions. In external partitioned tables, this property is enabled (`true`) by default.
+
+When Hive Metastore starts in remote service mode, a periodic background thread (`PartitionManagementTask`) is scheduled every 300 seconds (configurable via the `metastore.partition.management.task.frequency` setting). The thread looks for tables with the `discover.partitions` table property set to `true` and performs `msck` repair in sync mode.
+
+If the table is a transactional table, the thread obtains an exclusive lock for that table before it performs `msck` repair. With this table property, you no longer need to run `MSCK REPAIR TABLE table_name SYNC PARTITIONS` manually.
+
+If you have an external table that you created by using a version of Hive that doesn't support partition discovery, enable partition discovery for the table:
```ALTER TABLE exttbl SET TBLPROPERTIES ('discover.partitions' = 'true');```
-Set synchronization of partitions to occur every 10 minutes expressed in seconds: In Ambari > Hive > Configs, `set metastore.partition.management.task.frequency` to 3600 or more.
-
+Set the synchronization interval for partitions in seconds. For example, to synchronize every 10 minutes, in **Ambari** > **Hive** > **Configs**, set `metastore.partition.management.task.frequency` to `600`.
> [!WARNING]
-> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU.
+> Running `management.task` every 10 minutes puts pressure on the SQL Server database transaction units (DTUs).
-You can verify the output from Microsoft Azure portal.
+You can verify the output from the Azure portal.
-Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and the following character or characters.
-Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and the following characters.
+Hive drops the metadata and corresponding data in any partition that is older than the retention period. You express the retention time by using a numeral and the following character or characters:
``` ms (milliseconds)
m (minutes)
d (days) ```
-To configure a partition retention period for one week.
+To configure a partition retention period for one week, use this command:
``` ALTER TABLE employees SET TBLPROPERTIES ('partition.retention.period'='7d'); ```
-The partition metadata and the actual data for employees in Hive is automatically dropped after a week.
+The partition metadata and the actual data for employees in Hive are automatically dropped after a week.
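To confirm that the retention setting is in place, you can list the property from the Hive shell. This is a minimal sketch that reuses the `employees` table from the preceding example:

```
-- Show the retention setting on the employees table.
SHOW TBLPROPERTIES employees('partition.retention.period');
```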
-## Hive 3
+## Performance optimizations available for Hive 3
-### Performance optimizations available under Hive 3
+### OLAP vectorization
-OLAP Vectorization Dynamic Semijoin reduction Parquet support for vectorization with LLAP Automatic query cache.
+Online analytical processing (OLAP) vectorization allows Hive to process a batch of rows together instead of processing one row at a time. Each batch is usually an array of primitive types. Operations are performed on the entire column vector, which improves the instruction pipelines and cache usage.
-**New SQL features**
+This feature includes vectorized execution of Partitioned Table Function (PTF), roll-ups, and grouping sets.
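As an illustration, vectorized execution is controlled by session-level settings such as the following. This is a hedged sketch; on HDInsight 4.0 Interactive Query clusters these settings are typically enabled by default:

```
-- Enable vectorized execution for map-side and reduce-side work.
SET hive.vectorized.execution.enabled = true;
SET hive.vectorized.execution.reduce.enabled = true;
```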
-Materialized Views Surrogate Keys Constraints Metastore CachedStore.
+### Dynamic semijoin reduction
-**OLAP Vectorization**
+Dynamic `semijoin` reduction dramatically improves performance for selective joins. It builds a bloom filter from one side of a join and filters rows from the other side. It skips the scan and further evaluation of rows that don't qualify for the join.
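For illustration, the feature is governed by a Tez-side setting. A minimal sketch, assuming Hive on Tez (the default execution engine in HDInsight 4.0):

```
-- Turn on bloom-filter-based dynamic semijoin reduction.
SET hive.tez.dynamic.semijoin.reduction = true;
```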
-Vectorization allows Hive to process a batch of rows together instead of processing one row at a time. Each batch is usually an array of primitive types. Operations are performed on the entire column vector, which improves the instruction pipelines and cache usage.
-Vectorized execution of PTF, roll up and grouping sets.
+### Parquet support for vectorization with LLAP
-**Dynamic `Semijoin` reduction**
+Vectorized query execution is a feature that greatly reduces the CPU usage for typical query operations such as:
-Dramatically improves performance for selective joins.
-It builds a bloom filter from one side of join and filters rows from other side.
-Skips scan and further evaluation of rows that wouldn't qualify the join.
+- Scan
+- Filter
+- Aggregate
+- Join
-**Parquet support for vectorization with LLAP**
+Vectorization is also implemented for the ORC format. Spark has also used whole-stage code generation and this vectorization (for Parquet) since Spark 2.0. Time-stamp column support was added for Parquet vectorization and the Parquet format under LLAP.
+
+> [!WARNING]
+> Parquet writes are slow when you convert to zoned times from the time stamp. For more information, see the [issue details](https://issues.apache.org/jira/browse/HIVE-24693) on the Apache Hive site.
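As a hedged sketch of the related switch, the LLAP I/O layer that drives these vectorized reads is controlled by a setting such as the following; Interactive Query clusters usually ship with it enabled:

```
-- Route reads through the LLAP I/O layer so vectorized reads apply.
SET hive.llap.io.enabled = true;
```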
-Vectorized query execution is a feature that greatly reduces the CPU usage for typical query operations such as
+### Automatic query cache
-* scans
-* filters
-* aggregate
-* joins
+Here are some considerations for automatic query cache:
-Vectorization is also implemented for the ORC format. Spark also uses Whole Stage Codegen and this vectorization (for Parquet) since Spark 2.0.
-Added timestamp column for Parquet vectorization and format under LLAP.
+- With `hive.query.results.cache.enabled=true`, every query that runs in Hive 3 stores its result in a cache.
+- If the input table changes, Hive evicts invalid data from the cache. For example, if you perform aggregation and the base table changes, queries that you run most frequently stay in the cache, but stale queries are evicted.
+- The query result cache works with managed tables only because Hive can't track changes to an external table.
+- If you join external and managed tables, Hive falls back to running the full query. The query result cache works with ACID tables. If you update an ACID table, Hive reruns the query automatically.
+- You can enable and disable the query result cache from the command line. You might want to do so to debug a query.
+- You can disable the query result cache by setting `hive.query.results.cache.enabled=false`.
+- Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the `hive.query.results.cache.max.size` parameter in bytes. (A sketch of these settings follows this list.)
+- Changes to query processing: During query compilation, Hive checks the result cache to see if it already has the query results. If there's a cache hit, the query plan is set to a `FetchTask` that reads from the cached location.
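The following sketch shows the two session-level settings named in this list; the size value shown is the 2-GB default and is illustrative:

```
-- Turn the query result cache on or off for the session.
SET hive.query.results.cache.enabled = true;
-- Resize the cache, in bytes (2147483648 = 2 GB).
SET hive.query.results.cache.max.size = 2147483648;
```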
-> [!WARNING]
-> Parquet writes are slow when conversion to zoned times from timestamp. For more information, see [**here**](https://issues.apache.org/jira/browse/HIVE-24693).
+During query execution, Parquet `DataWriteableWriter` relies on `NanoTimeUtils` to convert a time-stamp object into a binary value. This query calls `toString()` on the time-stamp object, and then it parses the string.
+If you can use the result cache for this query:
-### Automatic query cache
-1. With `hive.query.results.cache.enabled=true`, every query that runs in Hive 3 stores its result in a cache.
-1. If the input table changes, Hive evicts invalid data from the cache. For example, if you perform aggregation and the base table changes, queries you run most frequently stay in cache, but stale queries are evicted.
-1. The query result cache works with managed tables only because Hive can't track changes to an external table.
-1. If you join external and managed tables, Hive falls back to executing the full query. The query result cache works with ACID tables. If you update an ACID table, Hive reruns the query automatically.
-1. You can enable and disable the query result cache from command line. You might want to do so to debug a query.
-1. Disable the query result cache by setting the following parameter to false: `hive.query.results.cache.enabled=false`
-1. Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the following parameter in bytes: `hive.query.results.cache.max.size`
-1. Changes to query processing: During query compilation, check the results cache to see if it already has the query results. If there's a cache hit, then the query plan is set to a `FetchTask` that reads from the cached location.
-
-During query execution:
-
-Parquet `DataWriteableWriter` relies on `NanoTimeUtils` to convert a timestamp object into a binary value. This query calls `toString()` on the timestamp object, and then parses the String.
-
-1. If the results cache can be used for this query
- 1. The query is `FetchTask` reading from the cached results directory.
- 1. No cluster tasks are required.
-1. If the results cache can't be used, run the cluster tasks as normal
- 1. Check if the query results that have been computed are eligible to add to the results cache.
- 1. If results can be cached, the temporary results generated for the query are saved to the results cache. You might need to perform steps here to ensure that the query clean-up does not delete the query results directory.
+- The query is `FetchTask` reading from the directory of cached results.
+- No cluster tasks are required.
-## SQL features
+If you can't use the result cache, run the cluster tasks as normal:
-**Materialized Views**
+- Check if the computed query results are eligible to add to the result cache.
+- If results can be cached, the temporary results generated for the query are saved to the result cache. You might need to perform steps to ensure that the query cleanup doesn't delete the query result directory.
-The initial implementation introduced in Apache Hive 3.0.0 focuses on introducing materialized views and automatic query rewriting based on those materializations in the project. Materialized views can be stored natively in Hive or in other custom storage handlers (ORC), and they can seamlessly exploit exciting new Hive features such as LLAP acceleration.
+## SQL features
+
+The initial implementation in Apache Hive 3.0.0 introduces materialized views and automatic query rewriting based on those materializations. Materialized views can be stored natively in Hive or in other custom storage handlers (ORC), and they can take advantage of new Hive features such as LLAP acceleration.
-More information, see [Hive - Materialized Views - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785)
+For more information, see the [Azure blog post on Hive materialized views](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785).
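As an illustrative sketch (the table and view names are hypothetical), you create and maintain a materialized view with standard HiveQL, and the optimizer can then rewrite matching queries to read from it:

```
-- Create a materialized view over a base table.
CREATE MATERIALIZED VIEW mv_sales_by_store AS
SELECT store_id, SUM(amount) AS total_sales
FROM sales
GROUP BY store_id;

-- Rebuild it after the base table changes.
ALTER MATERIALIZED VIEW mv_sales_by_store REBUILD;
```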
-## Surrogate Keys
+## Surrogate keys
-Use the built-in `SURROGATE_KEY` user-defined function (UDF) to automatically generate numerical Ids for rows as you enter data into a table. The generated surrogate keys can replace wide, multiple composite keys.
+Use the built-in `SURROGATE_KEY` UDF to automatically generate numerical IDs for rows as you enter data into a table. The generated surrogate keys can replace wide, multiple composite keys.
-Hive supports the surrogate keys on ACID tables only. The table you want to join using surrogate keys can't have column types that need to cast. These data types must be primitives, such as INT or `STRING`.
+Hive supports surrogate keys on ACID tables only. The table that you want to join by using surrogate keys can't have column types that need casting. These data types must be primitives, such as `INT` or `STRING`.
-Joins using the generated keys are faster than joins using strings. Using generated keys doesn't force data into a single node by a row number. You can generate keys as abstractions of natural keys. Surrogate keys have an advantage over UUIDs, which are slower and probabilistic.
+Joins that use the generated keys are faster than joins that use strings. Using generated keys doesn't force data into a single node by a row number. You can generate keys as abstractions of natural keys. Surrogate keys have an advantage over universally unique identifiers (UUIDs), which are slower and probabilistic.
-The `SURROGATE_KEY UDF` generates a unique ID for every row that you insert into a table.
-It generates keys based on the execution environment in a distributed system, which includes many factors, such as
+The `SURROGATE_KEY` UDF generates a unique ID for every row that you insert into a table. It generates keys based on the execution environment in a distributed system, which includes many factors such as:
-1. Internal data structures
-2. State of a table
-3. Last transaction ID.
+- Internal data structures
+- State of a table
+- Last transaction ID
-Surrogate key generation doesn't require any coordination between compute tasks. The UDF takes no arguments, or two arguments are
+Surrogate key generation doesn't require any coordination between compute tasks. The UDF takes either no arguments or two arguments (see the sketch after this list):
-1. Write ID bits
-1. Task ID bits
+- Write ID bits
+- Task ID bits
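The following is a minimal sketch, assuming an ACID (transactional) table; the table and column names are hypothetical:

```
-- Generate the id column automatically with the SURROGATE_KEY UDF.
CREATE TABLE customers (
  id BIGINT DEFAULT SURROGATE_KEY(),
  name STRING,
  PRIMARY KEY (id) DISABLE NOVALIDATE)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- Omit the id column so the default generates it.
INSERT INTO customers (name) VALUES ('Alice'), ('Bob');
```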
### Constraints
-SQL constraints to enforce data integrity and improve performance. The optimizer uses the constraint information to make smart decisions. Constraints can make data predictable and easy to locate.
+SQL constraints help enforce data integrity and improve performance. The optimizer uses the constraint information to make smart decisions. Constraints can make data predictable and easy to locate.
-|Constraints|Description|
+|Constraint|Description|
|||
-|Check|Limits the range of values you can place in a column.|
-|PRIMARY KEY|Identifies each row in a table using a unique identifier.|
-|FOREIGN KEY|Identifies a row in another table using a unique identifier.|
-|UNIQUE KEY|Checks that values stored in a column are different.|
-|NOT NULL|Ensures that a column can't be set to NULL.|
-|ENABLE|Ensures that all incoming data conforms to the constraint.|
-|DISABLE|Doesn't ensure that all incoming data conforms to the constraint.|
-|VALIDATEC|hecks that all existing data in the table conforms to the constraint.|
-|NOVALIDATE|Doesn't check that all existing data in the table conforms to the constraint
-|ENFORCED|Maps to ENABLE NOVALIDATE.|
-|NOT ENFORCED|Maps to DISABLE NOVALIDATE.|
-|RELY|Specifies abiding by a constraint; used by the optimizer to apply further optimizations.|
-|NORELY|Specifies not abiding by a constraint.|
-
-For more information, see https://cwiki.apache.org/confluence/display/Hive/Supported+Features%3A++Apache+Hive+3.1
-
-### Metastore `CachedStore`
-
-Hive metastore operation takes much time and thus slow down Hive compilation. In some extreme case, it takes longer than the actual query run time. Especially, we find the latency of cloud db is high and 90% of total query runtime is waiting for metastore SQL database operations. Based on this observation, the metastore operation performance is enhanced, if we have a memory structure which cache the database query result.
-
-`hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.cache.CachedStore`
+|`CHECK`|Limits the range of values that you can place in a column.|
+|`PRIMARY KEY`|Identifies each row in a table by using a unique identifier.|
+|`FOREIGN KEY`|Identifies a row in another table by using a unique identifier.|
+|`UNIQUE KEY`|Checks that values stored in a column are different.|
+|`NOT NULL`|Ensures that a column can't be set to `NULL`.|
+|`ENABLE`|Ensures that all incoming data conforms to the constraint.|
+|`DISABLE`|Doesn't ensure that all incoming data conforms to the constraint.|
+|`VALIDATE`|Checks that all existing data in the table conforms to the constraint.|
+|`NOVALIDATE`|Doesn't check that all existing data in the table conforms to the constraint.|
+|`ENFORCED`|Maps to `ENABLE NOVALIDATE`.|
+|`NOT ENFORCED`|Maps to `DISABLE NOVALIDATE`.|
+|`RELY`|Specifies abiding by a constraint. The optimizer uses it to apply further optimizations.|
+|`NORELY`|Specifies not abiding by a constraint.|
+For more information, see [Supported Features: Apache Hive 3.1](https://cwiki.apache.org/confluence/display/Hive/Supported+Features%3A++Apache+Hive+3.1) on the Apache Hive site.
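To illustrate how several of these keywords combine, here's a hedged sketch; the table and constraint names are hypothetical:

```
-- Constraints are declared but not enforced (DISABLE NOVALIDATE);
-- RELY lets the optimizer use the primary key for optimizations.
CREATE TABLE orders (
  order_id    BIGINT NOT NULL,
  customer_id BIGINT,
  amount      DECIMAL(10,2),
  PRIMARY KEY (order_id) DISABLE NOVALIDATE RELY,
  CONSTRAINT fk_customer FOREIGN KEY (customer_id)
    REFERENCES customers (id) DISABLE NOVALIDATE);
```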
-## Troubleshooting guide
+### Metastore CachedStore
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+A Hive Metastore operation can take a long time and slow down Hive compilation. In some extreme cases, it takes longer than the actual query runtime.
-## References
-
-**Hive 3.1.0**
+In particular, the latency of the cloud database is high, and 90% of total query runtime can be spent waiting for metastore SQL database operations. Based on this observation, you can enhance the performance of Hive Metastore operations by adding a memory structure that caches the database query result:
-https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive_whats_new_in_this_release_hive.html
+`hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.cache.CachedStore`
-**HBase 2.1.6**
-https://apache.googlesource.com/hbase/+/ba26a3e1fd5bda8a84f99111d9471f62bb29ed1d/RELEASENOTES.md
+## References
-**Hadoop 3.1.1**
+For more information, see the following release notes:
-https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/3.1.1/RELEASENOTES.3.1.1.html
+- [Hive 3.1.0](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive_whats_new_in_this_release_hive.html)
+- [HBase 2.1.6](https://apache.googlesource.com/hbase/+/ba26a3e1fd5bda8a84f99111d9471f62bb29ed1d/RELEASENOTES.md)
+- [Hadoop 3.1.1](https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/3.1.1/RELEASENOTES.3.1.1.html)
-## Further reading
+## Related content
-* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
+- [HDInsight 4.0 announcement](./hdinsight-version-release.md)
+- [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
+- [Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to HDInsight 4.0](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md)
hdinsight Domain Joined Authentication Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/domain-joined-authentication-issues.md
Title: Authentication issues in Azure HDInsight
description: Authentication issues in Azure HDInsight Previously updated : 05/09/2024 Last updated : 07/09/2024 # Authentication issues in Azure HDInsight This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters.
-On secure clusters backed by Azure Data Lake (Gen1 or Gen2), when domain users sign in to the cluster services through HDI Gateway (like signing in to the Apache Ambari portal), HDI Gateway tries to obtain an OAuth token from Microsoft Entra first, and then get a Kerberos ticket from Microsoft Entra Domain Services. Authentication can fail in either of these stages. This article is aimed at debugging some of those issues.
+On secure clusters backed by Azure Data Lake Storage Gen2, when domain users sign in to the cluster services through HDI Gateway (like signing in to the Apache Ambari portal), HDI Gateway tries to obtain an OAuth token from Microsoft Entra ID first, and then to get a Kerberos ticket from Microsoft Entra Domain Services. Authentication can fail in either of these stages. This article is aimed at debugging some of those issues.
-When the authentication fails, you gets prompted for credentials. If you cancel this dialog, the error message is printed. Here are some of the common error messages:
+When the authentication fails, you get prompted for credentials. If you cancel this dialog, the error message is printed. Here are some of the common error messages:
## invalid_grant or unauthorized_client, 50126
Sign in denied.
### Cause
-To get to this stage, your OAuth authentication isn't an issue, but Kerberos authentication is. If this cluster is backed by ADLS, OAuth sign in has succeeded before Kerberos auth is attempted. On WASB clusters, OAuth sign in isn't attempted. There could be many reasons for Kerberos failure - like password hashes are out of sync, user account locked out in Microsoft Entra Domain Services, and so on. Password hashes sync only when the user changes password. When you create the Microsoft Entra Domain Services instance, it will start syncing passwords that are changed after the creation. It can't retroactively sync passwords that were set before its inception.
+To get to this stage, your OAuth authentication isn't an issue, but Kerberos authentication is. If this cluster is backed by ADLS, OAuth sign-in succeeds before Kerberos authentication is attempted. On WASB clusters, OAuth sign-in isn't attempted. There could be many reasons for Kerberos failure, like password hashes being out of sync or the user account being locked out in Microsoft Entra Domain Services. Password hashes sync only when the user changes the password. When you create the Microsoft Entra Domain Services instance, it starts syncing passwords that are changed after the creation. It can't retroactively sync passwords that were set before its inception.
### Resolution
Try to SSH into a You need to try to authenticate (kinit) using the same user cr
-## kinit fails
+## Kinit fails
### Issue
Ways to find `sAMAccountName`:
-## kinit fails with Preauthentication failure
+## Kinit fails with Preauthentication failure
### Issue
User receives error message `Error fetching access token`.
### Cause
-This error occurs intermittently when users try to access the ADLS Gen2 using ACLs and the Kerberos token has expired.
+This error occurs intermittently when users try to access ADLS Gen2 by using ACLs and the Kerberos token has expired.
### Resolution * For Azure Data Lake Storage Gen1, clean browser cache and log into Ambari again.
-* For Azure Data Lake Storage Gen2, Run `/usr/lib/hdinsight-common/scripts/RegisterKerbTicketAndOAuth.sh <upn>` for the user the user is trying to login as
+* For Azure Data Lake Storage Gen2, run `/usr/lib/hdinsight-common/scripts/RegisterKerbTicketAndOAuth.sh <upn>` for the user who is trying to log in.
hdinsight Hdinsight Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/hdinsight-security-overview.md
Title: Overview of enterprise security in Azure HDInsight
description: Learn the various methods to ensure enterprise security in Azure HDInsight. Previously updated : 06/15/2024 Last updated : 07/23/2024 #Customer intent: As a user of Azure HDInsight, I want to learn the means that Azure HDInsight offers to ensure security for the enterprise.
The following table provides links to resources for each type of security soluti
| Security area | Solutions available | Responsible party | ||||
-| Data Access Security | Configure [access control lists ACLs](../../storage/blobs/data-lake-storage-access-control.md) for Azure Data Lake Storage Gen1 and Gen2 | Customer |
+| Data Access Security | Configure [access control lists (ACLs)](../../storage/blobs/data-lake-storage-access-control.md) for Azure Data Lake Storage Gen2 | Customer |
| | Enable the ["Secure transfer required"](../../storage/common/storage-require-secure-transfer.md) property on storage accounts. | Customer | | | Configure [Azure Storage firewalls](../../storage/common/storage-network-security.md) and virtual networks | Customer | | | Configure [Azure virtual network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Cosmos DB and [Azure SQL DB](/azure/azure-sql/database/vnet-service-endpoint-rule-overview) | Customer |
hdinsight Apache Hadoop Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-introduction.md
description: An introduction to HDInsight, and the Apache Hadoop technology stac
Previously updated : 05/09/2024 Last updated : 07/23/2024 #Customer intent: As a data analyst, I want understand what is Hadoop and how it is offered in Azure HDInsight so that I can decide on using HDInsight instead of on premises clusters.
Last updated 05/09/2024
[Apache Hadoop](https://hadoop.apache.org/) was the original open-source framework for distributed processing and analysis of big data sets on clusters. The Hadoop ecosystem includes related software and utilities, including Apache Hive, Apache HBase, Spark, Kafka, and many others.
-Azure HDInsight is a fully managed, full-spectrum, open-source analytics service in the cloud for enterprises. The Apache Hadoop cluster type in Azure HDInsight allows you to use the [Apache Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html), [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) resource management, and a simple [MapReduce](https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) programming model to process and analyze batch data in parallel. Hadoop clusters in HDInsight are compatible with [Azure Blob storage](../../storage/common/storage-introduction.md), [Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).
+Azure HDInsight is a fully managed, full-spectrum, open-source analytics service in the cloud for enterprises. The Apache Hadoop cluster type in Azure HDInsight allows you to use the [Apache Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html), [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) resource management, and a simple [MapReduce](https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) programming model to process and analyze batch data in parallel. Hadoop clusters in HDInsight are compatible with [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).
To see available Hadoop technology stack components on HDInsight, see [Components and versions available with HDInsight](../hdinsight-component-versioning.md). To read more about Hadoop in HDInsight, see the [Azure features page for HDInsight](https://azure.microsoft.com/services/hdinsight/).
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
|Region | From the drop-down list, select a region where the cluster is created. Choose a location closer to you for better performance. | |Cluster type| Select **Select cluster type**. Then select **Hadoop** as the cluster type.| |Version|From the drop-down list, select a **version**. Use the default version if you don't know what to choose.|
- |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). Make sure you **do not provide** common passwords such as "Pass@word1".|
+ |Cluster sign-in username and password | The default sign-in name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase letter, one lowercase letter, and one nonalphanumeric character (except characters ```' ` "```). Make sure you **do not provide** common passwords such as "Pass@word1".|
|Secure Shell (SSH) username | The default username is `sshuser`. You can provide another name for the SSH username. |
- |Use cluster login password for SSH| Select this check box to use the same password for SSH user as the one you provided for the cluster login user.|
+ |Use cluster sign-in password for SSH| Select this check box to use the same password for the SSH user as the one you provided for the cluster sign-in user.|
:::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-basics.png" alt-text="HDInsight Linux get started provide cluster basic values." border="true":::
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
:::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-storage.png" alt-text="HDInsight Linux get started provide cluster storage values." border="true":::
- Each cluster has an [Azure Storage account](../hdinsight-hadoop-use-blob-storage.md), an [Azure Data Lake Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+ Each cluster has an [Azure Storage account](../hdinsight-hadoop-use-blob-storage.md) or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
Select the **Review + create** tab.
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
:::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-hive-view-save-results.png" alt-text="Save result of Apache Hive query." border="true":::
-After you've completed a Hive job, you can [export the results to Azure SQL Database or SQL Server database](apache-hadoop-use-sqoop-mac-linux.md), you can also [visualize the results using Excel](apache-hadoop-connect-excel-power-query.md). For more information about using Hive in HDInsight, see [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache log4j file](hdinsight-use-hive.md).
+After you've completed a Hive job, you can [export the results to Azure SQL Database or SQL Server database](apache-hadoop-use-sqoop-mac-linux.md). You can also [visualize the results using Excel](apache-hadoop-connect-excel-power-query.md). For more information about using Hive in HDInsight, see [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache Log4j file](hdinsight-use-hive.md).
## Clean up resources
After you complete the quickstart, you may want to delete the cluster. With HDIn
:::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-delete-cluster.png" alt-text="Azure HDInsight delete cluster." border="true":::
-2. If you want to delete the cluster as well as the default storage account, select the resource group name (highlighted in the previous screenshot) to open the resource group page.
+2. If you want to delete the cluster and the default storage account, select the resource group name (highlighted in the previous screenshot) to open the resource group page.
3. Select **Delete resource group** to delete the resource group, which contains the cluster and the default storage account. Note that deleting the resource group deletes the storage account. If you want to keep the storage account, choose to delete the cluster only.
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
Two Azure resources are defined in the template:
## Review deployed resources
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
> [!NOTE] > For other cluster creation methods and understanding the properties used in this quickstart, see [Create HDInsight clusters](../hdinsight-hadoop-provision-linux-clusters.md).
hdinsight Apache Hadoop On Premises Migration Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-storage.md
description: Learn storage best practices for migrating on-premises Hadoop clust
Previously updated : 05/22/2024 Last updated : 07/24/2024 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight
For more information, see the following articles:
- [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md) - [Monitor a storage account in the Azure portal](../../storage/common/manage-storage-analytics-logs.md)
-### Azure Data Lake Storage Gen1
-
-Azure Data Lake Storage Gen1 implements HDFS and POSIX style access control model. It provides first class integration with Microsoft Entra ID for fine grained access control. There are no limits to the size of data that it can store, or its ability to run massively parallel analytics.
-
-For more information, see the following articles:
--- [Create HDInsight clusters with Data Lake Storage Gen1 using the Azure portal](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md)-- [Use Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen1.md)- ### Azure Data Lake Storage Gen2 Azure Data Lake Storage Gen2 is the latest storage offering. It unifies the core capabilities from the first generation of Azure Data Lake Storage Gen1 with a Hadoop compatible file system endpoint directly integrated into Azure Blob Storage. This enhancement combines the scale and cost benefits of object storage with the reliability and performance typically associated only with on-premises file systems.
hdinsight Hdinsight Troubleshoot Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-cluster-creation-fails.md
description: Learn how to troubleshoot Apache cluster creation issues for Azure
Previously updated : 09/14/2023 Last updated : 07/23/2024 #Customer intent: As an HDInsight user, I would like to understand how to resolve common cluster creation failures.
The following issues are most common root causes for cluster creation failures:
## Permissions issues
-If you are using Azure Data Lake Storage Gen2, and receive the error `AmbariClusterCreationFailedErrorCode`: ":::no-loc text="Internal server error occurred while processing the request. Please retry the request or contact support.":::", open the Azure portal, go to your Storage account, and under Access Control (IAM), ensure that the **Storage Blob Data Contributor** or the **Storage Blob Data Owner** role has Assigned access to the **User assigned managed identity** for the subscription. See [Set up permissions for the managed identity on the Data Lake Storage Gen2](../hdinsight-hadoop-use-data-lake-storage-gen2-portal.md#set-up-permissions-for-the-managed-identity-on-the-data-lake-storage-gen2) for detailed instructions.
-
-If you are using Azure Data Lake Storage Gen1, see setup and configuration instructions [Use Azure Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen1.md). Data Lake Storage Gen1 isn't supported for HBase clusters, and is not supported in HDInsight version 4.0.
+If you're using Azure Data Lake Storage Gen2, and receive the error `AmbariClusterCreationFailedErrorCode`: ":::no-loc text="Internal server error occurred while processing the request. Please retry the request or contact support.":::", open the Azure portal, go to your Storage account, and under Access Control (IAM), ensure that the **Storage Blob Data Contributor** or the **Storage Blob Data Owner** role has Assigned access to the **User assigned managed identity** for the subscription. See [Set up permissions for the managed identity on the Data Lake Storage Gen2](../hdinsight-hadoop-use-data-lake-storage-gen2-portal.md#set-up-permissions-for-the-managed-identity-on-the-data-lake-storage-gen2) for detailed instructions.
If using Azure Storage, ensure that storage account name is valid during the cluster creation.
In general, the following policies can impact cluster creation:
Firewalls on your virtual network or storage account can deny communication with HDInsight management IP addresses.
-Allow traffic from the IP addresses in the table below.
+Allow traffic from the IP addresses in the table.
| Source IP address | Destination | Direction | ||||
Allow traffic from the IP addresses in the table below.
Also add the IP addresses specific to the region where the cluster is created. See [HDInsight management IP addresses](../hdinsight-management-ip-addresses.md) for a listing of the addresses for each Azure region.
-If you are using an express route or your own custom DNS server, see [Plan a virtual network for Azure HDInsight - connecting multiple networks](../hdinsight-plan-virtual-network-deployment.md#multinet).
+If you're using an express route or your own custom DNS server, see [Plan a virtual network for Azure HDInsight - connecting multiple networks](../hdinsight-plan-virtual-network-deployment.md#multinet).
-## Resources locks
+## Resource locks
-Ensure that there are no [locks on your virtual network and resource group](../../azure-resource-manager/management/lock-resources.md). Clusters cannot be created or deleted if the resource group is locked.
+Ensure that there are no [locks on your virtual network and resource group](../../azure-resource-manager/management/lock-resources.md). Clusters can't be created or deleted if the resource group is locked.
## Unsupported component versions
-Ensure that you are using a [supported version of Azure HDInsight and Apache Hadoop component](../hdinsight-component-versioning.md) in your solution.
+Ensure that you're using a [supported version of Azure HDInsight and Apache Hadoop component](../hdinsight-component-versioning.md) in your solution.
## Storage account name restrictions
-Storage account names cannot be more than 24 characters and cannot contain a special character. These restrictions also apply to the default container name in the storage account.
+Storage account names can't be more than 24 characters and can't contain a special character. These restrictions also apply to the default container name in the storage account.
Other naming restrictions also apply for cluster creation. See [Cluster name restrictions](../hdinsight-hadoop-provision-linux-clusters.md#cluster-name), for more information.
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-resource-manager-template.md
description: This quickstart shows how to use Resource Manager template to creat
Previously updated : 01/04/2024 Last updated : 07/23/2024 #Customer intent: As a developer new to Apache HBase on Azure, I need to see how to create an HBase cluster.
In this quickstart, you use an Azure Resource Manager template (ARM template) to
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.hdinsight%2Fhdinsight-hbase-linux%2Fazuredeploy.json":::
Two Azure resources are defined in the template:
||| |Subscription|From the drop-down list, select the Azure subscription that's used for the cluster.| |Resource group|From the drop-down list, select your existing resource group, or select **Create new**.|
- |Location|The value will autopopulate with the location used for the resource group.|
+ |Location|The value autopopulates with the location used for the resource group.|
 |Cluster Name|Enter a globally unique name. For this template, use only lowercase letters and numbers.|
- |Cluster Login User Name|Provide the username, default is `admin`.|
- |Cluster Login Password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ```' ` "```). |
+ |Cluster sign-in user name|Provide the username. The default is `admin`.|
+ |Cluster sign-in password|Provide a password. The password must be at least 10 characters in length and must contain at least one digit, one uppercase letter, one lowercase letter, and one nonalphanumeric character (except the characters ```' ` "```). |
|Ssh User Name|Provide the username, default is `sshuser`.| |Ssh Password|Provide the password.| :::image type="content" source="./media/quickstart-resource-manager-template/resource-manager-template-hbase.png" alt-text="Deploy Resource Manager template HBase." border="true":::
-1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You'll receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.
+1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.
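If you'd rather script the same deployment instead of using the portal, an Azure CLI call along these lines should work. This is a hedged sketch: the parameter names are assumptions inferred from the portal fields above, not confirmed from the template, and the resource names are placeholders.

```bash
# Hedged sketch: deploy the same quickstart template from the CLI.
# Parameter names are assumed from the portal fields and may differ in the template.
az group create --name MyResourceGroup --location eastus

az deployment group create \
  --resource-group MyResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.hdinsight/hdinsight-hbase-linux/azuredeploy.json" \
  --parameters clusterName=myhbasecluster \
               clusterLoginUserName=admin \
               clusterLoginPassword='<strong-password>' \
               sshUserName=sshuser \
               sshPassword='<strong-password>'
```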
## Review deployed resources
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+Once the cluster is created, you receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page lists your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
## Clean up resources
hdinsight Hdinsight Administer Use Portal Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-portal-linux.md
description: Learn how to create and manage Azure HDInsight clusters using the A
Previously updated : 03/27/2024 Last updated : 07/23/2024 # Manage Apache Hadoop clusters in HDInsight by using the Azure portal
Select your cluster name from the [**HDInsight clusters**](#showClusters) page.
|Cluster size|Check, increase, and decrease the number of cluster worker nodes. See [Scale clusters](hdinsight-administer-use-portal-linux.md#scale-clusters).| |Quota limits|Display the used and available cores for your subscription.| |SSH + Cluster login|Shows the instructions to connect to the cluster using Secure Shell (SSH) connection. For more information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).|
- |Data Lake Storage Gen1|Configure access Data Lake Storage Gen1. See [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md).|
+ |Data Lake Storage Gen2|Configure access to Data Lake Storage Gen2. See [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-use-data-lake-storage-gen2-portal.md).|
|Storage accounts|View the storage accounts and the keys. The storage accounts are configured during the cluster creation process.| |Applications|Add/remove HDInsight applications. See [Install custom HDInsight applications](hdinsight-apps-install-custom-applications.md).| |Script actions|Run Bash scripts on the cluster. See [Customize Linux-based HDInsight clusters using Script Action](hdinsight-hadoop-customize-cluster-linux.md).|
hdinsight Hdinsight Hadoop Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-storage.md
Last updated 3/22/2024
# Add additional storage accounts to HDInsight
-Learn how to use script actions to add extra Azure Storage *accounts* to HDInsight. The steps in this document add a storage *account* to an existing HDInsight cluster. This article applies to storage *accounts* (not the default cluster storage account), and not additional storage such as [`Azure Data Lake Storage Gen1`](hdinsight-hadoop-use-data-lake-storage-gen1.md) and [`Azure Data Lake Storage Gen2`](hdinsight-hadoop-use-data-lake-storage-gen2.md).
+Learn how to use script actions to add extra Azure Storage *accounts* to HDInsight. The steps in this document add a storage *account* to an existing HDInsight cluster. This article applies to storage *accounts* (not the default cluster storage account), and not additional storage such as [`Azure Data Lake Storage Gen2`](hdinsight-hadoop-use-data-lake-storage-gen2.md).
> [!IMPORTANT] > The information in this document is about adding additional storage account(s) to a cluster after it has been created. For information on adding storage accounts during cluster creation, see [Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more](hdinsight-hadoop-provision-linux-clusters.md).
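Script actions like the one this article describes can also be submitted from the Azure CLI. The following is a hedged sketch only: the cluster and resource group names are placeholders, the script URI stands in for the script referenced by this article, and the exact flags should be confirmed with `az hdinsight script-action execute --help`.

```bash
# Hedged sketch: run a script action on an existing cluster to add a storage account.
# The script URI is a placeholder for the script this article references.
az hdinsight script-action execute \
  --resource-group MyResourceGroup \
  --cluster-name mycluster \
  --name add-storage-account \
  --script-uri "<script-uri-from-this-article>" \
  --roles headnode workernode
```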
hdinsight Hdinsight Hadoop Compare Storage Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-compare-storage-options.md
You can choose between a few different Azure storage services when creating HDIn
* [Azure Blob storage with HDInsight](./overview-azure-storage.md) * [Azure Data Lake Storage Gen2 with HDInsight](./overview-data-lake-storage-gen2.md)
-* [Azure Data Lake Storage Gen1 with HDInsight](./overview-data-lake-storage-gen1.md)
This article provides an overview of these storage types and their unique features.
You can validate that HDInsight is properly configured to store data in a single
## Next steps * [Azure Storage overview in HDInsight](./overview-azure-storage.md)
-* [Azure Data Lake Storage Gen1 overview in HDInsight](./overview-data-lake-storage-gen1.md)
* [Azure Data Lake Storage Gen2 overview in HDInsight](./overview-data-lake-storage-gen2.md) * [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) * [Introduction to Azure Storage](../storage/common/storage-introduction.md)
hdinsight Hdinsight Hadoop Linux Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-information.md
description: Get implementation tips for using Linux-based HDInsight (Hadoop) cl
Previously updated : 12/05/2023 Last updated : 07/23/2023 # Information about using HDInsight on Linux
Example data and JAR files can be found on Hadoop Distributed File System at `/e
In most Hadoop distributions, the data is stored in HDFS. HDFS is backed by local storage on the machines in the cluster. Using local storage can be costly for a cloud-based solution where you're charged hourly or by the minute for compute resources.
-When using HDInsight, the data files are stored in an adaptable and resilient way in the cloud using Azure Blob Storage and optionally Azure Data Lake Storage Gen1/Gen2. These services provide the following benefits:
+When you use HDInsight, the data files are stored in an adaptable and resilient way in the cloud using Azure Blob Storage and optionally Azure Data Lake Storage Gen2. These services provide the following benefits:
* Cheap long-term storage. * Accessibility from external services such as websites, file upload/download utilities, various language SDKs, and web browsers. * Large file capacity and large adaptable storage.
-For more information, see [Azure Blob storage](../storage/common/storage-introduction.md), [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
+For more information, see [Azure Blob storage](../storage/common/storage-introduction.md), or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
-When using either Azure Blob storage or Data Lake Storage Gen1/Gen2, you don't have to do anything special from HDInsight to access the data. For example, the following command lists files in the `/example/data` folder whether it's stored on Azure Storage or Data Lake Storage:
+When using either Azure Blob storage or Data Lake Storage Gen2, you don't have to do anything special from HDInsight to access the data. For example, the following command lists files in the `/example/data` folder whether it's stored on Azure Storage or Data Lake Storage:
```console hdfs dfs -ls /example/data
In HDInsight, the data storage resources (Azure Blob Storage and Azure Data Lake
### <a name="URI-and-scheme"></a>URI and scheme
-Some commands may require you to specify the scheme as part of the URI when accessing a file. When using non-default storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
+Some commands may require you to specify the scheme as part of the URI when accessing a file. When using nondefault storage (storage added as "additional" storage to the cluster), you must always use the scheme as part of the URI.
When using [**Azure Storage**](./hdinsight-hadoop-use-blob-storage.md), use one of the following URI schemes:
When using [**Azure Storage**](./hdinsight-hadoop-use-blob-storage.md), use one
* `wasbs:///`: Access default storage using encrypted communication. The wasbs scheme is supported only from HDInsight version 3.6 onwards.
-* `wasb://<container-name>@<account-name>.blob.core.windows.net/`: Used when communicating with a non-default storage account. For example, when you have an additional storage account or when accessing data stored in a publicly accessible storage account.
+* `wasb://<container-name>@<account-name>.blob.core.windows.net/`: Used when communicating with a nondefault storage account. For example, when you have an additional storage account or when accessing data stored in a publicly accessible storage account.
When using [**Azure Data Lake Storage Gen2**](./hdinsight-hadoop-use-data-lake-storage-gen2.md), use the following URI scheme: * `abfs://`: Access default storage using encrypted communication.
-* `abfs://<container-name>@<account-name>.dfs.core.windows.net/`: Used when communicating with a non-default storage account. For example, when you have an additional storage account or when accessing data stored in a publicly accessible storage account.
-
-When using [**Azure Data Lake Storage Gen1**](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md), use one of the following URI schemes:
-
-* `adl:///`: Access the default Data Lake Storage for the cluster.
-
-* `adl://<storage-name>.azuredatalakestore.net/`: Used when communicating with a non-default Data Lake Storage. Also used to access data outside the root directory of your HDInsight cluster.
+* `abfs://<container-name>@<account-name>.dfs.core.windows.net/`: Used when communicating with a nondefault storage account. For example, when you have an additional storage account or when accessing data stored in a publicly accessible storage account.
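For instance, the same listing shown earlier can be run against either form from an SSH session on the cluster. The container and account names below are placeholders.

```bash
# Hedged sketch: list the sample data on default and nondefault Gen2 storage.
hdfs dfs -ls abfs:///example/data/
hdfs dfs -ls abfs://mycontainer@myaccount.dfs.core.windows.net/example/data/
```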
> [!IMPORTANT] > When using Data Lake Storage as the default store for HDInsight, you must specify a path within the store to use as the root of HDInsight storage. The default path is `/clusters/<cluster-name>/`.
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
Although an on-premises installation of Hadoop uses the Hadoop Distributed File
HDInsight clusters can use the following storage options: * Azure Data Lake Storage Gen2
-* Azure Data Lake Storage Gen1
* Azure Storage General Purpose v2
-* Azure Storage General Purpose v1
-* Azure Storage Block blob (**only supported as secondary storage**)
+* Azure Storage Block blob (**only supported as secondary storage**)
For more information on storage options with HDInsight, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
hdinsight Hdinsight Hadoop Use Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-blob-storage.md
Last updated 05/22/2024
# Use Azure storage with Azure HDInsight clusters
-You can store data in [Azure Blob storage](../storage/common/storage-introduction.md), [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md). Or a combination of these options. These storage options enable you to safely delete HDInsight clusters that are used for computation without losing user data.
+You can store data in [Azure Blob storage](../storage/common/storage-introduction.md), or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md). Or a combination of these options. These storage options enable you to safely delete HDInsight clusters that are used for computation without losing user data.
-Apache Hadoop supports a notion of the default file system. The default file system implies a default scheme and authority. It can also be used to resolve relative paths. During the HDInsight cluster creation process, you can specify a blob container in Azure Storage as the default file system. Or with HDInsight 3.6, you can select either Azure Blob storage or Azure Data Lake Storage Gen1/ Azure Data Lake Storage Gen2 as the default files system with a few exceptions. For the supportability of using Data Lake Storage Gen1 as both the default and linked storage, see [Availability for HDInsight cluster](./hdinsight-hadoop-use-data-lake-storage-gen1.md#availability-for-hdinsight-clusters).
+Apache Hadoop supports a notion of the default file system. The default file system implies a default scheme and authority. It can also be used to resolve relative paths. During the HDInsight cluster creation process, you can specify a blob container in Azure Storage as the default file system. Or with HDInsight 3.6, you can select either Azure Blob storage or Azure Data Lake Storage Gen2 as the default file system, with a few exceptions.
In this article, you learn how Azure Storage works with HDInsight clusters.
-* To learn how Data Lake Storage Gen1 works with HDInsight clusters, see [Use Azure Data Lake Storage Gen1 with Azure HDInsight clusters](./hdinsight-hadoop-use-data-lake-storage-gen1.md).
-* to learn how Data Lake Storage Gen2 works with HDInsight clusters, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](./hdinsight-hadoop-use-data-lake-storage-gen2.md).
+* To learn how Data Lake Storage Gen2 works with HDInsight clusters, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](./hdinsight-hadoop-use-data-lake-storage-gen2.md).
* For more information about creating an HDInsight cluster, see [Create Apache Hadoop clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md). > [!IMPORTANT]
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
- Title: Use Data Lake Storage Gen1 with Hadoop in Azure HDInsight
-description: Learn how to query data from Azure Data Lake Storage Gen1 and to store results of your analysis.
--- Previously updated : 12/07/2023--
-# Use Data Lake Storage Gen1 with Azure HDInsight clusters
-
-> [!Note]
-> Deploy new HDInsight clusters using [Azure Data Lake Storage Gen2](hdinsight-hadoop-use-data-lake-storage-gen2.md) for improved performance and new features.
-
-To analyze data in HDInsight cluster, you can store the data either in [`Azure Blob storage`](../storage/common/storage-introduction.md), [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md). All storage options enable you to safely delete HDInsight clusters that are used for computation without losing user data.
-
-In this article, you learn how Data Lake Storage Gen1 works with HDInsight clusters. To learn how Azure Blob storage works with HDInsight clusters, see [Use Azure Blob storage with Azure HDInsight clusters](hdinsight-hadoop-use-blob-storage.md). For more information about creating an HDInsight cluster, see [Create Apache Hadoop clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md).
-
-> [!NOTE]
-> Data Lake Storage Gen1 is always accessed through a secure channel, so there is no `adls` filesystem scheme name. You always use `adl`.
--
-## Availability for HDInsight clusters
-
-Apache Hadoop supports a notion of the default file system. The default file system implies a default scheme and authority. It can also be used to resolve relative paths. During the HDInsight cluster creation process, specify a blob container in Azure Storage as the default file system. Or with HDInsight 3.5 and newer versions, you can select either Azure Blob storage or Azure Data Lake Storage Gen1 as the default files system with a few exceptions. The cluster and the storage account must be hosted in the same region.
-
-HDInsight clusters can use Data Lake Storage Gen1 in 2 ways:
-
-* As the default storage
-* As additional storage, with Azure Blob storage as default storage.
-
-Currently, only some of the HDInsight cluster types/versions support using Data Lake Storage Gen1 as default storage and additional storage accounts:
-
-| HDInsight cluster type | Data Lake Storage Gen1 as default storage | Data Lake Storage Gen1 as additional storage| Notes |
-|||||
-| HDInsight version 4.0 | No | No |ADLS Gen1 isn't supported with HDInsight 4.0 |
-| HDInsight version 3.6 | Yes | Yes | Except HBase|
-| HDInsight version 3.5 | Yes | Yes | Except HBase|
-| HDInsight version 3.4 | No | Yes | |
-| HDInsight version 3.3 | No | No | |
-| HDInsight version 3.2 | No | Yes | |
-
-> [!WARNING]
-> HDInsight HBase is not supported with Azure Data Lake Storage Gen1
-
-Using Data Lake Storage Gen1 as an additional storage account doesn't affect performance. Or the ability to read or write to Azure blob storage from the cluster.
-
-## Use Data Lake Storage Gen1 as default storage
-
-When HDInsight is deployed with Data Lake Storage Gen1 as default storage, the cluster-related files are stored in `adl://mydatalakestore/<cluster_root_path>/`, where `<cluster_root_path>` is the name of a folder you create in Data Lake Storage. By specifying a root path for each cluster, you can use the same Data Lake Storage account for more than one cluster. So, you can have a setup where:
-
-* Cluster 1 can use the path `adl://mydatalakestore/cluster1storage`
-* Cluster 2 can use the path `adl://mydatalakestore/cluster2storage`
-
-Notice that both the clusters use the same Data Lake Storage Gen1 account **mydatalakestore**. Each cluster has access to its own root filesystem in Data Lake Storage. The Azure portal deployment experience prompts you to use a folder name such as **/clusters/\<clustername>** for the root path.
-
-To use Data Lake Storage Gen1 as default storage, you must grant the service principal access to the following paths:
-
-* The Data Lake Storage Gen1 account root. For example: adl://mydatalakestore/.
-* The folder for all cluster folders. For example: adl://mydatalakestore/clusters.
-* The folder for the cluster. For example: adl://mydatalakestore/clusters/cluster1storage.
-
-For more information for creating service principal and grant access, see Configure Data Lake Storage access.
-
-### Extracting a certificate from Azure Keyvault for use in cluster creation
-
-If the certificate for your service principal is stored in Azure Key Vault, you must convert the certificate to the correct format. The following code snippets show how to do the conversion.
-
-First, download the certificate from Key Vault and extract the `SecretValueText`.
-
-```powershell
-$certPassword = Read-Host "Enter Certificate Password"
-$cert = (Get-AzureKeyVaultSecret -VaultName 'MY-KEY-VAULT' -Name 'MY-SECRET-NAME')
-$certValue = [System.Convert]::FromBase64String($cert.SecretValueText)
-```
-
-Next, convert the `SecretValueText` to a certificate.
-
-```powershell
-$certObject = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $certValue,$null,"Exportable, PersistKeySet"
-$certBytes = $certObject.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $certPassword.SecretValueText);
-$identityCertificate = [System.Convert]::ToBase64String($certBytes)
-```
-
-Then you can use the `$identityCertificate` to deploy a new cluster as in the following snippet:
-
-```powershell
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateFile $pathToArmTemplate `
- -identityCertificate $identityCertificate `
- -identityCertificatePassword $certPassword.SecretValueText `
- -clusterName $clusterName `
- -clusterLoginPassword $SSHpassword `
- -sshPassword $SSHpassword `
- -servicePrincipalApplicationId $application.ApplicationId
-```
-
-## Use Data Lake Storage Gen1 as additional storage
-
-You can use Data Lake Storage Gen1 as additional storage for the cluster as well. In such cases, the cluster default storage can either be an Azure Blob storage or an Azure Data Lake Storage Gen1 account. When running HDInsight jobs against the data stored in Azure Data Lake Storage Gen1 as additional storage, use the fully qualified path. For example:
-
-`adl://mydatalakestore.azuredatalakestore.net/<file_path>`
-
-There's no **cluster_root_path** in the URL now. That's because Data Lake Storage isn't a default storage in this case. So all you need to do is provide the path to the files.
-
-To use a Data Lake Storage Gen1 as additional storage, grant the service principal access to the paths where your files are stored. For example:
-
-`adl://mydatalakestore.azuredatalakestore.net/<file_path>`
-
-For more information for creating service principal and grant access, see Configure Data Lake Storage access.
-
-## Use more than one Data Lake Storage Gen1 account
-
-Adding a Data Lake Storage account as additional and adding more than one Data Lake Storage accounts can be done. Give the HDInsight cluster permission on data in one or more Data Lake Storage accounts. See Configure Data Lake Storage Gen1 access.
-
-## Configure Data Lake Storage Gen1 access
-
-To configure Azure Data Lake Storage Gen1 access from your HDInsight cluster, you must have a Microsoft Entra service principal. Only a Microsoft Entra administrator can create a service principal. The service principal must be created with a certificate. For more information, see [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md), and [Create service principal with self-signed-certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-self-signed-certificate).
-
-> [!NOTE]
-> If you are going to use Azure Data Lake Storage Gen1 as additional storage for HDInsight cluster, we strongly recommend that you do this while you create the cluster as described in this article. Adding Azure Data Lake Storage Gen1 as additional storage to an existing HDInsight cluster is not a supported scenario.
-
-For more information on the access control model, see [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md).
-
-## Access files from the cluster
-
-There are several ways you can access the files in Data Lake Storage from an HDInsight cluster.
-
-* **Using the fully qualified name**. With this approach, you provide the full path to the file that you want to access.
-
- ```
- adl://<data_lake_account>.azuredatalakestore.net/<cluster_root_path>/<file_path>
- ```
-
-* **Using the shortened path format**. With this approach, you replace the path up to the cluster root with:
-
- ```
- adl:///<file path>
- ```
-
-* **Using the relative path**. With this approach, you only provide the relative path to the file that you want to access.
-
- ```
- /<file.path>/
- ```
-
-### Data access examples
-
-Examples are based on an [ssh connection](./hdinsight-hadoop-linux-use-ssh-unix.md) to the head node of the cluster. The examples use all three URI schemes. Replace `DATALAKEACCOUNT` and `CLUSTERNAME` with the relevant values.
-
-#### A few hdfs commands
-
-1. Create a file on local storage.
-
- ```bash
- touch testFile.txt
- ```
-
-1. Create directories on cluster storage.
-
- ```bash
- hdfs dfs -mkdir adl://DATALAKEACCOUNT.azuredatalakestore.net/clusters/CLUSTERNAME/sampledata1/
- hdfs dfs -mkdir adl:///sampledata2/
- hdfs dfs -mkdir /sampledata3/
- ```
-
-1. Copy data from local storage to cluster storage.
-
- ```bash
- hdfs dfs -copyFromLocal testFile.txt adl://DATALAKEACCOUNT.azuredatalakestore.net/clusters/CLUSTERNAME/sampledata1/
- hdfs dfs -copyFromLocal testFile.txt adl:///sampledata2/
- hdfs dfs -copyFromLocal testFile.txt /sampledata3/
- ```
-
-1. List directory contents on cluster storage.
-
- ```bash
- hdfs dfs -ls adl://DATALAKEACCOUNT.azuredatalakestore.net/clusters/CLUSTERNAME/sampledata1/
- hdfs dfs -ls adl:///sampledata2/
- hdfs dfs -ls /sampledata3/
- ```
-
-#### Creating a Hive table
-
-Three file locations are shown for illustrative purposes. For actual execution, use only one of the `LOCATION` entries.
-
-```hql
-DROP TABLE myTable;
-CREATE EXTERNAL TABLE myTable (
- t1 string,
- t2 string,
- t3 string,
- t4 string,
- t5 string,
- t6 string,
- t7 string)
-ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
-STORED AS TEXTFILE
-LOCATION 'adl://DATALAKEACCOUNT.azuredatalakestore.net/clusters/CLUSTERNAME/example/data/';
-LOCATION 'adl:///example/data/';
-LOCATION '/example/data/';
-```
-
-## Identify storage path from Ambari
-
-To identify the complete path to the configured default store, navigate to **HDFS** > **Configs** and enter `fs.defaultFS` in the filter input box.
-
-## Create HDInsight clusters with access to Data Lake Storage Gen1
-
-Use the following links for detailed instructions on how to create HDInsight clusters with access to Data Lake Storage Gen1.
-
-* [Using Portal](./hdinsight-hadoop-provision-linux-clusters.md)
-* [Using PowerShell (with Data Lake Storage Gen1 as default storage)](../data-lake-store/data-lake-store-hdinsight-hadoop-use-powershell-for-default-storage.md)
-* [Using PowerShell (with Data Lake Storage Gen1 as additional storage)](../data-lake-store/data-lake-store-hdinsight-hadoop-use-powershell.md)
-* [Using Azure templates](../data-lake-store/data-lake-store-hdinsight-hadoop-use-resource-manager-template.md)
-
-## Refresh the HDInsight certificate for Data Lake Storage Gen1 access
-
-The following example PowerShell code reads a certificate from a local file or Azure Key Vault, and updates your HDInsight cluster with the new certificate to access Azure Data Lake Storage Gen1. Provide your own HDInsight cluster name, resource group name, subscription ID, `app ID`, local path to the certificate. Type in the password when prompted.
-
-```powershell-interactive
-$clusterName = '<clustername>'
-$resourceGroupName = '<resourcegroupname>'
-$subscriptionId = '01234567-8a6c-43bc-83d3-6b318c6c7305'
-$appId = '01234567-e100-4118-8ba6-c25834f4e938'
-$addNewCertKeyCredential = $true
-$certFilePath = 'C:\localfolder\adls.pfx'
-$KeyVaultName = "my-key-vault-name"
-$KeyVaultSecretName = "my-key-vault-secret-name"
-$certPassword = Read-Host "Enter Certificate Password"
-# certSource
-# 0 - create self signed cert
-# 1 - read cert from file path
-# 2 - read cert from key vault
-$certSource = 0
-
-Login-AzAccount
-Select-AzSubscription -SubscriptionId $subscriptionId
-
-if($certSource -eq 0)
-{
- Write-Host "Generating new SelfSigned certificate"
-
- $cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject "CN=hdinsightAdlsCert" -KeySpec KeyExchange
- $certBytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $certPassword);
- $certString = [System.Convert]::ToBase64String($certBytes)
-}
-elseif($certSource -eq 1)
-{
-
- Write-Host "Reading the cert file from path $certFilePath"
-
- $cert = new-object System.Security.Cryptography.X509Certificates.X509Certificate2($certFilePath, $certPassword)
- $certString = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes($certFilePath))
-}
-elseif($certSource -eq 2)
-{
-
- Write-Host "Reading the cert file from Azure Key Vault $KeyVaultName"
-
- $cert = (Get-AzureKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName)
- $certValue = [System.Convert]::FromBase64String($cert.SecretValueText)
- $certObject = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $certValue, $null,"Exportable, PersistKeySet"
-
- $certBytes = $certObject.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $certPassword.SecretValueText);
-
- $certString =[System.Convert]::ToBase64String($certBytes)
-}
-
-if($addNewCertKeyCredential)
-{
- Write-Host "Creating new KeyCredential for the app"
- $keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
- New-AzADAppCredential -ApplicationId $appId -CertValue $keyValue -EndDate $cert.NotAfter -StartDate $cert.NotBefore
- Write-Host "Waiting for 7 minutes for the permissions to get propagated"
- Start-Sleep -s 420 #7 minutes
-}
-
-Write-Host "Updating the certificate on HDInsight cluster..."
-
-Invoke-AzResourceAction `
- -ResourceGroupName $resourceGroupName `
- -ResourceType 'Microsoft.HDInsight/clusters' `
- -ResourceName $clusterName `
- -ApiVersion '2015-03-01-preview' `
- -Action 'updateclusteridentitycertificate' `
- -Parameters @{ ApplicationId = $appId; Certificate = $certString; CertificatePassword = $certPassword.ToString() } `
- -Force
-```
-
-## Next steps
-
-In this article, you learned how to use HDFS-compatible Azure Data Lake Storage Gen1 with HDInsight. This storage allows you to build adaptable, long-term, archiving data acquisition solutions. And use HDInsight to unlock the information inside the stored structured and unstructured data.
-
-For more information, see:
-
-* [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md)
-* [Create an HDInsight cluster to use Data Lake Storage Gen1 using the Azure PowerShell](../data-lake-store/data-lake-store-hdinsight-hadoop-use-powershell.md)
-* [Upload data to HDInsight](hdinsight-upload-data.md)
-* [Use Azure Blob storage Shared Access Signatures to restrict access to data with HDInsight](hdinsight-storage-sharedaccesssignature-permissions.md)
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-azure-cli.md
Previously updated : 08/21/2023 Last updated : 07/24/2024 # Create a cluster with Data Lake Storage Gen2 using Azure CLI
You can [download a sample template file](https://github.com/Azure-Samples/hdins
The code snippet below does the following initial steps:

1. Logs in to your Azure account.
-1. Sets the active subscription where the create operations will be done.
+1. Sets the active subscription where the create operations are done.
1. Creates a new resource group for the new deployment activities.
1. Creates a user-assigned managed identity.
1. Adds an extension to the Azure CLI to use features for Data Lake Storage Gen2.
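A hedged sketch of those initial steps is shown below. The subscription ID, names, and locations are placeholders, and the extension name is an assumption rather than something stated in this article.

```bash
# Hedged sketch of the initial steps listed above; all values are placeholders.
az login
az account set --subscription "00000000-0000-0000-0000-000000000000"
az group create --name MyResourceGroup --location eastus
az identity create --name MyHdiManagedIdentity --resource-group MyResourceGroup
az extension add --name storage-preview   # assumed extension for Data Lake Storage Gen2 features
```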
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md
Last updated 05/10/2024
# Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters
-[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) is a cloud storage service dedicated to big data analytics, built on [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md). Data Lake Storage Gen2 combines the capabilities of Azure Blob storage and Azure Data Lake Storage Gen1. The resulting service offers features from Azure Data Lake Storage Gen1 including: file system semantics, directory-level and file-level security, and adaptability. Along with the low-cost, tiered storage, high availability, and disaster-recovery capabilities from Azure Blob storage.
+[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) is a cloud storage service dedicated to big data analytics, built on [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md). Data Lake Storage Gen2 offers file system semantics, directory-level and file-level security, and adaptability, along with the low-cost, tiered storage, high availability, and disaster-recovery capabilities of Azure Blob storage.
For a full comparison of cluster creation options using Data Lake Storage Gen2, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
Use the following links for detailed instructions on how to create HDInsight clu
## Access control for Data Lake Storage Gen2 in HDInsight
-### What kinds of permissions does Data Lake Storage Gen2 support?
+### What kinds of permissions does Data Lake Storage Gen2 support?
-Data Lake Storage Gen2 uses an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). Data Lake Storage Gen1 supports access control lists only for controlling access to data.
+Data Lake Storage Gen2 uses an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
Azure RBAC uses role assignments to effectively apply sets of permissions to users, groups, and service principals for Azure resources. Typically, those Azure resources are constrained to top-level resources (for example, Azure Blob storage accounts). For Azure Blob storage, and also Data Lake Storage Gen2, this mechanism has been extended to the file system resource.
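As a hedged illustration of the ACL side of that model, a POSIX-style ACL can be applied to the root of a Data Lake Storage Gen2 file system with the Azure CLI; the account and file system names below are placeholders.

```bash
# Hedged sketch: set a POSIX-like ACL on the root of a Gen2 file system.
# Account and file system names are placeholders.
az storage fs access set \
  --account-name mystorageaccount \
  --file-system mycontainer \
  --path / \
  --acl "user::rwx,group::r-x,other::---" \
  --auth-mode login
```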
hdinsight Hdinsight Rotate Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-rotate-storage-keys.md
Use [Script Action](hdinsight-hadoop-customize-cluster-linux.md#script-action-to
The preceding script directly updates the access key on the cluster side only and doesn't renew a copy on the HDInsight Resource provider side. Therefore, the script action hosted in the storage account will fail after the access key is rotated. Workaround:
-Use external storage account via [SAS URIs](hdinsight-storage-sharedaccesssignature-permissions.md) for script actions or make the scripts publicly accessible.
+
+1. Use/create another storage account in the same region.
+1. Upload the script you want to run to this storage account.
+1. Create a SAS URI for the script with read access (see the CLI sketch after the screenshot below).
+1. If your cluster is in your own virtual network, make sure your virtual network allows access to the storage account file/script.
+1. Use this SAS URI to run the script action.
+
+ :::image type="content" source="./media/hdinsight-rotate-storage-keys/script-action.png" alt-text="Screenshot showing script action." border="true" lightbox="./media/hdinsight-rotate-storage-keys/script-action.png":::
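A read-only SAS URL for the uploaded script could be generated along these lines. This is a hedged sketch: the account, container, blob, key, and expiry values are placeholders.

```bash
# Hedged sketch: generate a read-only SAS URL for the uploaded script,
# then use that URL as the script action URI. All values are placeholders.
az storage blob generate-sas \
  --account-name myscriptstorage \
  --account-key "<storage-key>" \
  --container-name scripts \
  --name my-script.sh \
  --permissions r \
  --expiry 2030-01-01T00:00Z \
  --https-only \
  --full-uri \
  --output tsv
```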
## Next steps
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upload-data.md
Last updated 05/10/2024
# Upload data for Apache Hadoop jobs in HDInsight
-HDInsight provides a Hadoop distributed file system (HDFS) over Azure Storage, and Azure Data Lake Storage. This storage includes Gen1 and Gen2. Azure Storage and Data Lake Storage Gen1 and Gen2 are designed as HDFS extensions. They enable the full set of components in the Hadoop environment to operate directly on the data it manages. Azure Storage, Data Lake Storage Gen1, and Gen2 are distinct file systems. The systems are optimized for storage of data and computations on that data. For information about the benefits of using Azure Storage, see [Use Azure Storage with HDInsight](hdinsight-hadoop-use-blob-storage.md). See also, [Use Data Lake Storage Gen1 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen1.md), and [Use Data Lake Storage Gen2 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen2.md).
+HDInsight provides a Hadoop distributed file system (HDFS) over Azure Storage and Azure Data Lake Storage Gen2. Azure Storage and Data Lake Storage Gen2 are designed as HDFS extensions. They enable the full set of components in the Hadoop environment to operate directly on the data they manage. Azure Storage and Data Lake Storage Gen2 are distinct file systems that are optimized for storage of data and computations on that data. For information about the benefits of using Azure Storage, see [Use Azure Storage with HDInsight](hdinsight-hadoop-use-blob-storage.md). See also [Use Data Lake Storage Gen2 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen2.md).
## Prerequisites
Note the following requirements before you begin:
* An Azure HDInsight cluster. For instructions, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md). * Knowledge of the following articles: * [Use Azure Storage with HDInsight](hdinsight-hadoop-use-blob-storage.md)
- * [Use Data Lake Storage Gen1 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen1.md)
* [Use Data Lake Storage Gen2 with HDInsight](hdinsight-hadoop-use-data-lake-storage-gen2.md) ## Upload data to Azure Storage
Because the default file system for HDInsight is in Azure Storage, /example/data
`wasbs:///example/data/data.txt`
-or
+Or
`wasbs://<ContainerName>@<StorageAccountName>.blob.core.windows.net/example/data/davinci.txt`
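For instance, from an SSH session on the cluster, both forms can be used directly with HDFS commands. This is a hedged sketch with placeholder container and account names.

```bash
# Hedged sketch: upload a local file to the cluster's default storage and
# list it with both the shortened and fully qualified wasbs forms.
hdfs dfs -put data.txt wasbs:///example/data/data.txt
hdfs dfs -ls wasbs://mycontainer@mystorageaccount.blob.core.windows.net/example/data/
```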
The Azure Data Factory service is a fully managed service for composing data: st
|Storage type|Documentation|
|-|-|
|Azure Blob storage|[Copy data to or from Azure Blob storage by using Azure Data Factory](../data-factory/connector-azure-blob-storage.md)|
-|Azure Data Lake Storage Gen1|[Copy data to or from Azure Data Lake Storage Gen1 by using Azure Data Factory](../data-factory/connector-azure-data-lake-store.md)|
|Azure Data Lake Storage Gen2 |[Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../data-factory/load-azure-data-lake-storage-gen2.md)|

### Apache Sqoop
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
To convert external table (non-ACID) to Managed (ACID) table,
**Scenario 1**
-Consider table rt is external table (non-ACID). If the table is non-ORC table,
+Consider that table `rt` is an external table (non-ACID). If the table is a non-ORC table,
``` alter table rt set TBLPROPERTIES ('transactional'='true');
ERROR:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. work.rt can't be declared transactional because it's an external table (state=08S01,code=1) ```
-This error is occurring because the table rt is external table and you can't convert external table to ACID.
+This error occurs because the table `rt` is an external table, and you can't convert an external table to ACID.
**Scenario 3**
To fix this issue, you can use the following option.
* HDInsight 3.6 by default doesn't support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
-* If using [Azure Data Lake Storage Gen1](../overview-data-lake-storage-gen1.md), Hive table locations are likely dependent on the cluster's HDFS configurations. Run the following script action to make these locations portable to other clusters. See [Script action to a running cluster](../hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster).
- |Property | Value | ||| |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
In certain situations when running a Hive query, you might receive `java.lang.Cl
``` The update command is to update the details manually in the backend DB and the alter command is used to alter the table with the new SerDe class from beeline or Hive.
-### Hive Backend DB schema compare Script
+### Hive backend DB schema compare script
You can run the following script after completing the migration. There's a chance that a few columns are missing in the backend DB, which causes query failures. If the schema upgrade didn't happen properly, you might hit an invalid column name issue. The following script fetches the column names and datatypes from the customer backend DB and provides output if any column is missing or has an incorrect datatype.
-The following path contains the schemacompare_final.py and test.csv file. The script is present in "schemacompare_final.py" file and the file "test.csv" contains all the column name and the datatype for all the tables, which should be present in the hive backend DB.
+The following path contains the schemacompare_final.py and test.csv files. The script is in the `schemacompare_final.py` file, and the `test.csv` file contains all the column names and datatypes for all the tables that should be present in the Hive backend DB.
https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/schemacompare_final.py
Download these two files from the link. And copy these files to one of the head
**Steps to execute the script:**
-Create a directory called "schemacompare" under "/tmp" directory.
+Create a directory called `schemacompare` under the "/tmp" directory.
Put the schemacompare_final.py and test.csv files into the folder "/tmp/schemacompare". Run `ls -ltrh /tmp/schemacompare/` and verify that the files are present.
-To execute the Python script, use the command "python schemacompare_final.py". This script starts executing the script and it takes less than five minutes to complete. The above script automatically connects to your backend DB and fetches the details from each and every table, which Hive uses and update the details in the new csv file called "return.csv". After creating the file return.csv, it compares the data with the file "test.csv" and prints the column name or datatype if there's anything missing under the tablename.
+To execute the Python script, use the command `python schemacompare_final.py`. The script takes less than five minutes to complete. It automatically connects to your backend DB, fetches the details from every table that Hive uses, and writes them to a new CSV file called `return.csv`. After the script creates return.csv, it compares the data with the `test.csv` file and prints the column name or datatype if anything is missing under the table name.
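A hedged end-to-end sketch of those steps, run on a head node, could look like the following. The script URL is the one given in this article; the test.csv URL is an assumption based on the same path.

```bash
# Hedged sketch of the steps above, run on a head node.
mkdir -p /tmp/schemacompare
cd /tmp/schemacompare
wget https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/schemacompare_final.py
wget https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/test.csv   # assumed location
ls -ltrh /tmp/schemacompare/
python schemacompare_final.py
```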
After you execute the script, you can see the following lines, which indicate that the details are fetched for the tables and that the script is in progress.
Tune Metastore to reduce their CPU usage.
1. New value: `false` 1. Optimize the partition repair feature
- 1. Disable partition repair - This feature is used to synchronize the partitions of Hive tables in storage location with Hive metastore. You may disable this feature if ΓÇ£msck repairΓÇ¥ is used after the data ingestion.
+ 1. Disable partition repair - This feature is used to synchronize the partitions of Hive tables in storage location with Hive metastore. You may disable this feature if `msck repair` is used after the data ingestion.
    1. To disable the feature, **add "discover.partitions=false"** under table properties using ALTER TABLE. OR (if the feature can't be disabled)
    1. Increase the partition repair frequency.
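A hedged example of that ALTER TABLE form, run through Beeline, is sketched below; the JDBC URL and table name are placeholders.

```bash
# Hedged sketch: disable partition discovery/repair for one table via Beeline.
# The JDBC URL and table name are placeholders.
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' \
  -e "ALTER TABLE mydb.mytable SET TBLPROPERTIES ('discover.partitions'='false');"
```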
hdinsight Hive Default Metastore Export Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-default-metastore-export-import.md
This article shows how to migrate metadata from a [default metastore DB](../hdin
## Why migrate to external metastore DB
-* Default metastore DB is limited to basic SKU and cannot handle production scale workloads.
+* Default metastore DB is limited to basic SKU and can't handle production scale workloads.
* External metastore DB enables customer to horizontally scale Hive compute resources by adding new HDInsight clusters sharing the same metastore DB.
-* For HDInsight 3.6 to 4.0 migration, it is mandatory to migrate metadata to external metastore DB before upgrading the Hive schema version. See [migrating workloads from HDInsight 3.6 to HDInsight 4.0](./apache-hive-migrate-workloads.md).
+* For HDInsight 3.6 to 4.0 migration, it's mandatory to migrate metadata to external metastore DB before upgrading the Hive schema version. See [migrating workloads from HDInsight 3.6 to HDInsight 4.0](./apache-hive-migrate-workloads.md).
-Because the default metastore DB has limited compute capacity, we recommend low utilization from other jobs on the cluster while migrating metadata.
+The default metastore DB has limited compute capacity, so we recommend low utilization from other jobs on the cluster while migrating metadata.
Source and target DBs must use the same HDInsight version and the same Storage Accounts. If upgrading HDInsight versions from 3.6 to 4.0, complete the steps in this article first. Then, follow the official upgrade steps [here](./apache-hive-migrate-workloads.md).

## Prerequisites
-If using [Azure Data Lake Storage Gen1](../overview-data-lake-storage-gen1.md), Hive table locations are likely dependent on the cluster's HDFS configurations for Azure Data Lake Storage Gen1. Run the following script action to make these locations portable to other clusters. See [Script action to a running cluster](../hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster).
+Hive table locations might depend on the cluster's HDFS configurations. Run the following script action to make these locations portable to other clusters. See [Script action to a running cluster](../hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster).
The action is similar to replacing symlinks with their full paths.

|Property | Value |
|---|---|
|Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
-|Node type(s)|Head|
+|Node types|Head|
|Parameters|""|

## Migrate with Export/Import using sqlpackage
An HDInsight cluster created only after 2020-10-15 supports SQL Export/Import fo
sudo python hive_metastore_tool.py --sqlpackagefile $SQLPACKAGE_FILE --targetfile $TARGET_FILE ```
-3. Save the BACPAC file. Below is an option.
-
+3. Save the BACPAC file.
    ```bash
    hdfs dfs -mkdir -p /bacpacs
    hdfs dfs -put $TARGET_FILE /bacpacs/
    ```
An HDInsight cluster created only after 2020-10-15 supports SQL Export/Import fo
## Migrate using Hive script
-Clusters created before 2020-10-15 do not support export/import of the default metastore DB.
+Clusters created before 2020-10-15 don't support export/import of the default metastore DB.
For such clusters, follow the guide [Copy Hive tables across Storage Accounts](./hive-migration-across-storage-accounts.md), using a second cluster with an [external Hive metastore DB](../hdinsight-use-external-metadata-stores.md#select-a-custom-metastore-during-cluster-creation). The second cluster can use the same storage account but must use a new default filesystem. ### Option to "shallow" copy
-Storage consumption would double when tables are "deep" copied using the above guide. You need to manually clean the data in the source storage container.
-We can, instead, "shallow" copy the tables if they are non-transactional. All Hive tables in HDInsight 3.6 are non-transactional by default, but only external tables are non-transactional in HDInsight 4.0. Transactional tables must be deep copied. Follow these steps to shallow copy non-transactional tables:
+Storage consumption would double when tables are "deep" copied using the guide. You need to manually clean the data in the source storage container.
+We can, instead, "shallow" copy the tables if they're nontransactional. All Hive tables in HDInsight 3.6 are nontransactional by default, but only external tables are nontransactional in HDInsight 4.0. Transactional tables must be deep copied. Follow these steps to shallow copy nontransactional tables:
1. Execute script [hive-ddls.sh](https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-ddls.sh) on the source cluster's primary headnode to generate the DDL for every Hive table.
2. The DDL is written to a local Hive script named `/tmp/hdi_hive_ddls.hql`. Execute this on the target cluster that uses an external Hive metastore DB.
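After copying `/tmp/hdi_hive_ddls.hql` to the target cluster's head node, the generated DDL could be replayed through Beeline, as in this hedged sketch; the JDBC URL is a placeholder.

```bash
# Hedged sketch: replay the generated DDL on the target cluster via Beeline.
# The JDBC URL is a placeholder.
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' \
  -f /tmp/hdi_hive_ddls.hql
```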
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-resource-manager-template.md
Two Azure resources are defined in the template:
## Review deployed resources
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
## Clean up resources
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 09/15/2023 Last updated : 07/24/2024 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
Two Azure resources are defined in the template:
## Review deployed resources
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account, an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Blob Storage](../hdinsight-hadoop-use-blob-storage.md) account or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
## Get the Apache Zookeeper and Broker host information
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/availability.md) document.
- Kafka isn't aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
+ Kafka isn't aware of Azure fault domains. When you create partition replicas for topics, it may not distribute replicas properly for high availability.
To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be run from an SSH connection to the head node of your Kafka cluster.
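Opening that SSH connection typically looks like the following hedged sketch; the SSH user and cluster names are placeholders.

```bash
# Hedged sketch: connect to the Kafka cluster's head node before running
# the rebalance tool. The user and cluster names are placeholders.
ssh sshuser@mykafkacluster-ssh.azurehdinsight.net
```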
hdinsight Overview Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-azure-storage.md
Certain MapReduce jobs and packages might create intermediate results that you w
- [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
- [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-- [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
- [Use Azure storage with Azure HDInsight clusters](hdinsight-hadoop-use-blob-storage.md)
- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight Overview Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-data-lake-storage-gen1.md
- Title: Azure Data Lake Storage Gen1 overview in HDInsight
-description: Overview of Data Lake Storage Gen1 in HDInsight.
-- Previously updated : 06/13/2024--
-# Azure Data Lake Storage Gen1 overview in HDInsight
-
-Azure Data Lake Storage Gen1 is an enterprise-wide hyperscale repository for big data analytic workloads. Using Azure Data Lake, you can capture data of any size, type, and ingestion speed. And in one place for operational and exploratory analytics.
-
-Access Data Lake Storage Gen1 from Hadoop (available with an HDInsight cluster) by using the WebHDFS-compatible REST APIs. Data Lake Storage Gen1 is designed to enable analytics on the stored data and is tuned for performance in data analytics scenarios. Gen1 includes the capabilities that are essential for real-world enterprise use cases. These capabilities include security, manageability, adaptability, reliability, and availability.
-
-For more information on Azure Data Lake Storage Gen1, see the detailed [Overview of Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md).
-
-The key capabilities of Data Lake Storage Gen1 include the following.
-
-## Compatibility with Hadoop
-
-Data Lake Storage Gen1 is an Apache Hadoop file system compatible with HDFS and Hadoop environment. HDInsight applications or services that use the WebHDFS API can easily integrate with Data Lake Storage Gen1. Data Lake Storage Gen1 also exposes a WebHDFS-compatible REST interface for applications.
-
-Data stored in Data Lake Storage Gen1 can be easily analyzed using Hadoop analytic frameworks. Frameworks such as MapReduce or Hive. Azure HDInsight clusters can be provisioned and configured to directly access data stored in Data Lake Storage Gen1.
-
-## Unlimited storage, petabyte files
-
-Data Lake Storage Gen1 provides unlimited storage and is suitable for storing different kinds of data for analytics. It doesn't impose limits on account sizes, or file sizes. Or the amount of data that can be stored in a data lake. Individual files range in size from kilobytes to petabytes, making Data Lake Storage Gen1 a great choice to store any type of data. Data is stored durably by making multiple copies. And there are no limits on how long the data can be stored in the data lake.
-
-## Performance tuning for big data analytics
-
-Data Lake Storage Gen1 is designed for analytic systems. Systems that require massive throughput to query and analyze large amounts of data. The data lake spreads parts of a file over several individual storage servers. When you're analyzing data, this setup improves the read throughput when the file is read in parallel.
-
-## Readiness for enterprise: Highly available and secure
-
-Data Lake Storage Gen1 provides industry-standard availability and reliability. Data assets are stored durably: redundant copies guard against unexpected failures. Enterprises can use Data Lake Storage Gen1 in their solutions as an important part of their existing data platform.
-
-Data Lake Storage Gen1 also provides enterprise-grade security for stored data. For more information, see [Securing data in Azure Data Lake Storage Gen1](#data-security-in-data-lake-storage-gen1).
-
-## Flexible data structures
-
-Data Lake Storage Gen1 can store any data in its native format, as is, without requiring prior transformations. Data Lake Storage Gen1 doesn't require a schema to be defined before the data is loaded. The individual analytic framework interprets the data and defines a schema at the time of the analysis. Data Lake Storage Gen1 can handle structured data. And semistructured, and unstructured data.
-
-Data Lake Storage Gen1 containers for data are essentially folders and files. You operate on the stored data by using SDKs, the Azure portal, and Azure PowerShell. Data put into the store with these interfaces and containers, can store any data type. Data Lake Storage Gen1 doesn't do any special handling of data based on the type of data.
-
-## Data security in Data Lake Storage Gen1
-
-Data Lake Storage Gen1 uses Microsoft Entra ID for authentication and uses access control lists (ACLs) to manage access to your data.
-
-| **Feature** | **Description** |
-| | |
-| Authentication |Data Lake Storage Gen1 integrates with Microsoft Entra ID for identity and access management for all the data stored in Data Lake Storage Gen1. Because of the integration, Data Lake Storage Gen1 benefits from all Microsoft Entra features. These features include: multifactor authentication, Conditional Access, and Azure role-based access control. Also, application usage monitoring, security monitoring and alerting, and so on. Data Lake Storage Gen1 supports the OAuth 2.0 protocol for authentication within the REST interface. See [Authentication within Azure Data Lake Storage Gen1 using Microsoft Entra ID](../data-lake-store/data-lakes-store-authentication-using-azure-active-directory.md)|
-| Access control |Data Lake Storage Gen1 provides access control by supporting POSIX-style permissions that are exposed by the WebHDFS protocol. ACLs can be enabled on the root folder, on subfolders, and on individual files. For more information on how ACLs work in the context of Data Lake Storage Gen1, see [Access control in Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md). |
-| Encryption |Data Lake Storage Gen1 also provides encryption for data that is stored in the account. You specify the encryption settings while creating a Data Lake Storage Gen1 account. You can choose to have your data encrypted or opt for no encryption. For more information, see [Encryption in Data Lake Storage Gen1](../data-lake-store/data-lake-store-encryption.md). For instructions on how to provide an encryption-related configuration, see [Get started with Azure Data Lake Storage Gen1 using the Azure portal](../data-lake-store/data-lake-store-get-started-portal.md). |
-
-To learn more about securing data in Data Lake Storage Gen1, see [Securing data stored in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-secure-data.md).
-
-## Applications that are compatible with Data Lake Storage Gen1
-
-Data Lake Storage Gen1 is compatible with most open-source components in the Hadoop environment. It also integrates nicely with other Azure services. Follow the links below to learn more about how Data Lake Storage Gen1 can be used both with open-source components and other Azure services.
-
-* See [Open-source big data applications that work with Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-compatible-oss-other-applications.md).
-* See [Integrating Azure Data Lake Storage Gen1 with other Azure services](../data-lake-store/data-lake-store-integrate-with-other-services.md) to understand how to use Data Lake Storage Gen1 with other Azure services to enable a wider range of scenarios.
-* See [Using Azure Data Lake Storage Gen1 for big data requirements](../data-lake-store/data-lake-store-data-scenarios.md).
-
-## Data Lake Storage Gen1 file system (adl://)
-
-In Hadoop environments, you can access Data Lake Storage Gen1 through the new file system, the AzureDataLakeFilesystem (adl://). The performance of applications and services that use `adl://` can be optimized in ways that aren't currently available in WebHDFS. As a result, you get the flexibility to either avail the best performance by using the recommended adl://. Or maintain existing code by continuing to use the WebHDFS API directly. Azure HDInsight takes full advantage of the AzureDataLakeFilesystem to provide the best performance on Data Lake Storage Gen1.
-
-Access your data in Data Lake Storage Gen1 by using the following URI:
-
-`adl://<data_lake_storage_gen1_name>.azuredatalakestore.net`
-
-For more information on how to access the data in Data Lake Storage Gen1, see [Actions available on the stored data](../data-lake-store/data-lake-store-get-started-portal.md#properties).
-
-## Next steps
-
-* [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)
-* [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-* [Azure Data Lake Storage Gen2 overview](./overview-data-lake-storage-gen2.md)
hdinsight Overview Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/overview-data-lake-storage-gen2.md
For more information, see [Use the Azure Data Lake Storage Gen2 URI](../storage/
* [Introduction to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) * [Introduction to Azure Storage](../storage/common/storage-introduction.md)
-* [Azure Data Lake Storage Gen1 overview](./overview-data-lake-storage-gen1.md)
hdinsight Apache Spark Jupyter Notebook Use External Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
In this article, you'll learn how to use the [spark-csv](https://search.maven.or
* Familiarity with using Jupyter Notebooks with Spark on HDInsight. For more information, see [Load data and run queries with Apache Spark on HDInsight](./apache-spark-load-data-run-query.md).
-* The [URI scheme](../hdinsight-hadoop-linux-information.md#URI-and-scheme) for your clusters primary storage. This would be `wasb://` for Azure Storage, `abfs://` for Azure Data Lake Storage Gen2 or `adl://` for Azure Data Lake Storage Gen1. If secure transfer is enabled for Azure Storage or Data Lake Storage Gen2, the URI would be `wasbs://` or `abfss://`, respectively See also, [secure transfer](../../storage/common/storage-require-secure-transfer.md).
+* The [URI scheme](../hdinsight-hadoop-linux-information.md#URI-and-scheme) for your cluster's primary storage. This would be `wasb://` for Azure Storage or `abfs://` for Azure Data Lake Storage Gen2. If secure transfer is enabled for Azure Storage or Data Lake Storage Gen2, the URI would be `wasbs://` or `abfss://`, respectively (an example path follows below). See also [secure transfer](../../storage/common/storage-require-secure-transfer.md).
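  For illustration only, a primary-storage path that uses the `abfss://` scheme might look like the following sketch; the container and storage account names are placeholders rather than values from this article:

  ```bash
  # Hypothetical example: list sample files on the cluster's primary storage (Data Lake Storage Gen2).
  hdfs dfs -ls "abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/HdiSamples/"
  ```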
## Use external packages with Jupyter Notebooks
hdinsight Apache Spark Jupyter Spark Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md
If you run into an issue with creating HDInsight clusters, it could be that you
## Review deployed resources
-Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Storage](../hdinsight-hadoop-use-blob-storage.md), an [Azure Data Lake Storage Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account dependency. It's referred as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+Once the cluster is created, you'll receive a **Deployment succeeded** notification with a **Go to resource** link. Your Resource group page will list your new HDInsight cluster and the default storage associated with the cluster. Each cluster has an [Azure Storage](../hdinsight-hadoop-use-blob-storage.md) or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency, which is referred to as the default storage account. The HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
## Create a Jupyter Notebook file
hdinsight Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-overview.md
Apache Spark is a parallel processing framework that supports in-memory processi
* [Spark Activities in Azure Data Factory](../../data-factory/transform-data-using-spark.md) allow you to use Spark analytics in your data pipeline, using on-demand or pre-existing Spark clusters.
-With Apache Spark in Azure HDInsight, you can store and process your data all within Azure. Spark clusters in HDInsight are compatible with [Azure Blob storage](../../storage/common/storage-introduction.md), [Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md), allowing you to apply Spark processing on your existing data stores.
+With Apache Spark in Azure HDInsight, you can store and process your data all within Azure. Spark clusters in HDInsight are compatible with [Azure Blob storage](../../storage/common/storage-introduction.md) or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md), allowing you to apply Spark processing on your existing data stores.
:::image type="content" source="./media/apache-spark-overview/hdinsight-spark-overview-inline.svg" alt-text="Spark: a unified framework." lightbox="./media/apache-spark-overview/hdinsight-spark-overview-large.svg":::
Spark clusters in HDInsight offer a fully managed Spark service. Benefits of cre
| Ease creation |You can create a new Spark cluster in HDInsight in minutes using the Azure portal, Azure PowerShell, or the HDInsight .NET SDK. See [Get started with Apache Spark cluster in HDInsight](apache-spark-jupyter-spark-sql-use-portal.md). | | Ease of use |Spark cluster in HDInsight include Jupyter Notebooks and Apache Zeppelin Notebooks. You can use these notebooks for interactive data processing and visualization. See [Use Apache Zeppelin notebooks with Apache Spark](apache-spark-zeppelin-notebook.md) and [Load data and run queries on an Apache Spark cluster](apache-spark-load-data-run-query.md).| | REST APIs |Spark clusters in HDInsight include [Apache Livy](https://github.com/clouder).|
-| Support for Azure Storage | Spark clusters in HDInsight can use Azure Data Lake Storage Gen1/Gen2 as both the primary storage or additional storage. For more information on Data Lake Storage Gen1, see [Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-overview.md). For more information on Data Lake Storage Gen2, see [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).|
+| Support for Azure Storage | Spark clusters in HDInsight can use Azure Data Lake Storage Gen2 as either primary or additional storage. For more information on Data Lake Storage Gen2, see [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).|
| Integration with Azure services |Spark cluster in HDInsight comes with a connector to Azure Event Hubs. You can build streaming applications using the Event Hubs. Including Apache Kafka, which is already available as part of Spark. |
-| Integration with third-party IDEs | HDInsight provides several IDE plugins that are useful to create and submit applications to an HDInsight Spark cluster. For more information, see [Use Azure Toolkit for IntelliJ IDEA](apache-spark-intellij-tool-plugin.md), [Use Spark & Hive Tools for VSCode](../hdinsight-for-vscode.md), and [Use Azure Toolkit for Eclipse](apache-spark-eclipse-tool-plugin.md).|
+| Integration with third-party IDEs | HDInsight provides several IDE plugins that are useful to create and submit applications to an HDInsight Spark cluster. For more information, see [Use Azure Toolkit for IntelliJ IDEA](apache-spark-intellij-tool-plugin.md), [Use Spark & Hive Tools for VS Code](../hdinsight-for-vscode.md), and [Use Azure Toolkit for Eclipse](apache-spark-eclipse-tool-plugin.md).|
+ | Concurrent Queries |Spark clusters in HDInsight support concurrent queries. This capability enables multiple queries from one user or multiple queries from various users and applications to share the same cluster resources. | | Caching on SSDs |You can choose to cache data either in memory or in SSDs attached to the cluster nodes. Caching in memory provides the best query performance but could be expensive. Caching in SSDs provides a great option for improving query performance without the need to create a cluster of a size that is required to fit the entire dataset in memory. See [Improve performance of Apache Spark workloads using Azure HDInsight IO Cache](apache-spark-improve-performance-iocache.md). | | Integration with BI Tools |Spark clusters in HDInsight provide connectors for BI tools such as Power BI for data analytics. | | Pre-loaded Anaconda libraries |Spark clusters in HDInsight come with Anaconda libraries pre-installed. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, and so on. |
-| Adaptability | HDInsight allows you to change the number of cluster nodes dynamically with the Autoscale feature. See [Automatically scale Azure HDInsight clusters](../hdinsight-autoscale-clusters.md). Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Blob storage, [Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-overview.md), or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md). |
+| Adaptability | HDInsight allows you to change the number of cluster nodes dynamically with the Autoscale feature. See [Automatically scale Azure HDInsight clusters](../hdinsight-autoscale-clusters.md). Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Blob storage or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md). |
| SLA |Spark clusters in HDInsight come with 24/7 support and an SLA of 99.9% up-time. | Apache Spark clusters in HDInsight include the following components that are available on the clusters by default.
The SparkContext can connect to several types of cluster managers, which give re
The SparkContext runs the user's main function and executes the various parallel operations on the worker nodes. Then, the SparkContext collects the results of the operations. The worker nodes read and write data from and to the Hadoop distributed file system. The worker nodes also cache transformed data in-memory as Resilient Distributed Datasets (RDDs).
-The SparkContext connects to the Spark master and is responsible for converting an application to a directed graph (DAG) of individual tasks. Tasks that get executed within an executor process on the worker nodes. Each application gets its own executor processes. Which stay up during the whole application and run tasks in multiple threads.
+The SparkContext connects to the Spark master and is responsible for converting an application to a directed acyclic graph (DAG) of individual tasks that get executed within executor processes on the worker nodes. Each application gets its own executor processes, which stay up during the whole application and run tasks in multiple threads.
## Spark in HDInsight use cases
Spark clusters in HDInsight enable the following key scenarios:
### Interactive data analysis and BI
-Apache Spark in HDInsight stores data in Azure Blob Storage, Azure Data Lake Gen1, or Azure Data Lake Storage Gen2. Business experts and key decision makers can analyze and build reports over that data. And use Microsoft Power BI to build interactive reports from the analyzed data. Analysts can start from unstructured/semi structured data in cluster storage, define a schema for the data using notebooks, and then build data models using Microsoft Power BI. Spark clusters in HDInsight also support many third-party BI tools. Such as Tableau, making it easier for data analysts, business experts, and key decision makers.
+Apache Spark in HDInsight stores data in Azure Blob Storage and Azure Data Lake Storage Gen2. Business experts and key decision makers can analyze and build reports over that data, and use Microsoft Power BI to build interactive reports from the analyzed data. Analysts can start from unstructured/semistructured data in cluster storage, define a schema for the data using notebooks, and then build data models using Microsoft Power BI. Spark clusters in HDInsight also support many third-party BI tools, such as Tableau, making it easier for data analysts, business experts, and key decision makers.
* [Tutorial: Visualize Spark data using Power BI](apache-spark-use-bi-tools.md)
hdinsight Apache Spark Use With Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md
- Title: Analyze Azure Data Lake Storage Gen1 with HDInsight Apache Spark
-description: Run Apache Spark jobs to analyze data stored in Azure Data Lake Storage Gen1
--- Previously updated : 06/15/2024--
-# Use HDInsight Spark cluster to analyze data in Data Lake Storage Gen1
-
-In this article, you use [Jupyter Notebook](https://jupyter.org/) available with HDInsight Spark clusters to run a job that reads data from a Data Lake Storage account.
-
-## Prerequisites
-
-* Azure Data Lake Storage Gen1 account. Follow the instructions at [Get started with Azure Data Lake Storage Gen1 using the Azure portal](../../data-lake-store/data-lake-store-get-started-portal.md).
-
-* Azure HDInsight Spark cluster with Data Lake Storage Gen1 as storage. Follow the instructions at [Quickstart: Set up clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md).
-
-## Prepare the data
-
-> [!NOTE]
-> You do not need to perform this step if you have created the HDInsight cluster with Data Lake Storage as default storage. The cluster creation process adds some sample data in the Data Lake Storage account that you specify while creating the cluster. Skip to the section Use HDInsight Spark cluster with Data Lake Storage.
-
-If you created an HDInsight cluster with Data Lake Storage as additional storage and Azure Storage Blob as default storage, you should first copy over some sample data to the Data Lake Storage account. You can use the sample data from the Azure Storage Blob associated with the HDInsight cluster.
-
-1. Open a command prompt and navigate to the directory where AdlCopy is installed, typically `%HOMEPATH%\Documents\adlcopy`.
-
-2. Run the following command to copy a specific blob from the source container to Data Lake Storage:
-
- ```scala
- AdlCopy /source https://<source_account>.blob.core.windows.net/<source_container>/<blob name> /dest swebhdfs://<dest_adls_account>.azuredatalakestore.net/<dest_folder>/ /sourcekey <storage_account_key_for_storage_container>
- ```
-
- Copy the **HVAC.csv** sample data file at **/HdiSamples/HdiSamples/SensorSampleData/hvac/** to the Azure Data Lake Storage account. The code snippet should look like:
-
- ```scala
- AdlCopy /Source https://mydatastore.blob.core.windows.net/mysparkcluster/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv /dest swebhdfs://mydatalakestore.azuredatalakestore.net/hvac/ /sourcekey uJUfvD6cEvhfLoBae2yyQf8t9/BpbWZ4XoYj4kAS5Jf40pZaMNf0q6a8yqTxktwVgRED4vPHeh/50iS9atS5LQ==
- ```
-
- > [!WARNING]
- > Make sure that the file and path names use the proper capitalization.
-
-3. You are prompted to enter the credentials for the Azure subscription under which you have your Data Lake Storage account. You see an output similar to the following snippet:
-
- ```output
- Initializing Copy.
- Copy Started.
- 100% data copied.
- Copy Completed. 1 file copied.
- ```
-
- The data file (**HVAC.csv**) will be copied under a folder **/hvac** in the Data Lake Storage account.
-
-## Use an HDInsight Spark cluster with Data Lake Storage Gen1
-
-1. From the [Azure portal](https://portal.azure.com/), from the startboard, click the tile for your Apache Spark cluster (if you pinned it to the startboard). You can also navigate to your cluster under **Browse All** > **HDInsight Clusters**.
-
-2. From the Spark cluster blade, click **Quick Links**, and then from the **Cluster Dashboard** blade, click **Jupyter Notebook**. If prompted, enter the admin credentials for the cluster.
-
- > [!NOTE]
- > You may also reach the Jupyter Notebook for your cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
- >
- > `https://CLUSTERNAME.azurehdinsight.net/jupyter`
-
-3. Create a new notebook. Click **New**, and then click **PySpark**.
-
- :::image type="content" source="./media/apache-spark-use-with-data-lake-store/hdinsight-create-jupyter-notebook.png " alt-text="Create a new Jupyter Notebook." border="true":::
-
-4. Because you created a notebook using the PySpark kernel, you do not need to create any contexts explicitly. The Spark and Hive contexts will be automatically created for you when you run the first code cell. You can start by importing the types required for this scenario. To do so, paste the following code snippet in a cell and press **SHIFT + ENTER**.
-
- ```scala
- from pyspark.sql.types import *
- ```
-
- Every time you run a job in Jupyter, your web browser window title will show a **(Busy)** status along with the notebook title. You will also see a solid circle next to the **PySpark** text in the top-right corner. After the job is completed, this will change to a hollow circle.
-
- :::image type="content" source="./media/apache-spark-use-with-data-lake-store/hdinsight-jupyter-job-status.png " alt-text="Status of a Jupyter Notebook job." border="true":::
-
-5. Load sample data into a temporary table using the **HVAC.csv** file you copied to the Data Lake Storage Gen1 account. You can access the data in the Data Lake Storage account using the following URL pattern.
-
- * If you have Data Lake Storage Gen1 as default storage, HVAC.csv will be at the path similar to the following URL:
-
- ```scala
- adl://<data_lake_store_name>.azuredatalakestore.net/<cluster_root>/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv
- ```
-
- Or, you could also use a shortened format such as the following:
-
- ```scala
- adl:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv
- ```
-
- * If you have Data Lake Storage as additional storage, HVAC.csv will be at the location where you copied it, such as:
-
- ```scala
- adl://<data_lake_store_name>.azuredatalakestore.net/<path_to_file>
- ```
-
- In an empty cell, paste the following code example, replace **MYDATALAKESTORE** with your Data Lake Storage account name, and press **SHIFT + ENTER**. This code example registers the data into a temporary table called **hvac**.
-
- ```scala
- # Load the data. The path below assumes Data Lake Storage is default storage for the Spark cluster
-    hvacText = sc.textFile("adl://MYDATALAKESTORE.azuredatalakestore.net/cluster/mysparkcluster/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")
-
-    # Create the schema
-    hvacSchema = StructType([StructField("date", StringType(), False), StructField("time", StringType(), False), StructField("targettemp", IntegerType(), False), StructField("actualtemp", IntegerType(), False), StructField("buildingID", StringType(), False)])
-
-    # Parse the data in hvacText
-    hvac = hvacText.map(lambda s: s.split(",")).filter(lambda s: s[0] != "Date").map(lambda s: (str(s[0]), str(s[1]), int(s[2]), int(s[3]), str(s[6])))
-
-    # Create a data frame
-    hvacdf = sqlContext.createDataFrame(hvac, hvacSchema)
-
-    # Register the data frame as a table to run queries against
-    hvacdf.registerTempTable("hvac")
- ```
-
-6. Because you are using a PySpark kernel, you can now directly run a SQL query on the temporary table **hvac** that you just created by using the `%%sql` magic. For more information about the `%%sql` magic, as well as other magics available with the PySpark kernel, see [Kernels available on Jupyter Notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
-
- ```sql
- %%sql
- SELECT buildingID, (targettemp - actualtemp) AS temp_diff, date FROM hvac WHERE date = \"6/1/13\"
- ```
-7. Once the job is completed successfully, the following tabular output is displayed by default.
-
- :::image type="content" source="./media/apache-spark-use-with-data-lake-store/jupyter-tabular-output.png " alt-text="Table output of query result." border="true":::
-
- You can also see the results in other visualizations as well. For example, an area graph for the same output would look like the following.
-
- :::image type="content" source="./media/apache-spark-use-with-data-lake-store/jupyter-area-output1.png " alt-text="Area graph of query result." border="true":::
-
-8. After you have finished running the application, you should shutdown the notebook to release the resources. To do so, from the **File** menu on the notebook, click **Close and Halt**. This will shutdown and close the notebook.
--
-## Next steps
-
-* [Create a standalone Scala application to run on Apache Spark cluster](apache-spark-create-standalone-application.md)
-* [Use HDInsight Tools in Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight Spark Linux cluster](apache-spark-intellij-tool-plugin.md)
-* [Use HDInsight Tools in Azure Toolkit for Eclipse to create Apache Spark applications for HDInsight Spark Linux cluster](apache-spark-eclipse-tool-plugin.md)
-* [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight Spark Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-best-practices.md
This article provides various guidelines for using Apache Spark on Azure HDInsig
| Option | Documents | ||| | Azure Data Lake Storage Gen2 | [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md) |
-| Azure Data Lake Storage Gen1 | [Use Azure Data Lake Storage Gen1 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen1.md) |
| Azure Blob Storage | [Use Azure storage with Azure HDInsight clusters](../hdinsight-hadoop-use-blob-storage.md) | ## Next steps
hdinsight Use Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-scp.md
Title: Use SCP with Apache Hadoop in Azure HDInsight
description: This document provides information on connecting to HDInsight using the ssh and scp commands. Previously updated : 09/14/2023 Last updated : 07/24/2024 # Use SCP with Apache Hadoop in Azure HDInsight
Use `scp` when you need to upload a resource for use from an SSH session. For ex
For information on directly loading data into the HDFS-compatible storage, see the following documents: * [HDInsight using Azure Storage](hdinsight-hadoop-use-blob-storage.md).
-* [HDInsight using Azure Data Lake Storage Gen1](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md).
## Next steps
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
[!INCLUDE [Create a key vault](../includes/key-vault-creation-cli.md)]
+## Give your user account permissions to manage secrets in Key Vault
++ ## Add a certificate to Key Vault To add a certificate to the vault, you just need to take a couple of additional steps. This certificate could be used by an application.
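For reference, a minimal Azure CLI sketch of that step might look like the following; the vault and certificate names are placeholders, and the default certificate policy is used only for illustration:

```azurecli
# Hypothetical example: create a self-signed certificate by using the default policy.
az keyvault certificate create \
    --vault-name "<your-unique-keyvault-name>" \
    --name "ExampleCertificate" \
    --policy "$(az keyvault certificate get-default-policy)"
```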
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
Connect-AzAccount
[!INCLUDE [Create a key vault](../includes/key-vault-creation-powershell.md)]
-### Grant access to your key vault
+## Give your user account permissions to manage secrets in Key Vault
## Add a certificate to Key Vault
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides centralized access management of Azure resources.
-Azure RBAC allows users to manage Key, Secrets, and Certificates permissions. It provides one place to manage all permissions across all key vaults.
+Azure RBAC allows users to manage permissions for keys, secrets, and certificates, and provides one place to manage all permissions across all key vaults.
-The Azure RBAC model allows users to set permissions on different scope levels: management group, subscription, resource group, or individual resources. Azure RBAC for key vault also allows users to have separate permissions on individual keys, secrets, and certificates
+The Azure RBAC model allows users to set permissions on different scope levels: management group, subscription, resource group, or individual resources. Azure RBAC for key vault also allows users to have separate permissions on individual keys, secrets, and certificates.
For more information, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). ## Best Practices for individual keys, secrets, and certificates role assignments
-Our recommendation is to use a vault per application per environment
-(Development, Pre-Production, and Production) with roles assigned at Key Vault scope.
+Our recommendation is to use a vault per application per environment (Development, Pre-Production, and Production) with roles assigned at the key vault scope.
-Assigning roles on individual keys, secrets and certificates should be avoided. Exceptions to general guidance:
--- Scenarios where individual secrets must be shared between multiple applications, for example, one application needs to access data from the other application
+Assigning roles on individual keys, secrets, and certificates should be avoided. An exception is a scenario where individual secrets must be shared between multiple applications; for example, where one application needs to access data from another application.
More about Azure Key Vault management guidelines, see:
For more information about Azure built-in roles definitions, see [Azure built-in
## Using Azure RBAC secret, key, and certificate permissions with Key Vault
-The new Azure RBAC permission model for key vault provides alternative to the vault access policy permissions model.
+The new Azure RBAC permission model for key vault provides an alternative to the vault access policy permissions model.
### Prerequisites You must have an Azure subscription. If you don't, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-To manage role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator](../../role-based-access-control/built-in-roles.md#key-vault-data-access-administrator) (with restricted permissions to only assign/remove specific Key Vault roles), [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator),or [Owner](../../role-based-access-control/built-in-roles.md#owner).
+To manage role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator](../../role-based-access-control/built-in-roles.md#key-vault-data-access-administrator) (with restricted permissions to only assign/remove specific Key Vault roles), [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator), or [Owner](../../role-based-access-control/built-in-roles.md#owner).
### Enable Azure RBAC permissions on Key Vault > [!NOTE]
-> Changing permission model requires unrestricted 'Microsoft.Authorization/roleAssignments/write' permission, which is part of [Owner](../../role-based-access-control/built-in-roles.md#owner) and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) roles. Classic subscription administrator roles like 'Service Administrator' and 'Co-Administrator', or restricted 'Key Vault Data Access Administrator' cannot be used to change permission model.
+> Changing the permission model requires unrestricted 'Microsoft.Authorization/roleAssignments/write' permission, which is part of the [Owner](../../role-based-access-control/built-in-roles.md#owner) and [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) roles. Classic subscription administrator roles like 'Service Administrator' and 'Co-Administrator', or the restricted 'Key Vault Data Access Administrator' role, cannot be used to change the permission model.
1. Enable Azure RBAC permissions on new key vault:
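    The portal steps for this item aren't included in this excerpt. As a rough Azure CLI equivalent, and assuming the `--enable-rbac-authorization` parameter (not shown in this article), creating a vault that uses the Azure RBAC permission model might look like this sketch:

    ```azurecli
    # Hypothetical example: create a key vault that uses the Azure RBAC permission model.
    az keyvault create \
        --name "<your-unique-keyvault-name>" \
        --resource-group "<your-resource-group>" \
        --location "<location>" \
        --enable-rbac-authorization true
    ```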
To manage role assignments, you must have `Microsoft.Authorization/roleAssignmen
### Assign role
-> [!Note]
-> It's recommended to use the unique role ID instead of the role name in scripts. Therefore, if a role is renamed, your scripts would continue to work. In this document role name is used only for readability.
+> [!NOTE]
+> It's recommended to use the unique role ID instead of the role name in scripts. Therefore, if a role is renamed, your scripts would continue to work. In this document, role names are used for readability.
# [Azure CLI](#tab/azure-cli)
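The CLI commands for this step are cut from this excerpt. As a sketch only, with placeholder assignee and scope values, assigning a data-plane role on a single vault might look like this:

```azurecli
# Hypothetical example: allow a user or app to read secret contents in one key vault.
az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee "<user-or-app-object-id>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<your-unique-keyvault-name>"
```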
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
To complete this tutorial, you need:
* [Azure Key Vault.](./overview.md) You can create a key vault by using the [Azure portal](quick-create-portal.md), the [Azure CLI](quick-create-cli.md), or [Azure PowerShell](quick-create-powershell.md). * A Key Vault [secret](../secrets/about-secrets.md). You can create a secret by using the [Azure portal](../secrets/quick-create-portal.md), [PowerShell](../secrets/quick-create-powershell.md), or the [Azure CLI](../secrets/quick-create-cli.md).
-If you already have your web application deployed in Azure App Service, you can skip to [configure web app access to a key vault](#create-and-assign-a-managed-identity) and [modify web application code](#modify-the-app-to-access-your-key-vault) sections.
+If you already have your web application deployed in Azure App Service, you can skip to the [configure web app access to a key vault](#configure-the-web-app-to-connect-to-key-vault) and [modify web application code](#modify-the-app-to-access-your-key-vault) sections.
## Create a .NET Core app In this step, you'll set up the local .NET Core project.
http://<your-webapp-name>.azurewebsites.net
You'll see the "Hello World!" message you saw earlier when you visited `http://localhost:5000`. For more information about deploying web application using Git, see [Local Git deployment to Azure App Service](../../app-service/deploy-local-git.md)
-
+ ## Configure the web app to connect to Key Vault In this section, you'll configure web access to Key Vault and update your application code to retrieve a secret from Key Vault.
-### Create and assign a managed identity
+### Create and assign access to a managed identity
In this tutorial, we'll use [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to authenticate to Key Vault. Managed identity automatically manages application credentials.
The command will return this JSON snippet:
} ```
-To give your web app permission to do **get** and **list** operations on your key vault, pass the `principalId` to the Azure CLI [az keyvault set-policy](/cli/azure/keyvault?#az-keyvault-set-policy) command:
-
-```azurecli-interactive
-az keyvault set-policy --name "<your-keyvault-name>" --object-id "<principalId>" --secret-permissions get list
-```
-
-You can also assign access policies by using the [Azure portal](./assign-access-policy-portal.md) or [PowerShell](./assign-access-policy-powershell.md).
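The replacement guidance for the removed access policy commands isn't visible in this excerpt. Under the Azure RBAC model described elsewhere in these updates, granting the web app's managed identity read access to secrets might look like the following sketch; the role name, `principalId`, and scope values are assumptions, not taken from this article:

```azurecli
# Hypothetical example: let the web app's system-assigned identity read secrets in the vault.
az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee-object-id "<principalId>" \
    --assignee-principal-type ServicePrincipal \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<your-keyvault-name>"
```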
### Modify the app to access your key vault
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
xxxxxxxx-xx-xxxxxx xxxxxxxx-xxxx-xxxx SystemAssigned
## Assign permissions to the VM identity
-Assign the previously created identity permissions to your key vault with the [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) command:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az keyvault set-policy --name '<your-unique-key-vault-name>' --object-id <VMSystemAssignedIdentity> --secret-permissions get list set delete
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -ResourceGroupName <YourResourceGroupName> -VaultName '<your-unique-key-vault-name>' -ObjectId '<VMSystemAssignedIdentity>' -PermissionsToSecrets get,list,set,delete
-```
- ## Sign in to the virtual machine
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
Note the system-assigned identity that's displayed in the following code. The ou
## Assign permissions to the VM identity
-Now you can assign the previously created identity permissions to your key vault by running the following command:
-
-```azurecli
-az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<systemAssignedIdentity>" --secret-permissions get list
-```
## Log in to the VM
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
[!INCLUDE [Create a key vault](../includes/key-vault-creation-cli.md)]
+## Give your user account permissions to manage secrets in Key Vault
++ ## Add a key to Key Vault To add a key to the vault, you just need to take a couple of additional steps. This key could be used by an application.
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
Connect-AzAccount
[!INCLUDE [Create a key vault](../includes/key-vault-creation-powershell.md)]
+## Give your user account permissions to manage secrets in Key Vault
++ ## Add a key to Key Vault To add a key to the vault, you just need to take a couple of additional steps. This key could be used by an application.
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure
## Give your user account permissions to manage secrets in Key Vault ## Add a secret to Key Vault
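For reference, a minimal Azure CLI sketch of adding a secret might look like the following; the vault name, secret name, and value are placeholders:

```azurecli
# Hypothetical example: store a secret in the vault.
az keyvault secret set \
    --vault-name "<your-unique-keyvault-name>" \
    --name "ExampleSecretName" \
    --value "<your-secret-value>"
```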
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
This quickstart is using Azure Identity library with Azure PowerShell to authent
### Grant access to your key vault [!INCLUDE [Using RBAC to provide access to a key vault](../includes/key-vault-quickstart-rbac.md)]- ### Create new .NET console app
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Connect-AzAccount
## Give your user account permissions to manage secrets in Key Vault ## Adding a secret to Key Vault
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
NSG flow logs can be used to analyze traffic flowing through the load balancer. >[!Note]
->NSG flow logs don't contain the load balancers frontend IP address. To analyze the traffic flowing into a load balancer, the NSG flow logs would need to be filtered by the private IP addresses of the load balancerΓÇÖs backend pool members.
+>NSG flow logs don't contain the load balancer's frontend IP address. To analyze the traffic flowing into a load balancer, the NSG flow logs would need to be filtered by the private IP addresses of the load balancer's backend pool members.
## Alerts
logic-apps Test Logic Apps Mock Data Static Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/test-logic-apps-mock-data-static-results.md
Title: Mock test workflows
-description: Set up mock data to test workflows in Azure Logic Apps without affecting production environments.
+ Title: Test workflows with mock outputs
+description: Set up static results to test workflows with mock outputs in Azure Logic Apps without affecting production environments.
ms.suite: integration Previously updated : 01/04/2024 Last updated : 07/24/2024
-# Test workflows with mock data in Azure Logic Apps
+# Test workflows with mock outputs in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-To test your workflows without actually calling or accessing live apps, data, services, or systems, you can set up and return mock values from actions. For example, you might want to test different action paths based on various conditions, force errors, provide specific message response bodies, or even try skipping some steps. Setting up mock data testing on an action doesn't run the action, but returns the mock data instead.
+To test your workflow without affecting your production environments, you can set up and return mock outputs, or *static results*, from your workflow operations. That way, you don't have to call or access your live apps, data, services, or systems. For example, you might want to test different action paths based on various conditions, force errors, provide specific message response bodies, or even try skipping some steps. Setting up mock results from an action doesn't run the operation, but returns the test output instead.
-For example, if you set up mock data for the Outlook 365 send mail action, Azure Logic Apps just returns the mock data that you provided, rather than call Outlook and send an email.
+For example, if you set up mock outputs for the Outlook 365 send mail action, Azure Logic Apps just returns the mock outputs that you provided, rather than call Outlook and send an email.
-This article shows how to set up mock data on an action in a workflow for the [**Logic App (Consumption)** and the **Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences). You can find previous workflow runs that use these mock data and reuse existing action outputs as mock data.
+This guide shows how to set up mock outputs for an action in a Consumption or Standard logic app workflow.
## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, <a href="https://azure.microsoft.com/free/?WT.mc_id=A261C142F" target="_blank">sign up for a free Azure account</a>.
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app resource and workflow where you want to set up mock data. This article uses a **Recurrence** trigger and **HTTP** action as an example workflow.
+* The logic app resource and workflow where you want to set up mock outputs. This article uses a **Recurrence** trigger and **HTTP** action as an example workflow.
- If you're new to logic apps, see [What is Azure Logic Apps](logic-apps-overview.md) and the following documentation:
+ If you're new to logic apps, see the following documentation:
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+ * [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-<a name="enable-mock-data"></a>
+## Limitations
-## Enable mock data output
+- This capability is available only for actions, not triggers.
-### [Consumption](#tab/consumption)
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. On the action where you want to return mock data, follow these steps:
-
- 1. In the action's upper-right corner, select the ellipses (*...*) button, and then select **Testing**, for example:
-
- ![Screenshot showing the Azure portal, workflow designer, action shortcut menu, and "Testing" selected.](./media/test-logic-apps-mock-data-static-results/select-testing.png)
-
- 1. On the **Testing** pane, select **Enable Static Result (Preview)**. When the action's required (*) properties appear, specify the mock output values that you want to return as the action's response.
-
- The properties differ based on the selected action type. For example, the HTTP action has the following required properties:
-
- | Property | Description |
- |-|-|
- | **Status** | The action's status to return |
- | **Status Code** | The specific status code to return as output |
- | **Headers** | The header content to return |
- |||
+- No option currently exists to dynamically or programmatically enable and disable this capability.
- ![Screenshot showing the "Testing" pane after selecting "Enable Static Result".](./media/test-logic-apps-mock-data-static-results/enable-static-result.png)
+- No indications exist at the logic app level that this capability is enabled. The following list describes where you can find indications that this capability is enabled:
- > [!TIP]
- > To enter the values in JavaScript Object Notation (JSON) format,
- > select **Switch to JSON Mode** (![Icon for "Switch to JSON Mode"](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button.png)).
+ - On the action shape, the lower-right corner shows the test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)).
- 1. For optional properties, open the **Select optional fields** list, and select the properties that you want to mock.
+ - On the action's details pane, on **Testing** tab, the **Static Result** option is enabled.
- ![Screenshot showing the "Testing" pane with "Select optional fields" list opened.](./media/test-logic-apps-mock-data-static-results/optional-properties.png)
-
-1. When you're ready, select **Done**.
-
- In the action's upper-right corner, the title bar now shows a test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), which indicates that you've enabled static results.
-
- ![Screenshot showing an action with the static result icon.](./media/test-logic-apps-mock-data-static-results/static-result-enabled.png)
-
- To find workflow runs that use mock data, review [Find runs that use static results](#find-runs-mock-data) later in this topic.
-
-### [Standard](#tab/standard)
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+ - In code view, the action's JSON definition includes the following properties in the **`runtimeConfiguration`** JSON object:
-1. On the designer, select the action where you want to return mock data so that the action details pane appears.
+ ```json
+ "runtimeConfiguration": {
+ "staticResult": {
+ "name": "{action-name-ordinal}",
+ "staticResultOptions": "Enabled"
+ }
+ }
+ ```
-1. After the action details pane opens to the right side, select **Testing**.
+ - In the workflow's run history, the **Static Results** column appears with the word **Enabled** next to any run where at least one action has this capability enabled.
- ![Screenshot showing the Azure portal, workflow designer, action details pane, and "Testing" selected.](./media/test-logic-apps-mock-data-static-results/select-testing-standard.png)
+<a name="set-up-mock-outputs"></a>
-1. On the **Testing** tab, select **Enable Static Result (Preview)**. When the action's required (*) properties appear, specify the mock output values that you want to return as the action's response.
+## Set up mock outputs on an action
- The properties differ based on the selected action type. For example, the HTTP action has the following required properties:
-
- | Property | Description |
- |-|-|
- | **Status** | The action's status to return |
- | **Status Code** | The specific status code to return as output |
- | **Headers** | The header content to return |
- |||
-
- ![Screenshot showing the "Testing" tab after selecting "Enable Static Result".](./media/test-logic-apps-mock-data-static-results/enable-static-result-standard.png)
-
- > [!TIP]
- > To enter the values in JavaScript Object Notation (JSON) format,
- > select **Switch to JSON Mode** (![Icon for "Switch to JSON Mode"](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button.png)).
+### [Consumption](#tab/consumption)
-1. For optional properties, open the **Select optional fields** list, and select the properties that you want to mock.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
- ![Screenshot showing the "Testing" pane with "Select optional fields" list opened.](./media/test-logic-apps-mock-data-static-results/optional-properties-standard.png)
+1. On the designer, select the action where you want to return mock outputs.
-1. When you're ready, select **Done**.
+1. On the action information pane, select **Testing**, for example:
- The action's lower-right corner now shows a test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), which indicates that you've enabled static results.
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/select-testing.png" alt-text="Screenshot shows the Azure portal, Consumption workflow designer, HTTP action information pane, and Testing selected." lightbox="media/test-logic-apps-mock-data-static-results/select-testing.png":::
- ![Screenshot showing an action with the static result icon on designer.](./media/test-logic-apps-mock-data-static-results/static-result-enabled-standard.png)
+1. On the **Testing** tab, select **Enable Static Result**.
- To find workflow runs that use mock data, review [Find runs that use static results](#find-runs-mock-data) later in this topic.
+1. From the **Select Fields** list, select the properties where you want to specify mock outputs to return in the action's response.
-
+ The available properties differ based on the selected action type. For example, the HTTP action has the following sections and properties:
-<a name="find-runs-mock-data"></a>
+ | Section or property | Required | Description |
+ ||-|-|
+ | **Status** | Yes | The action status to return. <br><br>- If you select **Succeeded**, you must also select **Outputs** from the **Select Fields** list. <br><br>- If you select **Failed**, you must also select **Error** from the **Select Fields** list. |
+ | **Code** | No | The specific code to return for the action |
+ | **Error** | Yes, when the **Status** is **Failed** | The error message and an optional error code to return |
+ | **Output** | Yes, when the **Status** is **Succeeded** | The status code, header content, and an optional body to return |
-## Find runs that use mock data
+ The following example shows when **Status** is set to **Failed**, which requires that you select the **Error** field and provide values for the **Error Message** and **Error Code** properties:
-### [Consumption](#tab/consumption)
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/enable-static-result.png" alt-text="Screenshot shows Consumption workflow and Testing pane after selecting Enable Static Result with the Status and Error fields also selected." lightbox="media/test-logic-apps-mock-data-static-results/enable-static-result.png":::
-To find earlier workflow runs where the actions use mock data, review that workflow's run history.
+1. When you're ready, select **Save**.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+ The action's lower-right corner now shows a test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), which indicates that you enabled static results.
-1. On your logic app resource menu, select **Overview**.
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/static-result-enabled.png" alt-text="Screenshot shows Consumption workflow with HTTP action and static result icon." lightbox="media/test-logic-apps-mock-data-static-results/static-result-enabled.png":::
-1. Under the **Essentials** section, select **Runs history**, if not already selected.
+ To find workflow runs that use mock outputs, see [Find runs that use static results](#find-runs-mock-data) later in this guide.
-1. In the **Runs history** table, find the **Static Results** column.
+### [Standard](#tab/standard)
- Any run that includes actions with mock data output has the **Static Results** column set to **Enabled**, for example:
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app workflow in the designer.
- ![Screenshot showing the workflow run history with the "Static Results" column.](./media/test-logic-apps-mock-data-static-results/run-history.png)
+1. On the designer, select the action where you want to return mock outputs.
-1. To view that actions in a run that uses mock data, select the run that you want where the **Static Results** column is set to **Enabled**.
+1. On the action information pane, select **Testing**, for example:
- Actions that use static results show the test beaker (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)) icon, for example:
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/select-testing-standard.png" alt-text="Screenshot shows Standard workflow with HTTP action details pane, and Testing selected." lightbox="media/test-logic-apps-mock-data-static-results/select-testing-standard.png":::
- ![Screenshot showing workflow run history with actions that use static result.](./media/test-logic-apps-mock-data-static-results/static-result-enabled-run-details.png)
+1. On the **Testing** tab, select **Enable Static Result**.
-### [Standard](#tab/standard)
+1. From the **Select Fields** list, select the properties where you want to specify mock outputs to return in the action's response.
-To find other workflow runs where the actions use mock data, you have to check each run.
+ The available properties differ based on the selected action type. For example, the HTTP action has the following sections and properties:
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+ | Section or property | Required | Description |
+ ||-|-|
+ | **Status** | Yes | The action status to return. <br><br>- If you select **Succeeded**, you must also select **Outputs** from the **Select Fields** list. <br><br>- If you select **Failed**, you must also select **Error** from the **Select Fields** list. |
+ | **Code** | No | The specific code to return for the action |
+ | **Error** | Yes, when the **Status** is **Failed** | The error message and an optional error code to return |
+ | **Output** | Yes, when the **Status** is **Succeeded** | The status code, header content, and an optional body to return |
-1. On the workflow menu, select **Overview**.
+ The following example shows when **Status** is set to **Failed**, which requires that you select the **Error** field and provide values for the **Error Message** and **Error Code** properties:
-1. Under the **Essentials** section, select **Run History**, if not already selected.
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/enable-static-result-standard.png" alt-text="Screenshot shows Standard workflow and Testing pane after selecting Enable Static Result with the Status and Error fields also selected." lightbox="media/test-logic-apps-mock-data-static-results/enable-static-result-standard.png":::
-1. In the **Run History** table, select the run that you want to review.
+1. When you're ready, select **Save**.
- ![Screenshot showing the workflow run history.](./media/test-logic-apps-mock-data-static-results/select-run-standard.png)
+ The action's lower-right corner now shows a test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), which indicates that you've enabled static results.
-1. On the run details pane, check whether any actions show the test beaker (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)) icon, for example:
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/static-result-enabled.png" alt-text="Screenshot shows Standard workflow with HTTP action and static result icon." lightbox="media/test-logic-apps-mock-data-static-results/static-result-enabled.png":::
- ![Screenshot showing workflow run history with actions that use static result.](./media/test-logic-apps-mock-data-static-results/run-history-static-result-standard.png)
+ To find workflow runs that use mock outputs, see [Find runs that use static results](#find-runs-mock-data) later in this guide.
-<a name="reuse-sample-outputs"></a>
-
-## Reuse previous outputs as mock data
+<a name="find-runs-mock-data"></a>
-If you have a previous workflow run with outputs, you can reuse these outputs as mock data by copying and pasting those outputs from that run.
+## Find runs that use mock outputs
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. On your logic app resource menu, select **Overview**.
-
-1. Under the **Essentials** section, select **Runs history**, if not already selected. From the list that appears, select the workflow run that you want.
-
- ![Screenshot showing workflow run history.](./media/test-logic-apps-mock-data-static-results/select-run.png)
-
-1. After the run details pane opens, expand the action that has the outputs that you want.
-
-1. In the **Outputs** section, select **Show raw outputs**.
+To find earlier workflow runs where the actions use mock outputs, review that workflow's run history.
-1. On the **Outputs** pane, copy either the complete JavaScript Object Notation (JSON) object or the specific subsection you want to use, for example, the outputs section, or even just the headers section.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
-1. Review the earlier section about how to [set up mock data](#enable-mock-data) for an action, and follow the steps to open the action's **Testing** pane.
-
-1. After the **Testing** pane opens, choose either step:
+1. On your logic app resource menu, select **Overview**.
- * To paste a complete JSON object, next to the **Testing** label, select **Switch to JSON Mode** (![Icon for "Switch to JSON Mode"](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button.png)):
+1. Under the **Essentials** section, select **Runs history**, if not already selected.
- ![Screenshot showing "Switch to JSON Mode" icon selected to paste complete JSON object.](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button-complete.png)
+1. In the **Runs history** table, find the **Static Results** column.
- * To paste just a JSON section, next to that section's label, such as **Output** or **Headers**, select **Switch to JSON Mode**, for example:
+ Any run that includes actions with mock outputs has the **Static Results** column set to **Enabled**, for example:
- ![Screenshot showing "Switch to JSON Mode" icon selected to paste a section from a JSON object.](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button-output.png)
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/run-history.png" alt-text="Screenshot shows Consumption workflow run history with the Static Results column." lightbox="media/test-logic-apps-mock-data-static-results/run-history.png":::
-1. In the JSON editor, paste your previously copied JSON.
+1. To view the actions in a run that uses mock outputs, select the run where the **Static Results** column is set to **Enabled**.
- ![Screenshot showing the pasted JSON in the editor.](./media/test-logic-apps-mock-data-static-results/json-editing-mode.png)
+ In the workflow run details pane, actions that use static results show the test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), for example:
-1. When you're finished, select **Done**. Or, to return to the designer, select **Switch Editor Mode** (![Icon for "Switch Editor Mode"](./media/test-logic-apps-mock-data-static-results/switch-editor-mode-button.png)).
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/run-history-static-result.png" alt-text="Screenshot shows Consumption workflow run history with actions that use static results." lightbox="media/test-logic-apps-mock-data-static-results/run-history-static-result.png":::
### [Standard](#tab/standard)
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-
-1. On the workflow menu, select **Overview**.
-
-1. Under the **Essentials** section, select **Run History**, if not already selected.
-
-1. In the **Run History** table, select the run that you want to review.
+To find earlier workflow runs where the actions use mock outputs, review that workflow's run history.
- ![Screenshot showing the workflow run history.](./media/test-logic-apps-mock-data-static-results/select-run-standard.png)
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app workflow in the designer.
-1. After the run details pane opens, select the action that has the outputs that you want.
-
-1. In the **Outputs** section, select **Show raw outputs**.
-
-1. On the **Outputs** pane, copy either the complete JavaScript Object Notation (JSON) object or the specific subsection you want to use, for example, the outputs section, or even just the headers section.
-
-1. Review the earlier section about how to [set up mock data](#enable-mock-data) for an action, and follow the steps to open the action's **Testing** tab.
-
-1. After the **Testing** tab opens, choose either step:
+1. On the workflow menu, select **Overview**.
- * To paste a complete JSON object, next to the **Testing** label, select **Switch to JSON Mode** (![Icon for "Switch to JSON Mode"](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button.png)):
+1. Under the **Essentials** section, select **Run History**, if not already selected.
- ![Screenshot showing "Switch to JSON Mode" icon selected to paste complete JSON object.](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button-complete-standard.png)
+1. In the **Run History** table, find the **Static Results** column.
- * To paste just a JSON section, next to that section's label, such as **Output** or **Headers**, select **Switch to JSON Mode**, for example:
+ Any run that includes actions with mock outputs has the **Static Results** column set to **Enabled**, for example:
- ![Screenshot showing "Switch to JSON Mode" icon selected to paste a section from a JSON object.](./media/test-logic-apps-mock-data-static-results/switch-to-json-mode-button-output-standard.png)
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/select-run-standard.png" alt-text="Screenshot shows Standard workflow run history with the Static Results column." lightbox="media/test-logic-apps-mock-data-static-results/select-run-standard.png":::
-1. In the JSON editor, paste your previously copied JSON.
+1. To view the actions in a run that uses mock outputs, select the run where the **Static Results** column is set to **Enabled**.
- ![Screenshot showing the pasted JSON in the editor.](./media/test-logic-apps-mock-data-static-results/json-editing-mode-standard.png)
+ On the run details pane, any actions that use static results show the test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)), for example:
-1. When you're finished, select **Done**. Or, to return to the designer, select **Switch Editor Mode** (![Icon for "Switch Editor Mode"](./media/test-logic-apps-mock-data-static-results/switch-editor-mode-button.png)).
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/run-history-static-result.png" alt-text="Screenshot shows Standard workflow run history with actions that use static results." lightbox="media/test-logic-apps-mock-data-static-results/run-history-static-result.png":::
-## Disable mock data
-
-Turning off static results on an action doesn't remove the values from your last setup. So, if you turn on static result again on the same action, you can continue using your previous values.
-
-### [Consumption](#tab/consumption)
-
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. Find the action where you want to disable mock data.
-
-1. In the action's upper-right corner, select the test beaker icon (![Icon for static result](./media/test-logic-apps-mock-data-static-results/static-result-test-beaker-icon.png)).
-
- ![Screenshot showing the action and the test beaker icon selected.](./media/test-logic-apps-mock-data-static-results/disable-static-result.png)
-
-1. Select **Disable Static Result** > **Done**.
+## Disable mock outputs
- ![Screenshot showing the "Disable Static Result" selected.](./media/test-logic-apps-mock-data-static-results/disable-static-result-button.png)
+Turning off static results on an action doesn't remove the values from your last setup. So, if you turn on static results again on the same action, you can continue using your previous values.
-### [Standard](#tab/standard)
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. Select the action where you want to disable mock data.
+1. Find and select the action where you want to disable mock outputs.
1. In the action details pane, select the **Testing** tab.
-1. Select **Disable Static Result** > **Done**.
+1. Select **Disable Static Result** > **Save**.
- ![Screenshot showing the "Disable Static Result" selected for Standard.](./media/test-logic-apps-mock-data-static-results/disable-static-result-button-standard.png)
--
+ :::image type="content" source="media/test-logic-apps-mock-data-static-results/disable-static-result.png" alt-text="Screenshot shows logic app workflow, HTTP action, and Testing tab with Disable Static Result selected." lightbox="media/test-logic-apps-mock-data-static-results/disable-static-result.png":::
## Reference
-For more information about this setting in your underlying workflow definitions, see [Static results - Schema reference for Workflow Definition Language](logic-apps-workflow-definition-language.md#static-results) and [runtimeConfiguration.staticResult - Runtime configuration settings](logic-apps-workflow-actions-triggers.md#runtime-configuration-settings)
+For more information about this setting in your underlying workflow definitions, see [Static results - Schema reference for Workflow Definition Language](logic-apps-workflow-definition-language.md#static-results) and [runtimeConfiguration.staticResult - Runtime configuration settings](logic-apps-workflow-actions-triggers.md#runtime-configuration-settings).
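As a quick way to see these settings outside the designer, the following hedged sketch uses `az rest` to pull a Consumption workflow's JSON definition and return its `staticResults` section. The subscription, resource group, and workflow names are placeholders, not values from this article.

```bash
# Hedged sketch: inspect the staticResults section of a Consumption workflow definition.
# Replace the placeholder IDs with your own values.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<workflow-name>?api-version=2019-05-01" \
  --query "properties.definition.staticResults"
```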
## Next steps
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Previously updated : 07/13/2023 Last updated : 07/24/2024 #Customer intent: As an experienced Python developer, I need secure access to my data in my Azure storage solutions, and I need to use that data to accomplish my machine learning tasks.
An Azure Machine Learning datastore serves as a *reference* to an *existing* Azu
- A common, easy-to-use API that interacts with different storage types (Blob/Files/ADLS). - Easier discovery of useful datastores in team operations.-- For credential-based access (service principal/SAS/key), Azure Machine Learning datastore secures connection information. This way, you won't need to place that information in your scripts.
+- For credential-based access (service principal/SAS/key), an Azure Machine Learning datastore secures connection information. This way, you don't need to place that information in your scripts.
-When you create a datastore with an existing Azure storage account, you can choose between two different authentication methods:
+When you create a datastore with an existing Azure storage account, you have two different authentication method options:
- **Credential-based** - authenticate data access with a service principal, shared access signature (SAS) token, or account key. Users with *Reader* workspace access can access the credentials. - **Identity-based** - use your Microsoft Entra identity or managed identity to authenticate data access.
-The following table summarizes the Azure cloud-based storage services that an Azure Machine Learning datastore can create. Additionally, the table summarizes the authentication types that can access those
+This table summarizes the Azure cloud-based storage services that an Azure Machine Learning datastore can create. Additionally, the table summarizes the authentication types that can access those
Supported storage service | Credential-based authentication | Identity-based authentication ||:-:|::|
-Azure Blob Container| ✓ | ✓|
+Azure Blob Container| ✓ | ✓ |
Azure File Share| ✓ | |
-Azure Data Lake Gen1 | ✓ | ✓|
-Azure Data Lake Gen2| ✓ | ✓|
+Azure Data Lake Gen1 | ✓ | ✓ |
+Azure Data Lake Gen2| ✓ | ✓ |
-See [Create datastores](how-to-datastore.md) for more information about datastores.
+For more information about datastores, visit [Create datastores](how-to-datastore.md).
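For illustration, the following sketch creates a credential-based blob datastore with the Azure Machine Learning CLI (v2). The storage account, container, and key values are placeholders rather than values from this article.

```bash
# Hedged sketch: register an Azure Blob container as a credential-based datastore.
cat > blob-datastore.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
name: example_blob_datastore
type: azure_blob
description: Example credential-based datastore
account_name: <storage-account-name>
container_name: <container-name>
credentials:
  account_key: <storage-account-key>
EOF

az ml datastore create --file blob-datastore.yml \
  --resource-group <resource-group> --workspace-name <workspace-name>
```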
### Default datastores
-Each Azure Machine Learning workspace has a default storage account (Azure storage account) that contains the following datastores:
+Each Azure Machine Learning workspace has a default storage account (Azure storage account) that contains these datastores:
> [!TIP]
-> To find the ID for your workspace, go to the workspace in the [Azure portal](https://portal.azure.com/). Expand **Settings** and then select **Properties**. The **Workspace ID** is displayed.
+> To find the ID for your workspace, go to the workspace in the [Azure portal](https://portal.azure.com/). Expand **Settings**, and then select **Properties**. The **Workspace ID** appears.
| Datastore name | Data storage type | Data storage name | Description | |||||
Each Azure Machine Learning workspace has a default storage account (Azure stora
## Data types
-A URI (storage location) can reference a file, a folder, or a data table. A machine learning job input and output definition requires one of the following three data types:
+A URI (storage location) can reference a file, a folder, or a data table. A machine learning job input and output definition requires one of these three data types:
|Type |V2 API |V1 API |Canonical Scenarios | V2/V1 API Difference |
|---|---|---|---|---|
|**File**<br>Reference a single file | `uri_file` | `FileDataset` | Read/write a single file - the file can have any format. | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. |
-|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. |
-|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to AzureML* - for example, locally and on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. See [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. |
+|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a folder is a simple mapping to the compute target filesystem. |
+|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to Azure Machine Learning* - for example, locally and on-premises. In V2 APIs, it's easier to transition from local to remote jobs. For more information, visit [Working with tables in Azure Machine Learning](how-to-mltable.md). |
## URI

A Uniform Resource Identifier (URI) represents a storage location on your local computer, Azure storage, or a publicly available http(s) location. These examples show URIs for different storage options:
A Uniform Resource Identifier (URI) represents a storage location on your local
|Azure Data Lake (gen2) | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` | | Azure Data Lake (gen1) | `adl://<accountname>.azuredatalakestore.net/<folder1>/<folder2>` |
-An Azure Machine Learning job maps URIs to the compute target filesystem. This mapping means that in a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Microsoft Entra ID (default), or Managed Identity. Azure Machine Learning [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key), without exposure of secrets.
+An Azure Machine Learning job maps URIs to the compute target filesystem. This mapping means that for a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Microsoft Entra ID (default) or Managed Identity. Azure Machine Learning [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key) authentication, without exposure of secrets.
A URI can serve as either *input* or an *output* to an Azure Machine Learning job, and it can map to the compute target filesystem with one of four different *mode* options: -- **Read-*only* mount (`ro_mount`)**: The URI represents a storage location that is *mounted* to the compute target filesystem. The mounted data location supports read-only output exclusively.
+- **Read-*only* mount (`ro_mount`)**: The URI represents a storage location that is *mounted* to the compute target filesystem. The mounted data location exclusively supports read-only output.
- **Read-*write* mount (`rw_mount`)**: The URI represents a storage location that is *mounted* to the compute target filesystem. The mounted data location supports both read output from it *and* data writes to it.
- **Download (`download`)**: The URI represents a storage location containing data that is *downloaded* to the compute target filesystem.
- **Upload (`upload`)**: All data written to a compute target location is *uploaded* to the storage location represented by the URI.

Additionally, you can pass in the URI as a job input string with the **direct** mode. This table summarizes the combination of modes available for inputs and outputs:
-Job<br>Input or Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct` |
- | :: | :: | :: | :: | :: |
-Input | | ✓ | ✓ | | ✓ |
-Output | ✓ | | | ✓ |
+Job<br>Input or Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct` |
+ | :: | :: | :: | :: | :: |
+Input | | ✓ | ✓ | | ✓ |
+Output | ✓ | | | ✓ |
-See [Access data in a job](how-to-read-write-data-v2.md) for more information.
+For more information, visit [Access data in a job](how-to-read-write-data-v2.md).
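As an illustrative example only, this command job YAML consumes a `uri_folder` input with the `ro_mount` mode and writes a `uri_folder` output. The data path, environment, and compute names are placeholders.

```bash
# Hedged sketch: a command job that mounts an input folder read-only and
# writes results to a mounted output folder. Replace the placeholders before running.
cat > job.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py --data ${{inputs.training_data}} --out ${{outputs.model_dir}}
code: ./src
inputs:
  training_data:
    type: uri_folder
    mode: ro_mount
    path: azureml://datastores/workspaceblobstore/paths/example-data/
outputs:
  model_dir:
    type: uri_folder
    mode: rw_mount
environment: azureml:<environment-name>@latest
compute: azureml:<compute-cluster-name>
EOF

az ml job create --file job.yml --resource-group <resource-group> --workspace-name <workspace-name>
```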
## Data runtime capability Azure Machine Learning uses its own *data runtime* for one of three purposes:
An Azure Machine Learning data asset resembles web browser bookmarks (favorites)
Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, or local files.
-See [Create data assets](how-to-create-data-assets.md) for more information about data assets.
+For more information about data assets, visit [Create data assets](how-to-create-data-assets.md).
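As a hedged example, the following command registers a folder that already exists in a workspace datastore as a `uri_folder` data asset. The asset name and path are placeholders.

```bash
# Hedged sketch: create a data asset that references an existing folder.
az ml data create --name example-training-data --version 1 --type uri_folder \
  --path azureml://datastores/workspaceblobstore/paths/example-data/ \
  --resource-group <resource-group> --workspace-name <workspace-name>
```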
## Next steps
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Network isolation | Managed Virtual Network with Online Endpoints. [Learn more.]
Model | Managed compute | Serverless API (pay-as-you-go) --|--|-- Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat
-Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large <br> Mistral-small
+Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-Nemo
Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual JAIS | Not available | jais-30b-chat
-Phi3 family models | Phi-3-small-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi3-medium-128k-instruct <br> Phi3-medium-4k-instruct
+Phi3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct
Nixtla | Not available | TimeGEN-1 Other models | Available | Not available
Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/p
Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | West US 3 Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | Not available Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Mistral-Large | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
+Mistral Large (2402) <br> Mistral Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
+Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Phi-3-mini-4k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, Canada Central, West US 3 | Not available
-Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Phi-3-small-8k-instruct <br> Phi-3-small-128k-Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available ### Content safety for models deployed via MaaS
machine-learning How To Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md
Title: How to deploy Mistral family of models with Azure Machine Learning studio
-description: Learn how to deploy Mistral Large with Azure Machine Learning studio.
+description: Learn how to deploy the Mistral family of models with Azure Machine Learning studio.
In this article, you learn how to use Azure Machine Learning studio to deploy th
Mistral AI offers two categories of models in Azure Machine Learning studio. These models are available in the [model catalog](concept-model-catalog.md). -- __Premium models__: Mistral Large and Mistral Small. These models can be deployed as serverless APIs with pay-as-you-go token-based billing.-- __Open models__: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models can be deployed to managed computes in your own Azure subscription.
+* __Premium models__: Mistral Large (2402), Mistral Large (2407), and Mistral Small.
+* __Open models__: Mistral Nemo, Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01.
+
+All the premium models and Mistral Nemo (an open model) can be deployed as serverless APIs with pay-as-you-go token-based billing. The other open models can be deployed to managed computes in your own Azure subscription.
You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection.
You can browse the Mistral family of models in the model catalog by filtering on
# [Mistral Large](#tab/mistral-large)
-Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities.
+Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task, thanks to its state-of-the-art reasoning and knowledge capabilities. Two variants of the Mistral Large model are available:
+
+- Mistral Large (2402)
+- Mistral Large (2407)
-Additionally, Mistral Large is:
+Additionally, some attributes of _Mistral Large (2402)_ include:
- __Specialized in RAG.__ Crucial information isn't lost in the middle of long context windows (up to 32 K tokens). - __Strong in coding.__ Code generation, review, and comments. Supports all mainstream coding languages. - __Multi-lingual by design.__ Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported. - __Responsible AI compliant.__ Efficient guardrails baked in the model, and extra safety layer with the `safe_mode` option.
+And attributes of _Mistral Large (2407)_ include:
+
+- **Multi-lingual by design.** Supports dozens of languages, including English, French, German, Spanish, and Italian.
+- **Proficient in coding.** Trained on more than 80 coding languages, including Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
+- **Agent-centric.** Possesses agentic capabilities with native function calling and JSON outputting.
+- **Advanced in reasoning.** Demonstrates state-of-the-art mathematical and reasoning capabilities.
++ # [Mistral Small](#tab/mistral-small) Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Mistral Small is: -- **A small model optimized for low latency.** Very efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model, it outperforms Mixtral-8x7B and has lower latency.
+- **A small model optimized for low latency.** Efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model, it outperforms Mixtral-8x7B and has lower latency.
- **Specialized in RAG.** Crucial information isn't lost in the middle of long context windows (up to 32K tokens). - **Strong in coding.** Code generation, review, and comments. Supports all mainstream coding languages. - **Multi-lingual by design.** Best-in-class performance in French, German, Spanish, Italian, and English. Dozens of other languages are supported. - **Responsible AI compliant.** Efficient guardrails baked in the model, and extra safety layer with the `safe_mode` option. +
+# [Mistral Nemo](#tab/mistral-nemo)
+
+Mistral Nemo is a cutting-edge Large Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
+
+Mistral Nemo is a 12B model, making it a powerful drop-in replacement for any system using Mistral 7B, which it supersedes. It supports a context length of 128K, and it accepts only text inputs and generates text outputs.
+
+Additionally, Mistral Nemo is:
+
+- **Jointly developed with Nvidia.** This collaboration has resulted in a powerful 12B model that pushes the boundaries of language understanding and generation.
+- **Multilingual proficient.** Mistral Nemo is equipped with a tokenizer called Tekken, which is designed for multilingual applications. It supports over 100 languages, such as English, French, German, and Spanish. Tekken is more efficient than the Llama 3 tokenizer in compressing text for approximately 85% of all languages, with significant improvements in Malayalam, Hindi, Arabic, and prevalent European languages.
+- **Agent-centric.** Mistral Nemo possesses top-tier agentic capabilities, including native function calling and JSON outputting.
+- **Advanced in reasoning.** Mistral Nemo demonstrates state-of-the-art mathematical and reasoning capabilities within its size category.
+ [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
Mistral Small is:
Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
-**Mistral Large** and **Mistral Small** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models.
+**Mistral Large (2402)**, **Mistral Large (2407)**, **Mistral Small**, and **Mistral Nemo** can be deployed as a serverless API with pay-as-you-go billing and are offered by Mistral AI through the Microsoft Azure Marketplace. Mistral AI can change or update the terms of use and pricing of these models.
### Prerequisites
Certain models in the model catalog can be deployed as a serverless API with pay
### Create a new deployment
-The following steps demonstrate the deployment of Mistral Large, but you can use the same steps to deploy Mistral Small by replacing the model name.
+The following steps demonstrate the deployment of Mistral Large (2402), but you can use the same steps to deploy Mistral Nemo or any of the premium Mistral models by replacing the model name.
To create a deployment: 1. Go to [Azure Machine Learning studio](https://ml.azure.com/home). 1. Select the workspace in which you want to deploy your model. To use the serverless API model deployment offering, your workspace must belong to one of the regions listed in the [prerequisites](#prerequisites).
-1. Choose the model you want to deploy, for example Mistral-large, from the [model catalog](https://ml.azure.com/model/catalog).
+1. Choose the model you want to deploy, for example the Mistral Large (2402) model, from the [model catalog](https://ml.azure.com/model/catalog).
Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
To create a deployment:
1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. 1. You can also select the **Pricing and terms** tab to learn about pricing for the selected model.
-1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Mistral-large). This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Mistral Large (2402)). This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, you'll see a **Continue to deploy** option to select.
To learn about billing for Mistral models deployed as a serverless API with pay-
### Consume the Mistral family of models as a service
-You can consume Mistral Large by using the chat API.
+You can consume Mistral models by using the chat API.
1. In the **workspace**, select **Endpoints** > **Serverless endpoints**. 1. Find and select the deployment you created.
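For illustration only, the following curl sketch sends a chat request to a serverless deployment. The endpoint host, route, and key are placeholders taken from the deployment's details page, and the exact route can differ between the native Mistral API and the Azure AI Model Inference API.

```bash
# Hedged sketch: call the chat completions route of a Mistral serverless API deployment.
curl -X POST "https://<your-serverless-endpoint>/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-endpoint-key>" \
  -d '{
        "messages": [{"role": "user", "content": "Summarize the benefits of serverless APIs."}],
        "max_tokens": 256,
        "temperature": 0.7
      }'
```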
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
Previously updated : 05/09/2022 Last updated : 07/24/2024 - references_regions - ignite-2023
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
With data encryption with customer-managed keys for Azure Database for MySQL flexible server, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and ultimately controls the key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys.
+> [!NOTE]
+> Azure Key Vault Managed HSM (Hardware Security Module) is currently supported for customer-managed keys for Azure Database for MySQL Flexible Server.
+ ## Benefits Data encryption with customer-managed keys for Azure Database for MySQL flexible server provides the following benefits: - You fully control data access by the ability to remove the key and make the database inaccessible - Full control over the key lifecycle, including rotation of the key to aligning with corporate policies-- Central management and organization of keys in Azure Key Vault
+- Central management and organization of keys in Azure Key Vault or Managed HSM
- Ability to implement separation of duties between security officers, DBA, and system administrators ## How does data encryption with a customer-managed key work?
The UMI must have the following access to the key vault:
- **Wrap Key**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL flexible server instance. - **Unwrap Key**: To be able to decrypt the DEK. Azure Database for MySQL flexible server needs the decrypted DEK to encrypt/decrypt the data.
+If RBAC is enabled, the UMI must also be assigned the following role:
+
+- **Key Vault Crypto Service Encryption User**, or a custom role that includes the following permissions:
+ - Microsoft.KeyVault/vaults/keys/wrap/action
+ - Microsoft.KeyVault/vaults/keys/unwrap/action
+ - Microsoft.KeyVault/vaults/keys/read
+- For Managed HSM, assign the **Managed HSM Crypto Service Encryption User** role
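As a hedged example, the following Azure CLI commands grant a user-assigned managed identity the built-in role on a key vault. The identity, resource group, and vault names are placeholders.

```bash
# Hedged sketch: assign the Key Vault Crypto Service Encryption User role to the UMI.
principal_id=$(az identity show --name <umi-name> --resource-group <resource-group> --query principalId -o tsv)
vault_id=$(az keyvault show --name <key-vault-name> --query id -o tsv)

az role assignment create \
  --assignee-object-id "$principal_id" \
  --assignee-principal-type ServicePrincipal \
  --role "Key Vault Crypto Service Encryption User" \
  --scope "$vault_id"
```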
++ ### Terminology and description **Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that encrypts and decrypts a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
After logging is enabled, auditors can use Azure Monitor to review Key Vault aud
## Requirements for configuring data encryption for Azure Database for MySQL flexible server
-Before you attempt to configure Key Vault, be sure to address the following requirements.
+Before you attempt to configure Key Vault or Managed HSM, be sure to address the following requirements.
- The Key Vault and Azure Database for MySQL flexible server instance must belong to the same Microsoft Entra tenant. Cross-tenant Key Vault and flexible server interactions need to be supported. You'll need to reconfigure data encryption if you move Key Vault resources after performing the configuration. - The Key Vault and Azure Database for MySQL flexible server instance must reside in the same region.
Before you attempt to configure the CMK, be sure to address the following requir
## Recommendations for configuring data encryption
-As you configure Key Vault to use data encryption using a customer-managed key, keep in mind the following recommendations.
+As you configure Key Vault or Managed HSM to use data encryption using a customer-managed key, keep in mind the following recommendations.
- Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion. - Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
As you configure Key Vault to use data encryption using a customer-managed key,
- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey). > [!NOTE]
-> * It is advised to use a key vault from the same region, but if necessary, you can use a key vault from another region by specifying the "enter key identifier" information.
-> * RSA key stored in **Azure Key Vault Managed HSM**, is currently not supported.
+> * It is advised to use a key vault from the same region, but if necessary, you can use a key vault from another region by specifying the "enter key identifier" information. The key vault managed HSM must be in the same region as the MySQL flexible server.
+ ## Inaccessible customer-managed key condition
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
az login
az account set --subscription \<subscription id\> ``` -- In Azure Key Vault, create a key vault and a key. The key vault must have the following properties to use as a customer-managed key:
+- In Azure Key Vault, create a key vault or managed HSM and a key. The key vault or managed HSM must have the following properties to use as a customer-managed key:
[Soft delete](../../key-vault/general/soft-delete-overview.md):
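For example, here's a minimal sketch that creates a key vault with purge protection (soft delete is enabled by default) and an RSA key to use as the customer-managed key. All names and the region are placeholders.

```bash
# Hedged sketch: create a key vault and an RSA key for data encryption at rest.
az keyvault create --name <key-vault-name> --resource-group <resource-group> \
  --location <region> --enable-purge-protection true

az keyvault key create --vault-name <key-vault-name> --name <key-name> \
  --kty RSA --size 2048
```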
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
In this tutorial, you learn how to:
- Configure data encryption for replica servers. > [!NOTE]
-> Azure key vault access configuration now supports two types of permission models - [Azure role-based access control](../../role-based-access-control/overview.md) and [Vault access policy](../../key-vault/general/assign-access-policy.md). The tutorial describes configuring data encryption for Azure Database for MySQL flexible server using Vault access policy. However, you can choose to use Azure RBAC as permission model to grant access to Azure Key Vault. To do so, you need any built-in or custom role that has below three permissions and assign it through "role assignments" using Access control (IAM) tab in the keyvault: a) KeyVault/vaults/keys/wrap/action b) KeyVault/vaults/keys/unwrap/action c) KeyVault/vaults/keys/read
+ > Azure key vault access configuration now supports two types of permission models - [Azure role-based access control](../../role-based-access-control/overview.md) and [Vault access policy](../../key-vault/general/assign-access-policy.md). The tutorial describes configuring data encryption for Azure Database for MySQL flexible server using Vault access policy. However, you can choose to use Azure RBAC as permission model to grant access to Azure Key Vault. To do so, you need any built-in or custom role that has below three permissions and assign it through "role assignments" using Access control (IAM) tab in the keyvault: a) KeyVault/vaults/keys/wrap/action b) KeyVault/vaults/keys/unwrap/action c) KeyVault/vaults/keys/read. For Azure key vault managed HSM, you will also need to assign the "Managed HSM Crypto Service Encryption User" role assignment in RBAC.
After your Azure Database for MySQL flexible server instance is encrypted with a
## Next steps - [Customer managed keys data encryption](concepts-customer-managed-key.md)+ - [Data encryption with Azure CLI](how-to-data-encryption-cli.md)
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
See [Server concepts](concept-servers.md) for more information.
## Enterprise grade security, compliance, and privacy
-Azure Database for MySQL flexible server uses the FIPS 140-2 validated cryptographic module to store data at rest. Data, including backups and temporary files created while running queries, are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system-managed (default).
+Azure Database for MySQL flexible server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default). You can also use customer managed keys (CMKs) to bring your own key (BYOK) stored in an Azure Key Vault or Managed Hardware Security Module (HSM) for data encryption at rest.
+
+For more information, see [data encryption with customer managed keys for Azure Database for MySQL flexible server instances](concepts-customer-managed-key.md).
Azure Database for MySQL flexible server encrypts data in-motion with transport layer security enforced by default. Azure Database for MySQL flexible server by default supports encrypted connections using Transport Layer Security (TLS 1.2) and all incoming connections with TLS 1.0 and TLS 1.1 are denied. You can disable TLS/SSL enforcement by setting the require_secure_transport server parameter and then setting the minimum tls_version for your server.
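As a hedged illustration, the following commands check the TLS-related server parameters and open an encrypted connection. The server and admin names are placeholders.

```bash
# Hedged sketch: verify TLS enforcement settings and connect over an encrypted session.
az mysql flexible-server parameter show --resource-group <resource-group> \
  --server-name <server-name> --name require_secure_transport

az mysql flexible-server parameter show --resource-group <resource-group> \
  --server-name <server-name> --name tls_version

mysql -h <server-name>.mysql.database.azure.com -u <admin-user> -p --ssl-mode=REQUIRED
```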
network-watcher Nsg Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-portal.md
Previously updated : 05/30/2024 Last updated : 07/24/2024 #CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher NSG flow logs so that I can analyze it later.
In this article, you learn how to create, change, disable, or delete an NSG flow
- A network security group. If you need to create a network security group, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md?tabs=network-security-group-portal). -- An Azure storage account. If you need to create a storage account, see [Create a storage account using PowerShell](../storage/common/storage-account-create.md?tabs=azure-portal).
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using the Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal).
## Register Insights provider
In this article, you learn how to create, change, disable, or delete an NSG flow
1. In the search box at the top of the portal, enter *subscriptions*. Select **Subscriptions** from the search results.
+ :::image type="content" source="./media/nsg-flow-logs-portal/subscriptions.png" alt-text="Screenshot that shows how to search for Subscriptions in the Azure portal." lightbox="./media/nsg-flow-logs-portal/subscriptions.png":::
+ 1. Select the Azure subscription that you want to enable the provider for in **Subscriptions**. 1. Under **Settings**, select **Resource providers**.
Create a flow log for your network security group. This NSG flow log is saved in
:::image type="content" source="./media/nsg-flow-logs-portal/flow-logs.png" alt-text="Screenshot of Flow logs page in the Azure portal." lightbox="./media/nsg-flow-logs-portal/flow-logs.png":::
-1. Enter or select the following values in **Create a flow log**:
+1. On the **Basics** tab of **Create a flow log**, enter or select the following values:
| Setting | Value | | - | -- | | **Project details** | | | Subscription | Select the Azure subscription of your network security group that you want to log. |
- | Network security group | Select **+ Select resource**. <br> In **Select network security group**, select **myNSG**. Then, select **Confirm selection**. |
- | Flow Log Name | Enter a name for the flow log or leave the default name. **myNSG-myResourceGroup-flowlog** is the default name for this example. |
+ | Flow log type | Select **Network security group**, and then select **+ Select target resource**. <br> Select the network security group that you want to create a flow log for, and then select **Confirm selection**. |
+ | Flow Log Name | Enter a name for the flow log or leave the default name. Azure portal uses ***{ResourceName}-{ResourceGroupName}-flowlog*** as a default name for the flow log. **myNSG-myResourceGroup-flowlog** is the default name used in this article. |
| **Instance details** | | | Subscription | Select the Azure subscription of your storage account. |
- | Storage Accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
- | Retention (days) | Enter a retention time for the logs. Enter *0* if you want to retain the flow logs data in the storage account forever (until you delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). |
+ | Storage accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
+ | Retention (days) | Enter a retention time for the logs (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md?toc=/azure/network-watcher/toc.json#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). |
:::image type="content" source="./media/nsg-flow-logs-portal/create-nsg-flow-log.png" alt-text="Screenshot of creating an NSG flow log in the Azure portal."::: > [!NOTE]
- > If the storage account is in a different subscription, the network security group and storage account must be associated with the same Azure Active Directory tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md).
+ > If the storage account is in a different subscription, the network security group and storage account must be associated with the same Microsoft Entra tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md).
+
+1. To enable traffic analytics, select **Next: Analytics** button, or select the **Analytics** tab. Enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Flow logs version | Select the version of the network security group flow log. Available options are **Version 1** and **Version 2**. The default version is version 2. For more information, see [Flow logging for network security groups](nsg-flow-logs-overview.md). |
+ | Enable traffic analytics | Select the checkbox to enable traffic analytics for your flow log. |
+ | Traffic analytics processing interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**. The default processing interval is every one hour. For more information, see [Traffic analytics](traffic-analytics.md). |
+ | Subscription | Select the Azure subscription of your Log Analytics workspace. |
+ | Log Analytics Workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
+
+ :::image type="content" source="./media/nsg-flow-logs-portal/create-nsg-flow-log-analytics.png" alt-text="Screenshot that shows how to enable traffic analytics for a new flow log in the Azure portal.":::
+
+ > [!NOTE]
+ > To create and select a Log Analytics workspace other than the default one, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=/azure/network-watcher/toc.json).
1. Select **Review + create**. 1. Review the settings, and then select **Create**.
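If you prefer scripting to the portal, the same flow log can be created with the Azure CLI. The following is a minimal sketch rather than the article's own procedure: the resource names (myNSG, myResourceGroup, mystorageaccount, myWorkspace) and the eastus region are placeholders, and parameter availability can vary with your Azure CLI version.

```bash
# Create an NSG flow log with traffic analytics enabled (placeholder names).
az network watcher flow-log create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myNSG-myResourceGroup-flowlog \
  --nsg myNSG \
  --storage-account mystorageaccount \
  --log-version 2 \
  --retention 0 \
  --traffic-analytics true \
  --workspace myWorkspace \
  --interval 60
```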
-## Create a flow log and traffic analytics workspace
+## Enable or disable traffic analytics
-Create a flow log for your network security group and enable traffic analytics. The NSG flow log is saved in an Azure storage account.
+Enable traffic analytics for a flow log to analyze the flow log data. Traffic analytics provides insights into your traffic patterns. You can enable or disable traffic analytics for a flow log at any time.
+
+To enable traffic analytics for a flow log, follow these steps:
1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** from the search results. 1. Under **Logs**, select **Flow logs**.
-1. In **Network Watcher | Flow logs**, select **+ Create** or **Create flow log** blue button.
-
- :::image type="content" source="./media/nsg-flow-logs-portal/flow-logs.png" alt-text="Screenshot of Flow logs page in the Azure portal." lightbox="./media/nsg-flow-logs-portal/flow-logs.png":::
-
-1. Enter or select the following values in **Create a flow log**:
+1. In **Network Watcher | Flow logs**, select the flow log that you want to enable traffic analytics for.
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select the Azure subscription of your network security group that you want to log. |
- | Network security group | Select **+ Select resource**. <br> In **Select network security group**, select **myNSG**. Then, select **Confirm selection**. |
- | Flow Log Name | Enter a name for the flow log or leave the default name. By default, Azure portal creates *{network-security-group}-{resource-group}-flowlog* flow log in **NetworkWatcherRG** resource group. |
- | **Instance details** | |
- | Subscription | Select the Azure subscription of your storage account. |
- | Storage Accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
- | Retention (days) | Enter a retention time for the logs. Enter *0* if you want to retain the flow logs data in the storage account forever (until you delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). |
-
- :::image type="content" source="./media/nsg-flow-logs-portal/create-nsg-flow-log-basics.png" alt-text="Screenshot of the Basics tab of Create a flow log in the Azure portal.":::
+1. In **Flow logs settings**, check the **Enable traffic analytics** checkbox.
- > [!NOTE]
- > If the storage account is in a different subscription, the network security group and storage account must be associated with the same Azure Active Directory tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md).
+ :::image type="content" source="./media/nsg-flow-logs-portal/enable-traffic-analytics.png" alt-text="Screenshot that shows how to enable traffic analytics for an existing flow log in the Azure portal." lightbox="./media/nsg-flow-logs-portal/enable-traffic-analytics.png":::
-1. Select **Next: Analytics** button, or select **Analytics** tab. Then enter or select the following values:
+1. Select the following values:
| Setting | Value | | - | -- |
- | Flow Logs Version | Select the flow log version. Version 2 is selected by default when you create a flow log using the Azure portal. For more information about flow logs versions, see [Log format of NSG flow logs](nsg-flow-logs-overview.md#log-format). |
- | **Traffic Analytics** | |
- | Enable Traffic Analytics | Select the checkbox to enable traffic analytics for your flow log. |
- | Traffic Analytics processing interval | Select the processing interval that you prefer, available options are: **Every 1 hour** and **Every 10 mins**. The default processing interval is every one hour. For more information, see [Traffic Analytics](traffic-analytics.md). |
| Subscription | Select the Azure subscription of your Log Analytics workspace. |
- | Log Analytics Workspace | Select your Log Analytics workspace. By default, Azure portal creates and selects *DefaultWorkspace-{subscription-id}-{region}* Log Analytics workspace in *defaultresourcegroup-{Region}* resource group. |
+ | Log Analytics workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
+ | Traffic logging interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**; the default is every hour. For more information, see [Traffic analytics](traffic-analytics.md). |
- :::image type="content" source="./media/nsg-flow-logs-portal/enable-traffic-analytics.png" alt-text="Screenshot of enabling traffic analytics for a flow log in the Azure portal.":::
+ :::image type="content" source="./media/nsg-flow-logs-portal/enable-traffic-analytics-settings.png" alt-text="Screenshot that shows configurations of traffic analytics for an existing flow log in the Azure portal." lightbox="./media/nsg-flow-logs-portal/enable-traffic-analytics-settings.png":::
-1. Select **Review + create**.
+1. Select **Save** to apply the changes.
-1. Review the settings, and then select **Create**.
+To disable traffic analytics for a flow log, follow the previous steps 1-3, then clear the **Enable traffic analytics** checkbox and select **Save**.
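The same toggle can be scripted. Here's a minimal Azure CLI sketch with placeholder names; depending on your CLI version, `az network watcher flow-log update` might require you to restate existing properties such as the storage account, so treat this as an outline rather than a complete command.

```bash
# Enable traffic analytics on an existing flow log (placeholder names).
az network watcher flow-log update \
  --location eastus \
  --resource-group myResourceGroup \
  --name myNSG-myResourceGroup-flowlog \
  --traffic-analytics true \
  --workspace myWorkspace \
  --interval 10

# Disable traffic analytics again.
az network watcher flow-log update \
  --location eastus \
  --resource-group myResourceGroup \
  --name myNSG-myResourceGroup-flowlog \
  --traffic-analytics false
```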
+ ## Change a flow log
You can change the properties of a flow log after you create it. For example, yo
1. In **Flow logs settings**, you can change any of the following settings:
- - **Flow Logs Version**: Change the flow log version. Available versions are: version 1 and version 2. Version 2 is selected by default when you create a flow log using the Azure portal. For more information about flow logs versions, see [Log format of NSG flow logs](nsg-flow-logs-overview.md#log-format).
- - **Storage Account**: Change the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**.
- - **Retention (days)**: Change the retention time in the storage account. Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete the data from the storage account).
- - **Traffic Analytics**: Enable or disable traffic analytics for your flow log. For more information, see [Traffic Analytics](traffic-analytics.md).
- - **Traffic Analytics processing interval**: Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic Analytics](traffic-analytics.md).
- - **Log Analytics workspace**: Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled).
-
- :::image type="content" source="./media/nsg-flow-logs-portal/change-flow-log.png" alt-text="Screenshot of Flow logs settings page in the Azure portal where you can change some settings." lightbox="./media/nsg-flow-logs-portal/change-flow-log.png":::
+ | Setting | Value |
+ | - | -- |
+ | **Storage account** | |
+ | Subscription | Change the Azure subscription of the storage account that you want to use. |
+ | Storage account | Change the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
+ | Retention (days) | Change the retention time in the storage account. Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete the data from the storage account). |
+ | **Traffic analytics** | |
+ | Enable traffic analytics | Enable or disable traffic analytics by checking or unchecking the checkbox. |
+ | Subscription | Change the Azure subscription of the Log Analytics workspace that you want to use. |
+ | Log analytics workspace | Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled). |
+ | Traffic logging interval | Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic Analytics](traffic-analytics.md). |
## List all flow logs
network-watcher Vnet Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md
Previously updated : 07/23/2024 Last updated : 07/24/2024 #CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher VNet flow logs so that I can analyze it later.
Create a flow log for your virtual network, subnet, or network interface. This f
| - | -- | | **Project details** | | | Subscription | Select the Azure subscription of your virtual network that you want to log. |
- | Flow log type | Select **Virtual Network** then select **+ Select target resource** (available options are: **Virtual network**, **Subnet**, and **Network interface**). <br> Select the resources that you want to flow log, then select **Confirm selection**. |
+ | Flow log type | Select **Virtual network** then select **+ Select target resource** (available options are: **Virtual network**, **Subnet**, and **Network interface**). <br> Select the resources that you want to flow log, then select **Confirm selection**. |
| Flow Log Name | Enter a name for the flow log or leave the default name. Azure portal uses ***{ResourceName}-{ResourceGroupName}-flowlog*** as a default name for the flow log. | | **Instance details** | | | Subscription | Select the Azure subscription of the storage account. |
- | Storage Accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
+ | Storage accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
| Retention (days) | Enter a retention time for the logs (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md?toc=/azure/network-watcher/toc.json#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). | :::image type="content" source="./media/vnet-flow-logs-portal/create-vnet-flow-log-basics.png" alt-text="Screenshot that shows the Basics tab of creating a virtual network flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/create-vnet-flow-log-basics.png":::
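For comparison, here's a hedged Azure CLI sketch of the same Basics-tab configuration. The `--vnet` parameter assumes a recent Azure CLI version with virtual network flow log support, and all resource names are placeholders.

```bash
# Create a virtual network flow log (placeholder names; assumes a CLI version
# that supports the --vnet parameter on `az network watcher flow-log create`).
az network watcher flow-log create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myVNet-myResourceGroup-flowlog \
  --vnet myVNet \
  --storage-account mystorageaccount \
  --retention 0
```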
To enable traffic analytics for a flow log, follow these steps:
1. In **Network Watcher | Flow logs**, select the flow log that you want to enable traffic analytics for.
-1. In **Flow logs settings**, check the **Enable traffic analytics** checkbox.
+1. In **Flow logs settings**, under **Traffic analytics**, check the **Enable traffic analytics** checkbox.
:::image type="content" source="./media/vnet-flow-logs-portal/enable-traffic-analytics.png" alt-text="Screenshot that shows how to enable traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-traffic-analytics.png":::
To enable traffic analytics for a flow log, follow these steps:
| Setting | Value | | - | -- | | Subscription | Select the Azure subscription of your Log Analytics workspace. |
- | Log Analytics Workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
- | Traffic analytics processing interval | Select the processing interval that you prefer, available options are: **Every 1 hour** and **Every 10 mins**. The default processing interval is every one hour. For more information, see [Traffic analytics](traffic-analytics.md). |
+ | Log Analytics workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
+ | Traffic logging interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**; the default is every hour. For more information, see [Traffic analytics](traffic-analytics.md). |
:::image type="content" source="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png" alt-text="Screenshot that shows configurations of traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png":::
payment-hsm Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/lifecycle-management.md
+
+ Title: Azure Payment HSM Lifecycle Management
+description: Azure Payment HSM is a bare metal service utilizing Thales payShield 10K devices in Azure data centers, providing automated allocation and deallocation, physical security management, and customer responsibility for key management and HSM monitoring.
+++++ Last updated : 07/23/2024++++
+# Azure Payment HSM Lifecycle Management
+
+Azure Payment HSM is a Bare Metal service delivered via Thales payShield 10K. Microsoft collaborates with Thales to deploy Thales payShield 10K HSM devices into Azure data centers, modifying them to allow automated allocation/deallocation of those devices to customers. The payShield in Azure has the same management and host command interfaces as on-premises payShield, enabling customers to use the same payShield manager smart cards, readers, and payShield TMD devices.
+
+## Deployment and allocation
+
+Microsoft personnel deploy HSMs into Azure data centers and allocate them to customers through automated tools on demand. Once an HSM is allocated, Microsoft relinquishes logical access and does not maintain console access. Microsoft's administrative access is disabled, and the customer assumes full responsibility for the configuration and maintenance of the HSM and its software.
+
+## Security and compliance
+
+Microsoft handles tasks related to the physical security requirements of the HSM, based on PCI DSS, PCI 3DS, PCI PIN, and PCI P2PE requirements. Deallocating HSMs from customers erases all encryption material from the device as part of the mechanism that re-enables Microsoft's administrative access. Microsoft does not have any ability to manage or affect the security of keys beyond hosting the physical HSM devices.
+
+## Key management & customer scenarios
+
+Microsoft customers using Payment HSMs utilize 2 or more "Admin cards" provided by Thales to create a Local Master Key (LMK) and security domain. All key management occurs within this domain.
+
+Several scenarios may occur:
+
+- **Key Loading:** Customers may receive printed key components from third-parties or internal backups for loading into the HSM. Compliance requires the use of a PCI Key-Loading Device (KLD) due to the lack of direct physical access to the HSM by customers.
+
+- **Key Distribution:** Customers may generate keys within the HSM, but then need to distribute those keys to third parties in the form of key components. Because the Payment HSM solution is cloud-based, it can't support printing key components directly from the HSM. However, customers may use a TMD or similar solution to export keys and print them from the customer's secure location.
+
+## HSM firmware management
+
+Microsoft allocates Payment HSMs with a base image by default that includes firmware approved for FIPS 140-2 Level 3 certification and PCI PTS HSM v3 compliance. Microsoft is responsible for applying security patches to unallocated HSMs. Customers are responsible for ongoing patching and maintenance of the allocated HSM.
+
+## HSM monitoring
+
+Microsoft monitors HSM physical health and network connectivity, which includes each HSM's power, temperature and fan, out-of-band (OOB) connectivity, tamper status, HOST1/HOST2/MGMT link status, upstream networking, and equipment.
+
+Customers are responsible for monitoring their allocated HSM's operational health, which includes HSM error logs and audit logs. Customers can utilize all payShield monitoring solutions.
+
+## Managing unresponsive HSM devices
+
+If a customer-allocated HSM becomes unresponsive, open a support ticket; see [Azure Payment HSM service support guide](support-guide.md#microsoft-support). A representative works with you and the Engineering group to resolve the issue, which might require a reboot or a deallocation/reallocation.
+
+### Rebooting
+
+There are two methods of rebooting:
+
+- **Soft Reboot:** The Engineering group can issue an Out of Band (OOB) request to the device for it to initiate a restart and can quickly verify via Service Audit Logs that it was successful. This option can be exercised shortly after a request via Customer Support. Note that there are some circumstances (device network issues, device hard-lock) that would prevent the HSM from receiving this request.
+
+- **Hard Reboot:** The Engineering group can request that on-site datacenter personnel physically interact with the HSM to reboot it. This option can take longer, depending on the severity of the impact. We highly recommend that customers work with the support and engineering groups to evaluate the impact and determine whether to create a new HSM or wait for the hard reboot.
+
+Customer Data Impact: In either method, customer data should be unaffected by a reboot operation.
+
+### Deallocation/reallocation
+
+There are two methods to deallocate/delete an HSM:
+
+- **Normal Delete:** In this process, the customer releases the HSM via payShield Manager before deleting the HSM in Azure. The deletion checks that the HSM is released (and therefore that all customer content and secrets are removed) before the device is handed back to Microsoft, and it blocks if that check fails. If the deletion is blocked, the customer should release the HSM and then retry the request. See [Tutorial: Remove a commissioned payment HSM](remove-payment-hsm.md?tabs=azure-cli) and the sketch after this list.
+
+- **Force delete:** If the customer can't release the HSM before deletion (for example, because the device is unresponsive), the Engineering group, with a documented request from the customer, can set a flag that bypasses the release check. In this case, when the HSM is deleted, the automated management system performs an OOB "Reclaim" request, which issues a "Release" command on behalf of the previous customer and clears all customer content (data, logs, etc.).
+
+Customer Data Impact: In either method, customer data is irrevocably removed by the "Release Device" command.
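As a rough illustration of the normal delete path, the sketch below assumes you already released the HSM in payShield Manager, and it uses the generic `az resource delete` command with an assumed resource type string and placeholder names rather than any Payment HSM-specific command confirmed by this article.

```bash
# Sketch only: after releasing the HSM via payShield Manager, delete the Azure
# resource. The resource type string and names below are assumptions.
az resource delete \
  --resource-group myResourceGroup \
  --name myPaymentHSM \
  --resource-type "Microsoft.HardwareSecurityModules/paymentHSMs"
```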
+
+### Failed HSMs
+
+If a customer-allocated device suffers an actual HSM hardware failure, the only course of action is to use the "Force Delete" deallocation method, which allows the Azure resource linked with that HSM to be deleted. Once that's complete, on-site datacenter personnel are directed to follow the approved datacenter runbook to destroy the data-bearing devices (HDDs) and the HSM itself.
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- Find out how to [get started with Azure Payment HSM](getting-started.md)
+- Learn how to [Create a payment HSM](create-payment-hsm.md)
+- Read the [frequently asked questions](faq.yml)
playwright-testing Quickstart Automate End To End Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-automate-end-to-end-testing.md
Once you have access to the reporting tool, use the following steps to set up yo
```json "dependencies": {
- "@microsoft/mpt-reporter": "0.1.0-22052024-private-preview"
+ "@microsoft/mpt-reporter": "0.1.0-19072024-private-preview"
} ``` 5. Update the Playwright config file.
Once you have access to the reporting tool, use the following steps to set up yo
targetType: 'inline' script: | 'npm config set @microsoft:registry=https://npm.pkg.github.com'
- 'npm set //npm.pkg.github.com/:_authToken ${{secrets PAT_TOKEN_PACKAGE}}'
+ 'npm set //npm.pkg.github.com/:_authToken ${PAT_TOKEN_PACKAGE}'
'npm install' workingDirectory: path/to/playwright/folder # update accordingly
playwright-testing Quickstart Run End To End Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-run-end-to-end-tests.md
Once you have access to the reporting tool, use the following steps to set up yo
```json "dependencies": {
- "@microsoft/mpt-reporter": "0.1.0-22052024-private-preview"
+ "@microsoft/mpt-reporter": "0.1.0-19072024-private-preview"
} ```
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
These backup files can't be exported or used to create servers outside Azure Dat
Backups on Azure Database for PostgreSQL flexible server instances are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. If none of the databases in the server receive any further modifications after the last snapshot backup is taken, snapshot backups are suspended until new modifications are made in any of the databases, at which point a new snapshot is immediately taken. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
-Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 15 minutes.
+Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 5 minutes.
## Backup redundancy options
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
The following table compares RTO and RPO in a **typical workload** scenario:
| **Capability** | **Burstable** | **General Purpose** | **Memory optimized** | | :: | :-: | :--: | :: |
-| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
+| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 5 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 5 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 5 min |
| Geo-restore from geo-replicated backups | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | | Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
Title: Scaling resources
description: This article describes the resource scaling in Azure Database for PostgreSQL flexible server. - Previously updated : 07/18/2024 Last updated : 07/23/2024
Typically, this process could take anywhere between 2 to 10 minutes with regular
When you update your Azure Database for PostgreSQL flexible server instance in scaling scenarios, we create a new copy of your server (VM) with the updated configuration. We synchronize it with your current one, and switch to the new copy with a 30-second interruption. Then we retire the old server. The process occurs all at no extra cost to you.
-This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers. The experience remains consistent for both high-availablity (HA) and non-HA servers. This feature is enabled in all Azure regions. *No customer action is required* to use this capability.
+This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers. This feature is only available on non-HA servers and is enabled in all Azure regions. *No customer action is required* to use this capability.
For read replica configured servers, scaling operations must follow a specific sequence to ensure data consistency and minimize downtime. For details about that sequence, refer to [scaling with read replicas](./concepts-read-replicas.md#scale).
For read replica configured servers, scaling operations must follow a specific s
- Near-zero downtime scaling doesn't work for a replica server because it's only supported on the primary server. For a replica server, it automatically goes through a regular scaling process. - Near-zero downtime scaling doesn't work if a [virtual network-injected server with a delegated subnet](../flexible-server/concepts-networking-private.md#virtual-network-concepts) doesn't have sufficient usable IP addresses. If you have a standalone server, one extra IP address is necessary. For an HA-enabled server, two extra IP addresses are required. - Logical replication slots aren't preserved during a near-zero downtime failover event. To maintain logical replication slots and ensure data consistency after a scale operation, use the [pg_failover_slot](https://github.com/EnterpriseDB/pg_failover_slots) extension. For more information, see [Enabling extension in a flexible server](../flexible-server/concepts-extensions.md#pg_failover_slots-preview).-- For HA-enabled servers, near-zero downtime scaling is currently enabled for a limited set of regions. More regions will be enabled in a phased manner based on regional capacity. - Near-zero downtime scaling doesn't work with [unlogged tables](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-UNLOGGED). Customers using unlogged tables for any of their data will lose all the data in those tables after the near-zero downtime scaling. ## Related content
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* General availability of [Pgvector 0.7.0](concepts-extensions.md) extension. * General availability support for [Storage-Autogrow with read replicas](concepts-read-replicas.md) * [SCRAM authentication](how-to-connect-scram.md) authentication set as default for new PostgreSQL 14+ new server deployments.
+* General availability support for [System Assigned Managed Identity](concepts-Identity.md) for Azure Database for PostgreSQL flexible server.
## Release: June 2024 * Support for new [minor versions](concepts-supported-versions.md) 16.3, 15.7, 14.12, 13.15, and 12.19 <sup>$</sup>
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Title: Azure Private Endpoint private DNS zone values description: Learn about the private DNS zone values for Azure services that support private endpoints.-+
For Azure services, use the recommended zone names as described in the following
>| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net | >| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.net | eventgrid.azure.net | >| Azure Event Grid (Microsoft.EventGrid/namespaces) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
+>| Azure Event Grid (Microsoft.EventGrid/namespaces/topicSpace) | topicSpace | privatelink.ts.eventgrid.azure.net | eventgrid.azure.net |
>| Azure Event Grid (Microsoft.EventGrid/partnerNamespaces) | partnernamespace | privatelink.eventgrid.azure.net | eventgrid.azure.net | >| Azure API Management (Microsoft.ApiManagement/service) | gateway | privatelink.azure-api.net | azure-api.net | >| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
remote-rendering Commercial Ready https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/commercial-ready/commercial-ready.md
Azure Application Insights helps you understand how people use your Azure Remote
For more information, visit:
-* [Usage Analysis with Application Insights](../../../../azure-monitor/app/usage-overview.md)
+* [Usage Analysis with Application Insights](../../../../azure-monitor/app/usage.md)
## Fast startup time strategies
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
+ vm-linux
Previously updated : 07/19/2024 Last updated : 07/22/2024 # Set up Pacemaker on Red Hat Enterprise Linux in Azure
-[planning-guide]:planning-guide.md
-[deployment-guide]:deployment-guide.md
-[dbms-guide]:dbms-guide-general.md
-[sap-hana-ha]:sap-hana-high-availability.md
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2002167]:https://launchpad.support.sap.com/#/notes/2002167
-[2009879]:https://launchpad.support.sap.com/#/notes/2009879
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[3108316]:https://launchpad.support.sap.com/#/notes/3108316
-[3108302]:https://launchpad.support.sap.com/#/notes/3108302
-
-[virtual-machines-linux-maintenance]:../../virtual-machines/maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot
- This article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise Server (RHEL). The instructions cover RHEL 7, RHEL 8, and RHEL 9.
-## Prerequisites
-
-Read the following SAP Notes and papers first:
-
-* SAP Note [1928533], which has:
- * A list of Azure virtual machine (VM) sizes that are supported for the deployment of SAP software.
- * Important capacity information for Azure VM sizes.
- * The supported SAP software and operating system (OS) and database combinations.
- * The required SAP kernel version for Windows and Linux on Microsoft Azure.
-* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167] recommends OS settings for Red Hat Enterprise Linux.
-* SAP Note [3108316] recommends OS settings for Red Hat Enterprise Linux 9.x.
-* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
-* SAP Note [3108302] has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
-* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
-* SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure.
-* SAP Note [2243692] has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351] has more troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
-* [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
-* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
-* [Azure Virtual Machines deployment for SAP on Linux (this article)][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
-* [SAP HANA system replication in Pacemaker cluster](https://access.redhat.com/articles/3004101)
-* General RHEL documentation:
- * [High Availability (HA) Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
- * [High-Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
- * [High-Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
- * [Support Policies for RHEL High-Availability Clusters - `sbd` and `fence_sbd`](https://access.redhat.com/articles/2800691)
-* Azure-specific RHEL documentation:
- * [Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
- * [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491)
- * [Considerations in Adopting RHEL 8 - High Availability and Clusters](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/high-availability-and-clusters_considerations-in-adopting-rhel-8)
- * [Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL 7.6](https://access.redhat.com/articles/3974941)
- * [RHEL for SAP Offerings on Azure](https://access.redhat.com/articles/5456301)
+## Prerequisites
-## Cluster installation
+Read the following SAP Notes and articles first:
-![Diagram that shows an overview of Pacemaker on RHEL.](./media/high-availability-guide-rhel-pacemaker/pacemaker-rhel.png)
+* RHEL High Availability (HA) documentation
+ * [Configuring and managing high availability clusters](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/index).
+ * [Support Policies for RHEL High-Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691).
+ * [Support Policies for RHEL High Availability clusters - fence_azure_arm](https://access.redhat.com/articles/6627541).
+ * [Software-Emulated Watchdog Known Limitations](https://access.redhat.com/articles/7034141).
+ * [Exploring RHEL High Availability's Components - sbd and fence_sbd](https://access.redhat.com/articles/2943361).
+ * [Design Guidance for RHEL High Availability Clusters - sbd Considerations](https://access.redhat.com/articles/2941601).
+ * [Considerations in adopting RHEL 8 - High Availability and Clusters](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/high-availability-and-clusters_considerations-in-adopting-rhel-8)
-> [!NOTE]
-> Red Hat doesn't support a software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For more information, see [Support Policies for RHEL High-Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691).
->
-> The only supported fencing mechanism for Pacemaker RHEL clusters on Azure is an Azure fence agent.
+* Azure-specific RHEL documentation
+ * [Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341).
+ * [Design Guidance for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3402391).
+
+* RHEL documentation for SAP offerings
+ * [Support Policies for RHEL High Availability Clusters - Management of SAP S/4HANA in a cluster](https://access.redhat.com/articles/4016901).
+ * [Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker](https://access.redhat.com/articles/3974941).
+ * [Configuring SAP HANA system replication in Pacemaker cluster](https://access.redhat.com/articles/3004101).
+ * [Red Hat Enterprise Linux HA Solution for SAP HANA Scale-Out and System Replication](https://access.redhat.com/solutions/4386601).
+
+## Overview
> [!IMPORTANT] > Pacemaker clusters that span multiple Virtual networks(VNets)/subnets are not covered by standard support policies.
-The following items are prefixed with:
+There are two options for configuring fencing in a pacemaker cluster for RHEL on Azure: an Azure fence agent, which restarts a failed node via the Azure APIs, or an SBD device.
-- **[A]**: Applicable to all nodes-- **[1]**: Only applicable to node 1-- **[2]**: Only applicable to node 2
+> [!IMPORTANT]
+> In Azure, RHEL high availability cluster with storage based fencing (fence_sbd) uses software-emulated watchdog. It is important to review [Software-Emulated Watchdog Known Limitations](https://access.redhat.com/articles/7034141) and [Support Policies for RHEL High Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691) when selecting SBD as the fencing mechanism.
-Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9 are marked in the document.
+### Use an SBD device
+
+> [!NOTE]
+> The fencing mechanism with SBD is supported on RHEL 8.8 and higher, and RHEL 9.0 and higher.
+
+You can configure the SBD device by using either of two options:
+
+* SBD with iSCSI target server
+
+ The SBD device requires at least one additional virtual machine (VM) that acts as an Internet Small Computer System Interface (iSCSI) target server and provides an SBD device. These iSCSI target servers can, however, be shared with other pacemaker clusters. The advantage of using an SBD device is that if you're already using SBD devices on-premises, they don't require any changes to how you operate the pacemaker cluster.
+
+ You can use up to three SBD devices for a pacemaker cluster to allow an SBD device to become unavailable (for example, during OS patching of the iSCSI target server). If you want to use more than one SBD device per pacemaker cluster, be sure to deploy multiple iSCSI target servers and connect one SBD from each iSCSI target server. We recommend using either one or three SBD devices. Pacemaker can't automatically fence a cluster node if only two SBD devices are configured and one of them is unavailable. If you want to be able to fence when one iSCSI target server is down, you have to use three SBD devices and, therefore, three iSCSI target servers. That's the most resilient configuration when you're using SBDs.
+
+ ![Diagram of pacemaker with iSCSI target server as SBD device in RHEL](./media/high-availability-guide-suse-pacemaker/pacemaker.png)
+
+ > [!IMPORTANT]
+ > When you're planning to deploy and configure Linux pacemaker cluster nodes and SBD devices, do not allow the routing between your virtual machines and the VMs that are hosting the SBD devices to pass through any other devices, such as a [network virtual appliance (NVA)](https://azure.microsoft.com/solutions/network-appliances/).
+ >
+ > Maintenance events and other issues with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. For more information, see [user-defined routing rules](../../virtual-network/virtual-networks-udr-overview.md).
+
+* SBD with Azure shared disk
+
+ To configure an SBD device, you need to attach at least one Azure shared disk to all virtual machines that are part of the pacemaker cluster. The advantage of an SBD device that uses an Azure shared disk is that you don't need to deploy and configure additional virtual machines.
+
+ ![Diagram of the Azure shared disk SBD device for RHEL Pacemaker cluster.](./media/high-availability-guide-suse-pacemaker/azure-shared-disk-sbd-device.png)
+
+ Here are some important considerations about SBD devices when you configure them by using an Azure shared disk (a CLI sketch for creating and attaching such a disk follows this list):
+
+ * An Azure shared disk with Premium SSD is supported as an SBD device.
+ * SBD devices that use an Azure shared disk are supported on RHEL 8.8 and later.
+ * SBD devices that use an Azure premium shared disk are supported on [locally redundant storage (LRS)](../../virtual-machines/disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks).
+ * Depending on the [type of your deployment](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload), choose the appropriate redundant storage for an Azure shared disk as your SBD device.
+ * An SBD device using LRS for an Azure premium shared disk (skuName - Premium_LRS) is only supported with regional deployment like availability set.
+ * An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with zonal deployment like availability zone, or scale set with FD=1.
+ * A ZRS for managed disk is currently available in the regions listed in [regional availability](../../virtual-machines/disks-redundancy.md#regional-availability) document.
+ * The Azure shared disk that you use for SBD devices doesn't need to be large. The [maxShares](../../virtual-machines/disks-shared-enable.md?tabs=azure-portal#disk-sizes) value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on two-node cluster such as SAP ASCS/ERS or SAP HANA scale-up.
+ * For HANA scale-out with HANA system replication (HSR) and pacemaker, you can use an Azure shared disk for SBD devices in clusters with up to five nodes per replication site because of the current limit of [maxShares](../../virtual-machines/disks-shared-enable.md#disk-sizes).
+ * We don't recommend attaching an Azure shared disk SBD device across pacemaker clusters.
+ * If you use multiple Azure shared disk SBD devices, check on the limit for a maximum number of data disks that can be attached to a VM.
+ * For more information about limitations for Azure shared disks, carefully review the "Limitations" section of [Azure shared disk documentation](../../virtual-machines/disks-shared.md#limitations).
+
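As a minimal sketch of the shared-disk option described above, the commands below create a small Premium ZRS shared disk and attach it to two cluster nodes. Disk size, SKU, names, and node count are placeholder assumptions; match them to your own deployment type and the maxShares guidance in the list.

```bash
# Create a small shared disk for SBD (placeholder names; adjust SKU and size).
az disk create \
  --resource-group myResourceGroup \
  --name sbd-shared-disk \
  --size-gb 4 \
  --sku Premium_ZRS \
  --max-shares 2

# Attach the shared disk to both cluster nodes (placeholder VM names).
az vm disk attach --resource-group myResourceGroup --vm-name prod-cl1-0 --name sbd-shared-disk
az vm disk attach --resource-group myResourceGroup --vm-name prod-cl1-1 --name sbd-shared-disk
```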
+### Use an Azure fence agent
+
+You can set up fencing by using an Azure fence agent. The Azure fence agent requires managed identities for the cluster VMs, or a service principal or managed system identity (MSI), that can restart failed nodes via Azure APIs. The Azure fence agent doesn't require the deployment of additional virtual machines.
+
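A minimal sketch of the managed-identity prerequisite for the Azure fence agent follows. The VM names, resource group, and role name are placeholder assumptions; the role only needs permission to power the cluster VMs off and start them again.

```bash
# Enable a system-assigned managed identity on each cluster VM (placeholder names).
az vm identity assign --resource-group myResourceGroup --name prod-cl1-0
az vm identity assign --resource-group myResourceGroup --name prod-cl1-1

# Grant each identity a role that includes
# Microsoft.Compute/virtualMachines/powerOff/action and
# Microsoft.Compute/virtualMachines/start/action, scoped to the cluster VMs.
# "Linux Fence Agent Role" is an assumed custom role name.
az role assignment create \
  --assignee-object-id "<principal-id-of-vm-identity>" \
  --assignee-principal-type ServicePrincipal \
  --role "Linux Fence Agent Role" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
```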
+## SBD with an iSCSI target server
+
+To use an SBD device that uses an iSCSI target server for fencing, follow the instructions in the next sections.
+
+### Set up the iSCSI target server
+
+You first need to create the iSCSI target virtual machines. You can share iSCSI target servers with multiple pacemaker clusters.
+
+1. Deploy virtual machines that run a supported RHEL OS version, and connect to them via SSH. The VMs don't have to be of large size. VM sizes such as Standard_E2s_v3 or Standard_D2s_v3 are sufficient. Be sure to use Premium storage for the OS disk.
+
+2. It isn't necessary to use RHEL for SAP with HA and Update Services, or RHEL for SAP Apps OS image for the iSCSI target server. A standard RHEL OS image can be used instead. However, be aware that the support life cycle varies between different OS product releases.
+
+3. Run the following commands on all iSCSI target virtual machines.
+
+ 1. Update RHEL.
+
+ ```bash
+ sudo yum -y update
+ ```
+
+ > [!NOTE]
+ > You might need to reboot the node after you upgrade or update the OS.
+
+ 2. Install iSCSI target package.
+
+ ```bash
+ sudo yum install targetcli
+ ```
+
+ 3. Start and configure target to start at boot time.
+
+ ```bash
+ sudo systemctl start target
+ sudo systemctl enable target
+ ```
+
+ 4. Open port `3260` in the firewall
+
+ ```bash
+ sudo firewall-cmd --add-port=3260/tcp --permanent
+ sudo firewall-cmd --add-port=3260/tcp
+ ```
+
+### Create an iSCSI device on the iSCSI target server
-1. **[A]** Register. This step is optional. If you're using RHEL SAP HA-enabled images, this step isn't required.
+To create the iSCSI disks for your SAP system clusters, execute the following commands on every iSCSI target virtual machine. The example illustrates the creation of SBD devices for several clusters, demonstrating the use of a single iSCSI target server for multiple clusters. The SBD device is configured on the OS disk, so ensure there's enough space.
- For example, if you're deploying on RHEL 7, register your VM and attach it to a pool that contains repositories for RHEL 7.
+* ascsnw1: Represents the ASCS/ERS cluster of NW1.
+* dbhn1: Represents the database cluster of HN1.
+* sap-cl1 and sap-cl2: Hostnames of the NW1 ASCS/ERS cluster nodes.
+* hn1-db-0 and hn1-db-1: Hostnames of the database cluster nodes.
+
+In the following instructions, modify the command with your specific hostnames and SIDs as needed.
+
+1. Create the root folder for all SBD devices.
```bash
- sudo subscription-manager register
- # List the available pools
- sudo subscription-manager list --available --matches '*SAP*'
- sudo subscription-manager attach --pool=<pool id>
+ sudo mkdir /sbd
```
- When you attach a pool to an Azure Marketplace pay-as-you-go RHEL image, you're effectively double billed for your RHEL usage. You're billed once for the pay-as-you-go image and once for the RHEL entitlement in the pool you attach. To mitigate this situation, Azure now provides bring-your-own-subscription RHEL images. For more information, see [Red Hat Enterprise Linux bring-your-own-subscription Azure images](../../virtual-machines/workloads/redhat/byos.md).
+2. Create the SBD device for the ASCS/ERS servers of the system NW1.
-1. **[A]** Enable RHEL for SAP repos. This step is optional. If you're using RHEL SAP HA-enabled images, this step isn't required.
+ ```bash
+ sudo targetcli backstores/fileio create sbdascsnw1 /sbd/sbdascsnw1 50M write_back=false
+ sudo targetcli iscsi/ create iqn.2006-04.ascsnw1.local:ascsnw1
+ sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/luns/ create /backstores/fileio/sbdascsnw1
+ sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.sap-cl1.local:sap-cl1
+ sudo targetcli iscsi/iqn.2006-04.ascsnw1.local:ascsnw1/tpg1/acls/ create iqn.2006-04.sap-cl2.local:sap-cl2
+ ```
- To install the required packages on RHEL 7, enable the following repositories:
+3. Create the SBD device for the database cluster of the system HN1.
```bash
- sudo subscription-manager repos --disable "*"
- sudo subscription-manager repos --enable=rhel-7-server-rpms
- sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
- sudo subscription-manager repos --enable=rhel-sap-for-rhel-7-server-rpms
- sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-eus-rpms
+ sudo targetcli backstores/fileio create sbddbhn1 /sbd/sbddbhn1 50M write_back=false
+ sudo targetcli iscsi/ create iqn.2006-04.dbhn1.local:dbhn1
+ sudo targetcli iscsi/iqn.2006-04.dbhn1.local:dbhn1/tpg1/luns/ create /backstores/fileio/sbddbhn1
+ sudo targetcli iscsi/iqn.2006-04.dbhn1.local:dbhn1/tpg1/acls/ create iqn.2006-04.hn1-db-0.local:hn1-db-0
+ sudo targetcli iscsi/iqn.2006-04.dbhn1.local:dbhn1/tpg1/acls/ create iqn.2006-04.hn1-db-1.local:hn1-db-1
```
-1. **[A]** Install the RHEL HA add-on.
+4. Save the targetcli configuration.
+
+ ```bash
+ sudo targetcli saveconfig
+ ```
+
+5. Check to ensure that everything was set up correctly
+
+ ```bash
+ sudo targetcli ls
+
+ o- / ......................................................................................................................... [...]
+ o- backstores .............................................................................................................. [...]
+ | o- block .................................................................................................. [Storage Objects: 0]
+ | o- fileio ................................................................................................. [Storage Objects: 2]
+ | | o- sbdascsnw1 ............................................................... [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
+ | | | o- alua ................................................................................................... [ALUA Groups: 1]
+ | | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
+ | | o- sbddbhn1 ................................................................... [/sbd/sbddbhn1 (50.0MiB) write-thru activated]
+ | | o- alua ................................................................................................... [ALUA Groups: 1]
+ | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
+ | o- pscsi .................................................................................................. [Storage Objects: 0]
+ | o- ramdisk ................................................................................................ [Storage Objects: 0]
+ o- iscsi ............................................................................................................ [Targets: 2]
+ | o- iqn.2006-04.dbhn1.local:dbhn1 ..................................................................................... [TPGs: 1]
+ | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
+ | | o- acls .......................................................................................................... [ACLs: 2]
+ | | | o- iqn.2006-04.hn1-db-0.local:hn1-db-0 .................................................................. [Mapped LUNs: 1]
+ | | | | o- mapped_lun0 ............................................................................... [lun0 fileio/sbdhdb (rw)]
+ | | | o- iqn.2006-04.hn1-db-1.local:hn1-db-1 .................................................................. [Mapped LUNs: 1]
+ | | | o- mapped_lun0 ............................................................................... [lun0 fileio/sbdhdb (rw)]
+ | | o- luns .......................................................................................................... [LUNs: 1]
+ | | | o- lun0 ............................................................. [fileio/sbddbhn1 (/sbd/sbddbhn1) (default_tg_pt_gp)]
+ | | o- portals .................................................................................................... [Portals: 1]
+ | | o- 0.0.0.0:3260 ..................................................................................................... [OK]
+ | o- iqn.2006-04.ascsnw1.local:ascsnw1 ................................................................................. [TPGs: 1]
+ | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
+ | o- acls .......................................................................................................... [ACLs: 2]
+ | | o- iqn.2006-04.sap-cl1.local:sap-cl1 .................................................................... [Mapped LUNs: 1]
+ | | | o- mapped_lun0 ........................................................................... [lun0 fileio/sbdascsers (rw)]
+ | | o- iqn.2006-04.sap-cl2.local:sap-cl2 .................................................................... [Mapped LUNs: 1]
+ | | o- mapped_lun0 ........................................................................... [lun0 fileio/sbdascsers (rw)]
+ | o- luns .......................................................................................................... [LUNs: 1]
+ | | o- lun0 ......................................................... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
+ | o- portals .................................................................................................... [Portals: 1]
+ | o- 0.0.0.0:3260 ..................................................................................................... [OK]
+ o- loopback ......................................................................................................... [Targets: 0]
+ ```
+
+### Set up the iSCSI target server SBD device
+
+**[A]**: Applies to all nodes.
+**[1]**: Applies only to node 1.
+**[2]**: Applies only to node 2.
+
+On the cluster nodes, connect to and discover the iSCSI devices that were created in the earlier section. Run the following commands on the nodes of the new cluster that you want to create.
+
+1. **[A]** Install or update iSCSI initiator utils on all cluster nodes.
+
+ ```bash
+ sudo yum install -y iscsi-initiator-utils
+ ```
+
+2. **[A]** Install cluster and SBD packages on all cluster nodes.
+
+ ```bash
+ sudo yum install -y pcs pacemaker sbd fence-agents-sbd
+ ```
+
+3. **[A]** Enable iSCSI service.
+
+ ```bash
+ sudo systemctl enable iscsid iscsi
+ ```
+
+4. **[1]** Change the initiator name on the first node of the cluster.
+
+ ```bash
+ sudo vi /etc/iscsi/initiatorname.iscsi
+
+ # Change the content of the file to match the access control lists (ACLs) you used when you created the iSCSI device on the iSCSI target server (for example, for the ASCS/ERS servers)
+ InitiatorName=iqn.2006-04.sap-cl1.local:sap-cl1
+ ```
+
+5. **[2]** Change the initiator name on the second node of the cluster.
+
+ ```bash
+ sudo vi /etc/iscsi/initiatorname.iscsi
+
+ # Change the content of the file to match the access control lists (ACLs) you used when you created the iSCSI device on the iSCSI target server (for example, for the ASCS/ERS servers)
+ InitiatorName=iqn.2006-04.sap-cl2.local:sap-cl2
+ ```
+
+6. **[A]** Restart the iSCSI service to apply the changes.
```bash
- sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
+ sudo systemctl restart iscsid
+ sudo systemctl restart iscsi
```
+7. **[A]** Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address of the iSCSI target server, and 3260 is the default port. The target name `iqn.2006-04.ascsnw1.local:ascsnw1` gets listed when you run the first command, `iscsiadm -m discovery`.
+
+ ```bash
+ sudo iscsiadm -m discovery --type=st --portal=10.0.0.17:3260
+ sudo iscsiadm -m node -T iqn.2006-04.ascsnw1.local:ascsnw1 --login --portal=10.0.0.17:3260
+ sudo iscsiadm -m node -p 10.0.0.17:3260 -T iqn.2006-04.ascsnw1.local:ascsnw1 --op=update --name=node.startup --value=automatic
+ ```
+
+8. **[A]** If you're using multiple SBD devices, also connect to the second iSCSI target server.
+
+ ```bash
+ sudo iscsiadm -m discovery --type=st --portal=10.0.0.18:3260
+ sudo iscsiadm -m node -T iqn.2006-04.ascsnw1.local:ascsnw1 --login --portal=10.0.0.18:3260
+ sudo iscsiadm -m node -p 10.0.0.18:3260 -T iqn.2006-04.ascsnw1.local:ascsnw1 --op=update --name=node.startup --value=automatic
+ ```
+
+9. **[A]** If you're using multiple SBD devices, also connect to the third iSCSI target server.
+
+ ```bash
+ sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260
+ sudo iscsiadm -m node -T iqn.2006-04.ascsnw1.local:ascsnw1 --login --portal=10.0.0.19:3260
+ sudo iscsiadm -m node -p 10.0.0.19:3260 -T iqn.2006-04.ascsnw1.local:ascsnw1 --op=update --name=node.startup --value=automatic
+ ```
+
+10. **[A]** Make sure that the iSCSI devices are available and note the device name. In the following example, three iSCSI devices are discovered by connecting the node to three iSCSI target servers.
+
+ ```bash
+ lsscsi
+
+ [0:0:0:0] disk Msft Virtual Disk 1.0 /dev/sde
+ [1:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
+ [1:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdb
+ [1:0:0:2] disk Msft Virtual Disk 1.0 /dev/sdc
+ [1:0:0:3] disk Msft Virtual Disk 1.0 /dev/sdd
+ [2:0:0:0] disk LIO-ORG sbdascsnw1 4.0 /dev/sdf
+ [3:0:0:0] disk LIO-ORG sbdascsnw1 4.0 /dev/sdh
+ [4:0:0:0] disk LIO-ORG sbdascsnw1 4.0 /dev/sdg
+ ```
+
+11. **[A]** Retrieve the IDs of the iSCSI devices.
+
+ ```bash
+ ls -l /dev/disk/by-id/scsi-* | grep -i sdf
+
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-1LIO-ORG_sbdhdb:85d254ed-78e2-4ec4-8b0d-ecac2843e086 -> ../../sdf
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-3600140585d254ed78e24ec48b0decac2 -> ../../sdf
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-SLIO-ORG_sbdhdb_85d254ed-78e2-4ec4-8b0d-ecac2843e086 -> ../../sdf
+
+ ls -l /dev/disk/by-id/scsi-* | grep -i sdh
+
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-1LIO-ORG_sbdhdb:87122bfc-8a0b-4006-b538-d0a6d6821f04 -> ../../sdh
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-3600140587122bfc8a0b4006b538d0a6d -> ../../sdh
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-SLIO-ORG_sbdhdb_87122bfc-8a0b-4006-b538-d0a6d6821f04 -> ../../sdh
+
+ ls -l /dev/disk/by-id/scsi-* | grep -i sdg
+
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-1LIO-ORG_sbdhdb:d2ddc548-060c-49e7-bb79-2bb653f0f34a -> ../../sdg
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-36001405d2ddc548060c49e7bb792bb65 -> ../../sdg
+ # lrwxrwxrwx 1 root root 9 Jul 15 20:21 /dev/disk/by-id/scsi-SLIO-ORG_sbdhdb_d2ddc548-060c-49e7-bb79-2bb653f0f34a -> ../../sdg
+
+ ```
+
+ The command lists three device IDs for every SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the IDs are:
+
+ * /dev/disk/by-id/scsi-3600140585d254ed78e24ec48b0decac2
+ * /dev/disk/by-id/scsi-3600140587122bfc8a0b4006b538d0a6d
+ * /dev/disk/by-id/scsi-36001405d2ddc548060c49e7bb792bb65
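+
+ Optionally, you can list only the `scsi-3` IDs in one pass. This is a small helper sketch that isn't part of the original procedure; it assumes the SBD LUNs are the devices that `lsscsi` reports as `LIO-ORG`:
+
+ ```bash
+ # For every LIO-ORG device reported by lsscsi, print its matching scsi-3* by-id link (sketch; adjust the filter to your environment)
+ for dev in $(lsscsi | awk '/LIO-ORG/ {print $NF}'); do
+   ls -l /dev/disk/by-id/scsi-3* | grep "$(basename "$dev")$"
+ done
+ ```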
+
+12. **[1]** Create the SBD device.
+
+ 1. Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node.
+
+ ```bash
+ sudo sbd -d /dev/disk/by-id/scsi-3600140585d254ed78e24ec48b0decac2 -1 60 -4 120 create
+ ```
+
+ 2. Also create the second and third SBD devices if you want to use more than one.
+
+ ```bash
+ sudo sbd -d /dev/disk/by-id/scsi-3600140587122bfc8a0b4006b538d0a6d -1 60 -4 120 create
+ sudo sbd -d /dev/disk/by-id/scsi-36001405d2ddc548060c49e7bb792bb65 -1 60 -4 120 create
+ ```
+
+13. **[A]** Adapt the SBD configuration.
+
+ 1. Open the SBD config file.
+
+ ```bash
+ sudo vi /etc/sysconfig/sbd
+ ```
+
+ 2. Change the property of the SBD device, enable the pacemaker integration, and change the start mode of SBD.
+
+ ```bash
+ [...]
+ SBD_DEVICE="/dev/disk/by-id/scsi-3600140585d254ed78e24ec48b0decac2;/dev/disk/by-id/scsi-3600140587122bfc8a0b4006b538d0a6d;/dev/disk/by-id/scsi-36001405d2ddc548060c49e7bb792bb65"
+ [...]
+ SBD_PACEMAKER=yes
+ [...]
+ SBD_STARTMODE=always
+ [...]
+ SBD_DELAY_START=yes
+ [...]
+ ```
+
+14. **[A]** Run the following command to load the `softdog` module.
+
+ ```bash
+ modprobe softdog
+ ```
+
+15. **[A]** Run the following command to ensure `softdog` is automatically loaded after a node reboot.
+
+ ```bash
+ echo softdog > /etc/modules-load.d/watchdog.conf
+ systemctl restart systemd-modules-load
+ ```
+
+16. **[A]** The SBD service timeout value is set to 90 seconds by default. However, if the `SBD_DELAY_START` value is set to `yes`, the SBD service delays its start until after the `msgwait` timeout. Therefore, the SBD service timeout value should exceed the `msgwait` timeout when `SBD_DELAY_START` is enabled.
+
+ ```bash
+ sudo mkdir /etc/systemd/system/sbd.service.d
+ echo -e "[Service]\nTimeoutSec=144" | sudo tee /etc/systemd/system/sbd.service.d/sbd_delay_start.conf
+ sudo systemctl daemon-reload
+
+ systemctl show sbd | grep -i timeout
+ # TimeoutStartUSec=2min 24s
+ # TimeoutStopUSec=2min 24s
+ ```
+
+## SBD with an Azure shared disk
+
+This section applies only if you want to use an SBD Device with an Azure shared disk.
+
+### Configure Azure shared disk with PowerShell
+
+To create and attach an Azure shared disk with PowerShell, run the following commands. If you want to deploy resources by using the Azure CLI or the Azure portal, see [Deploy a ZRS disk](../../virtual-machines/disks-deploy-zrs.md).
+
+```powershell
+$ResourceGroup = "MyResourceGroup"
+$Location = "MyAzureRegion"
+$DiskSizeInGB = 4
+$DiskName = "SBD-disk1"
+$ShareNodes = 2
+$LRSSkuName = "Premium_LRS"
+$ZRSSkuName = "Premium_ZRS"
+$vmNames = @("prod-cl1-0", "prod-cl1-1") # VMs to attach the disk
+
+# ZRS Azure shared disk: Configure an Azure shared disk with ZRS for a premium shared disk
+$zrsDiskConfig = New-AzDiskConfig -Location $Location -SkuName $ZRSSkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes
+$zrsDataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $zrsDiskConfig
+
+# Attach ZRS disk to cluster VMs
+foreach ($vmName in $vmNames) {
+ $vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+ Add-AzVMDataDisk -VM $vm -Name $diskName -CreateOption Attach -ManagedDiskId $zrsDataDisk.Id -Lun 0
+ Update-AzVM -VM $vm -ResourceGroupName $resourceGroup -Verbose
+}
+
+# LRS Azure shared disk: Configure an Azure shared disk with LRS for a premium shared disk
+$lrsDiskConfig = New-AzDiskConfig -Location $Location -SkuName $LRSSkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes
+$lrsDataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $lrsDiskConfig
+
+# Attach LRS disk to cluster VMs
+foreach ($vmName in $vmNames) {
+ $vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+ Add-AzVMDataDisk -VM $vm -Name $diskName -CreateOption Attach -ManagedDiskId $lrsDataDisk.Id -Lun 0
+ Update-AzVM -VM $vm -ResourceGroupName $resourceGroup -Verbose
+}
+```
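+
+If you prefer the Azure CLI, here's a rough equivalent sketch for the ZRS variant. The resource group, disk, and VM names are the same placeholder values used in the preceding PowerShell example, and the sketch assumes the resource group is in a region that supports ZRS:
+
+```bash
+# Create a 4-GiB Premium ZRS shared disk that can be attached to two VMs (placeholder names)
+az disk create --resource-group MyResourceGroup --name SBD-disk1 \
+  --size-gb 4 --sku Premium_ZRS --max-shares 2
+
+# Attach the shared disk to both cluster VMs on LUN 0
+for vm in prod-cl1-0 prod-cl1-1; do
+  az vm disk attach --resource-group MyResourceGroup --vm-name "$vm" --name SBD-disk1 --lun 0
+done
+```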
+
+### Set up an Azure shared disk SBD device
+
+1. **[A]** Install cluster and SBD packages on all cluster nodes.
+
+ ```bash
+ sudo yum install -y pcs pacemaker sbd fence-agents-sbd
+ ```
+
+2. **[A]** Make sure the attached disk is available.
+
+ ```bash
+ lsblk
+
+ # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ # sda 8:0 0 4G 0 disk
+ # sdb 8:16 0 64G 0 disk
+ # ├─sdb1 8:17 0 500M 0 part /boot
+ # ├─sdb2 8:18 0 63G 0 part
+ # │ ├─rootvg-tmplv 253:0 0 2G 0 lvm /tmp
+ # │ ├─rootvg-usrlv 253:1 0 10G 0 lvm /usr
+ # │ ├─rootvg-homelv 253:2 0 1G 0 lvm /home
+ # │ ├─rootvg-varlv 253:3 0 8G 0 lvm /var
+ # │ └─rootvg-rootlv 253:4 0 2G 0 lvm /
+ # ├─sdb14 8:30 0 4M 0 part
+ # └─sdb15 8:31 0 495M 0 part /boot/efi
+ # sr0 11:0 1 1024M 0 rom
+
+ lsscsi
+
+ # [0:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdb
+ # [0:0:0:2] cd/dvd Msft Virtual DVD-ROM 1.0 /dev/sr0
+ # [1:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
+ # [1:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdc
+ ```
+
+3. **[A]** Retrieve the device ID of the attached shared disk.
+
+ ```bash
+ ls -l /dev/disk/by-id/scsi-* | grep -i sda
+
+ # lrwxrwxrwx 1 root root 9 Jul 15 22:24 /dev/disk/by-id/scsi-14d534654202020200792c2f5cc7ef14b8a7355cb3cef0107 -> ../../sda
+ # lrwxrwxrwx 1 root root 9 Jul 15 22:24 /dev/disk/by-id/scsi-3600224800792c2f5cc7e55cb3cef0107 -> ../../sda
+ ```
+
+ The command lists the device IDs for the attached shared disk. We recommend using the ID that starts with scsi-3. In this example, the ID is **/dev/disk/by-id/scsi-3600224800792c2f5cc7e55cb3cef0107**.
+
+4. **[1]** Create the SBD device.
+
+ ```bash
+ # Use the device ID from step 3 to create the new SBD device on the first cluster node
+ sudo sbd -d /dev/disk/by-id/scsi-3600224800792c2f5cc7e55cb3cef0107 -1 60 -4 120 create
+ ```
+
+5. **[A]** Adapt the SBD configuration.
+
+ 1. Open the SBD config file.
+
+ ```bash
+ sudo vi /etc/sysconfig/sbd
+ ```
+
+ 2. Change the property of the SBD device, enable the pacemaker integration, and change the start mode of SBD.
+
+ ```bash
+ [...]
+ SBD_DEVICE="/dev/disk/by-id/scsi-3600224800792c2f5cc7e55cb3cef0107"
+ [...]
+ SBD_PACEMAKER=yes
+ [...]
+ SBD_STARTMODE=always
+ [...]
+ SBD_DELAY_START=yes
+ [...]
+ ```
+
+6. **[A]** Run the following command to load the `softdog` module.
+
+ ```bash
+ modprobe softdog
+ ```
+
+7. **[A]** Run the following command to ensure `softdog` is automatically loaded after a node reboot.
+
+ ```bash
+ echo softdog > /etc/modules-load.d/watchdog.conf
+ systemctl restart systemd-modules-load
+ ```
+
+8. **[A]** The SBD service timeout value is set to 90 seconds by default. However, if the `SBD_DELAY_START` value is set to `yes`, the SBD service will delay its start until after the `msgwait` timeout. Therefore, the SBD service timeout value should exceed the `msgwait` timeout when `SBD_DELAY_START` is enabled.
+
+ ```bash
+ sudo mkdir /etc/systemd/system/sbd.service.d
+ echo -e "[Service]\nTimeoutSec=144" | sudo tee /etc/systemd/system/sbd.service.d/sbd_delay_start.conf
+ sudo systemctl daemon-reload
+
+ systemctl show sbd | grep -i timeout
+ # TimeoutStartUSec=2min 24s
+ # TimeoutStopUSec=2min 24s
+ ```
+
+## Azure fence agent configuration
+
+The fencing device uses either a managed identity for Azure resources or a service principal to authorize against Azure. Depending on the identity management method, follow the appropriate procedures:
+
+1. Configure identity management
+
+ Use managed identity or service principal.
+
+ #### [Managed identity](#tab/msi)
+
+ To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. If a system-assigned managed identity already exists, it's used. Don't use user-assigned managed identities with Pacemaker at this time. A fence device based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x.
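+
+ As a minimal sketch, you can also enable the system-assigned managed identity with the Azure CLI. The resource group name is a placeholder, and the VM names reuse the example cluster names from this article:
+
+ ```bash
+ # Enable a system-assigned managed identity on both cluster VMs (placeholder resource group)
+ az vm identity assign --resource-group MyResourceGroup --name prod-cl1-0
+ az vm identity assign --resource-group MyResourceGroup --name prod-cl1-1
+ ```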
+
+ #### [Service principal](#tab/spn)
+
+ Follow these steps to create a service principal, if you aren't using managed identity.
+
+ 1. Go to the [Azure portal](https://portal.azure.com).
+ 1. Open the **Microsoft Entra ID** pane.
+ 1. Go to **Properties** and make a note of the **Directory ID**. This is the **tenant ID**.
+ 1. Select **App registrations**.
+ 1. Select **New Registration**.
+ 1. Enter a **Name** and select **Accounts in this organization directory only**.
+ 1. Select **Application Type** as **Web**, enter a sign-on URL (for example, http:\//localhost), and select **Add**. The sign-on URL isn't used and can be any valid URL.
+ 1. Select **Certificates and Secrets**, and then select **New client secret**.
+ 1. Enter a description for a new key, select **Two years**, and select **Add**.
+ 1. Make a note of the **Value**. It's used as the **password** for the service principal.
+ 1. Select **Overview**. Make a note of the **Application ID**. It's used as the username (**login ID** in the following steps) of the service principal.
+
+
+
+2. Create a custom role for the fence agent
+
+ By default, neither the managed identity nor the service principal has permissions to access your Azure resources. You need to give the managed identity or service principal permissions to start and stop (power-off) all VMs of the cluster. If you haven't already created the custom role, you can create it by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md).
+
+ Use the following content for the input file. You need to adapt the content to your subscriptions, that is, replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` and `yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` with the IDs of your subscription. If you only have one subscription, remove the second entry in `AssignableScopes`.
+
+ ```json
+ {
+ "Name": "Linux Fence Agent Role",
+ "description": "Allows to power-off and start virtual machines",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
+ ],
+ "actions": [
+ "Microsoft.Compute/*/read",
+ "Microsoft.Compute/virtualMachines/powerOff/action",
+ "Microsoft.Compute/virtualMachines/start/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ```
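+
+ For illustration, if you save this JSON as a file (for example, `fence-agent-role.json`, a hypothetical file name), one way to create the role with the Azure CLI looks like this:
+
+ ```bash
+ # Create the custom role definition from the JSON file (sketch; the file name is a placeholder)
+ az role definition create --role-definition fence-agent-role.json
+ ```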
+
+3. Assign the custom role
+
+ Use managed identity or service principal.
+
+ #### [Managed identity](#tab/msi)
+
+ Assign the custom role `Linux Fence Agent Role` that was created in the last section to each managed identity of the cluster VMs. Each VM system-assigned managed identity needs the role assigned for every cluster VM's resource. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Verify that each VM's managed identity role assignment contains all the cluster VMs.
+ > [!IMPORTANT]
- > We recommend the following versions of the Azure fence agent (or later) for customers to benefit from a faster failover time, if a resource stop fails or the cluster nodes can't communicate with each other anymore:
- >
- > RHEL 7.7 or higher use the latest available version of fence-agents package.
- >
- > RHEL 7.6: fence-agents-4.2.1-11.el7_6.8
- >
- > RHEL 7.5: fence-agents-4.0.11-86.el7_5.8
- >
- > RHEL 7.4: fence-agents-4.0.11-66.el7_4.12
- >
- > For more information, see [Azure VM running as a RHEL High-Availability cluster member takes a very long time to be fenced, or fencing fails/times out before the VM shuts down](https://access.redhat.com/solutions/3408711).
+ > Be aware that assignment and removal of authorization with managed identities [can be delayed](../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization) until effective.
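+
+ As a rough sketch, the assignments can also be scripted with the Azure CLI. The resource group name is a placeholder, the VM names reuse the example cluster names, and the custom role `Linux Fence Agent Role` must already exist:
+
+ ```bash
+ # Give every cluster VM's system-assigned identity the custom role on every cluster VM's resource (sketch)
+ RG=MyResourceGroup
+ for vm in prod-cl1-0 prod-cl1-1; do
+   principal=$(az vm show --resource-group "$RG" --name "$vm" --query identity.principalId --output tsv)
+   for target in prod-cl1-0 prod-cl1-1; do
+     scope=$(az vm show --resource-group "$RG" --name "$target" --query id --output tsv)
+     az role assignment create --assignee-object-id "$principal" --assignee-principal-type ServicePrincipal \
+       --role "Linux Fence Agent Role" --scope "$scope"
+   done
+ done
+ ```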
+
+ #### [Service principal](#tab/spn)
+
+ Assign the custom role `Linux Fence Agent Role` that was created in the last section to the service principal. *Don't use the Owner role anymore.* For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
+
+ Make sure to assign the role for both cluster nodes.
+
+
+
+## Cluster installation
+
+Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9 are marked in the document.
+
+1. **[A]** Install the RHEL HA add-on.
+
+ ```bash
+ sudo yum install -y pcs pacemaker nmap-ncat
+ ```
+
+2. **[A]** On RHEL 9.x, install the resource agents for cloud deployment.
+
+ ```bash
+ sudo yum install -y resource-agents-cloud
+ ```
+
+3. **[A]** Install the fence-agents package if you're using a fencing device based on Azure fence agent.
+
+ ```bash
+ sudo yum install -y fence-agents-azure-arm
+ ```
> [!IMPORTANT] > We recommend the following versions of the Azure fence agent (or later) for customers who want to use managed identities for Azure resources instead of service principal names for the fence agent: >
- > RHEL 8.4: fence-agents-4.2.1-54.el8.
- >
- > RHEL 8.2: fence-agents-4.2.1-41.el8_2.4
- >
- > RHEL 8.1: fence-agents-4.2.1-30.el8_1.4
- >
- > RHEL 7.9: fence-agents-4.2.1-41.el7_9.4.
+ > * RHEL 8.4: fence-agents-4.2.1-54.el8.
+ > * RHEL 8.2: fence-agents-4.2.1-41.el8_2.4
+ > * RHEL 8.1: fence-agents-4.2.1-30.el8_1.4
+ > * RHEL 7.9: fence-agents-4.2.1-41.el7_9.4.
> [!IMPORTANT] > On RHEL 9, we recommend the following package versions (or later) to avoid issues with the Azure fence agent: >
- > fence-agents-4.10.0-20.el9_0.7
- >
- > fence-agents-common-4.10.0-20.el9_0.6
- >
- > ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm
+ > * fence-agents-4.10.0-20.el9_0.7
+ > * fence-agents-common-4.10.0-20.el9_0.6
+ > * ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm
Check the version of the Azure fence agent. If necessary, update it to the minimum required version or later. ```bash # Check the version of the Azure Fence Agent
- sudo yum info fence-agents-azure-arm
+ sudo yum info fence-agents-azure-arm
``` > [!IMPORTANT]
- > If you need to update the Azure fence agent, and if you're using a custom role, make sure to update the custom role to include the action **powerOff**. For more information, see [Create a custom role for the fence agent](#1-create-a-custom-role-for-the-fence-agent).
-
-1. If you're deploying on RHEL 9, also install the resource agents for cloud deployment.
+ > If you need to update the Azure fence agent, and if you're using a custom role, make sure to update the custom role to include the action **powerOff**. For more information, see [Create a custom role for the fence agent](#azure-fence-agent-configuration).
- ```bash
- sudo yum install -y resource-agents-cloud
- ```
-
-1. **[A]** Set up hostname resolution.
+4. **[A]** Set up hostname resolution.
You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands.
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL
10.0.0.7 prod-cl1-1 ```
-1. **[A]** Change the `hacluster` password to the same password.
+5. **[A]** Change the `hacluster` password to the same password.
```bash sudo passwd hacluster ```
-1. **[A]** Add firewall rules for Pacemaker.
+6. **[A]** Add firewall rules for Pacemaker.
Add the following firewall rules to all cluster communication between the cluster nodes.
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL
sudo firewall-cmd --add-service=high-availability ```
-1. **[A]** Enable basic cluster services.
+7. **[A]** Enable basic cluster services.
Run the following commands to enable the Pacemaker service and start it.
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL
sudo systemctl enable pcsd.service ```
-1. **[1]** Create a Pacemaker cluster.
+8. **[1]** Create a Pacemaker cluster.
- Run the following commands to authenticate the nodes and create the cluster. Set the token to 30000 to allow memory preserving maintenance. For more information, see [this article for Linux][virtual-machines-linux-maintenance].
+ Run the following commands to authenticate the nodes and create the cluster. Set the token to 30000 to allow memory preserving maintenance. For more information, see [this article for Linux](../../virtual-machines/maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot).
If you're building a cluster on **RHEL 7.x**, use the following commands:
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL
# pcsd: active/enabled ```
-1. **[A]** Set expected votes.
+9. **[A]** Set expected votes.
```bash # Check the quorum votes
Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL
> [!TIP] > If you're building a multinode cluster, that is, a cluster with more than two nodes, don't set the votes to 2.
-1. **[1]** Allow concurrent fence actions.
-
- ```bash
- sudo pcs property set concurrent-fencing=true
- ```
-
-## Create a fencing device
-
-The fencing device uses either a managed identity for Azure resource or a service principal to authorize against Azure.
-
-### [Managed identity](#tab/msi)
+10. **[1]** Allow concurrent fence actions.
-To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. If a system-assigned managed identity already exists, it's used. Don't use user-assigned managed identities with Pacemaker at this time. A fence device, based on managed identity, is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x.
+ ```bash
+ sudo pcs property set concurrent-fencing=true
+ ```
-### [Service principal](#tab/spn)
+### Create a fencing device on the Pacemaker cluster
-Follow these steps to create a service principal, if you aren't using managed identity.
+> [!TIP]
+>
+> * To avoid fence races within a two-node pacemaker cluster, you can configure the `priority-fencing-delay` cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
+> * The property `priority-fencing-delay` is applicable for Pacemaker version 2.0.4-6.el8 or higher and on a two-node cluster. If you configure the `priority-fencing-delay` cluster property, you don't need to set the `pcmk_delay_max` property. But if the Pacemaker version is less than 2.0.4-6.el8, you need to set the `pcmk_delay_max` property.
+> * For instructions on how to set the `priority-fencing-delay` cluster property, see the respective SAP ASCS/ERS and SAP HANA scale-up HA documents.
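+
+For orientation only, setting the property is a single `pcs` command. The delay value here is a placeholder; use the value recommended in the respective SAP ASCS/ERS or SAP HANA scale-up HA guide:
+
+```bash
+# Example (placeholder value): delay fencing of the node that hosts the higher-priority resources
+sudo pcs property set priority-fencing-delay=15s
+```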
-1. Go to the [Azure portal](https://portal.azure.com).
-1. Open the **Microsoft Entra ID** pane.
- Go to **Properties** and make a note of the **Directory ID**. This is the **tenant ID**.
-1. Select **App registrations**.
-1. Select **New Registration**.
-1. Enter a **Name** and select **Accounts in this organization directory only**.
-1. Select **Application Type** as **Web**, enter a sign-on URL (for example, http:\//localhost), and select **Add**.
- The sign-on URL isn't used and can be any valid URL.
-1. Select **Certificates and Secrets**, and then select **New client secret**.
-1. Enter a description for a new key, select **Two years**, and select **Add**.
-1. Make a note of the **Value**. It's used as the **password** for the service principal.
-1. Select **Overview**. Make a note of the **Application ID**. It's used as the username (**login ID** in the following steps) of the service principal.
+Based on the selected fencing mechanism, follow only one section for relevant instructions: [SBD as fencing device](#sbd-as-fencing-device) or [Azure fence agent as fencing device](#azure-fence-agent-as-fencing-device).
-
+#### SBD as fencing device
-### **[1]** Create a custom role for the fence agent
-
-Both the managed identity and the service principal don't have permissions to access your Azure resources by default. You need to give the managed identity or service principal permissions to start and stop (power-off) all VMs of the cluster. If you haven't already created the custom role, you can create it by using [PowerShell](../../role-based-access-control/custom-roles-powershell.md) or the [Azure CLI](../../role-based-access-control/custom-roles-cli.md).
-
-Use the following content for the input file. You need to adapt the content to your subscriptions, that is, replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` and `yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` with the IDs of your subscription. If you only have one subscription, remove the second entry in `AssignableScopes`.
-
-```json
-{
- "Name": "Linux Fence Agent Role",
- "description": "Allows to power-off and start virtual machines",
- "assignableScopes": [
- "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
- ],
- "actions": [
- "Microsoft.Compute/*/read",
- "Microsoft.Compute/virtualMachines/powerOff/action",
- "Microsoft.Compute/virtualMachines/start/action"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
-}
-```
+1. **[A]** Enable the SBD service.
-### **[A]** Assign the custom role
+ ```bash
+ sudo systemctl enable sbd
+ ```
-Use managed identity or service principal.
+2. **[1]** For the SBD device configured using iSCSI target servers or Azure shared disk, run the following commands.
-#### [Managed identity](#tab/msi)
+ ```bash
+ sudo pcs property set stonith-timeout=144
+ sudo pcs property set stonith-enabled=true
-Assign the custom role `Linux Fence Agent Role` that was created in the last section to each managed identity of the cluster VMs. Each VM system-assigned managed identity needs the role assigned for every cluster VM's resource. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Verify that each VM's managed identity role assignment contains all the cluster VMs.
+ # Replace the device IDs with your device ID.
+ pcs stonith create sbd fence_sbd \
+ devices=/dev/disk/by-id/scsi-3600140585d254ed78e24ec48b0decac2,/dev/disk/by-id/scsi-3600140587122bfc8a0b4006b538d0a6d,/dev/disk/by-id/scsi-36001405d2ddc548060c49e7bb792bb65 \
+ op monitor interval=600 timeout=15
+ ```
-> [!IMPORTANT]
-> Be aware that assignment and removal of authorization with managed identities [can be delayed](../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization) until effective.
+3. **[1]** Restart the cluster.
-#### [Service principal](#tab/spn)
+ ```bash
+ sudo pcs cluster stop --all
-Assign the custom role `Linux Fence Agent Role` that was created in the last section to the service principal. *Don't use the Owner role anymore.* For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
+ # It would take time to start the cluster as "SBD_DELAY_START" is set to "yes"
+ sudo pcs cluster start --all
+ ```
-Make sure to assign the role for both cluster nodes.
+ > [!NOTE]
+ > If you encounter the following error while starting the Pacemaker cluster, you can disregard the message. Alternatively, you can start the cluster by using the command `pcs cluster start --all --request-timeout 140`.
+ >
+ > Error: unable to start all nodes
+ > node1/node2: Unable to connect to node1/node2, check if pcsd is running there or try setting higher timeout with `--request-timeout` option (Operation timed out after 60000 milliseconds with 0 bytes received)
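+
+ After the cluster is back online, you can optionally confirm that the SBD fencing resource is running. This is a quick sanity check, not a required step of the procedure:
+
+ ```bash
+ # The output should show the "sbd" stonith resource created earlier as Started on one of the nodes
+ sudo pcs status
+ ```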
-
+#### Azure fence agent as fencing device
-### **[1]** Create the fencing devices
+1. **[1]** After you've assigned roles to both cluster nodes, you can configure the fencing devices in the cluster.
-After you edit the permissions for the VMs, you can configure the fencing devices in the cluster.
+ ```bash
+ sudo pcs property set stonith-timeout=900
+ sudo pcs property set stonith-enabled=true
+ ```
-```bash
-sudo pcs property set stonith-timeout=900
-```
+2. **[1]** Run the appropriate command depending on whether you're using a managed identity or a service principal for the Azure fence agent.
-> [!NOTE]
-> The option `pcmk_host_map` is *only* required in the command if the RHEL hostnames and the Azure VM names are *not* identical. Specify the mapping in the format **hostname:vm-name**.
-> Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map?](https://access.redhat.com/solutions/2619961).
+ > [!NOTE]
+ > The option `pcmk_host_map` is *only* required in the command if the RHEL hostnames and the Azure VM names are *not* identical. Specify the mapping in the format **hostname:vm-name**.
+ >
+ > Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map?](https://access.redhat.com/solutions/2619961).
-#### [Managed identity](#tab/msi)
+ #### [Managed identity](#tab/msi)
-For RHEL **7.x**, use the following command to configure the fence device:
+ For RHEL **7.x**, use the following command to configure the fence device:
-```bash
-sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
-subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
-op monitor interval=3600
-```
+ ```bash
+ sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+ subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
+ op monitor interval=3600
+ ```
-For RHEL **8.x/9.x**, use the following command to configure the fence device:
+ For RHEL **8.x/9.x**, use the following command to configure the fence device:
-```bash
-# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version greater than 2.0.4-6.el8) OR (HANA scale out)
-sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
-subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
-op monitor interval=3600
+ ```bash
+ # Run following command if you are setting up fence agent on (two-node cluster and pacemaker version greater than 2.0.4-6.el8) OR (HANA scale out)
+ sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+ subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
+ op monitor interval=3600
+
+ # Run following command if you are setting up fence agent on (two-node cluster and pacemaker version less than 2.0.4-6.el8)
+ sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
+ subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
+ op monitor interval=3600
+ ```
-# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version less than 2.0.4-6.el8)
-sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \
-subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
-op monitor interval=3600
-```
+ #### [Service principal](#tab/spn)
-#### [Service principal](#tab/spn)
+ For RHEL **7.x**, use the following command to configure the fence device:
-For RHEL **7.x**, use the following command to configure the fence device:
+ ```bash
+ sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" \
+ resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+ pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
+ op monitor interval=3600
+ ```
-```bash
-sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" \
-resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
-pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
-op monitor interval=3600
-```
+ For RHEL **8.x/9.x**, use the following command to configure the fence device:
-For RHEL **8.x/9.x**, use the following command to configure the fence device:
-
-```bash
-# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version greater than 2.0.4-6.el8) OR (HANA scale out)
-sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
-resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
-pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
-op monitor interval=3600
-# Run following command if you are setting up fence agent on (two-node cluster and pacemaker version less than 2.0.4-6.el8)
-sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
-resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
-pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
-power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
-op monitor interval=3600
-```
+ ```bash
+ # Run following command if you are setting up fence agent on (two-node cluster and pacemaker version greater than 2.0.4-6.el8) OR (HANA scale out)
+ sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
+ resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+ pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \
+ op monitor interval=3600
+
+ # Run following command if you are setting up fence agent on (two-node cluster and pacemaker version less than 2.0.4-6.el8)
+ sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \
+ resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \
+ pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \
+ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
+ op monitor interval=3600
+ ```
-
+
If you're using a fencing device based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters by using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration.
-> [!TIP]
->
-> * To avoid fence races within a two-node pacemaker cluster, you can configure the `priority-fencing-delay` cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
-> * The property `priority-fencing-delay` is applicable for Pacemaker version 2.0.4-6.el8 or higher and on a two-node cluster. If you configure the `priority-fencing-delay` cluster property, you don't need to set the `pcmk_delay_max` property. But if the Pacemaker version is less than 2.0.4-6.el8, you need to set the `pcmk_delay_max` property.
-> * For instructions on how to set the `priority-fencing-delay` cluster property, see the respective SAP ASCS/ERS and SAP HANA scale-up HA documents.
- The monitoring and fencing operations are deserialized. As a result, if there's a longer running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running.
-### **[1]** Enable the use of a fencing device
-
-```bash
-sudo pcs property set stonith-enabled=true
-```
- > [!TIP]
->The Azure fence agent requires outbound connectivity to public endpoints. For more information along with possible solutions, see [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+> The Azure fence agent requires outbound connectivity to public endpoints. For more information along with possible solutions, see [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).
## Configure Pacemaker for Azure scheduled events
When the cluster health attribute is set for a node, the location constraint tri
* RHEL 9.0: `resource-agents-cloud-4.10.0-9.6` * RHEL 9.2 and newer: `resource-agents-cloud-4.10.0-34.1`
-1. **[1]** Configure the resources in Pacemaker.
+2. **[1]** Configure the resources in Pacemaker.
```bash #Place the cluster in maintenance mode sudo pcs property set maintenance-mode=true
+ ```
-1. **[1]** Set the Pacemaker cluster health-node strategy and constraint.
+3. **[1]** Set the Pacemaker cluster health-node strategy and constraint.
```bash sudo pcs property set node-health-strategy=custom+ sudo pcs constraint location 'regexp%!health-.*' \ rule score-attribute='#health-azure' \ defined '#uname'
When the cluster health attribute is set for a node, the location constraint tri
> > Don't define any other resources in the cluster starting with `health-` besides the resources described in the next steps.
-1. **[1]** Set the initial value of the cluster attributes. Run for each cluster node and for scale-out environments including majority maker VM.
+4. **[1]** Set the initial value of the cluster attributes. Run for each cluster node and for scale-out environments including majority maker VM.
```bash sudo crm_attribute --node prod-cl1-0 --name '#health-azure' --update 0 sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --update 0 ```
-1. **[1]** Configure the resources in Pacemaker. Make sure the resources start with `health-azure`.
+5. **[1]** Configure the resources in Pacemaker. Make sure the resources start with `health-azure`.
```bash sudo pcs resource create health-azure-events \ ocf:heartbeat:azure-events-az \ op monitor interval=10s timeout=240s \ op start timeout=10s start-delay=90s+ sudo pcs resource clone health-azure-events allow-unhealthy-nodes=true failure-timeout=120s ```
-1. Take the Pacemaker cluster out of maintenance mode.
+6. Take the Pacemaker cluster out of maintenance mode.
```bash sudo pcs property set maintenance-mode=false ```
-1. Clear any errors during enablement and verify that the `health-azure-events` resources have started successfully on all cluster nodes.
+7. Clear any errors during enablement and verify that the `health-azure-events` resources have started successfully on all cluster nodes.
```bash sudo pcs resource cleanup
When the cluster health attribute is set for a node, the location constraint tri
> [!TIP] > This section is only applicable if you want to configure the special fencing device `fence_kdump`.
-If you need to collect diagnostic information within the VM, it might be useful to configure another fencing device based on the fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like the Azure fence agent, when you're using Azure VMs.
+If you need to collect diagnostic information within the VM, it might be useful to configure another fencing device based on the fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like the SBD or Azure fence agent, when you're using Azure VMs.
> [!IMPORTANT] > Be aware that when `fence_kdump` is configured as a first-level fencing device, it introduces delays in the fencing operations and, respectively, delays in the application resources failover.
If you need to collect diagnostic information within the VM, it might be useful
> > The proposed `fence_kdump` timeout might need to be adapted to the specific environment. >
-> We recommend that you configure `fence_kdump` fencing only when necessary to collect diagnostics within the VM and always in combination with traditional fence methods, such as the Azure fence agent.
+> We recommend that you configure `fence_kdump` fencing only when necessary to collect diagnostics within the VM and always in combination with traditional fence methods, such as SBD or Azure fence agent.
The following Red Hat KB articles contain important information about configuring `fence_kdump` fencing:
Run the following optional steps to add `fence_kdump` as a first-level fencing c
pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" pcmk_host_list="prod-cl1-0 prod-cl1-1" pcs stonith level add 1 prod-cl1-0 rsc_st_kdump pcs stonith level add 1 prod-cl1-1 rsc_st_kdump
- pcs stonith level add 2 prod-cl1-0 rsc_st_azure
- pcs stonith level add 2 prod-cl1-1 rsc_st_azure
+ # Replace <stonith-resource-name> to the resource name of the STONITH resource configured in your pacemaker cluster (example based on above configuration - sbd or rsc_st_azure)
+ pcs stonith level add 2 prod-cl1-0 <stonith-resource-name>
+ pcs stonith level add 2 prod-cl1-1 <stonith-resource-name>
# Check the fencing level configuration pcs stonith level # Example output # Target: prod-cl1-0 # Level 1 - rsc_st_kdump
- # Level 2 - rsc_st_azure
+ # Level 2 - <stonith-resource-name>
# Target: prod-cl1-1 # Level 1 - rsc_st_kdump
- # Level 2 - rsc_st_azure
+ # Level 2 - <stonith-resource-name>
``` 1. **[A]** Allow the required ports for `fence_kdump` through the firewall.
Run the following optional steps to add `fence_kdump` as a first-level fencing c
firewall-cmd --add-port=7410/udp --permanent ```
-1. **[A]** Ensure that the `initramfs` image file contains the `fence_kdump` and `hosts` files. For more information, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster?](https://access.redhat.com/solutions/2876971).
-
- ```bash
- lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts"
- # Example output
- # -rw-r--r-- 1 root root 208 Jun 7 21:42 etc/hosts
- # -rwxr-xr-x 1 root root 15560 Jun 17 14:59 usr/libexec/fence_kdump_send
- ```
-
-1. **[A]** Perform the `fence_kdump_nodes` configuration in `/etc/kdump.conf` to avoid `fence_kdump` from failing with a timeout for some `kexec-tools` versions. For more information, see [fence_kdump times out when fence_kdump_nodes is not specified with kexec-tools version 2.0.15 or later](https://access.redhat.com/solutions/4498151) and [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 High Availability cluster with kexec-tools versions older than 2.0.14](https://access.redhat.com/solutions/2388711). The example configuration for a two-node cluster is presented here. After you make a change in `/etc/kdump.conf`, the kdump image must be regenerated. To regenerate, restart the `kdump` service.
+1. **[A]** Perform the `fence_kdump_nodes` configuration in `/etc/kdump.conf` to avoid `fence_kdump` from failing with a timeout for some `kexec-tools` versions. For more information, see [fence_kdump times out when fence_kdump_nodes isn't specified with kexec-tools version 2.0.15 or later](https://access.redhat.com/solutions/4498151) and [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 High Availability cluster with kexec-tools versions older than 2.0.14](https://access.redhat.com/solutions/2388711). The example configuration for a two-node cluster is presented here. After you make a change in `/etc/kdump.conf`, the kdump image must be regenerated. To regenerate, restart the `kdump` service.
```bash vi /etc/kdump.conf
Run the following optional steps to add `fence_kdump` as a first-level fencing c
systemctl restart kdump ```
+1. **[A]** Ensure that the `initramfs` image file contains the `fence_kdump` and `hosts` files. For more information, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster?](https://access.redhat.com/solutions/2876971).
+
+ ```bash
+ lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts"
+ # Example output
+ # -rw-r--r-- 1 root root 208 Jun 7 21:42 etc/hosts
+ # -rwxr-xr-x 1 root root 15560 Jun 17 14:59 usr/libexec/fence_kdump_send
+ ```
+ 1. Test the configuration by crashing a node. For more information, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster?](https://access.redhat.com/solutions/2876971). > [!IMPORTANT]
Run the following optional steps to add `fence_kdump` as a first-level fencing c
## Next steps
-* See [Azure Virtual Machines planning and implementation for SAP][planning-guide].
-* See [Azure Virtual Machines deployment for SAP][deployment-guide].
-* See [Azure Virtual Machines DBMS deployment for SAP][dbms-guide].
-* To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines][sap-hana-ha].
+* See [Azure Virtual Machines planning and implementation for SAP](./planning-guide.md).
+* See [Azure Virtual Machines deployment for SAP](./deployment-guide.md).
+* See [Azure Virtual Machines DBMS deployment for SAP](./dbms-guide-general.md).
+* To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines](./sap-hana-high-availability.md).
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
Read the following SAP Notes and papers first:
* Important capacity information for Azure VM sizes. * Supported SAP software and operating system (OS) and database combinations. * Required SAP kernel version for Windows and Linux on Microsoft Azure.- * SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure. * SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux (RHEL). * SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
Read the following SAP Notes and papers first:
* [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index) * [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index) * [Configuring ASCS/ERS for SAP NetWeaver with Standalone Resources in RHEL 7.5](https://access.redhat.com/articles/3569681)
- * [Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL
- ](https://access.redhat.com/articles/3974941)
+ * [Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL](https://access.redhat.com/articles/3974941)
* Azure-specific RHEL documentation: * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341) * [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491)
Follow these steps to install an SAP application server.
last-rc-change='Tue Aug 21 13:52:39 2018', queued=0ms, exec=0ms ```
+ > [!NOTE]
+ > If you're using SBD as the STONITH mechanism, it can happen that after a reboot, when the node attempts to rejoin the cluster, it logs the message "we were allegedly just fenced" in /var/log/messages and shuts down the Pacemaker and Corosync services. To address the issue, you can follow the workaround described in the Red Hat KB article [A node shuts down pacemaker after getting fenced and restarting corosync and pacemaker](https://access.redhat.com/solutions/5644441). However, in Azure, set a delay of 150 seconds for the Corosync service startup. Ensure that these steps are applied on all cluster nodes.
+ Use the following command to clean the failed resources. ```bash
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
In this scenario, data that's replicated to the HANA instance in the second VM i
### SAP HANA system replication with automatic failover
-In the standard and most common availability configuration within one Azure region, two Azure VMs running Linux with HA packages have a failover cluster defined. The HA Linux cluster is based on the `Pacemaker` framework using [SLES](./high-availability-guide-suse-pacemaker.md) or [RHEL](./high-availability-guide-rhel-pacemaker.md) with a `fencing device` [SLES](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) or [RHEL](./high-availability-guide-rhel-pacemaker.md#create-a-fencing-device) as an example.
+In the standard and most common availability configuration within one Azure region, two Azure VMs running Linux with HA packages have a failover cluster defined. The HA Linux cluster is based on the `Pacemaker` framework using [SLES](./high-availability-guide-suse-pacemaker.md) or [RHEL](./high-availability-guide-rhel-pacemaker.md) with a `fencing device` [SLES](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) or [RHEL](./high-availability-guide-rhel-pacemaker.md#azure-fence-agent-configuration) as an example.
From an SAP HANA perspective, the replication mode that's used is synced and an automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous replication modes. For details and for a description of differences between these two synchronous replication modes, see the SAP article [Replication modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html).
search Search Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synonyms.md
Title: Synonyms for query expansion
-description: Create a synonym map to expand the scope of a search query over an Azure AI Search index. Scope is broadened to include equivalent terms you provide in the synonym map.
+description: Create a synonym map to expand the scope of a search query over an Azure AI Search index. The query can search on equivalent terms provided in the synonym map, even if the query doesn't explicitly include the term.
- ignite-2023- Previously updated : 04/22/2024+ Last updated : 07/22/2024 # Synonyms in Azure AI Search
-On a search service, synonym maps are a global resource that associate equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" will match on a document containing "dog".
+On a search service, a synonym map associates equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" matches on a document containing "dog". You might create multiple synonym maps for different languages, such as English and French versions, or lexicons if your content includes technical jargon, slang, or obscure terminology.
-## Create synonyms
+Some key points about synonym maps:
-A synonym map is an asset that can be created once and used by many indexes. The [service tier](search-limits-quotas-capacity.md#synonym-limits) determines how many synonym maps you can create, ranging from three synonym maps for Free and Basic tiers, up to 20 for the Standard tiers.
+- A synonym map is a top-level resource that can be created once and used by many indexes.
+- A synonym map applies to string fields.
+- You can create and assign a synonym map at any time with no disruption to indexing or queries.
+- Your [service tier](search-limits-quotas-capacity.md#synonym-limits) sets the limits on how many synonym maps you can create.
+- Your search service can have multiple synonym maps, but within an index, a field definition can only have one synonym map assignment.
-You might create multiple synonym maps for different languages, such as English and French versions, or lexicons if your content includes technical jargon, slang, or obscure terminology. Although you can create multiple synonym maps in your search service, within an index, a field definition can only have one synonym map assignment.
+## Create a synonym map
A synonym map consists of name, format, and rules that function as synonym map entries. The only format that is supported is `solr`, and the `solr` format determines rule construction.
+Create a synonym map programmatically. The portal doesn't support synonym map definitions.
+
+### [REST](#tab/rest)
+
+Use the [Create Synonym Map (REST API)](/rest/api/searchservice/create-synonym-map) to create a synonym map.
+ ```http POST /synonymmaps?api-version=2023-11-01 {
POST /synonymmaps?api-version=2023-11-01
} ```
-To create a synonym map, do so programmatically (the portal doesn't support synonym map definitions):
+### [.NET](#tab/dotnet)
+
+Use the [SynonymMap class (.NET)](/dotnet/api/azure.search.documents.indexes.models.synonymmap) and [Create a synonym map(Azure SDK sample)](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample02_Service.md#create-a-synonym-map) to create the map.
+
+### [Python](#tab/python)
+
+Use the [SynonymMap class (Python)](/python/api/azure-search-documents/azure.search.documents.indexes.models.synonymmap) to create the map.
+
+### [Java](#tab/java)
+
+Use the [SynonymMap class (Java)](/java/api/com.azure.search.documents.indexes.models.synonymmap) to create the map.
-+ [Create Synonym Map (REST API)](/rest/api/searchservice/create-synonym-map). This reference is the most descriptive.
-+ [SynonymMap class (.NET)](/dotnet/api/azure.search.documents.indexes.models.synonymmap) and [Create a synonym map(Azure SDK sample)](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample02_Service.md#create-a-synonym-map)
-+ [SynonymMap class (Python)](/python/api/azure-search-documents/azure.search.documents.indexes.models.synonymmap)
-+ [SynonymMap interface (JavaScript)](/javascript/api/@azure/search-documents/synonymmap)
-+ [SynonymMap class (Java)](/java/api/com.azure.search.documents.indexes.models.synonymmap)
+### [JavaScript](#tab/javascript)
-## Define rules
+Use the [SynonymMap interface (JavaScript)](/javascript/api/@azure/search-documents/synonymmap) to create the map.
+++
+### Define rules
Mapping rules adhere to the open-source synonym filter specification of Apache Solr, described in this document: [SynonymFilter](https://cwiki.apache.org/confluence/display/solr/Filter+Descriptions#FilterDescriptions-SynonymFilter). The `solr` format supports two kinds of rules:
-+ equivalency (where terms are equal substitutes in the query)
+- equivalency (where terms are equal substitutes in the query)
-+ explicit mappings (where terms are mapped to one explicit term prior to querying)
+- explicit mappings (where terms are mapped to one explicit term)
-Each rule must be delimited by the new line character (`\n`). You can define up to 5,000 rules per synonym map in a free service and 20,000 rules per map in other tiers. Each rule can have up to 20 expansions (or items in a rule). For more information, see [Synonym limits](search-limits-quotas-capacity.md#synonym-limits).
+Each rule is delimited by the new line character (`\n`). You can define up to 5,000 rules per synonym map in a free service and 20,000 rules per map in other tiers. Each rule can have up to 20 expansions (or items in a rule). For more information, see [Synonym limits](search-limits-quotas-capacity.md#synonym-limits).
-Query parsers automatically lower-case any upper or mixed case terms, but if you want to preserve special characters in the string, such as a comma or dash, add the appropriate escape characters when creating the synonym map.
+Query parsers automatically lower-case any upper or mixed case terms. To preserve special characters in the string, such as a comma or dash, add the appropriate escape characters when creating the synonym map.
### Equivalency rules
In the explicit case, a query for `Washington`, `Wash.` or `WA` is rewritten as
### Escaping special characters
-In full text search, synonyms are analyzed during query processing just like any other query term, which means that rules around reserved and special characters apply to the terms in your synonym map. The list of characters that requires escaping varies between the simple syntax and full syntax:
+Synonyms are analyzed during query processing just like any other query term, which means that rules for reserved and special characters apply to the terms in your synonym map. The list of characters that requires escaping varies between the simple syntax and full syntax:
-+ [simple syntax](query-simple-syntax.md) `+ | " ( ) ' \`
-+ [full syntax](query-lucene-syntax.md) `+ - & | ! ( ) { } [ ] ^ " ~ * ? : \ /`
+- [simple syntax](query-simple-syntax.md) `+ | " ( ) ' \`
+- [full syntax](query-lucene-syntax.md) `+ - & | ! ( ) { } [ ] ^ " ~ * ? : \ /`
-Recall that if you need to preserve characters that would otherwise be discarded by the default analyzer during indexing, you should substitute an analyzer that preserves them. Some choices include Microsoft natural [language analyzers](index-add-language-analyzers.md), which preserves hyphenated words, or a custom analyzer for more complex patterns. For more information, see [Partial terms, patterns, and special characters](search-query-partial-matching.md).
+To preserve characters that the default analyzer discards, substitute an analyzer that preserves them. Some choices include Microsoft natural [language analyzers](index-add-language-analyzers.md), which preserve hyphenated words, or a custom analyzer for more complex patterns. For more information, see [Partial terms, patterns, and special characters](search-query-partial-matching.md).
The following example shows an example of how to escape a character with a backslash:
The following example shows an example of how to escape a character with a backs
} ```
-Since the backslash is itself a special character in other languages like JSON and C#, you'll probably need to double-escape it. For example, the JSON sent to the REST API for the above synonym map would look like this:
+Since the backslash is itself a special character in other languages like JSON and C#, you probably need to double-escape it. Here's an example in JSON:
```json {
Since the backslash is itself a special character in other languages like JSON a
} ```
-## Upload and manage synonym maps
+## Manage synonym maps
-As mentioned previously, you can create or update a synonym map without disrupting query and indexing workloads. A synonym map is a standalone object (like indexes or data sources), and as long as no field is using it, updates won't cause indexing or queries to fail. However, once you add a synonym map to a field definition, if you then delete a synonym map, any query that includes the fields in question will fail with a 404 error.
+You can update a synonym map without disrupting query and indexing workloads. However, after you add a synonym map to a field, deleting that synonym map causes any query that includes the affected fields to fail with a 404 error.
-Creating, updating, and deleting a synonym map is always a whole-document operation, meaning that you can't update or delete parts of the synonym map incrementally. Updating even a single rule requires a reload.
+Creating, updating, and deleting a synonym map is always a whole-document operation. You can't update or delete parts of the synonym map incrementally. Updating even a single rule requires a reload.
## Assign synonyms to fields
-After uploading a synonym map, you can enable the synonyms on fields of the type `Edm.String` or `Collection(Edm.String)`, on fields having `"searchable":true`. As noted, a field definition can use only one synonym map.
+After you create the synonym map, assign it to a field in your index. You must do this programmatically because the portal doesn't support synonym map field associations.
+
+- A field must be of type `Edm.String` or `Collection(Edm.String)`
+- A field must have `"searchable":true`
+- A field can have only one synonym map
+
+If the synonym map exists on the search service, it's used on the next query, with no reindexing or rebuild required.
+
+### [REST](#tab/rest-assign)
+
+Use the [Create or Update Index (REST API)](/rest/api/searchservice/indexes/create-or-update) to modify a field definition.
```http POST /indexes?api-version=2023-11-01
POST /indexes?api-version=2023-11-01
} ```
+### [**.NET SDK**](#tab/dotnet-assign)
+
+Use the [**SearchIndexClient**](/dotnet/api/azure.search.documents.indexes.searchindexclient) to update an index. Provide the whole index definition and include the new parameters for synonym map assignments.
+
+In this example, the "country" field has a synonymMapName property.
+
+```csharp
+// Update an index
+string indexName = "hotels";
+SearchIndex index = new SearchIndex(indexName)
+{
+ Fields =
+ {
+ new SimpleField("hotelId", SearchFieldDataType.String) { IsKey = true, IsFilterable = true, IsSortable = true },
+ new SearchableField("hotelName") { IsFilterable = true, IsSortable = true },
+ new SearchableField("description") { AnalyzerName = LexicalAnalyzerName.EnLucene },
+        new SearchableField("descriptionFr") { AnalyzerName = LexicalAnalyzerName.FrLucene },
+ new ComplexField("address")
+ {
+ Fields =
+ {
+ new SearchableField("streetAddress"),
+ new SearchableField("city") { IsFilterable = true, IsSortable = true, IsFacetable = true },
+ new SearchableField("stateProvince") { IsFilterable = true, IsSortable = true, IsFacetable = true },
+ new SearchableField("country") { SynonymMapNames = new[] { synonymMapName }, IsFilterable = true, IsSortable = true, IsFacetable = true },
+ new SearchableField("postalCode") { IsFilterable = true, IsSortable = true, IsFacetable = true }
+ }
+ }
+ }
+};
+
+await indexClient.CreateOrUpdateIndexAsync(index);
+```
+
+For more examples, see [azure-search-dotnet-samples/quickstart/v11/](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart/v11).
+
+### [**Other SDKs**](#tab/other-sdks-assign)
+
+You can use any supported SDK to update a search index. All of them provide a **SearchIndexClient** that has methods for updating indexes.
+
+| Azure SDK | Client | Examples |
+|--|--|-|
+| Java | [SearchIndexClient](/java/api/com.azure.search.documents.indexes.searchindexclient) | [CreateIndexExample.java](https://github.com/Azure/azure-sdk-for-java/blob/azure-search-documents_11.1.3/sdk/search/azure-search-documents/src/samples/java/com/azure/search/documents/indexes/CreateIndexExample.java) |
+| JavaScript | [SearchIndexClient](/javascript/api/@azure/search-documents/searchindexclient) | [Indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) |
+| Python | [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) | [sample_index_crud_operations.py](https://github.com/Azure/azure-sdk-for-python/blob/7cd31ac01fed9c790cec71de438af9c45cb45821/sdk/search/azure-search-documents/samples/sample_index_crud_operations.py) |
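
For example, with the JavaScript client, a field assignment might look like the following sketch. The index, field, and synonym map names are placeholders for a simplified index with a top-level `country` field; adjust for your own schema.

```typescript
import { SearchIndexClient, AzureKeyCredential } from "@azure/search-documents";

// Placeholder service endpoint, admin key, index, field, and map names.
const client = new SearchIndexClient(
  "https://<your-search-service>.search.windows.net",
  new AzureKeyCredential("<admin-api-key>")
);

async function assignSynonymMap(): Promise<void> {
  // Retrieve the full index definition, update the field, and push it back.
  const index = await client.getIndex("my-index");
  const country = index.fields.find((f) => f.name === "country");
  if (country && country.type === "Edm.String") {
    country.synonymMapNames = ["my-synonym-map"];
  }
  await client.createOrUpdateIndex(index);
}

assignSynonymMap().catch(console.error);
```
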
+++ ## Query on equivalent or mapped fields
-Adding synonyms doesn't impose new requirements on query construction. You can issue term and phrase queries just as you did before the addition of synonyms. The only difference is that if a query term exists in the synonym map, the query engine will either expand or rewrite the term or phrase, depending on the rule.
+A synonym field assignment doesn't change how you write queries. After the synonym map assignment, the only difference is that if a query term exists in the synonym map, the search engine either expands or rewrites the term or phrase, depending on the rule.
## How synonyms are used during query execution
-Synonyms are a query expansion technique that supplements the contents of an index with equivalent terms, but only for fields that have a synonym assignment. If a field-scoped query *excludes* a synonym-enabled field, you won't see matches from the synonym map.
+Synonyms are a query expansion technique that supplements the contents of an index with equivalent terms, but only for fields that have a synonym assignment. If a field-scoped query *excludes* a synonym-enabled field, you don't see matches from the synonym map.
-For synonym-enabled fields, synonyms are subject to the same text analysis as the associated field. For example, if a field is analyzed using the standard Lucene analyzer, synonym terms will also be subject to the standard Lucene analyzer at query time. If you want to preserve punctuation, such as periods or dashes, in the synonym term, apply a content-preserving analyzer on the field.
+For synonym-enabled fields, synonyms are subject to the same text analysis as the associated field. For example, if a field is analyzed using the standard Lucene analyzer, synonym terms are also subject to the standard Lucene analyzer at query time. If you want to preserve punctuation, such as periods or dashes, in the synonym term, apply a content-preserving analyzer on the field.
Internally, the synonyms feature rewrites the original query with synonyms with the OR operator. For this reason, hit highlighting and scoring profiles treat the original term and synonyms as equivalent.
Synonym expansions don't apply to wildcard search terms; prefix, fuzzy, and rege
If you need to do a single query that applies synonym expansion and wildcard, regex, or fuzzy searches, you can combine the queries using the OR syntax. For example, to combine synonyms with wildcards for simple query syntax, the term would be `<query> | <query>*`.
-If you have an existing index in a development (non-production) environment, experiment with a small dictionary to see how the addition of synonyms changes the search experience, including impact on scoring profiles, hit highlighting, and suggestions.
+If you have an existing index in a development (nonproduction) environment, experiment with a small dictionary to see how the addition of synonyms changes the search experience, including impact on scoring profiles, hit highlighting, and suggestions.
## Next steps
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
- ignite-2023 Previously updated : 06/13/2024 Last updated : 07/24/2024 # Add semantic ranking to queries in Azure AI Search
You can use any of the following tools and SDKs to build a query that uses seman
## Avoid features that bypass relevance scoring
-Several query capabilities in Azure AI Search bypass relevance scoring or are otherwise incompatible with semantic ranking. If your query logic includes the following features, you can't semantically rank your results:
+A few query capabilities bypass relevance scoring, which makes them incompatible with semantic ranking. If your query logic includes the following features, you can't semantically rank your results:
-+ A query with `search=*` or an empty search string, such as pure filter-only query, won't work because there's nothing to measure semantic relevance against. The query must provide terms or phrases that can be assessed during processing.
-
-+ A query composed in the [full Lucene syntax](query-lucene-syntax.md) (`queryType=full`) is incompatible with semantic ranking (`queryType=semantic`). The semantic model doesn't support the full Lucene syntax.
++ A query with `search=*` or an empty search string, such as a pure filter-only query, won't work because there's nothing to measure semantic relevance against, so the search scores are zero. The query must provide terms or phrases that can be evaluated during processing.

+ Sorting (orderBy clauses) on specific fields overrides search scores and a semantic score. Given that the semantic score is supposed to provide the ranking, adding an orderBy clause results in an HTTP 400 error if you apply semantic ranking over ordered results.
The following example in this section uses the [hotels-sample-index](search-get-
1. Set "queryType" to "semantic".
- In other queries, the "queryType" is used to specify the query parser. In semantic ranking, it's set to "semantic". For the "search" field, you can specify queries that conform to the [simple syntax](query-simple-syntax.md).
-
-1. Set "search" to a full text search query based on the [simple syntax](query-simple-syntax.md). Semantic ranking is an extension of full text search, so while this parameter isn't required, you won't get an expected outcome if it's null.
+1. Set "search" to a full text search query. Your search string can support either the [simple syntax](query-simple-syntax.md) or [full Lucene syntax](query-lucene-syntax.md). Semantic ranking is an extension of full text search, so while "search" isn't required, you won't get an expected outcome if it's an empty search (`"search": "*"`).
1. Set "semanticConfiguration" to a [predefined semantic configuration](semantic-how-to-configure.md) that's embedded in your index.
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
Follow the procedure detailed below to use the alert details feature. These step
| **ConfidenceScore** (Preview) | Integer, between **0**-**1** (inclusive) | | **ExtendedLinks** (Preview) | String | | **ProductComponentName** (Preview) | String |
- | **ProductName** (Preview) | String |
+ | **ProductName** (Preview)<br>\* See note following this table | String |
| **ProviderName** (Preview) | String | | **RemediationSteps** (Preview) | String |
+ > [!NOTE]
+ >
+ > If you onboarded Microsoft Sentinel to the unified security operations platform, **do not customize** the *ProductName* field for alerts from Microsoft sources. Doing so will result in these alerts being dropped from Microsoft Defender XDR and no incident being created.
+ If you change your mind, or if you made a mistake, you can remove an alert detail by clicking the trash can icon next to the **Alert property/Value** pair, or delete the free text from the **Alert Name/Description Format** fields. 1. When you have finished customizing your alert details, if you're now creating the rule, continue to the next tab in the wizard. If you're editing an existing rule, select the **Review and create** tab. Once the rule validation is successful, select **Save**.
sentinel Deploy Sap Btp Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md
Previously updated : 03/30/2023 Last updated : 07/17/2024 # customer intent: As an SAP admin, I want to know how to deploy the Microsoft Sentinel solution for SAP BTP so that I can plan a deployment.
Last updated 03/30/2023
This article describes how to deploy the Microsoft Sentinel solution for SAP Business Technology Platform (BTP) system. The Microsoft Sentinel solution for SAP BTP monitors and protects your SAP BTP system. It collects audit logs and activity logs from the BTP infrastructure and BTP-based apps, and then detects threats, suspicious activities, illegitimate activities, and more. [Read more about the solution](sap-btp-solution-overview.md).
-> [!IMPORTANT]
-> The Microsoft Sentinel solution for SAP BTP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites Before you begin, verify that:
You also can retrieve the logs via the UI:
We recommend that you periodically rotate the BPT subaccount client secrets. The following sample script demonstrates the process of updating an existing data connector with a new secret fetched from Azure Key Vault.
-Before you start, collect the values you'll need for the scripts parameters, including:
+Before you start, collect the values you need for the script's parameters, including:
- The subscription ID, resource group, and workspace name for your Microsoft Sentinel workspace. - The key vault and the name of the key vault secret.
sentinel Sap Btp Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-security-content.md
Title: Microsoft Sentinel Solution for SAP® BTP - security content reference
-description: Learn about the built-in security content provided by the Microsoft Sentinel Solution for SAP® BTP.
+ Title: Microsoft Sentinel Solution for SAP BTP - security content reference
+description: Learn about the built-in security content provided by the Microsoft Sentinel Solution for SAP BTP.
Previously updated : 03/30/2023 Last updated : 07/17/2024
-# Microsoft Sentinel Solution for SAP® BTP: security content reference
+# Microsoft Sentinel Solution for SAP BTP: security content reference
-This article details the security content available for the Microsoft Sentinel Solution for SAP® BTP.
-
-> [!IMPORTANT]
-> The Microsoft Sentinel Solution for SAP® BTP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+This article details the security content available for the Microsoft Sentinel Solution for SAP BTP.
Available security content currently includes a built-in workbook and analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
The BTP Activity Workbook provides a dashboard overview of BTP activity.
The **Overview** tab shows: - An overview of BTP subaccounts, helping analysts identify the most active accounts and the type of ingested data. -- Subaccount sign-in activity, helping analysts identify spikes and trends that may be associated with sign-in failures in SAP Business Application Studio (BAS).
+- Subaccount sign-in activity, helping analysts identify spikes and trends that might be associated with sign-in failures in SAP Business Application Studio (BAS).
- Timeline of BTP activity and number of BTP security alerts, helping analysts search for any correlation between the two. The **Identity Management** tab shows a grid of identity management events, such as user and security role changes, in a human-readable format. The search bar lets you quickly find specific changes. :::image type="content" source="./media/sap-btp-security-content/sap-btp-workbook-identity-management.png" alt-text="Screenshot of the Identity Management tab of the SAP BTP workbook." lightbox="./media/sap-btp-security-content/sap-btp-workbook-identity-management.png":::
-For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Microsoft Sentinel Solution for SAP® BTP](deploy-sap-btp-solution.md).
+For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Microsoft Sentinel Solution for SAP BTP](deploy-sap-btp-solution.md).
## Built-in analytics rules | Rule name | Description | Source action | Tactics | | | | | |
-| **BTP - Failed access attempts across multiple BAS subaccounts** |Identifies failed Business Application Studio (BAS) access attempts over a predefined number of subaccounts.<br>Default threshold: 3 | Run failed login attempts to BAS over the defined threshold number of subaccounts. <br><br>**Data sources**: SAPBTPAuditLog_CL | Discovery, Reconnaissance |
+| **BTP - Failed access attempts across multiple BAS subaccounts** |Identifies failed Business Application Studio (BAS) access attempts over a predefined number of subaccounts.<br>Default threshold: 3 | Run failed sign-in attempts to BAS over the defined threshold number of subaccounts. <br><br>**Data sources**: SAPBTPAuditLog_CL | Discovery, Reconnaissance |
| **BTP - Malware detected in BAS dev space** |Identifies instances of malware detected by the SAP internal malware agent within BAS developer spaces. | Copy or create a malware file in a BAS developer space. <br><br>**Data sources**: SAPBTPAuditLog_CL| Execution, Persistence, Resource Development |
-| **BTP - User added to sensitive privileged role collection** |Identifies identity management actions where a user is added to a set of monitored privileged role collections. | Assign one of the following role collections to a user: "Subaccount Service Administrator", "Subaccount Administrator", "Connectivity and Destination Administrator", "Destination Administrator", "Cloud Connector AdministratorΓÇ¥. <br><br>**Data sources**: SAPBTPAuditLog_CL | Lateral Movement, Privilege Escalation |
+| **BTP - User added to sensitive privileged role collection** |Identifies identity management actions where a user is added to a set of monitored privileged role collections. | Assign one of the following role collections to a user: <br>- `Subaccount Service Administrator`<br>- `Subaccount Administrator`<br>- `Connectivity and Destination Administrator`<br>- `Destination Administrator`<br>- `Cloud Connector Administrator` <br><br>**Data sources**: SAPBTPAuditLog_CL | Lateral Movement, Privilege Escalation |
| **BTP - Trust and authorization Identity Provider monitor** |Identifies create, read, update, and delete (CRUD) operations on Identity Provider settings within a subaccount. | Change, read, update, or delete any of the identity provider settings within a subaccount. <br><br>**Data sources**: SAPBTPAuditLog_CL | Credential Access, Privilege Escalation | | **BTP - Mass user deletion in a subaccount** |Identifies user account deletion activity where the number of deleted users exceeds a predefined threshold.<br>Default threshold: 10 | Delete count of user accounts over the defined threshold. <br><br>**Data sources**: SAPBTPAuditLog_CL | Impact | ## Next steps
-In this article, you learned about the security content provided with the Microsoft Sentinel Solution for SAP® BTP.
+In this article, you learned about the security content provided with the Microsoft Sentinel Solution for SAP BTP.
-- [Deploy Microsoft Sentinel solution for SAP® BTP](deploy-sap-btp-solution.md)-- [Microsoft Sentinel Solution for SAP® BTP overview](sap-btp-solution-overview.md)
+- [Deploy Microsoft Sentinel solution for SAP BTP](deploy-sap-btp-solution.md)
+- [Microsoft Sentinel Solution for SAP BTP overview](sap-btp-solution-overview.md)
sentinel Sap Btp Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-solution-overview.md
Title: Microsoft Sentinel Solution for SAP® BTP overview
-description: This article introduces the Microsoft Sentinel Solution for SAP® BTP.
+ Title: Microsoft Sentinel Solution for SAP BTP overview
+description: This article introduces the Microsoft Sentinel Solution for SAP BTP.
- Previously updated : 03/22/2023+ Last updated : 07/17/2024
-# Microsoft Sentinel Solution for SAP® BTP overview
+# Microsoft Sentinel Solution for SAP BTP overview
-This article introduces the Microsoft Sentinel Solution for SAP® BTP. The solution monitors and protects your SAP Business Technology Platform (BTP) system: It collects audits and activity logs from the BTP infrastructure and BTP based apps, and detects threats, suspicious activities, illegitimate activities, and more.
+This article introduces the Microsoft Sentinel Solution for SAP BTP. The solution monitors and protects your SAP Business Technology Platform (BTP) system: It collects audits and activity logs from the BTP infrastructure and BTP based apps, and detects threats, suspicious activities, illegitimate activities, and more.
SAP BTP is a cloud-based solution that provides a wide range of tools and services for developers to build, run, and manage applications. One of the key features of SAP BTP is its low-code development capabilities. Low-code development allows developers to create applications quickly and efficiently by using visual drag-and-drop interfaces and prebuilt components, rather than writing code from scratch.
-> [!IMPORTANT]
-> The Microsoft Sentinel Solution for SAP® BTP is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ### Why it's important to monitor BTP activity
-While low-code development platforms have become increasingly popular among businesses looking to accelerate their application development processes, there are also security risks that organizations must consider. One key concern is the risk of security vulnerabilities introduced by citizen developers, some of whom may lack the security awareness of traditional pro-dev community. To counter these vulnerabilities, it's crucial for organizations to quickly detect and respond to threats on BTP applications.
+While low-code development platforms have become increasingly popular among businesses looking to accelerate their application development processes, there are also security risks that organizations must consider. One key concern is the risk of security vulnerabilities introduced by citizen developers, some of whom might lack the security awareness of the traditional pro-dev community. To counter these vulnerabilities, it's crucial for organizations to quickly detect and respond to threats on BTP applications.
Beyond the low-code aspect, BTP applications: - Access sensitive business data, such as customers, opportunities, orders, financial data, and manufacturing processes. - Access and integrate with multiple different business applications and data stores. - Enable key business processes.-- Are created by citizen developers who may not be security savvy or aware of cyber threats.
+- Are created by citizen developers who might not be security savvy or aware of cyber threats.
- Used by a wide range of users, internal and external. Therefore, it's important to protect your BTP system against these risks. ## How the solution addresses BTP security risks
-With the Microsoft Sentinel Solution for SAP® BTP, you can:
+With the Microsoft Sentinel Solution for SAP BTP, you can:
- Gain visibility to activities **on** BTP applications, including creation, modification, permissions change, execution, and more. - Gain visibility to activities **in** BTP applications, including who uses the application, which business applications the BTP application accesses, business data Create, Read, Update, Delete (CRUD) activities, and more.
The solution includes:
## Next steps
-In this article, you learned about the Microsoft Sentinel solution for SAP® BTP.
+In this article, you learned about the Microsoft Sentinel solution for SAP BTP.
> [!div class="nextstepaction"]
-> [Deploy the Microsoft Sentinel Solution for SAP® BTP](deploy-sap-btp-solution.md)
+> [Deploy the Microsoft Sentinel Solution for SAP BTP](deploy-sap-btp-solution.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## July 2024
+- [SAP Business Technology Platform (BTP) connector now generally available](#sap-business-technology-platform-btp-connector-now-generally-available-ga)
- [Microsoft unified security platform now generally available](#microsoft-unified-security-platform-now-generally-available)
+### SAP Business Technology Platform (BTP) connector now generally available (GA)
+
+The Microsoft Sentinel Solution for SAP BTP is now generally available (GA). This solution provides visibility into your SAP BTP environment, and helps you detect and respond to threats and suspicious activities.
+
+For more information, see:
+
+- [Microsoft Sentinel Solution for SAP Business Technology Platform (BTP)](sap/sap-btp-solution-overview.md)
+- [Deploy the Microsoft Sentinel solution for SAP BTP](sap/deploy-sap-btp-solution.md)
+- [Microsoft Sentinel Solution for SAP BTP: security content reference](sap/sap-btp-security-content.md)
+ ### Microsoft unified security platform now generally available Microsoft Sentinel is now generally available within the Microsoft unified security operations platform in the Microsoft Defender portal. The Microsoft unified security operations platform brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot in Microsoft Defender. For more information, see the following resources:
Windows DNS events can now be ingested to Microsoft Sentinel using the Azure Mon
### Reduce false positives for SAP systems with analytics rules
-Use analytics rules together with the [Microsoft Sentinel solution for SAP® applications](sap/solution-overview.md) to lower the number of false positives triggered from your SAP® systems. The Microsoft Sentinel solution for SAP® applications now includes the following enhancements:
+Use analytics rules together with the [Microsoft Sentinel solution for SAP applications](sap/solution-overview.md) to lower the number of false positives triggered from your SAP systems. The Microsoft Sentinel solution for SAP applications now includes the following enhancements:
- The [**SAPUsersGetVIP**](sap/sap-solution-log-reference.md#sapusersgetvip) function now supports excluding users according to their SAP-given roles or profile. - The **SAP_User_Config** watchlist now supports using wildcards in the **SAPUser** field to exclude all users with a specific syntax.
-For more information, see [Microsoft Sentinel solution for SAP® applications data reference](sap/sap-solution-log-reference.md) and [Handle false positives in Microsoft Sentinel](false-positives.md).
+For more information, see [Microsoft Sentinel solution for SAP applications data reference](sap/sap-solution-log-reference.md) and [Handle false positives in Microsoft Sentinel](false-positives.md).
## Next steps
service-bus-messaging Service Bus Typescript How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-typescript-how-to-use-queues.md
+
+ Title: Get started with Azure Service Bus queues (TypeScript)
+description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the TypeScript programming language.
++ Last updated : 07/17/2024+
+ms.devlang: typescript
+++
+# Send messages to and receive messages from Azure Service Bus queues (TypeScript)
+> [!div class="op_single_selector" title1="Select the programming language:"]
+> * [C#](service-bus-dotnet-get-started-with-queues.md)
+> * [Java](service-bus-java-how-to-use-queues.md)
+> * [JavaScript](service-bus-nodejs-how-to-use-queues.md)
+> * [Python](service-bus-python-how-to-use-queues.md)
+> * [TypeScript](service-bus-typescript-how-to-use-queues.md)
+
+In this tutorial, you complete the following steps:
+
+1. Create a Service Bus namespace, using the Azure portal.
+2. Create a Service Bus queue, using the Azure portal.
+3. Write a TypeScript ESM application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
+ 1. Send a set of messages to the queue.
+ 1. Receive those messages from the queue.
+
+> [!NOTE]
+> This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
+
+## Prerequisites
+
+If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
+
+- An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
+- [TypeScript 5+](https://www.typescriptlang.org/download/)
+- [Node.js LTS](https://nodejs.org/en/download/)
+
+### [Passwordless](#tab/passwordless)
+
+To use this quickstart with your own Azure account, you need:
+* Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate data role to your resource.
+* Run the code in the same terminal or command prompt.
+* Note down your **queue** name for your Service Bus namespace. You'll need that in the code.
+
+### [Connection string](#tab/connection-string)
+
+Note down the following, which you'll use in the code below:
+* Service Bus namespace **connection string**
+* Service Bus namespace **queue** you created
+++
+> [!NOTE]
+> This tutorial works with samples that you can copy and run using [Node.js](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js cloud service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
++++++
+## Use Node Package Manager (npm) to install the package
+
+### [Passwordless](#tab/passwordless)
+
+1. To install the required npm packages for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following packages:
+
+ ```bash
+ npm install @azure/service-bus @azure/identity
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required npm package for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following package:
+
+ ```bash
+ npm install @azure/service-bus
+ ```
+++
+## Send messages to a queue
+
+The following sample code shows you how to send a message to a queue.
+
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+1. In the `src` folder, create a file called `send.ts` and paste the below code into it. This code sends the names of scientists as messages to your queue.
+
+ > [!IMPORTANT]
+ > The passwordless credential is provided with the [**DefaultAzureCredential**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#defaultazurecredential).
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/queue-passwordless-send.ts" :::
+
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/send.js
+ ```
+6. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the queue: myqueue
+ ```
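
The linked sample contains the full send code. As a minimal sketch of the same idea (the namespace and queue names here are placeholders), a passwordless send with `@azure/service-bus` looks roughly like this:

```typescript
import { ServiceBusClient } from "@azure/service-bus";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder namespace and queue names; replace with your own values.
const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
const queueName = "myqueue";

async function main(): Promise<void> {
  // DefaultAzureCredential picks up your `az login` session on a dev machine.
  const client = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());
  const sender = client.createSender(queueName);
  try {
    // Send a small batch of messages in a single call.
    await sender.sendMessages([
      { body: "Albert Einstein" },
      { body: "Marie Curie" },
      { body: "Isaac Newton" },
    ]);
    console.log(`Sent a batch of messages to the queue: ${queueName}`);
  } finally {
    await sender.close();
    await client.close();
  }
}

main().catch(console.error);
```
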
+
+### [Connection string](#tab/connection-string)
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+1. In the `src` folder, create a file called `send.ts` and paste the below code into it. This code sends the names of scientists as messages to your queue.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/queue-connection-string-send.ts" :::
+
+3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/send.js
+ ```
+6. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the queue: myqueue
+ ```
+++
+## Receive messages from a queue
+
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `receive.ts` and paste the following code into it.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/queue-passwordless-receive.ts" :::
+
+3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/receive.js
+ ```
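
The linked sample contains the full receive code. A minimal sketch of a passwordless receiver (again with placeholder names) looks roughly like this:

```typescript
import { ServiceBusClient } from "@azure/service-bus";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder namespace and queue names; replace with your own values.
const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
const queueName = "myqueue";

async function main(): Promise<void> {
  const client = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());
  const receiver = client.createReceiver(queueName);

  // Subscribe to the queue and print each message body as it arrives.
  receiver.subscribe({
    processMessage: async (message) => {
      console.log(`Received message: ${message.body}`);
    },
    processError: async (args) => {
      console.error(args.error);
    },
  });

  // Listen for a short while, then shut down.
  await new Promise((resolve) => setTimeout(resolve, 20000));
  await receiver.close();
  await client.close();
}

main().catch(console.error);
```
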
+
+### [Connection string](#tab/connection-string)
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `receive.ts` and paste the following code into it.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/queue-connection-string-receive.ts" :::
+
+3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
+
+4. Replace `<QUEUE NAME>` with the name of the queue.
+5. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/receive.js
+ ```
+++
+You should see the following output.
+
+```console
+Received message: Albert Einstein
+Received message: Werner Heisenberg
+Received message: Marie Curie
+Received message: Stephen Hawking
+Received message: Isaac Newton
+Received message: Niels Bohr
+Received message: Michael Faraday
+Received message: Galileo Galilei
+Received message: Johannes Kepler
+Received message: Nikolaus Kopernikus
+```
+
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You might need to wait for a minute or so and then refresh the page to see the latest values.
++
+Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on.
++
+## Troubleshooting
+
+If you receive one of the following errors when running the **passwordless** version of the TypeScript code, make sure you're signed in via the Azure CLI command `az login`, and that the [appropriate role](#azure-built-in-roles-for-azure-service-bus) is applied to your Azure user account:
+
+* 'Send' claim(s) are required to perform this operation
+* 'Receive' claim(s) are required to perform this operation
+
+## Clean up resources
+
+Navigate to your Service Bus namespace in the Azure portal, and select **Delete** on the Azure portal to delete the namespace and the queue in it.
+
+## Next steps
+See the following documentation and samples:
+
+- [Azure Service Bus client library for JavaScript](https://www.npmjs.com/package/@azure/service-bus)
+- [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/servicebus/service-bus/samples/v7/javascript)
+- [TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/servicebus/service-bus/samples/v7/typescript)
+- [API reference documentation](/javascript/api/overview/azure/service-bus)
service-bus-messaging Service Bus Typescript How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-typescript-how-to-use-topics-subscriptions.md
+
+ Title: Get started with Azure Service Bus topics (TypeScript)
+description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the TypeScript programming language.
++ Last updated : 07/17/2024+
+ms.devlang: typescript
+++
+# Send messages to an Azure Service Bus topic and receive messages from subscriptions to the topic (TypeScript)
+
+> [!div class="op_single_selector" title1="Select the programming language:"]
+> * [C#](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+> * [Java](service-bus-java-how-to-use-topics-subscriptions.md)
+> * [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md)
+> * [Python](service-bus-python-how-to-use-topics-subscriptions.md)
+> * [TypeScript](service-bus-typescript-how-to-use-topics-subscriptions.md)
+
+In this tutorial, you complete the following steps:
+
+1. Create a Service Bus namespace, using the Azure portal.
+2. Create a Service Bus topic, using the Azure portal.
+3. Create a Service Bus subscription to that topic, using the Azure portal.
+4. Write a TypeScript ESM application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
+ * Send a set of messages to the topic.
+ * Receive those messages from the subscription.
+
+> [!NOTE]
+> This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
+
+## Prerequisites
+- An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
+- [TypeScript 5+](https://www.typescriptlang.org/download/)
+- [Node.js LTS](https://nodejs.org/en/download/)
+- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). You will use only one subscription for this quickstart.
++
+### [Passwordless](#tab/passwordless)
+
+To use this quickstart with your own Azure account, you need:
+* Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate role to your resource.
+* Run the code in the same terminal or command prompt.
+* Note down your **topic** name and **subscription** for your Service Bus namespace. You'll need that in the code.
+
+### [Connection string](#tab/connection-string)
+
+Note down the following, which you'll use in the code below:
+* Service Bus namespace **connection string**
+* Service Bus namespace **topic** name you created
+* Service Bus namespace **subscription**
+++
+> [!NOTE]
+> This tutorial works with samples that you can copy and run using [Node.js](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js Cloud Service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
++++++
+
+## Use Node Package Manager (npm) to install the package
+
+### [Passwordless](#tab/passwordless)
+
+1. To install the required npm packages for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following packages:
+
+ ```bash
+ npm install @azure/service-bus @azure/identity
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required npm package for Service Bus, open a command prompt that has `npm` in its path, and change the directory to the folder where you want to have your samples.
+
+1. Install the following package:
+
+ ```bash
+ npm install @azure/service-bus
+ ```
+++
+## Send messages to a topic
+The following sample code shows you how to send a batch of messages to a Service Bus topic. See code comments for details.
+
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `sendtotopic.ts` and paste the below code into it. This code will send a message to your topic.
+
+ > [!IMPORTANT]
+ > The passwordless credential is provided with the [**DefaultAzureCredential**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#defaultazurecredential).
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/topic-passwordless-send.ts" :::
+
+3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace.
+1. Replace `<TOPIC NAME>` with the name of the topic.
+1. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/sendtotopic.js
+ ```
+1. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the topic: mytopic
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `sendtotopic.ts` and paste the below code into it. This code will send a message to your topic.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/topic-connection-string-send.ts" :::
+
+3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace.
+1. Replace `<TOPIC NAME>` with the name of the topic.
+1. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/sendtotopic.js
+ ```
+1. You should see the following output.
+
+ ```console
+ Sent a batch of messages to the topic: mytopic
+ ```
+++
+## Receive messages from a subscription
+
+### [Passwordless](#tab/passwordless)
+
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `receivefromsubscription.ts` and paste the following code into it. See code comments for details.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/topic-passwordless-receive.ts" :::
+
+3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+6. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/receivefromsubscription.js
+ ```
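
The linked sample contains the full receive code. The main difference from the queue scenario is that the receiver is created for a topic and subscription pair, as in this rough sketch (names are placeholders):

```typescript
import { ServiceBusClient } from "@azure/service-bus";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder namespace, topic, and subscription names; replace with your own values.
const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
const topicName = "mytopic";
const subscriptionName = "mysubscription";

async function main(): Promise<void> {
  const client = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());

  // For topics, the receiver targets a subscription rather than a queue.
  const receiver = client.createReceiver(topicName, subscriptionName);

  receiver.subscribe({
    processMessage: async (message) => {
      console.log(`Received message: ${message.body}`);
    },
    processError: async (args) => {
      console.error(args.error);
    },
  });

  // Listen for a short while, then shut down.
  await new Promise((resolve) => setTimeout(resolve, 20000));
  await receiver.close();
  await client.close();
}

main().catch(console.error);
```
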
+
+### [Connection string](#tab/connection-string)
+
+1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+2. In the `src` folder, create a file called `receivefromsubscription.ts` and paste the following code into it. See code comments for details.
+
+ :::code language="typescript" source="~/azure-typescript-e2e-apps/quickstarts/service-bus/ts/src/topic-connection-string-receive.ts" :::
+
+3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+6. Then run the command in a command prompt to execute this file.
+
+ ```console
+ npm run build
+ node dist/receivefromsubscription.js
+ ```
+++
+You should see the following output.
+
+```console
+Received message: Albert Einstein
+Received message: Werner Heisenberg
+Received message: Marie Curie
+Received message: Stephen Hawking
+Received message: Isaac Newton
+Received message: Niels Bohr
+Received message: Michael Faraday
+Received message: Galileo Galilei
+Received message: Johannes Kepler
+Received message: Nikolaus Kopernikus
+```
+
+In the Azure portal, navigate to your Service Bus namespace, switch to **Topics** in the bottom pane, and select your topic to see the **Service Bus Topic** page for your topic. On this page, you should see 10 incoming and 10 outgoing messages in the **Messages** chart.
++
+If you run only the send app next time, on the **Service Bus Topic** page, you see 20 incoming messages (10 new) but 10 outgoing messages.
++
+On this page, if you select a subscription in the bottom pane, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are 10 active messages that haven't been received by a receiver yet.
++
+## Troubleshooting
+
+If you receive an error about required claims when running the **passwordless** version of the TypeScript code, make sure you're signed in via the Azure CLI command `az login`, and that the [appropriate role](#azure-built-in-roles-for-azure-service-bus) is applied to your Azure user account.
+
+## Clean up resources
+
+Navigate to your Service Bus namespace in the Azure portal, and select **Delete** on the Azure portal to delete the namespace and the topic in it.
+
+## Next steps
+See the following documentation and samples:
+
+- [Azure Service Bus client library for JavaScript](https://www.npmjs.com/package/@azure/service-bus)
+- [JavaScript samples](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [TypeScript samples](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+- [API reference documentation](/javascript/api/overview/azure/service-bus)
static-web-apps Bitbucket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/bitbucket.md
Previously updated : 03/31/2021 Last updated : 04/24/2024
This article uses a GitHub repository as the source to import code into a Bitbuc
1. Next to the *Project* label, select **Create new project**. 1. Enter **MyStaticWebApp**.
-2. Select **Import repository** and wait a moment while the website creates your repository.
+1. Select **Import repository** and wait a moment while the website creates your repository.
### Set main branch
From time to time the template repository has more than one branch. Use the fol
1. Expand the **Advanced** section. 1. Under the *Main branch* label, ensure **main** is selected in the drop down. 1. If you made a change, select **Save changes**.
-2. Select **Back**.
+1. Select **Back**.
## Create a static web app
Now that the repository is created, you can create a static web app from the Azu
1. Select **Review + create**. 1. Select **Create**.
-2. Select **Go to resource**.
-3. Select **Manage deployment token**.
-4. Copy the deployment token value and set it aside in an editor for later use.
-5. Select **Close** on the *Manage deployment token* window.
+1. Select **Go to resource**.
+1. Select **Manage deployment token**.
+1. Copy the deployment token value and set it aside in an editor for later use.
+1. Select **Close** on the *Manage deployment token* window.
## Create the pipeline task in Bitbucket
Now that the repository is created, you can create a static web app from the Azu
1. Ensure the **main** branch is selected in the branch drop down. 1. Select **Pipelines**. 1. Select text link **Create your first pipeline**.
-2. On the *Starter pipeline* card, select **Select**.
-3. Enter the following YAML into the configuration file.
+1. On the *Starter pipeline* card, select **Select**.
+1. Enter the following YAML into the configuration file.
# [No Framework](#tab/vanilla-javascript)
Now that the repository is created, you can create a static web app from the Azu
API_TOKEN: $deployment_token ```
+ > [!NOTE]
+ > If you are using these instructions with your own code and Angular 17 or above, the output location value needs to end with **/browser**.
+ # [Blazor](#tab/blazor) ```yml
static-web-apps Deploy Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-angular.md
Previously updated : 08/02/2023 Last updated : 07/24/2024 zone_pivot_groups: devops-or-github
This article uses a GitHub template repository to make it easy for you to get st
This article uses an Azure DevOps repository to make it easy for you to get started. The repository features a starter app used to deploy using Azure Static Web Apps. 1. Sign in to Azure DevOps.
-2. Select **New repository**.
-3. In the *Create new project* window, expand **Advanced** menu and make the following selections:
+1. Select **New repository**.
+1. In the *Create new project* window, expand the **Advanced** menu, and make the following selections:
| Setting | Value | |--|--| | Project | Enter **my-first-web-static-app**. | | Visibility | Select **Private**. |
- | Version control | Select **Git**. |
+ | Version control | Select **Git**. |
| Work item process | Select the option that best suits your development methods. |
-4. Select **Create**.
-5. Select the **Repos** menu item.
-6. Select the **Files** menu item.
-7. Under the *Import repository* card, select **Import**.
-8. Copy a repository URL for the framework of your choice, and paste it into the *Clone URL* box.
+1. Select **Create**.
+1. Select the **Repos** menu item.
+1. Select the **Files** menu item.
+1. Under the *Import repository* card, select **Import**.
+1. Copy a repository URL for the framework of your choice, and paste it into the *Clone URL* box.
[https://github.com/staticwebdev/angular-basic.git](https://github.com/staticwebdev/angular-basic.git)
-9. Select **Import** and wait for the import process to complete.
+1. Select **Import** and wait for the import process to complete.
::: zone-end
In the _Build Details_ section, add configuration details specific to your prefe
1. Leave the _Api location_ box empty.
-1. Type **dist/angular-basic** in the _App artifact location_ box.
+1. Type **dist/angular-basic** in the _Output location_ box.
+
+> [!NOTE]
+> If you are using these instructions with your own code and Angular 17 or above, the output location value needs to end with **/browser**.
Select **Review + create**.
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
In the _Build Details_ section, add configuration details specific to your prefe
1. Leave the _Api location_ box empty.
-1. Leave the _App artifact location_ box empty.
+1. Leave the _Output location_ box empty.
Select **Review + create**.
static-web-apps Deploy Vue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-vue.md
This article uses an Azure DevOps repository to make it easy for you to get star
1. Sign in to Azure DevOps. 2. Select **New repository**.
-3. In the *Create new project* window, expand **Advanced** menu and make the following selections:
+3. In the *Create new project* window, expand the **Advanced** menu, and make the following selections:
| Setting | Value | |--|--| | Project | Enter **my-first-web-static-app**. | | Visibility | Select **Private**. |
- | Version control | Select **Git**. |
+ | Version control | Select **Git**. |
| Work item process | Select the option that best suits your development methods. | 4. Select **Create**.
In the _Build Details_ section, add configuration details specific to your prefe
1. Leave the _Api location_ box empty.
-1. Keep the default value in the _App artifact location_ box.
+1. Keep the default value in the _Output location_ box.
Select **Review + create**.
static-web-apps Named Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/named-environments.md
Previously updated : 04/27/2022 Last updated : 07/24/2023
You can configure your site to deploy every change to a named environment. This
## Configuration
-To enable stable URL environments with named deployment environment, make the following changes to your [configuration file](configuration.md).
+To enable stable URL environments with named deployment environment, make the following changes to your build configuration file.
- Set the `deployment_environment` input to a specific name on the `static-web-apps-deploy` job in GitHub action or on the AzureStaticWebApp task. This ensures all changes to your tracked branches are deployed to the named preview environment. - List the branches you want to deploy to preview environments in the trigger array in your workflow configuration so that changes to those branches also trigger the GitHub Actions or Azure Pipelines deployment.
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
Previously updated : 03/17/2021 Last updated : 07/23/2024
The network routing preference specifies how network traffic is routed to your a
By default, the routing preference for the public endpoint of the storage account is set to Microsoft global network. You can choose between the Microsoft global network and Internet routing as the default routing preference for the public endpoint of your storage account. To learn more about the difference between these two types of routing, see [Network routing preference for Azure Storage](network-routing-preference.md).
+> [!WARNING]
+> If your storage account contains or will contain Azure file shares, don't change your routing preference to Internet routing. The default option, Microsoft routing, works with all Azure Files configurations. The Internet routing option doesn't support AD domain join scenarios or Azure File Sync.
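If you do want Internet routing (and the warning above doesn't apply), the following is a minimal PowerShell sketch, assuming the Az.Storage module's `-RoutingChoice` parameter on `Set-AzStorageAccount`; resource names are placeholders.

```powershell
# Minimal sketch: switch the storage account's public endpoint to Internet routing.
# -RoutingChoice accepts MicrosoftRouting (the default) or InternetRouting.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -RoutingChoice InternetRouting
```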
+ ### [Portal](#tab/azure-portal) To change your routing preference to Internet routing:
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
When a SAS expiration policy is in effect for the storage account, the signed st
## Configure a SAS expiration policy
-When you configure a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
+When you configure a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key or with a user delegation key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
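As a rough illustration of configuring such a policy, here's a minimal PowerShell sketch, assuming the Az.Storage module's `-SasExpirationPeriod` parameter on `Set-AzStorageAccount`; resource names and the retention interval are placeholders.

```powershell
# Minimal sketch: set a SAS expiration policy of one day on a storage account.
# Service SAS and account SAS tokens signed with the account key are then expected
# to be flagged when their validity period exceeds this recommended upper limit.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -SasExpirationPeriod "1.00:00:00"   # days.hours:minutes:seconds
```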
### Do I need to rotate the account access keys first?
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
description: Learn about file shares hosted in Azure Files using the Network Fil
Previously updated : 06/11/2024 Last updated : 07/10/2024
The status of items that appear in this table might change over time as support
| Root squash| ✔️ | | Access same data from Windows and Linux client| ⛔ | | [Identity-based authentication](storage-files-active-directory-overview.md) | ⛔ |
-| [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | ✔️ (preview) |
+| [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | ✔️ |
| [Azure File Sync](../file-sync/file-sync-introduction.md)| ⛔ | | [Azure file share backups](../../backup/azure-file-share-backup-overview.md)| ⛔ | | [Azure file share snapshots](storage-snapshots-files.md)| ✔️ |
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 04/12/2024 Last updated : 07/23/2024
Azure Files and Azure File Sync are updated regularly to offer new features and
## What's new in 2024
+### 2024 quarter 3 (July, August, September)
+
+#### Soft delete for NFS Azure file shares is generally available
+
+Soft delete protects your Azure file shares from accidental deletion. The feature has been available for SMB Azure file shares for some time, and is now generally available for NFS Azure file shares. For more information, [read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/soft-delete-for-nfs-azure-file-shares-is-now-generally-available/ba-p/4162222).
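As a hedged sketch of turning the feature on for a storage account (whether its shares are SMB or NFS), assuming the Az.Storage module's `Update-AzStorageFileServiceProperty` cmdlet; the names and retention value are placeholders.

```powershell
# Minimal sketch: enable share soft delete with a 7-day retention period.
Update-AzStorageFileServiceProperty -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -EnableShareDeleteRetentionPolicy $true `
    -ShareRetentionDays 7
```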
+ ### 2024 quarter 2 (April, May, June) #### Azure Files vaulted backup is now in public preview
storage Storage Files Enable Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-enable-soft-delete.md
description: Learn how to enable soft delete on Azure file shares for data recov
Previously updated : 06/07/2024 Last updated : 07/09/2024
Azure Files offers soft delete for file shares so that you can easily recover yo
|-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) (preview) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
## Prerequisites
storage Storage Files Prevent File Share Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-prevent-file-share-deletion.md
description: Learn about soft delete for Azure Files and how you can use it for
Previously updated : 06/07/2024 Last updated : 07/10/2024
Azure Files offers soft delete, which allows you to recover your file share when
|-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) (preview)|
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)|
## How soft delete works
stream-analytics Event Ordering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-ordering.md
Your input source (Event Hub/IoT Hub) likely has multiple partitions. Azure Stre
When multiple partitions from the same input stream are combined, the late arrival tolerance is the maximum amount of time that every partition waits for new data. If there's one partition in your event hub or if IoT Hub doesn't receive inputs, the timeline for that partition doesn't progress until it reaches the late arrival tolerance threshold. This delays your output by the late arrival tolerance threshold. In such cases, you may see the following message: <br><code>
-{"message Time":"2/3/2019 8:54:16 PM UTC","message":"Input Partition [2] does not have additional data for more than [5] minute(s). Partition will not progress until either events arrive or late arrival threshold is met.","type":"InputPartitionNotProgressing","correlation ID":"2328d411-52c7-4100-ba01-1e860c757fc2"}
+{"message Time":"2/3/2019 8:54:16 PM UTC","message":"Input Partition [2] does not have additional data for more than [5] minute(s). Partition will not progress until either events arrive or late arrival threshold is met.","type":"InputPartitionNotProgressing","correlation ID":"0000000000-0000-0000-0000-00000000000000"}
</code><br><br> This message informs you that at least one partition in your input is empty and will delay your output by the late arrival threshold. To overcome this, it's recommended that you either: 1. Ensure all partitions of your Event Hub/IoT Hub receive input.
synapse-analytics Clone Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/clone-lake-database.md
Title: Clone a lake database using the database designer. description: Learn how to clone an entire lake database or specific tables within a lake database using the database designer.-+
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
Title: "Synapse implementation success methodology: Assess environment" description: "Learn how to assess your environment to help evaluate the solution design and make informed technology decisions to implement Azure Synapse Analytics."-+
synapse-analytics Implementation Success Evaluate Data Integration Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-data-integration-design.md
Title: "Synapse implementation success methodology: Evaluate data integration design" description: "Learn how to evaluate the data integration design and validate that it meets guidelines and requirements."-+
synapse-analytics Implementation Success Evaluate Dedicated Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-dedicated-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate dedicated SQL pool design" description: "Learn how to evaluate your dedicated SQL pool design to identify issues and validate that it meets guidelines and requirements."-+
synapse-analytics Implementation Success Evaluate Project Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-project-plan.md
Title: "Synapse implementation success methodology: Evaluate project plan" description: "Learn how to evaluate your modern data warehouse project plan before the project starts."-+
synapse-analytics Implementation Success Evaluate Serverless Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-serverless-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate serverless SQL pool design" description: "Learn how to evaluate your serverless SQL pool design to identify issues and validate that it meets guidelines and requirements."-+
synapse-analytics Implementation Success Evaluate Solution Development Environment Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-solution-development-environment-design.md
Title: "Synapse implementation success methodology: Evaluate solution development environment design" description: "Learn how to set up multiple environments for your modern data warehouse project to support development, testing, and production."-+
synapse-analytics Implementation Success Evaluate Spark Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-spark-pool-design.md
Title: "Synapse implementation success methodology: Evaluate Spark pool design" description: "Learn how to evaluate your Spark pool design to identify issues and validate that it meets guidelines and requirements."-+
synapse-analytics Implementation Success Evaluate Team Skill Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-team-skill-sets.md
Title: "Synapse implementation success methodology: Evaluate team skill sets" description: "Learn how to evaluate your team of skilled resources that will implement your Azure Synapse solution."-+
synapse-analytics Implementation Success Evaluate Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-workspace-design.md
Title: "Synapse implementation success methodology: Evaluate workspace design" description: "Learn how to evaluate the Synapse workspace design and validate that it meets guidelines and requirements."-+
synapse-analytics Implementation Success Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-overview.md
Title: Azure Synapse implementation success by design description: "Learn about the Azure Synapse success series of articles that's designed to help you deliver a successful implementation of Azure Synapse Analytics."-+
synapse-analytics Implementation Success Perform Monitoring Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-monitoring-review.md
Title: "Synapse implementation success methodology: Perform monitoring review" description: "Learn how to perform monitoring of your Azure Synapse solution."-+
synapse-analytics Implementation Success Perform Operational Readiness Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-operational-readiness-review.md
Title: "Synapse implementation success methodology: Perform operational readiness review" description: "Learn how to perform an operational readiness review to evaluate your solution for its preparedness to provide optimal services to users."-+
synapse-analytics Implementation Success Perform User Readiness And Onboarding Plan Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-user-readiness-and-onboarding-plan-review.md
Title: "Synapse implementation success methodology: Perform user readiness and onboarding plan review" description: "Learn how to perform user readiness and onboarding of new users to ensure successful adoption of your data warehouse."-+
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
Title: "Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for dedicated SQL pool."-+
synapse-analytics Proof Of Concept Playbook Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-overview.md
Title: Azure Synapse proof of concept playbook description: "Introduction to a series of articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics proof of concept project."-+
synapse-analytics Proof Of Concept Playbook Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-serverless-sql-pool.md
Title: "Synapse POC playbook: Data lake exploration with serverless SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for serverless SQL pool."-+
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
Title: "Synapse POC playbook: Big data analytics with Apache Spark pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Apache Spark pool."-+
synapse-analytics Security White Paper Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-access-control.md
Title: "Azure Synapse Analytics security white paper: Access control" description: Use different approaches or a combination of techniques to control access to data with Azure Synapse Analytics.-+
synapse-analytics Security White Paper Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-authentication.md
Title: "Azure Synapse Analytics security white paper: Authentication" description: Implement authentication mechanisms with Azure Synapse Analytics.-+
synapse-analytics Security White Paper Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-data-protection.md
Title: "Azure Synapse Analytics security white paper: Data protection" description: Protect data to comply with federal, local, and company guidelines with Azure Synapse Analytics.-+
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-introduction.md
Title: Azure Synapse Analytics security white paper description: Overview of the Azure Synapse Analytics security white paper series of articles.-+
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-network-security.md
Title: "Azure Synapse Analytics security white paper: Network security" description: Manage secure network access with Azure Synapse Analytics.-+
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
Title: "Azure Synapse Analytics security white paper: Threat detection" description: Audit, protect, and monitor Azure Synapse Analytics.-+
synapse-analytics Success By Design Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/success-by-design-introduction.md
Title: Success by design description: Azure Synapse Customer Success Engineering Success by Design repository.-+
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
Title: SynapseML and its use in Azure Synapse Analytics. description: Learn about the SynapseML library and how it simplifies the creation of massively scalable machine learning (ML) pipelines in Azure Synapse Analytics.-+
synapse-analytics Quickstart Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-azure-data-explorer.md
Title: 'Quickstart: Connect Azure Data Explorer to an Azure Synapse Analytics workspace' description: Connect an Azure Data Explorer cluster to an Azure Synapse Analytics workspace by using Apache Spark for Azure Synapse Analytics.-+
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
Title: Get started with Azure Synapse Link for Azure SQL Database description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link.-+
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
Title: Create Azure Synapse Link for SQL Server 2022 description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace by using Azure Synapse Link.-+
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
Title: Azure Synapse Link for Azure SQL Database description: Learn about Azure Synapse Link for Azure SQL Database, the link connection, and monitoring the Synapse Link.-+
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
Title: Azure Synapse Link for SQL Server 2022 description: Learn about Azure Synapse Link for SQL Server 2022, the link connection, landing zone, Self-hosted integration runtime, and monitoring the Azure Synapse Link for SQL.-+
synapse-analytics Sql Synapse Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md
Title: What is Azure Synapse Link for SQL? description: Learn about Azure Synapse Link for SQL, the benefits it offers, and price.-+
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/cli-samples.md
- Title: Azure CLI Samples for Traffic Manager
-description: Learn about an Azure CLI script you can use to direct traffic across multiple regions for high application availability.
---- Previously updated : 10/23/2018----
-# Azure CLI samples for Traffic Manager
-
-The following table includes links to bash scripts for Traffic Manager built using the Azure CLI.
-
-|Title |Description |
-|||
-|[Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-cli-websites-high-availability.md) | Creates two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. |
traffic-manager Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
- Title: Route traffic for HA of applications - Azure PowerShell - Traffic Manager
-description: Azure PowerShell script sample - Route traffic for high availability of applications
---
-tags: azure-infrastructure
-- Previously updated : 04/27/2023----
-# Route traffic for high availability of applications using Azure PowerShell
-
-This script creates a resource group, two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. Traffic Manager directs traffic to the application in one region as the primary region, and to the secondary region when the application in the primary region is unavailable. Before executing the script, you must change the MyWebApp, MyWebAppL1 and MyWebAppL2 values to unique values across Azure. After running the script, you can access the app in the primary region with the URL mywebapp.trafficmanager.net.
-
-If needed, install the Azure PowerShell using the instruction found in the [Azure PowerShell guide](/powershell/azure), and then run `Connect-AzAccount` to create a connection with Azure.
--
-## Sample script
--
-[!code-powershell[main](../../../powershell_scripts/traffic-manager/direct-traffic-for-increased-application-availability/direct-traffic-for-increased-application-availability.ps1 "Route traffic for high availability")]
--
-Run the following command to remove the resource group, VM, and all related resources.
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup1
-Remove-AzResourceGroup -Name myResourceGroup2
-```
--
-## Script explanation
-
-This script uses the following commands to create a resource group, web app, traffic manager profile, and all related resources. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzAppServicePlan](/powershell/module/az.websites/new-azappserviceplan) | Creates an App Service plan. This is like a server farm for your Azure web app. |
-| [New-AzWebApp](/powershell/module/az.websites/new-azwebapp) | Creates an Azure web app within the App Service plan. |
-| [Set-AzResource](/powershell/module/az.resources/new-azresource) | Creates an Azure web app within the App Service plan. |
-| [New-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/new-aztrafficmanagerprofile) | Creates an Azure Traffic Manager profile. |
-| [New-AzTrafficManagerEndpoint](/powershell/module/az.trafficmanager/new-aztrafficmanagerendpoint) | Adds an endpoint to an Azure Traffic Manager Profile. |
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-Additional networking PowerShell script samples can be found in the [Azure Networking Overview documentation](../powershell-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
You can update the zones parameter and the scale set capacity in the same ARM te
When you are satisfied that the new instances are ready, scale in your scale set to remove the original regional instances. You can either manually delete the specific regional instances, or scale in by reducing the scale set capacity. When scaling in via reducing scale set capacity, the platform will always prefer removing the regional instances, then follow the scale in policy.
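For example, scaling in by reducing capacity can be done with a short PowerShell sketch, assuming the Az.Compute module's `-SkuCapacity` parameter on `Update-AzVmss`; names and the target capacity are placeholders.

```powershell
# Minimal sketch: reduce the scale set capacity; per the guidance above, the
# platform is expected to prefer removing the remaining regional instances first.
Update-AzVmss -ResourceGroupName "myResourceGroup" `
    -VMScaleSetName "myScaleSet" `
    -SkuCapacity 3
```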
-#### Automate with Rolling upgrades + MaxSurge
-
-With [Rolling upgrades + MaxSurge](virtual-machine-scale-sets-upgrade-policy.md), new zonal instances are created and brought up-to-date with the latest scale model in batches. Once a batch of new instances is added to the scale set and report as healthy, a batch of old instances are automated removed from the scale set. Upgrades continue until all instances are brought up-to-date.
-
-> [!IMPORTANT]
-> Rolling upgrades with MaxSurge is currently under Public Preview. It is only available for VMSS Uniform Orchestration Mode.
- ### Known issues and limitations * The feature is targeted to stateless workloads on Virtual Machine Scale Sets.
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
az vm list-skus --resource-type virtualMachines --location $region --query "[?na
```powershell # Example value is southeastasia
-region = "<yourLocation>"
+$region = "<yourLocation>"
# Example value is Standard_E64s_v3
-vmSize = "<yourVMSize>"
-$sku = (Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) -and $_.LocationInfo[0].ZoneDetails.Count -gt 0})
+$vmSize = "<yourVMSize>"
+$sku = (Get-AzComputeResourceSku | where {$_.Locations -icontains($region) -and ($_.Name -eq $vmSize) -and $_.LocationInfo[0].ZoneDetails.Count -gt 0})
if($sku){$sku[0].LocationInfo[0].ZoneDetails} Else {Write-host "$vmSize is not supported with Ultra Disk in $region region"} ```
az vm list-skus --resource-type virtualMachines --location $region --query "[?na
```powershell # Example value is westus
-region = "<yourLocation>"
+$region = "<yourLocation>"
# Example value is Standard_E64s_v3
-vmSize = "<yourVMSize>"
-(Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) })[0].Capabilities
+$vmSize = "<yourVMSize>"
+(Get-AzComputeResourceSku | where {$_.Locations -icontains($region) -and ($_.Name -eq $vmSize) })[0].Capabilities
``` The response will be similar to the following form, `UltraSSDAvailable True` indicates whether the VM size supports Ultra Disks in this region.
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
For more information about enabling the NVMe interface on virtual machines creat
## Supported Windows OS images -- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore) - [Azure portal - Plan ID: 2019-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore2019-datacenter-core-smalldisk-g2)-- [Azure portal - Plan ID: 2019 datacenter-core](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore) - [Azure portal - Plan ID: 2019-datacenter-core-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore2019-datacenter-core-g2)-- [Azure portal - Plan ID: 2019-datacenter-core-with-containers-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCorewithContainers) - [Azure portal - Plan ID: 2019-datacenter-core-with-containers-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCorewithContainers2019-datacenter-core-with-containers-smalldisk-g2)-- [Azure portal - Plan ID: 2019-datacenter-with-containers-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterwithContainers2019-datacenter-with-containers-smalldisk-g2)-- [Azure portal - Plan ID: 2019-datacenter-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019Datacenter)
+- [Azure portal - Plan ID: 2019-datacenter-with-containers-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterwithContainers2019-datacenter-with-containers-smalldisk-g2)
- [Azure portal - Plan ID: 2019-datacenter-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019Datacenter2019-datacenter-smalldisk-g2)-- [Azure portal - Plan ID: 2019-datacenter-zhcn](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenterzhcn) - [Azure portal - Plan ID: 2019-datacenter-zhcn-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenterzhcn2019-datacenter-zhcn-g2)-- [Azure portal - Plan ID: 2019-datacenter-core-with-containers](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCorewithContainers) - [Azure portal - Plan ID: 2019-datacenter-core-with-containers-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCorewithContainers2019-datacenter-core-with-containers-g2)-- [Azure portal - Plan ID: 2019-datacenter-with-containers](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterwithContainers) - [Azure portal - Plan ID: 2019-datacenter-with-containers-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterwithContainers2019-datacenter-with-containers-g2)-- [Azure portal - Plan ID: 2019-datacenter](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenter) - [Azure portal - Plan ID: 2019-datacenter-gensecond](https://portal.azure.com/#create/Microsoft.WindowsServer2019Datacenter2019-datacenter-gensecond)-- [Azure portal - Plan ID: 2022-datacenter-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core) - [Azure portal - Plan ID: 2022-datacenter-core-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-g2)-- [Azure portal - Plan ID: 2022-datacenter-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-smalldisk) - [Azure portal - Plan ID: 2022-datacenter-smalldisk-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-smalldisk-g2)-- [Azure portal - Plan ID: 2022-datacenter](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter) - [Azure portal - Plan ID: 2022-datacenter-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-g2)-- [Azure portal - Plan ID: 2022-datacenter-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-smalldisk) - [Azure portal - Plan ID: 2022-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-core-smalldisk-g2)-- [Azure portal - Plan ID: 2022-datacenter-azure-edition-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-smalldisk) - [Azure portal - Plan ID: 2022-datacenter-azure-edition](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition) - [Azure portal - Plan ID: 2022-datacenter-azure-edition-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core) - [Azure portal - Plan ID: 2022-datacenter-azure-edition-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core-smalldisk)
virtual-machines Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/guest-configuration.md
Title: Azure Automanage machine configuration (guest configuration)
-description: Learn about the machine configuration extension feature of Azure Automanage, and audit and configure settings for Azure virtual machines.
+ Title: Azure Machine Configuration (guest configuration)
+description: Learn about the Machine Configuration extension, and audit and configure settings for Azure virtual machines.
Last updated 04/05/2023
-# Azure Automanage machine configuration extension
+# Azure Machine Configuration extension
-The machine configuration extension is a feature of Azure Automanage that performs audit and configuration operations inside virtual machines (VMs).
+The Machine Configuration extension performs audit and configuration operations inside virtual machines (VMs).
-To check policies inside VMs, such as Azure compute security baseline definitions for [Linux](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) and [Windows](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc), the machine configuration extension must be installed.
+To check policies inside VMs, such as Azure compute security baseline definitions for [Linux](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) and [Windows](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc), the Machine Configuration extension must be installed.
## Prerequisites
-To enable your VM to authenticate to the machine configuration service, your VM must have a [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview). You can satisfy the identity requirement for your VM by setting the `"type": "SystemAssigned"` property:
+To enable your VM to authenticate to the Machine Configuration service, your VM must have a [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview). You can satisfy the identity requirement for your VM by setting the `"type": "SystemAssigned"` property:
```json "identity": {
To enable your VM to authenticate to the machine configuration service, your VM
### Operating systems
-Operating system support for the machine configuration extension is the same as documented [operating system support for the end-to-end solution](/azure/governance/machine-configuration/overview#supported-client-types).
+Operating system support for the Machine Configuration extension is the same as documented [operating system support for the end-to-end solution](/azure/governance/machine-configuration/overview#supported-client-types).
### Internet connectivity
-The agent installed by the machine configuration extension must be able to reach content packages listed by guest configuration assignments,
-and report status to the machine configuration service. The VM can connect by using outbound HTTPS over TCP port 443, or a connection provided through private networking.
+The agent installed by the Machine Configuration extension must be able to reach content packages listed by guest configuration assignments,
+and report status to the Machine Configuration service. The VM can connect by using outbound HTTPS over TCP port 443, or a connection provided through private networking.
To learn more about private networking, see the following articles: -- [Azure Automanage machine configuration, Communicate over Azure Private Link](/azure/governance/machine-configuration/overview#communicate-over-private-link-in-azure)
+- [Azure Machine Configuration, Communicate over Azure Private Link](/azure/governance/machine-configuration/overview#communicate-over-private-link-in-azure)
- [Use private endpoints for Azure Storage](/azure/storage/common/storage-private-endpoints) ## Install the extension
-You can install and deploy the machine configuration extension directly from the Azure CLI or PowerShell. Deployment templates are also available for Azure Resource Manager (ARM), Bicep, and Terraform. For deployment template details, see [Microsoft.GuestConfiguration guestConfigurationAssignments](/azure/templates/microsoft.guestconfiguration/guestconfigurationassignments?pivots=deployment-language-arm-template).
+You can install and deploy the Machine Configuration extension directly from the Azure CLI or PowerShell. Deployment templates are also available for Azure Resource Manager (ARM), Bicep, and Terraform. For deployment template details, see [Microsoft.GuestConfiguration guestConfigurationAssignments](/azure/templates/microsoft.guestconfiguration/guestconfigurationassignments?pivots=deployment-language-arm-template).
> [!NOTE] > In the following deployment examples, replace `<placeholder>` parameter values with specific values for your configuration. ### Deployment considerations
-Before you install and deploy the machine configuration extension, review the following considerations.
+Before you install and deploy the Machine Configuration extension, review the following considerations.
-- **Instance name**. When you install the machine configuration extension, the instance name of the extension must be set to `AzurePolicyforWindows` or `AzurePolicyforLinux`. The security baseline definition policies described earlier require these specific strings.
+- **Instance name**. When you install the Machine Configuration extension, the instance name of the extension must be set to `AzurePolicyforWindows` or `AzurePolicyforLinux`. The security baseline definition policies described earlier require these specific strings.
-- **Versions**. By default, all deployments update to the latest version. The value of the `autoUpgradeMinorVersion` property defaults to `true` unless otherwise specified. This feature helps to alleviate concerns about updating your code when new versions of the machine configuration extension are released.
+- **Versions**. By default, all deployments update to the latest version. The value of the `autoUpgradeMinorVersion` property defaults to `true` unless otherwise specified. This feature helps to alleviate concerns about updating your code when new versions of the Machine Configuration extension are released.
-- **Automatic upgrade**. The machine configuration extension supports the `enableAutomaticUpgrade` property. When this property is set to `true`, Azure automatically upgrades to the latest version of the extension as future releases become available. For more information, see [Automatic Extension Upgrade for VMs and Virtual Machine Scale Sets in Azure](/azure/virtual-machines/automatic-extension-upgrade).
+- **Automatic upgrade**. The Machine Configuration extension supports the `enableAutomaticUpgrade` property. When this property is set to `true`, Azure automatically upgrades to the latest version of the extension as future releases become available. For more information, see [Automatic Extension Upgrade for VMs and Virtual Machine Scale Sets in Azure](/azure/virtual-machines/automatic-extension-upgrade).
-- **Azure Policy**. To deploy the latest version of the machine configuration extension at scale including identity requirements, follow the steps in [Create a policy assignment to identify noncompliant resources](/azure/governance/policy/assign-policy-portal#create-a-policy-assignment). Create the following assignment with Azure Policy:
+- **Azure Policy**. To deploy the latest version of the Machine Configuration extension at scale including identity requirements, follow the steps in [Create a policy assignment to identify noncompliant resources](/azure/governance/policy/assign-policy-portal#create-a-policy-assignment). Create the following assignment with Azure Policy:
- [Deploy prerequisites to enable Guest Configuration policies on virtual machines](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policySetDefinitions/Guest%20Configuration/Prerequisites.json) -- **Other properties**. You don't need to include any settings or protected-settings properties on the machine configuration extension. The agent retrieves this class of information from the Azure REST API [Guest Configuration assignment](/rest/api/guestconfiguration/guestconfigurationassignments) resources. For example, the [`ConfigurationUri`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#guestconfigurationnavigation), [`Mode`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#configurationmode), and [`ConfigurationSetting`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#configurationsetting) properties are each managed per-configuration rather than on the VM extension.
+- **Other properties**. You don't need to include any settings or protected-settings properties on the Machine Configuration extension. The agent retrieves this class of information from the Azure REST API [Guest Configuration assignment](/rest/api/guestconfiguration/guestconfigurationassignments) resources. For example, the [`ConfigurationUri`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#guestconfigurationnavigation), [`Mode`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#configurationmode), and [`ConfigurationSetting`](/rest/api/guestconfiguration/guestconfigurationassignments/createorupdate#configurationsetting) properties are each managed per-configuration rather than on the VM extension.
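To ground the considerations above, here's a minimal PowerShell sketch of installing the extension on a Windows VM with the required instance name, assuming the Az.Compute `Set-AzVMExtension` cmdlet and the `Microsoft.GuestConfiguration` handler names; resource names and location are placeholders.

```powershell
# Minimal sketch: install the Machine Configuration (guest configuration) extension
# on a Windows VM. For Linux, the assumed type and name would be ConfigurationforLinux
# and AzurePolicyforLinux respectively.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "eastus" `
    -Publisher "Microsoft.GuestConfiguration" `
    -ExtensionType "ConfigurationforWindows" `
    -Name "AzurePolicyforWindows" `
    -TypeHandlerVersion "1.0" `
    -EnableAutomaticUpgrade $true
```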
### Azure CLI
The following table lists possible error messages related to enabling the Guest
| Error code | Description | ||| | **NoComplianceReport** | The VM hasn't reported the compliance data. |
-| **GCExtensionMissing** | The machine configuration (guest configuration) extension is missing. |
+| **GCExtensionMissing** | The Machine Configuration (guest configuration) extension is missing. |
| **ManagedIdentityMissing** | The managed identity is missing. | | **UserIdentityMissing** | The user-assigned identity is missing. |
-| **GCExtensionManagedIdentityMissing** | The machine configuration (guest configuration) extension and managed identity are missing. |
-| **GCExtensionUserIdentityMissing** | The machine configuration (guest configuration) extension and user-assigned identity are missing. |
-| **GCExtensionIdentityMissing** | The machine configuration (guest configuration) extension, managed identity, and user-assigned identity are missing. |
+| **GCExtensionManagedIdentityMissing** | The Machine Configuration (guest configuration) extension and managed identity are missing. |
+| **GCExtensionUserIdentityMissing** | The Machine Configuration (guest configuration) extension and user-assigned identity are missing. |
+| **GCExtensionIdentityMissing** | The Machine Configuration (guest configuration) extension, managed identity, and user-assigned identity are missing. |
## Next steps -- For more information about the machine configuration extension, see [Understand the machine configuration feature of Azure Automanage](/azure/governance/machine-configuration/overview).
+- For more information about the Machine Configuration extension, see [Understand Azure Machine Configuration](/azure/governance/machine-configuration/overview).
- For more information about how the Linux Agent and extensions work, see [Virtual machine extensions and features for Linux](features-linux.md). - For more information about how the Windows Guest Agent and extensions work, see [Virtual machine extensions and features for Windows](features-windows.md). - To install the Windows Guest Agent, see [Azure Virtual Machine Agent overview](agent-windows.md).
virtual-machines Dpdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpdsv6-series.md
Title: Dpdsv6 size series description: Information on and specifications of the Dpdsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Dpdsv6 sizes series
virtual-machines Dpldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpldsv6-series.md
Title: Dpldsv6 size series description: Information on and specifications of the Dpldsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Dpldsv6 sizes series
virtual-machines Dplsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dplsv6-series.md
Title: Dplsv6 size series description: Information on and specifications of the Dplsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Dplsv6 sizes series
virtual-machines Dpsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpsv6-series.md
Title: Dpsv6 size series description: Information on and specifications of the Dpsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Dpsv6 sizes series
virtual-machines Epdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epdsv6-series.md
Title: Epdsv6 size series description: Information on and specifications of the Epdsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Epdsv6 sizes series
virtual-machines Epsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epsv6-series.md
Title: Epsv6 size series description: Information on and specifications of the Epsv6-series sizes-+ - build-2024 Last updated 07/22/2024--++ # Epsv6 sizes series
virtual-machines Mbsv3 Mbdsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/mbsv3-mbdsv3-series.md
+
+ Title: Overview of Mbsv3 and Mbdsv3 Series
+description: Overview of Mbsv3 and Mbdsv3 virtual machines
+++
+# ms.prod: sizes
+ Last updated : 07/15/2024++
+# Mbsv3 and Mbdsv3 Series (Public Preview)
++++
+The storage optimized Mbv3 VM series (Mbsv3 and Mbdsv3) is based on 4th generation Intel® Xeon® Scalable processors and delivers higher remote disk storage performance. These new VM sizes offer up to 650,000 IOPS and 10 GBps of remote disk storage throughput with Premium SSD v2/Ultra Disk, and up to 4 TB of RAM.
+
+The increased remote storage performance of these VMs is ideal for storage throughput-intensive workloads such as relational databases and data analytics applications.
+
+## Mbsv3 series
+
+| **Size** | **vCPU** | **Memory: GiB** | **Max data disks** | **Max uncached Premium SSD throughput: IOPS/MBps** | **Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Max network bandwidth (Mbps)** |
+|||||||||
+| **Standard_M16bs_v3** | 16 | 128 | 64 | 44,000/1,000 | 64,000/1,000 | 8 | 8,000 |
+| **Standard_M32bs_v3** | 32 | 256 | 64 | 88,000/2,000 | 88,000/2,000 | 8 | 16,000 |
+| **Standard_M48bs_v3** | 48 | 384 | 64 | 88,000/2,000 | 120,000/2,000 | 8 | 16,000 |
+| **Standard_M64bs_v3** | 64 | 512 | 64 | 88,000/2,000 | 160,000/2,000 | 8 | 16,000 |
+| **Standard_M96bs_v3** | 96 | 768 | 64 | 260,000/4,000 | 260,000/4,000 | 8 | 25,000 |
+| **Standard_M128bs_v3** | 128 | 1024 | 64 | 260,000/4,000 | 400,000/4,000 | 8 | 40,000 |
+| **Standard_M176bs_v3** | 176 | 1536 | 64 | 260,000/6,000 | 650,000/6,000 | 8 | 50,000 |
+| **Standard_M176bs_3_v3** | 176 | 2796 | 64 | 260,000/8,000 | 650,000/10,000 | 8 | 40,000 |
+
+## Mbdsv3 series
+
+| **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBps** | **Max uncached Premium SSD throughput: IOPS/MBps** | **Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Max network bandwidth (Mbps)** |
+|||||||||||
+| **Standard_M16bds_v3** | 16 | 128 | 400 | 64 | 10,000/100 | 44,000/1,000 | 64,000/1,000 | 8 | 8,000 |
+| **Standard_M32bds_v3** | 32 | 256 | 400 | 64 | 20,000/200 | 88,000/2,000 | 88,000/2,000 | 8 | 16,000 |
+| **Standard_M48bds_v3** | 48 | 384 | 400 | 64 | 40,000/400 | 88,000/2,000 | 120,000/2,000 | 8 | 16,000 |
+| **Standard_M64bds_v3** | 64 | 512 | 400 | 64 | 40,000/400 | 88,000/2,000 | 160,000/2,000 | 8 | 16,000 |
+| **Standard_M96bds_v3** | 96 | 768 | 400 | 64 | 40,000/400 | 260,000/4,000 | 260,000/4,000 | 8 | 25,000 |
+| **Standard_M128bds_v3** | 128 | 1,024 | 400 | 64 | 160,000/1600 | 260,000/4,000 | 400,000/4,000 | 8 | 40,000 |
+| **Standard_M176bds_v3** | 176 | 1,536 | 400 | 64 | 160,000/1600 | 260,000/6,000 | 650,000/6,000 | 8 | 50,000 |
+| **Standard_M176bds_3_v3** | 176 | 2796 | 400 | 64 | 160,000/1600 | 260,000/8,000 | 650,000/10,000 | 8 | 40,000 |
+| **Standard_M64bds_1_v3** | 64 | 1397 | 3000 | 64 | 40,000/400 | 130,000/6,000 | 160,000/6,000 | 8 | 20,000 |
+| **Standard_M96bds_2_v3** | 96 | 1946 | 4500 | 64 | 40,000/400 | 130,000/8,000 | 260,000/8,000 | 8 | 20,000 |
+| **Standard_M128bds_3_v3** | 128 | 2794 | 6000 | 64 | 160,000/1600 | 260,000/8,000 | 400,000/10,000 | 8 | 40,000 |
+| **Standard_M176bds_4_v3** | 176 | 3892 | 8000 | 64 | 160,000/1600 | 260,000/8,000 | 650,000/10,000 | 8 | 40,000 |
+
+## Size table definitions
+
+- Storage capacity is shown in units of GiB, or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps, where MBps = 10^6 bytes/sec.
+
+- IOPS/MBps listed here refer to uncached mode for data disks.
+
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance).
+
+- The IOPS spec is defined using common small random block sizes like 4 KiB or 8 KiB. Maximum IOPS is defined as "up to" and measured using 4 KiB random read workloads.
+
+- The throughput spec is defined using common large sequential block sizes like 128 KiB or 1024 KiB. Maximum throughput is defined as "up to" and measured using 128 KiB sequential read workloads.
+
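To check whether these sizes are offered in a given region, here's a minimal PowerShell sketch that follows the same `Get-AzComputeResourceSku` pattern used elsewhere in these docs; the region value and name regex are assumptions.

```powershell
# Minimal sketch: list Mbsv3/Mbdsv3 SKUs available in a region, along with any
# subscription restrictions that apply.
$region = "eastus"

Get-AzComputeResourceSku |
    Where-Object { $_.ResourceType -eq "virtualMachines" -and
                   $_.Locations -icontains $region -and
                   $_.Name -match "^Standard_M\d+bd?s(_\d+)?_v3$" } |
    Select-Object Name, @{ n = "Restrictions"; e = { $_.Restrictions.ReasonCode -join "," } }
```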
+## Other sizes and information
+
+[General purpose](/azure/virtual-machines/sizes-general)
+
+[Memory optimized](/azure/virtual-machines/sizes-memory)
+
+[Storage optimized](/azure/virtual-machines/sizes-storage)
+
+[GPU optimized](/azure/virtual-machines/sizes-gpu)
+
+[High performance compute](/azure/virtual-machines/sizes-hpc)
+
+[Previous generations](/azure/virtual-machines/sizes-previous-gen)
+
+[Deploy a Premium SSD v2 managed disk - Azure Virtual Machines | Microsoft Learn](/azure/virtual-machines/disks-deploy-premium-v2)
+
+[Ultra disks for VMs - Azure managed disks - Azure Virtual Machines | Microsoft Learn](/azure/virtual-machines/disks-enable-ultra-ssd)
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Title: Enable Trusted Launch on existing VMs
-description: Learn how to enable Trusted Launch on existing Azure virtual machines (VMs).
+ Title: Enable Trusted launch on existing VMs
+description: Learn how to enable Trusted launch on existing Azure virtual machines (VMs).
Last updated 08/13/2023
-# Enable Trusted Launch on existing Azure VMs
+# Enable Trusted launch on existing Azure VMs
**Applies to:** :heavy_check_mark: Linux VM :heavy_check_mark: Windows VM :heavy_check_mark: Generation 2 VM
-Azure Virtual Machines supports enabling Azure Trusted Launch on existing [Azure Generation 2](generation-2.md) virtual machines (VMs) by upgrading to the [Trusted Launch](trusted-launch.md) security type.
+Azure Virtual Machines supports enabling Azure Trusted launch on existing [Azure Generation 2](generation-2.md) virtual machines (VMs) by upgrading to the [Trusted launch](trusted-launch.md) security type.
-[Trusted Launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md) VMs and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, virtual Trusted Platform Module (vTPM), and boot integrity monitoring on your VM.
+[Trusted launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2](generation-2.md) VMs and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, virtual Trusted Platform Module (vTPM), and boot integrity monitoring on your VM.
> [!IMPORTANT]
-> Support for *enabling Trusted Launch on existing Azure Generation 1 VMs* is currently in private preview. You can gain access to preview by using the [registration form](https://aka.ms/Gen1ToTLUpgrade).
+> Support for *enabling Trusted launch on existing Azure Generation 1 VMs* is currently in private preview. You can gain access to preview by using the [registration form](https://aka.ms/Gen1ToTLUpgrade).
## Prerequisites - Azure Generation 2 VM is configured with:
- - [Trusted Launch supported size family](trusted-launch.md#virtual-machines-sizes).
- - [Trusted Launch supported operating system (OS) image](trusted-launch.md#operating-systems-supported). For custom OS images or disks, the base image should be *Trusted Launch capable*.
-- Azure Generation 2 VM isn't using [features currently not supported with Trusted Launch](trusted-launch.md#unsupported-features).-- Azure Generation 2 VMs should be *stopped and deallocated* before you enable the Trusted Launch security type.-- Azure Backup, if enabled, for VMs should be configured with the [Enhanced Backup policy](../backup/backup-azure-vms-enhanced-policy.md). The Trusted Launch security type can't be enabled for Generation 2 VMs configured with *Standard policy* backup protection.
+ - [Trusted launch supported size family](trusted-launch.md#virtual-machines-sizes).
+ - [Trusted launch supported operating system (OS) image](trusted-launch.md#operating-systems-supported). For custom OS images or disks, the base image should be *Trusted launch capable*.
+- Azure Generation 2 VM isn't using [features currently not supported with Trusted launch](trusted-launch.md#unsupported-features).
+- Azure Generation 2 VMs should be *stopped and deallocated* before you enable the Trusted launch security type.
+- Azure Backup, if enabled, for VMs should be configured with the [Enhanced Backup policy](../backup/backup-azure-vms-enhanced-policy.md). The Trusted launch security type can't be enabled for Generation 2 VMs configured with *Standard policy* backup protection.
- Existing Azure VM backup can be migrated from the *Standard* to the *Enhanced* policy. Follow the steps in [Migrate Azure VM backups from Standard to Enhanced policy (preview)](../backup/backup-azure-vm-migrate-enhanced-policy.md). ## Best practices -- Enable Trusted Launch on a test Generation 2 VM and determine if any changes are required to meet the prerequisites before you enable Trusted Launch on Generation 2 VMs associated with production workloads.-- [Create restore points](create-restore-points.md) for Azure Generation 2 VMs associated with production workloads before you enable the Trusted Launch security type. You can use the restore points to re-create the disks and Generation 2 VM with the previous well-known state.
+- Enable Trusted launch on a test Generation 2 VM and determine if any changes are required to meet the prerequisites before you enable Trusted launch on Generation 2 VMs associated with production workloads.
+- [Create restore points](create-restore-points.md) for Azure Generation 2 VMs associated with production workloads before you enable the Trusted launch security type. You can use the restore points to re-create the disks and Generation 2 VM with the previous well-known state.
-## Enable Trusted Launch on an existing VM
+## Enable Trusted launch on an existing VM
> [!NOTE] >
-> - After you enable Trusted Launch, currently VMs can't be rolled back to the Standard security type (non-Trusted Launch configuration).
+> - After you enable Trusted launch, currently VMs can't be rolled back to the Standard security type (non-Trusted launch configuration).
> - vTPM is enabled by default. > - We recommend that you enable Secure Boot, if you aren't using custom unsigned kernel or drivers. It's not enabled by default. Secure Boot preserves boot integrity and enables foundational security for VMs. ### [Portal](#tab/portal)
-Enable Trusted Launch on an existing Azure Generation 2 VM by using the Azure portal.
+Enable Trusted launch on an existing Azure Generation 2 VM by using the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Confirm that the VM generation is **V2** and select **Stop** for the VM.
Enable Trusted Launch on an existing Azure Generation 2 VM by using the Azure po
> [!NOTE]
>
- > - Generation 2 VMs created by using [Azure Compute Gallery (ACG)](azure-compute-gallery.md), [Managed image](capture-image-resource.yml), or an [OS disk](./scripts/create-vm-from-managed-os-disks.md) can't be upgraded to Trusted Launch by using the portal. Ensure that the [OS version is supported for Trusted Launch](trusted-launch.md#operating-systems-supported). Use PowerShell, the Azure CLI, or an Azure Resource Manager template (ARM template) to run the upgrade.
+ > - Generation 2 VMs created by using [Azure Compute Gallery (ACG)](azure-compute-gallery.md), [Managed image](capture-image-resource.yml), or an [OS disk](./scripts/create-vm-from-managed-os-disks.md) can't be upgraded to Trusted launch by using the portal. Ensure that the [OS version is supported for Trusted launch](trusted-launch.md#operating-systems-supported). Use PowerShell, the Azure CLI, or an Azure Resource Manager template (ARM template) to run the upgrade.
:::image type="content" source="./media/trusted-launch/05-generation-2-to-trusted-launch-select-uefi-settings.png" alt-text="Screenshot that shows the Secure Boot and vTPM settings.":::

1. After the update successfully finishes, close the **Configuration** page. On the **Overview** page in the VM properties, confirm the **Security type** settings.
- :::image type="content" source="./media/trusted-launch/06-generation-2-to-trusted-launch-validate-uefi.png" alt-text="Screenshot that shows the Trusted Launch upgraded VM.":::
+ :::image type="content" source="./media/trusted-launch/06-generation-2-to-trusted-launch-validate-uefi.png" alt-text="Screenshot that shows the Trusted launch upgraded VM.":::
-1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either the Remote Desktop Protocol (RDP) for Windows VMs or the Secure Shell Protocol (SSH) for Linux VMs.
+1. Start the upgraded Trusted launch VM. Verify that you can sign in to the VM by using either the Remote Desktop Protocol (RDP) for Windows VMs or the Secure Shell Protocol (SSH) for Linux VMs.
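
If you want to double-check the result outside the portal, a quick Azure CLI query (resource names are placeholders) can confirm the security profile on the upgraded VM:

```azurecli-interactive
# Expect securityType "TrustedLaunch" with secureBootEnabled and vTpmEnabled as configured.
az vm show \
    --resource-group myResourceGroup \
    --name myVm \
    --query "securityProfile" \
    --output json
```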
### [CLI](#tab/cli)
-Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using the Azure CLI.
+Follow the steps to enable Trusted launch on an existing Azure Generation 2 VM by using the Azure CLI.
Make sure that you install the latest [Azure CLI](/cli/azure/install-az-cli2) and are signed in to an Azure account with [az login](/cli/azure/reference-index).
Make sure that you install the latest [Azure CLI](/cli/azure/install-az-cli2) an
az account set --subscription 00000000-0000-0000-0000-000000000000
```
-1. Deallocate the VM.
+2. Deallocate the VM.
-1. Enable Trusted Launch by setting `--security-type` to `TrustedLaunch`.
+3. Enable Trusted launch by setting `--security-type` to `TrustedLaunch`.
```azurecli-interactive
az vm deallocate \
    --resource-group myResourceGroup --name myVm
```
-1. Validate the output of the previous command. Ensure that the `securityProfile` configuration is returned with the command output.
+4. Validate the output of the previous command. Ensure that the `securityProfile` configuration is returned with the command output.
```azurecli-interactive
az vm update \
Make sure that you install the latest [Azure CLI](/cli/azure/install-az-cli2) an
    --enable-secure-boot true --enable-vtpm true
```
-1. Validate the output of the previous command. Ensure that the `securityProfile` configuration is returned with the command output.
+5. Validate the output of the previous command. Ensure that the `securityProfile` configuration is returned with the command output.
```json
{
Make sure that you install the latest [Azure CLI](/cli/azure/install-az-cli2) an
}
```
-1. Start the VM.
+6. Start the VM.
```azurecli-interactive
az vm start \
    --resource-group myResourceGroup --name myVm
```
-1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
+7. Start the upgraded Trusted launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
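
For a Linux VM, a minimal connectivity check might look like the following sketch; the admin user name and resource names are placeholders.

```azurecli-interactive
# Retrieve the public IP address of the VM (requires --show-details).
publicIp=$(az vm show --show-details \
    --resource-group myResourceGroup \
    --name myVm \
    --query publicIps --output tsv)

# Connect over SSH; replace azureuser with your admin user name.
ssh azureuser@"$publicIp"
```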
### [PowerShell](#tab/powershell)
-Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using Azure PowerShell.
+Follow the steps to enable Trusted launch on an existing Azure Generation 2 VM by using Azure PowerShell.
Make sure that you install the latest [Azure PowerShell](/powershell/azure/install-azps-windows) and are signed in to an Azure account with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
Make sure that you install the latest [Azure PowerShell](/powershell/azure/insta
Connect-AzAccount -SubscriptionId 00000000-0000-0000-0000-000000000000
```
-1. Deallocate the VM.
+2. Deallocate the VM.
```azurepowershell-interactive
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm
```
-1. Enable Trusted Launch by setting `-SecurityType` to `TrustedLaunch`.
+3. Enable Trusted launch by setting `-SecurityType` to `TrustedLaunch`.
```azurepowershell-interactive
Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm `
Make sure that you install the latest [Azure PowerShell](/powershell/azure/insta
    -EnableSecureBoot $true -EnableVtpm $true
```
-1. Validate `securityProfile` in the updated VM configuration.
+4. Validate `securityProfile` in the updated VM configuration.
```azurepowershell-interactive
# Following command output should be `TrustedLaunch`
Make sure that you install the latest [Azure PowerShell](/powershell/azure/insta
```
-1. Start the VM.
+5. Start the VM.
```azurepowershell-interactive
Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
```
-1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
+6. Start the upgraded Trusted launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
### [Template](#tab/template)
-Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM by using an ARM template.
+Follow the steps to enable Trusted launch on an existing Azure Generation 2 VM by using an ARM template.
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM b
}
```
-1. Edit the `parameters` JSON file with VMs to be updated with the `TrustedLaunch` security type.
+2. Edit the `parameters` JSON file with VMs to be updated with the `TrustedLaunch` security type.
```json
{
Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM b
-|-|-
vmName | Name of Azure Generation 2 VM. | `myVm`
location | Location of Azure Generation 2 VM. | `westus3`
- secureBootEnabled | Enable Secure Boot with the Trusted Launch security type. | `true`
+ secureBootEnabled | Enable Secure Boot with the Trusted launch security type. | `true`
-1. Deallocate all Azure Generation 2 VMs to be updated.
+3. Deallocate all Azure Generation 2 VMs to be updated.
```azurepowershell-interactive
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm01
```
-1. Run the ARM template deployment.
+4. Run the ARM template deployment.
```azurepowershell-interactive
$resourceGroupName = "myResourceGroup"
Follow the steps to enable Trusted Launch on an existing Azure Generation 2 VM b
    -TemplateFile $templateFile -TemplateParameterFile $parameterFile
```
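
If you prefer the Azure CLI over PowerShell for the deployment step, an equivalent sketch is shown below; the template and parameter file names are hypothetical placeholders for the files you edited earlier.

```azurecli-interactive
# Deploy the ARM template that switches the security type to TrustedLaunch.
az deployment group create \
    --resource-group myResourceGroup \
    --template-file azuredeploy.json \
    --parameters azuredeploy.parameters.json
```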
- :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot that shows the Trusted Launch properties of the VM.":::
+ :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot that shows the Trusted launch properties of the VM.":::
- :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot that shows the Trusted Launch properties of the VM.":::
+ :::image type="content" source="./media/trusted-launch/generation-2-trusted-launch-settings.png" alt-text="Screenshot that shows the Trusted launch properties of the VM.":::
-1. Start the upgraded Trusted Launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
+5. Start the upgraded Trusted launch VM. Verify that you can sign in to the VM by using either RDP (for Windows VMs) or SSH (for Linux VMs).
+## Azure Advisor recommendation
+
+Azure Advisor surfaces an **Enable Trusted launch foundational excellence, and modern security for Existing Generation 2 VM(s)** operational excellence recommendation for existing Generation 2 VMs so that they can adopt [Trusted launch](trusted-launch.md), a higher security posture for Azure VMs at no additional cost to you. Ensure that the Generation 2 VM meets all the prerequisites to migrate to Trusted launch, and follow the best practices, including validating the OS image and VM size and creating restore points. For the Advisor recommendation to be considered complete, follow the steps in [**Enable Trusted launch on an existing VM**](trusted-launch-existing-vm.md#enable-trusted-launch-on-an-existing-vm) to upgrade the virtual machine's security type and enable Trusted launch.
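
To review the recommendation outside the portal, you can list operational excellence recommendations with the Azure CLI; this is a rough sketch, and the recommendation text in your output may differ.

```azurecli-interactive
# List Advisor recommendations in the OperationalExcellence category and scan
# the table for the Trusted launch recommendation.
az advisor recommendation list \
    --category OperationalExcellence \
    --output table
```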
+
+**What if a Generation 2 VM doesn't meet the prerequisites for Trusted launch?**
+
+For a Generation 2 VM that doesn't meet the [prerequisites](#prerequisites) to upgrade to Trusted launch, evaluate how to fulfill them. For example, if the VM uses a size that isn't supported, look for an [equivalent size that supports Trusted launch](trusted-launch.md#virtual-machines-sizes).
+
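As a starting point for finding an alternative size, you can list the VM sizes available in your region with the Azure CLI and cross-check the candidates against the Trusted launch size list linked above; the region below is only an example.

```azurecli-interactive
# List VM sizes available in a region. Cross-check candidates against the
# Trusted launch supported size families before resizing.
az vm list-skus \
    --location westus3 \
    --resource-type virtualMachines \
    --output table
```
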
+> [!NOTE]
+>
+> Dismiss the recommendation if the Generation 2 virtual machine is configured with a VM size family that isn't currently supported with Trusted launch, such as the MSv2-series.
+ ## Related content
+- Enable Trusted launch for new virtual machine deployments. For more information, see [Deploy Trusted launch virtual machines](trusted-launch-portal.md).
- After the upgrades, we recommend that you enable [boot integrity monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM by using Microsoft Defender for Cloud.
-- Learn more about [Trusted Launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md).
+- Learn more about [Trusted launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md).
virtual-machines Trusted Launch Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-faq.md
Trusted Launch supports ephemeral OS disks. For more information, see [Trusted L
> [!NOTE]
> When you use ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the virtual Trusted Platform Module (vTPM) after the creation of the VM might not be persisted across operations like reimaging and platform events like service healing.
+### Are the security features available with Trusted launch applicable to data disks as well?
+
+Trusted launch provides foundational security for the operating system hosted in the virtual machine by attesting its boot integrity. Trusted launch security features apply only to the running OS and the OS disk; they don't apply to data disks or to OS binaries stored on data disks. For more information, see the [Trusted launch overview](trusted-launch.md).
+
### Can a VM be restored by using backups taken before Trusted Launch was enabled?

Backups taken before you [upgrade an existing Generation 2 VM to Trusted Launch](trusted-launch-existing-vm.md) can be used to restore the entire VM or individual data disks. They can't be used to restore or replace the OS disk only.
virtual-network Add Dual Stack Ipv6 Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-cli.md
- Title: Add a dual-stack network to an existing virtual machine - Azure CLI-
-description: Learn how to add a dual-stack network to an existing virtual machine using the Azure CLI.
----- Previously updated : 08/24/2023---
-# Add a dual-stack network to an existing virtual machine using the Azure CLI
-
-In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).---- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--- An existing virtual network, public IP address and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address and a virtual machine, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).-
- - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
-
- - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
-
- - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
-
-## Add IPv6 to virtual network
-
-In this section, you add an IPv6 address space and subnet to your existing virtual network.
-
-Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the virtual network.
-
-```azurecli-interactive
-az network vnet update \
- --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
- --resource-group myResourceGroup \
- --name myVNet
-```
-
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to create the subnet.
-
-```azurecli-interactive
-az network vnet subnet update \
- --address-prefixes 10.0.0.0/24 2404:f800:8000:122::/64 \
- --name myBackendSubnet \
- --resource-group myResourceGroup \
- --vnet-name myVNet
-```
-
-## Create IPv6 public IP address
-
-In this section, you create a IPv6 public IP address for the virtual machine.
-
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP address.
-
-```azurecli-interactive
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP-Ipv6 \
- --sku Standard \
- --version IPv6 \
- --zone 1 2 3
-```
-## Add IPv6 configuration to virtual machine
-
-Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the NIC. The **`--nic-name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
-
-```azurecli-interactive
- az network nic ip-config create \
- --resource-group myResourceGroup \
- --name Ipv6config \
- --nic-name myvm569 \
- --private-ip-address-version IPv6 \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
- --public-ip-address myPublicIP-IPv6
-```
-
-## Next steps
-
-In this article, you learned how to create an Azure Virtual machine with a dual-stack network.
-
-For more information about IPv6 and IP addresses in Azure, see:
--- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)--- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Add Dual Stack Ipv6 Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-portal.md
Title: Add a dual-stack network to an existing virtual machine - Azure portal
+ Title: Add a dual-stack network to an existing virtual machine
-description: Learn how to add a dual stack network to an existing virtual machine using the Azure portal.
+description: Learn how to add a dual-stack network to an existing virtual machine using the Azure portal, Azure CLI, or Azure PowerShell.
Previously updated : 08/24/2023- Last updated : 07/24/2024+
-# Add a dual-stack network to an existing virtual machine using the Azure portal
+# Add a dual-stack network to an existing virtual machine
-In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
+In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address. You choose from the Azure portal, Azure CLI, or Azure PowerShell to complete the steps in this article.
## Prerequisites
+# [Azure portal](#tab/azureportal)
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing virtual network, public IP address and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address and a virtual machine, see [Quickstart: Create a Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md).
+- An existing virtual network, public IP address, and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address, and a virtual machine, see [Quickstart: Create a Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md).
+
+ - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
+
+ - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
+
+ - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+
+# [Azure CLI](#tab/azurecli/)
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++
+- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- An existing virtual network, public IP address, and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address, and a virtual machine, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
- The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
In this article, you add IPv6 support to an existing virtual network. You config
- The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+# [Azure PowerShell](#tab/azurepowershell/)
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+
+- Azure PowerShell installed locally or Azure Cloud Shell
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+- An existing virtual network, public IP address, and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address, and a virtual machine, see [Quickstart: Create a Linux virtual machine in Azure with PowerShell](../../virtual-machines/linux/quick-create-powershell.md).
+
+ - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
+
+ - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
+
+ - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
+
++ ## Add IPv6 to virtual network
+# [Azure portal](#tab/azureportal)
+ In this section, you add an IPv6 address space and subnet to your existing virtual network. 1. Sign in to the [Azure portal](https://portal.azure.com).
In this section, you add an IPv6 address space and subnet to your existing virtu
11. Select **Save**.
+# [Azure CLI](#tab/azurecli/)
+
+In this section, you add an IPv6 address space and subnet to your existing virtual network.
+
+Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the virtual network.
+
+```azurecli-interactive
+az network vnet update \
+ --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
+ --resource-group myResourceGroup \
+ --name myVNet
+```
+
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to create the subnet.
+
+```azurecli-interactive
+az network vnet subnet update \
+ --address-prefixes 10.0.0.0/24 2404:f800:8000:122::/64 \
+ --name myBackendSubnet \
+ --resource-group myResourceGroup \
+ --vnet-name myVNet
+```
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+In this section, you add an IPv6 address space and subnet to your existing virtual network.
+
+Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place address space into a variable. ##
+$IPAddressRange = '2404:f800:8000:122::/63'
+
+## Add the address space to the virtual network configuration. ##
+$vnet.AddressSpace.AddressPrefixes.Add($IPAddressRange)
+
+## Save the configuration to the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
+
+Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to add the new IPv6 subnet to the virtual network.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Create the subnet configuration. ##
+$sub = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.0.0.0/24','2404:f800:8000:122::/64'
+ VirtualNetwork = $vnet
+}
+Set-AzVirtualNetworkSubnetConfig @sub
+
+## Save the configuration to the virtual network. ##
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```
+++ ## Create IPv6 public IP address
+# [Azure portal](#tab/azureportal)
++
In this section, you create an IPv6 public IP address for the virtual machine.

1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
In this section, you create a IPv6 public IP address for the virtual machine.
4. Select **Create**.
+# [Azure CLI](#tab/azurecli/)
+
+In this section, you create an IPv6 public IP address for the virtual machine.
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP address.
+
+```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-Ipv6 \
+ --sku Standard \
+ --version IPv6 \
+ --zone 1 2 3
+```
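
Optionally, you can confirm that the IPv6 address was allocated before attaching it; a quick check might look like this:

```azurecli-interactive
az network public-ip show \
    --resource-group myResourceGroup \
    --name myPublicIP-Ipv6 \
    --query ipAddress \
    --output tsv
```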
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+In this section, you create an IPv6 public IP address for the virtual machine.
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address.
+
+```azurepowershell-interactive
+$ip6 = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv6'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip6
+```
+++ ## Add IPv6 configuration to virtual machine
+# [Azure portal](#tab/azureportal)
+ The virtual machine must be stopped to add the IPv6 configuration to the existing virtual machine. You stop the virtual machine and add the IPv6 configuration to the existing virtual machine's network interface. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
The virtual machine must be stopped to add the IPv6 configuration to the existin
10. Start **myVM**.
+# [Azure CLI](#tab/azurecli/)
+
+Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the network interface. The **`--nic-name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
+
+```azurecli-interactive
+ az network nic ip-config create \
+ --resource-group myResourceGroup \
+ --name Ipv6config \
+ --nic-name myvm569 \
+ --private-ip-address-version IPv6 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --public-ip-address myPublicIP-IPv6
+```
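
To confirm that the network interface now carries both IPv4 and IPv6 configurations, a quick listing such as the following sketch can help; the NIC name is the same placeholder as above.

```azurecli-interactive
az network nic ip-config list \
    --resource-group myResourceGroup \
    --nic-name myvm569 \
    --output table
```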
++
+# [Azure PowerShell](#tab/azurepowershell/)
+
+Use [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the IPv6 configuration for the network interface. The **`-Name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
+
+```azurepowershell-interactive
+## Place your virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place your virtual network subnet into a variable. ##
+$sub = @{
+ Name = 'myBackendSubnet'
+ VirtualNetwork = $vnet
+}
+$subnet = Get-AzVirtualNetworkSubnetConfig @sub
+
+## Place the IPv6 public IP address you created previously into a variable. ##
+$pip = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+}
+$publicIP = Get-AzPublicIPAddress @pip
+
+## Place the network interface into a variable. ##
+$net = @{
+ Name = 'myvm569'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nic = Get-AzNetworkInterface @net
+
+## Create the configuration for the network interface. ##
+$ipc = @{
+ Name = 'Ipv6config'
+ Subnet = $subnet
+ PublicIpAddress = $publicIP
+ PrivateIpAddressVersion = 'IPv6'
+}
+$ipconfig = New-AzNetworkInterfaceIpConfig @ipc
+
+## Add the IP configuration to the network interface. ##
+$nic.IpConfigurations.Add($ipconfig)
+
+## Save the configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
++ ## Next steps In this article, you learned how to add a dual stack IP configuration to an existing virtual network and virtual machine.
virtual-network Add Dual Stack Ipv6 Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-powershell.md
- Title: Add a dual-stack network to an existing virtual machine - Azure PowerShell-
-description: Learn how to add a dual-stack network to an existing virtual machine using Azure PowerShell.
----- Previously updated : 08/24/2023---
-# Add a dual-stack network to an existing virtual machine using Azure PowerShell
-
-In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).--- Azure PowerShell installed locally or Azure Cloud Shell-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
--- An existing virtual network, public IP address and virtual machine in your subscription that is configured for IPv4 support only. For more information about creating a virtual network, public IP address and a virtual machine, see [Quickstart: Create a Linux virtual machine in Azure with PowerShell](../../virtual-machines/linux/quick-create-powershell.md).-
- - The example virtual network used in this article is named **myVNet**. Replace this value with the name of your virtual network.
-
- - The example virtual machine used in this article is named **myVM**. Replace this value with the name of your virtual machine.
-
- - The example public IP address used in this article is named **myPublicIP**. Replace this value with the name of your public IP address.
-
-## Add IPv6 to virtual network
-
-In this section, you add an IPv6 address space and subnet to your existing virtual network.
-
-Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network.
-
-```azurepowershell-interactive
-## Place your virtual network into a variable. ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
-
-## Place address space into a variable. ##
-$IPAddressRange = '2404:f800:8000:122::/63'
-
-## Add the address space to the virtual network configuration. ##
-$vnet.AddressSpace.AddressPrefixes.Add($IPAddressRange)
-
-## Save the configuration to the virtual network. ##
-Set-AzVirtualNetwork -VirtualNetwork $vnet
-```
-
-Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to add the new IPv6 subnet to the virtual network.
-
-```azurepowershell-interactive
-## Place your virtual network into a variable. ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
-
-## Create the subnet configuration. ##
-$sub = @{
- Name = 'myBackendSubnet'
- AddressPrefix = '10.0.0.0/24','2404:f800:8000:122::/64'
- VirtualNetwork = $vnet
-}
-Set-AzVirtualNetworkSubnetConfig @sub
-
-## Save the configuration to the virtual network. ##
-Set-AzVirtualNetwork -VirtualNetwork $vnet
-```
-
-## Create IPv6 public IP address
-
-In this section, you create a IPv6 public IP address for the virtual machine.
-
-Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address.
-
-```azurepowershell-interactive
-$ip6 = @{
- Name = 'myPublicIP-IPv6'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- IpAddressVersion = 'IPv6'
- Zone = 1,2,3
-}
-New-AzPublicIpAddress @ip6
-```
-## Add IPv6 configuration to virtual machine
-
-Use [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the IPv6 configuration for the NIC. The **`-Name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
-
-```azurepowershell-interactive
-## Place your virtual network into a variable. ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
-
-## Place your virtual network subnet into a variable. ##
-$sub = @{
- Name = 'myBackendSubnet'
- VirtualNetwork = $vnet
-}
-$subnet = Get-AzVirtualNetworkSubnetConfig @sub
-
-## Place the IPv6 public IP address you created previously into a variable. ##
-$pip = @{
- Name = 'myPublicIP-IPv6'
- ResourceGroupName = 'myResourceGroup'
-}
-$publicIP = Get-AzPublicIPAddress @pip
-
-## Place the network interface into a variable. ##
-$net = @{
- Name = 'myvm569'
- ResourceGroupName = 'myResourceGroup'
-}
-$nic = Get-AzNetworkInterface @net
-
-## Create the configuration for the network interface. ##
-$ipc = @{
- Name = 'Ipv6config'
- Subnet = $subnet
- PublicIpAddress = $publicIP
- PrivateIpAddressVersion = 'IPv6'
-}
-$ipconfig = New-AzNetworkInterfaceIpConfig @ipc
-
-## Add the IP configuration to the network interface. ##
-$nic.IpConfigurations.Add($ipconfig)
-
-## Save the configuration to the network interface. ##
-$nic | Set-AzNetworkInterface
-```
-
-## Next steps
-
-In this article, you learned how to add a dual-stack network to an existing virtual machine.
-
-For more information about IPv6 and IP addresses in Azure, see:
--- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)--- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
- Title: Create an Azure virtual machine with a dual-stack network - Azure CLI-
-description: In this article, learn how to use the Azure CLI to create a virtual machine with a dual-stack virtual network in Azure.
----- Previously updated : 08/24/2023---
-# Create an Azure Virtual Machine with a dual-stack network using the Azure CLI
-
-In this article, you create a virtual machine in Azure with the Azure CLI. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).---- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Create a resource group
-
-An Azure resource group is a logical container into which Azure resources are deployed and managed.
-
-Create a resource group with [az group create](/cli/azure/group#az-group-create) named **myResourceGroup** in the **eastus2** location.
-
-```azurecli-interactive
- az group create \
- --name myResourceGroup \
- --location eastus2
-```
-
-## Create a virtual network
-
-In this section, you create a dual-stack virtual network for the virtual machine.
-
-Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
-
-```azurecli-interactive
- az network vnet create \
- --resource-group myResourceGroup \
- --location eastus2 \
- --name myVNet \
- --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
- --subnet-name myBackendSubnet \
- --subnet-prefixes 10.0.0.0/24 2404:f800:8000:122::/64
-```
-
-## Create public IP addresses
-
-You create two public IP addresses in this section, IPv4 and IPv6.
-
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP addresses.
-
-```azurecli-interactive
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP-Ipv4 \
- --sku Standard \
- --version IPv4 \
- --zone 1 2 3
-
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP-Ipv6 \
- --sku Standard \
- --version IPv6 \
- --zone 1 2 3
-
-```
-## Create a network security group
-
-In this section, you create a network security group for the virtual machine and virtual network.
-
-Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create the network security group.
-
-```azurecli-interactive
- az network nsg create \
- --resource-group myResourceGroup \
- --name myNSG
-```
-
-### Create network security group rules
-
-You create a rule to allow connections to the virtual machine on port 22 for SSH. An extra rule is created to allow all ports for outbound connections.
-
-Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create the network security group rules.
-
-```azurecli-interactive
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNSG \
- --name myNSGRuleSSH \
- --protocol '*' \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 22 \
- --access allow \
- --priority 200
-
- az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNSG \
- --name myNSGRuleAllOUT \
- --protocol '*' \
- --direction outbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range '*' \
- --access allow \
- --priority 200
-```
-
-## Create virtual machine
-
-In this section, you create the virtual machine and its supporting resources.
-
-### Create network interface
-
-You use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
-
-```azurecli-interactive
- az network nic create \
- --resource-group myResourceGroup \
- --name myNIC1 \
- --vnet-name myVNet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG \
- --public-ip-address myPublicIP-IPv4
-```
-
-### Create IPv6 IP configuration
-
-Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the NIC.
-
-```azurecli-interactive
- az network nic ip-config create \
- --resource-group myResourceGroup \
- --name myIPv6config \
- --nic-name myNIC1 \
- --private-ip-address-version IPv6 \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
- --public-ip-address myPublicIP-IPv6
-```
-
-### Create virtual machine
-
-Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
-
-```azurecli-interactive
- az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --nics myNIC1 \
- --image Ubuntu2204 \
- --admin-username azureuser \
- --authentication-type ssh \
- --generate-ssh-keys
-```
-
-## Test SSH connection
-
-Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to display the IP addresses of the virtual machine.
-
-```azurecli-interactive
- az network public-ip show \
- --resource-group myResourceGroup \
- --name myPublicIP-IPv4 \
- --query ipAddress \
- --output tsv
-```
-
-```azurecli-interactive
-user@Azure:~$ az network public-ip show \
-> --resource-group myResourceGroup \
-> --name myPublicIP-IPv4 \
-> --query ipAddress \
-> --output tsv
-20.119.201.208
-```
-
-```azurecli-interactive
- az network public-ip show \
- --resource-group myResourceGroup \
- --name myPublicIP-IPv6 \
- --query ipAddress \
- --output tsv
-```
-
-```azurecli-interactive
-user@Azure:~$ az network public-ip show \
-> --resource-group myResourceGroup \
-> --name myPublicIP-IPv6 \
-> --query ipAddress \
-> --output tsv
-2603:1030:408:6::9d
-```
-
-Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine.
-
-```azurecli-interactive
- ssh azureuser@20.119.201.208
-```
-
-## Clean up resources
-
-When no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, virtual machine, and all related resources.
-
-```azurecli-interactive
- az group delete \
- --name myResourceGroup
-```
-
-## Next steps
-
-In this article, you learned how to create an Azure Virtual machine with a dual-stack network.
-
-For more information about IPv6 and IP addresses in Azure, see:
--- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)--- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
Title: Create an Azure virtual machine with a dual-stack network - Azure portal
+ Title: Create an Azure virtual machine with a dual-stack network
-description: In this article, learn how to use the Azure portal to create a virtual machine with a dual-stack virtual network in Azure.
+description: In this article, learn how to create a virtual machine with a dual-stack virtual network in Azure using the Azure portal, Azure CLI, or PowerShell.
Previously updated : 12/05/2023 Last updated : 07/24/2024
-# Create an Azure Virtual Machine with a dual-stack network using the Azure portal
+# Create an Azure Virtual Machine with a dual-stack network
-In this article, you create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
+In this article, you create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. You choose from the Azure portal, Azure CLI, or Azure PowerShell to complete the steps in this article. When completed, the virtual machine supports IPv4 and IPv6 communication.
## Prerequisites
+# [Azure portal](#tab/azureportal)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+# [Azure CLI](#tab/azurecli/)
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++
+- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+# [Azure PowerShell](#tab/azurepowershell/)
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell.
+- Sign in to Azure PowerShell and select the subscription you want to use. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
++
-## Create a virtual network
+## Create a resource group and virtual network
-In this section, you create a dual-stack virtual network for the virtual machine.
+# [Azure portal](#tab/azureportal)
+
+In this section, you create a resource group and dual-stack virtual network for the virtual machine in the Azure portal.
1. Sign-in to the [Azure portal](https://portal.azure.com).
In this section, you create a dual-stack virtual network for the virtual machine
| Address range | Leave default of **2404:f800:8000:122::**. | | Size | Leave the default of **/64**. |
-1. Select **Add**.
+1. Select **Add**.
1. Select the **Review + create**. 1. Select **Create**.
+# [Azure CLI](#tab/azurecli/)
+
+In this section, you create a resource group and a dual-stack virtual network for the virtual machine with the Azure CLI.
+
+Create a resource group with [az group create](/cli/azure/group#az-group-create) named **myResourceGroup** in the **eastus2** location.
+
+```azurecli-interactive
+ az group create \
+ --name myResourceGroup \
+ --location eastus2
+```
+
+Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
+
+```azurecli-interactive
+ az network vnet create \
+ --resource-group myResourceGroup \
+ --location eastus2 \
+ --name myVNet \
+ --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63 \
+ --subnet-name myBackendSubnet \
+ --subnet-prefixes 10.0.0.0/24 2404:f800:8000:122::/64
+```
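
If you want to confirm that both the IPv4 and IPv6 address spaces were applied, a quick query such as this sketch works:

```azurecli-interactive
az network vnet show \
    --resource-group myResourceGroup \
    --name myVNet \
    --query "addressSpace.addressPrefixes" \
    --output tsv
```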
+
+# [Azure PowerShell](#tab/azurepowershell/)
++
+In this section, you create a resource group and a dual-stack virtual network for the virtual machine with Azure PowerShell.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named **myResourceGroup** in the **eastus2** location.
+
+```azurepowershell-interactive
+$rg =@{
+ Name = 'myResourceGroup'
+ Location = 'eastus2'
+}
+New-AzResourceGroup @rg
+```
+
+Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) and [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a virtual network.
+
+```azurepowershell-interactive
+## Create backend subnet config ##
+$subnet = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.0.0.0/24','2404:f800:8000:122::/64'
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
+
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ AddressPrefix = '10.0.0.0/16','2404:f800:8000:122::/63'
+ Subnet = $subnetConfig
+}
+New-AzVirtualNetwork @net
+
+```
++ ## Create public IP addresses
-You create two public IP addresses in this section, IPv4 and IPv6.
+# [Azure portal](#tab/azureportal)
+
+In this section, you create two public IP addresses, IPv4 and IPv6, in the Azure portal.
### Create IPv4 public IP address
You create two public IP addresses in this section, IPv4 and IPv6.
4. Select **Review + create** then **Create**.
+# [Azure CLI](#tab/azurecli/)
+
+In this section, you create two public IP addresses, IPv4 and IPv6, with the Azure CLI.
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP addresses.
+
+```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-Ipv4 \
+ --sku Standard \
+ --version IPv4 \
+ --zone 1 2 3
+
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-Ipv6 \
+ --sku Standard \
+ --version IPv6 \
+ --zone 1 2 3
+
+```
+# [Azure PowerShell](#tab/azurepowershell/)
+
+In this section, you create two public IP addresses, IPv4 and IPv6, with Azure PowerShell.
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP addresses.
+
+```azurepowershell-interactive
+$ip4 = @{
+ Name = 'myPublicIP-IPv4'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv4'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip4
+
+$ip6 = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv6'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip6
+```
+ ## Create virtual machine
+In this section, you create the virtual machine and its supporting resources.
+
+# [Azure portal](#tab/azureportal)
+
+### Create virtual machine
+ 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. 2. Select **+ Create** then **Azure virtual machine**.
You create two public IP addresses in this section, IPv4 and IPv6.
12. Stop **myVM**.
-## Network interface configuration
+### Configure network interface
A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you add the IPv6 configuration to the existing network interface.
A network interface is automatically created and attached to the chosen virtual
10. Return to the **Overview** of **myVM** and start the virtual machine.
+# [Azure CLI](#tab/azurecli/)
+
+In this section, you create the virtual machine and its supporting resources.
+
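The network interface command below associates a network security group named **myNSG**. If your resource group doesn't already have one, a minimal sketch to create it with an inbound SSH rule follows; the names and rule priority are assumptions you can adjust.

```azurecli-interactive
# Create a network security group (skip if myNSG already exists).
az network nsg create \
    --resource-group myResourceGroup \
    --name myNSG

# Allow inbound SSH on port 22.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name myNSGRuleSSH \
    --protocol Tcp \
    --direction Inbound \
    --destination-port-range 22 \
    --access Allow \
    --priority 200
```
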
+### Create network interface
+
+You use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+
+```azurecli-interactive
+ az network nic create \
+ --resource-group myResourceGroup \
+ --name myNIC1 \
+ --vnet-name myVNet \
+ --subnet myBackEndSubnet \
+ --network-security-group myNSG \
+ --public-ip-address myPublicIP-IPv4
+```
+
+### Create IPv6 IP configuration
+
+Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the NIC.
+
+```azurecli-interactive
+ az network nic ip-config create \
+ --resource-group myResourceGroup \
+ --name myIPv6config \
+ --nic-name myNIC1 \
+ --private-ip-address-version IPv6 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --public-ip-address myPublicIP-IPv6
+```
+
+### Create virtual machine
+
+Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
+
+```azurecli-interactive
+ az vm create \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --nics myNIC1 \
+ --image Ubuntu2204 \
+ --admin-username azureuser \
+ --authentication-type ssh \
+ --generate-ssh-keys
+```
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+In this section, you create the virtual machine and its supporting resources.
+
+### Create network interface
+
+You use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the network security group into a variable. ##
+$ns = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nsg = Get-AzNetworkSecurityGroup @ns
+
+## Place the IPv4 public IP address into a variable. ##
+$pub4 = @{
+ Name = 'myPublicIP-IPv4'
+ ResourceGroupName = 'myResourceGroup'
+}
+$pubIPv4 = Get-AzPublicIPAddress @pub4
+
+## Place the IPv6 public IP address into a variable. ##
+$pub6 = @{
+ Name = 'myPublicIP-IPv6'
+ ResourceGroupName = 'myResourceGroup'
+}
+$pubIPv6 = Get-AzPublicIPAddress @pub6
+
+## Create IPv4 configuration for NIC. ##
+$IP4c = @{
+ Name = 'ipconfig-ipv4'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PublicIPAddress = $pubIPv4
+}
+$IPv4Config = New-AzNetworkInterfaceIpConfig @IP4c
+
+## Create IPv6 configuration for NIC. ##
+$IP6c = @{
+ Name = 'ipconfig-ipv6'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv6'
+ PublicIPAddress = $pubIPv6
+}
+$IPv6Config = New-AzNetworkInterfaceIpConfig @IP6c
+
+## Command to create network interface for VM ##
+$nic = @{
+ Name = 'myNIC1'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ NetworkSecurityGroup = $nsg
+ IpConfiguration = $IPv4Config,$IPv6Config
+}
+New-AzNetworkInterface @nic
+```
+
+### Create virtual machine
+
+Use the following commands to create the virtual machine:
+
+* [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+* [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
+* [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
+* [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
+* [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+$cred = Get-Credential
+
+## Place network interface into a variable. ##
+$nic = @{
+ Name = 'myNIC1'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nicVM = Get-AzNetworkInterface @nic
+
+## Create a virtual machine configuration for VMs ##
+$vmsz = @{
+ VMName = 'myVM'
+ VMSize = 'Standard_DS1_v2'
+}
+$vmos = @{
+ ComputerName = 'myVM'
+ Credential = $cred
+}
+$vmimage = @{
+ PublisherName = 'Debian'
+ Offer = 'debian-11'
+ Skus = '11'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Linux `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine for VMs ##
+$vm = @{
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ VM = $vmConfig
+ SshKeyName = 'mySSHKey'
+ }
+New-AzVM @vm -GenerateSshKey
+```
+++ ## Test SSH connection
+# [Azure portal](#tab/azureportal)
+ You connect to the virtual machine with SSH to test the IPv4 public IP address. 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
You connect to the virtual machine with SSH to test the IPv4 public IP address.
```bash ssh -i ~/.ssh/mySSHkey.pem azureuser@20.22.46.19 ```
+# [Azure CLI](#tab/azurecli/)
+
+Use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to display the IP addresses of the virtual machine.
+
+```azurecli-interactive
+ az network public-ip show \
+ --resource-group myResourceGroup \
+ --name myPublicIP-IPv4 \
+ --query ipAddress \
+ --output tsv
+```
+
+```azurecli-interactive
+user@Azure:~$ az network public-ip show \
+> --resource-group myResourceGroup \
+> --name myPublicIP-IPv4 \
+> --query ipAddress \
+> --output tsv
+20.119.201.208
+```
+
+```azurecli-interactive
+ az network public-ip show \
+ --resource-group myResourceGroup \
+ --name myPublicIP-IPv6 \
+ --query ipAddress \
+ --output tsv
+```
+
+```azurecli-interactive
+user@Azure:~$ az network public-ip show \
+> --resource-group myResourceGroup \
+> --name myPublicIP-IPv6 \
+> --query ipAddress \
+> --output tsv
+2603:1030:408:6::9d
+```
+
+Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine.
+
+```azurecli-interactive
+ ssh azureuser@20.119.201.208
+```
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to display the IP addresses of the virtual machine.
+
+```azurepowershell-interactive
+$ip4 = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myPublicIP-IPv4'
+}
+Get-AzPublicIPAddress @ip4 | select IpAddress
+```
+
+```azurepowershell-interactive
+PS /home/user> Get-AzPublicIPAddress @ip4 | select IpAddress
+
+IpAddress
+
+20.72.115.187
+```
+
+```azurepowershell-interactive
+$ip6 = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myPublicIP-IPv6'
+}
+Get-AzPublicIPAddress @ip6 | select IpAddress
+```
+
+```azurepowershell-interactive
+PS /home/user> Get-AzPublicIPAddress @ip6 | select IpAddress
+
+IpAddress
+
+2603:1030:403:3::1ca
+```
+
+Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine.
+
+```azurepowershell-interactive
+ssh azureuser@20.72.115.187
+```
++ ## Clean up resources
+# [Azure portal](#tab/azureportal)
When you're finished with the resources created in this article, delete the resource group and all of the resources it contains:
When your finished with the resources created in this article, delete the resour
3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. +
+# [Azure CLI](#tab/azurecli/)
+
+When no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, virtual machine, and all related resources.
+
+```azurecli-interactive
+ az group delete \
+ --name myResourceGroup
+```
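
The command as shown prompts for confirmation. If you prefer to skip the prompt and return without waiting for the deletion to finish, `az group delete` also accepts the optional `--yes` and `--no-wait` flags, for example:

```azurecli-interactive
 az group delete \
    --name myResourceGroup \
    --yes \
    --no-wait
```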
+# [Azure PowerShell](#tab/azurepowershell/)
+
+When no longer needed, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, virtual machine, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'myResourceGroup'
+```
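
To suppress the confirmation prompt and run the deletion as a background job, you can optionally add the `-Force` and `-AsJob` parameters:

```azurepowershell-interactive
Remove-AzResourceGroup -Name 'myResourceGroup' -Force -AsJob
```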
++ ## Next steps

In this article, you learned how to create an Azure virtual machine with a dual-stack network.
virtual-network Create Vm Dual Stack Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-powershell.md
- Title: Create an Azure virtual machine with a dual-stack network - PowerShell-
-description: In this article, learn how to use PowerShell to create a virtual machine with a dual-stack virtual network in Azure.
----- Previously updated : 08/24/2023---
-# Create an Azure Virtual Machine with a dual-stack network using PowerShell
-
-In this article, you'll create a virtual machine in Azure with PowerShell. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure PowerShell installed locally or Azure Cloud Shell.
-- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` if necessary.
-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Create a resource group
-
-An Azure resource group is a logical container into which Azure resources are deployed and managed.
-
-Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) named **myResourceGroup** in the **eastus2** location.
-
-```azurepowershell-interactive
-$rg =@{
- Name = 'myResourceGroup'
- Location = 'eastus2'
-}
-New-AzResourceGroup @rg
-```
-
-## Create a virtual network
-
-In this section, you'll create a dual-stack virtual network for the virtual machine.
-
-Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) and [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a virtual network.
-
-```azurepowershell-interactive
-## Create backend subnet config ##
-$subnet = @{
- Name = 'myBackendSubnet'
- AddressPrefix = '10.0.0.0/24','2404:f800:8000:122::/64'
-}
-$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
-
-## Create the virtual network ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- AddressPrefix = '10.0.0.0/16','2404:f800:8000:122::/63'
- Subnet = $subnetConfig
-}
-New-AzVirtualNetwork @net
-
-```
-
-## Create public IP addresses
-
-You'll create two public IP addresses in this section, IPv4 and IPv6.
-
-Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP addresses.
-
-```azurepowershell-interactive
-$ip4 = @{
- Name = 'myPublicIP-IPv4'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- IpAddressVersion = 'IPv4'
- Zone = 1,2,3
-}
-New-AzPublicIpAddress @ip4
-
-$ip6 = @{
- Name = 'myPublicIP-IPv6'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- IpAddressVersion = 'IPv6'
- Zone = 1,2,3
-}
-New-AzPublicIpAddress @ip6
-```
-## Create a network security group
-
-In this section, you'll create a network security group for the virtual machine and virtual network.
-
-Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) and [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) to create the network security group and rules.
-
-```azurepowershell-interactive
-## Create rule for network security group and place in variable. ##
-$nsgrule1 = @{
- Name = 'myNSGRuleSSH'
- Description = 'Allow SSH'
- Protocol = '*'
- SourcePortRange = '*'
- DestinationPortRange = '22'
- SourceAddressPrefix = 'Internet'
- DestinationAddressPrefix = '*'
- Access = 'Allow'
- Priority = '200'
- Direction = 'Inbound'
-}
-$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule1
-
-$nsgrule2 = @{
- Name = 'myNSGRuleAllOUT'
- Description = 'Allow All out'
- Protocol = '*'
- SourcePortRange = '*'
- DestinationPortRange = '*'
- SourceAddressPrefix = 'Internet'
- DestinationAddressPrefix = '*'
- Access = 'Allow'
- Priority = '201'
- Direction = 'Outbound'
-}
-$rule2 = New-AzNetworkSecurityRuleConfig @nsgrule2
-
-## Create network security group ##
-$nsg = @{
- Name = 'myNSG'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- SecurityRules = $rule1,$rule2
-}
-New-AzNetworkSecurityGroup @nsg
-```
-
-## Create virtual machine
-
-In this section, you'll create the virtual machine and its supporting resources.
-
-### Create network interface
-
-You'll use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
-
-```azurepowershell-interactive
-## Place the virtual network into a variable. ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
-
-## Place the network security group into a variable. ##
-$ns = @{
- Name = 'myNSG'
- ResourceGroupName = 'myResourceGroup'
-}
-$nsg = Get-AzNetworkSecurityGroup @ns
-
-## Place the IPv4 public IP address into a variable. ##
-$pub4 = @{
- Name = 'myPublicIP-IPv4'
- ResourceGroupName = 'myResourceGroup'
-}
-$pubIPv4 = Get-AzPublicIPAddress @pub4
-
-## Place the IPv6 public IP address into a variable. ##
-$pub6 = @{
- Name = 'myPublicIP-IPv6'
- ResourceGroupName = 'myResourceGroup'
-}
-$pubIPv6 = Get-AzPublicIPAddress @pub6
-
-## Create IPv4 configuration for NIC. ##
-$IP4c = @{
- Name = 'ipconfig-ipv4'
- Subnet = $vnet.Subnets[0]
- PrivateIpAddressVersion = 'IPv4'
- PublicIPAddress = $pubIPv4
-}
-$IPv4Config = New-AzNetworkInterfaceIpConfig @IP4c
-
-## Create IPv6 configuration for NIC. ##
-$IP6c = @{
- Name = 'ipconfig-ipv6'
- Subnet = $vnet.Subnets[0]
- PrivateIpAddressVersion = 'IPv6'
- PublicIPAddress = $pubIPv6
-}
-$IPv6Config = New-AzNetworkInterfaceIpConfig @IP6c
-
-## Command to create network interface for VM ##
-$nic = @{
- Name = 'myNIC1'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- NetworkSecurityGroup = $nsg
- IpConfiguration = $IPv4Config,$IPv6Config
-}
-New-AzNetworkInterface @nic
-```
-
-### Create virtual machine
-
-Use the following commands to create the virtual machine:
-
-* [New-AzVM](/powershell/module/az.compute/new-azvm)
-
-* [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
-
-* [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
-
-* [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
-
-* [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
-
-```azurepowershell-interactive
-$cred = Get-Credential
-
-## Place network interface into a variable. ##
-$nic = @{
- Name = 'myNIC1'
- ResourceGroupName = 'myResourceGroup'
-}
-$nicVM = Get-AzNetworkInterface @nic
-
-## Create a virtual machine configuration for VMs ##
-$vmsz = @{
- VMName = 'myVM'
- VMSize = 'Standard_DS1_v2'
-}
-$vmos = @{
- ComputerName = 'myVM'
- Credential = $cred
-}
-$vmimage = @{
- PublisherName = 'Debian'
- Offer = 'debian-11'
- Skus = '11'
- Version = 'latest'
-}
-$vmConfig = New-AzVMConfig @vmsz `
- | Set-AzVMOperatingSystem @vmos -Linux `
- | Set-AzVMSourceImage @vmimage `
- | Add-AzVMNetworkInterface -Id $nicVM.Id
-
-## Create the virtual machine for VMs ##
-$vm = @{
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- VM = $vmConfig
- SshKeyName = 'mySSHKey'
- }
-New-AzVM @vm -GenerateSshKey
-```
-
-## Test SSH connection
-
-Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to display the IP addresses of the virtual machine.
-
-```azurepowershell-interactive
-$ip4 = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPublicIP-IPv4'
-}
-Get-AzPublicIPAddress @ip4 | select IpAddress
-```
-
-```azurepowershell-interactive
-PS /home/user> Get-AzPublicIPAddress @ip4 | select IpAddress
-
-IpAddress
-
-20.72.115.187
-```
-
-```azurepowershell-interactive
-$ip6 = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPublicIP-IPv6'
-}
-Get-AzPublicIPAddress @ip6 | select IpAddress
-```
-
-```azurepowershell-interactive
-PS /home/user> Get-AzPublicIPAddress @ip6 | select IpAddress
-
-IpAddress
-
-2603:1030:403:3::1ca
-```
-
-Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine.
-
-```azurepowershell-interactive
-ssh azureuser@20.72.115.187
-```
-
-## Clean up resources
-
-When no longer needed, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, virtual machine, and all related resources.
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name 'myResourceGroup'
-```
-
-## Next steps
-
-In this article, you learned how to create an Azure Virtual machine with a dual-stack network.
-
-For more information about IPv6 and IP addresses in Azure, see:
-
-- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)
-
-- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-network Monitor Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network-reference.md
Title: Monitoring Azure virtual network data reference
-description: Important reference material needed when you monitor Azure virtual network
-
+ Title: Monitoring data reference for Azure Virtual Network
+description: This article contains important reference material you need when you monitor Azure Virtual Network by using Azure Monitor.
Last updated : 07/21/2024+ + -- Previously updated : 06/29/2021+
-# Monitoring Azure virtual network data reference
+# Azure Virtual Network monitoring data reference
++
+See [Monitor Azure Virtual Network](monitor-virtual-network.md) for details on the data you can collect for Virtual Network and how to use it.
++
+### Supported metrics for Microsoft.Network/virtualNetworks
+
+The following table lists the metrics available for the Microsoft.Network/virtualNetworks resource type.
+++
+### Supported metrics for Microsoft.Network/networkInterfaces
+
+The following table lists the metrics available for the Microsoft.Network/networkInterfaces resource type.
+++
+### Supported metrics for Microsoft.Network/publicIPAddresses
-See [Monitoring Azure virtual network](monitor-virtual-network.md) for details on collecting and analyzing monitoring data for Azure virtual networks.
+The following table lists the metrics available for the Microsoft.Network/publicIPAddresses resource type.
-## Metrics
-This section lists all the automatically collected platform metrics collected for Azure virtual network.
-| Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Virtual network | [Microsoft.Network/virtualNetworks](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) |
-| Network interface | [Microsoft.Network/networkInterfaces](../azure-monitor/essentials/metrics-supported.md#microsoftnetworknetworkinterfaces) |
-| Public IP address | [Microsoft.Network/publicIPAddresses](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) |
-| NAT gateways | [Microsoft.Network/natGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses)
+### Supported metrics for Microsoft.Network/natGateways
-For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+The following table lists the metrics available for the Microsoft.Network/natGateways resource type.
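
If you want to check programmatically which of these metrics a given resource emits, one option is to list its metric definitions with Azure PowerShell. The following is a minimal sketch, assuming the Az.Network and Az.Monitor modules are installed and that a virtual network named myVNet exists in myResourceGroup (both names are illustrative):

```azurepowershell-interactive
# List the metric definitions exposed by an existing virtual network resource.
$vnetId = (Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup').Id
Get-AzMetricDefinition -ResourceId $vnetId
```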
-## Metric dimensions
-For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
-Azure virtual network has the following dimensions associated with its metrics.
-### Dimensions for NAT gateway
-| Dimension Name | Description |
-| - | -- |
-| **Direction (Out - In)** | The direction of traffic flow. The supported values are In and Out. |
-| **Protocol** | The type of transport protocol. The supported values are TCP and UDP. |
+Dimensions for Microsoft.Network/virtualNetworks:
-## Resource logs
+| Dimension name | Description |
+|:|:|
+| DestinationCustomerAddress | |
+| ProtectedIPAddress | |
+| SourceCustomerAddress | |
-This section lists the types of resource logs you can collect for resources used with Azure virtual network.
+Dimensions for Microsoft.Network/networkInterfaces:
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+None.
-|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Network security group | [Microsoft.Network/networksecuritygroups](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworknetworksecuritygroups) |
-| Public IP address | [Microsoft.Network/publicIPAddresses](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) |
+Dimensions for Microsoft.Network/publicIPAddresses:
-## Azure Monitor logs tables
+| Dimension name | Description |
+|:|:|
+| Direction | |
+| Port | |
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure virtual network and available for query by Log Analytics.
+Dimensions for Microsoft.Network/natGateways:
-|Resource Type | Notes |
-|-|--|
-| Virtual network | [Microsoft.Network/virtualNetworks](/azure/azure-monitor/reference/tables/tables-resourcetype#virtual-networks) |
-| Network interface | [Microsoft.Network/networkInterface](/azure/azure-monitor/reference/tables/tables-resourcetype#network-interfaces) |
-| Public IP address | [Microsoft.Network/publicIP](/azure/azure-monitor/reference/tables/tables-resourcetype#public-ip-addresses) |
+| Dimension name | Description |
+|:|:|
+| Direction | The direction of traffic flow. The supported values are `In` and `Out`. |
+| Protocol | The type of transport protocol. The supported values are `TCP` and `UDP`. |
+| ConnectionState | |
-### Diagnostics tables
-**Virtual network**
+### Supported resource logs for Microsoft.Network/networksecuritygroups
-Azure virtual network doesn't have diagnostic logs.
-## Activity log
+### Supported resource logs for Microsoft.Network/publicIPAddresses
-The following table lists the operations related to Azure virtual network that may be created in the Activity log.
+
+### Supported resource logs for Microsoft.Network/virtualNetworks
+++
+### Virtual Network Microsoft.Network/virtualNetworks
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
+
+### Virtual Network Microsoft.Network/networkinterfaces
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
+
+### Virtual Network Microsoft.Network/PublicIpAddresses
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
++
+- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork)
+
+The following table lists the operations related to Azure virtual network that might be created in the Activity log.
| Operation | Description |
-|:|:|
-| All administrative operations | All administrative operations including create, update and delete of an Azure virtual network. |
+|:-|:|
+| All administrative operations | All administrative operations including create, update, and delete of an Azure virtual network. |
| Create or update virtual network | A virtual network was created or updated. |
-| Deletes virtual network | A virtual network was deleted.|
-
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+| Deletes virtual network | A virtual network was deleted.|
-## See also
+## Related content
-- See [Monitoring Azure virtual network](monitor-virtual-network.md) for a description of monitoring Azure virtual network.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure Virtual Network](monitor-virtual-network.md) for a description of monitoring Virtual Network.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
virtual-network Monitor Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network.md
Title: Monitoring Azure virtual networks
-description: Start here to learn how to monitor Azure virtual networks
-
+ Title: Monitor Azure Virtual Network
+description: Start here to learn how to monitor Azure virtual networks by using Azure Monitor.
Last updated : 07/21/2024++ -- Previously updated : 06/29/2021
-# Monitoring Azure virtual network
+# Monitor Azure Virtual Network
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure virtual network. Azure virtual network uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-## Monitoring data
+For more information about the resource types for Virtual Network, see [Azure Virtual Network monitoring data reference](monitor-virtual-network-reference.md).
-Azure virtual network collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
-See [Monitoring Azure virtual network data reference](monitor-virtual-network-reference.md) for detailed information on the metrics and logs metrics created by Azure virtual network.
-## Collection and routing
-
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure virtual network* are listed in [Azure virtual network monitoring data reference](monitor-virtual-network-reference.md#resource-logs).
+For a list of available metrics for Virtual Network, see [Azure Virtual Network monitoring data reference](monitor-virtual-network-reference.md#metrics).
> [!IMPORTANT]
-> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
-
-The metrics and logs you can collect are discussed in the following sections.
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which might increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
-## Analyzing metrics
+### Analyzing metrics
Azure Monitor currently doesn't support analyzing *Azure virtual network* metrics from the metrics explorer. To view *Azure virtual network* metrics, select **Metrics** under **Monitoring** from the virtual network you want to analyze.

:::image type="content" source="./media/monitor-virtual-network/metrics.png" alt-text="Screenshot of the metrics dashboard for Virtual Network." lightbox="./media/monitor-virtual-network/metrics-expanded.png":::
-For a list of the platform metrics collected for Azure virtual network, see [Monitoring Azure virtual network data reference metrics](monitor-virtual-network-reference.md#metrics).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+For more information, see [Monitor and visualize network configurations with Azure Network Policy Manager](kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
-## Analyzing logs
-Azure virtual network doesn't support resource logs.
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Virtual Network, see [Azure Virtual Network monitoring data reference](monitor-virtual-network-reference.md#resource-logs).
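
One way to route these resource logs to a Log Analytics workspace is to create a diagnostic setting with Azure PowerShell. The following sketch isn't part of this article's procedures; it assumes a recent Az.Monitor module and that the public IP address and workspace it references already exist (all names are illustrative):

```azurepowershell-interactive
# Send all resource logs and metrics for a public IP address to a Log Analytics workspace.
$pipId = (Get-AzPublicIpAddress -Name 'myPublicIP-IPv4' -ResourceGroupName 'myResourceGroup').Id
$workspaceId = (Get-AzOperationalInsightsWorkspace -Name 'myWorkspace' -ResourceGroupName 'myResourceGroup').ResourceId

$log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup 'allLogs'
$metric = New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category 'AllMetrics'

New-AzDiagnosticSetting -Name 'send-to-workspace' -ResourceId $pipId -WorkspaceId $workspaceId -Log $log -Metric $metric
```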
-For a list of the types of resource logs collected for resources in a virtual network, see [Monitoring virtual network data reference](monitor-virtual-network-reference.md#resource-logs)
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
++
+### Virtual Network alert rules
+
+The following table lists some suggested alert rules for Virtual Network. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Virtual Network monitoring data reference](monitor-virtual-network-reference.md).
The following table lists common and recommended activity alert rules for Azure virtual network.

| Alert type | Condition | Description |
|:|:|:|
-| Create or Update Virtual Network | Event Level: All selected, Status: All selected, Event initiated by: All services and users | When a user creates or makes configuration changes to the virtual network. |
-| Delete Virtual Network | Event Level: All selected, Status: Started | When a user deletes a virtual network. |
+| Create or Update Virtual Network | Event Level: All selected, Status: All selected, Event initiated by: All services and users | When a user creates or makes configuration changes to the virtual network |
+| Delete Virtual Network | Event Level: All selected, Status: Started | When a user deletes a virtual network |
+
-## Next steps
+## Related content
-* See [Monitoring virtual network data reference](monitor-virtual-network-reference.md) for a reference of the metrics, logs, and other important values created by Azure virtual network.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/overview.md) for details on monitoring Azure resources.
+- See [Azure Virtual Network monitoring data reference](monitor-virtual-network-reference.md) for a reference of the metrics, logs, and other important values created for Virtual Network.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
vpn-gateway About Active Active Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-active-active-gateways.md
+
+ Title: 'About active-active VPN gateways'
+
+description: Learn about active-active VPN gateways, including configuration and design.
+++ Last updated : 07/22/2024+++
+# About active-active mode VPN gateways
+
+Azure VPN gateways can be configured as active-standby or active-active. This article helps you better understand active-active gateway configurations and why you might want to create a gateway in active-active mode.
+
+## Why create an active-active gateway?
+
+VPN gateways consist of two instances in an active-standby configuration unless you specify active-active mode. In active-standby mode, for any planned maintenance or unplanned disruption that happens to the active instance, behavior is as follows:
+
+* **S2S and VNet-to-VNet**: The standby instance takes over automatically (failover) and resumes the site-to-site (S2S) VPN or VNet-to-VNet connections. This switchover causes a brief interruption. For planned maintenance, connectivity is restored quickly. For unplanned issues, the connection recovery takes longer.
+* **P2S**: For point-to-site (P2S) VPN client connections to the gateway, P2S connections are disconnected. Users need to reconnect from the client machines.
+
+To avoid this interruption, you can always create your gateway in **active-active** mode, or you can change an active-standby gateway to active-active.
+
+### Active-active design
+
+In an active-active configuration, both instances of the gateway VMs establish S2S VPN tunnels to your on-premises VPN device, as shown in the following diagram:
++
+In this configuration, each Azure gateway instance has a unique public IP address, and each will establish an IPsec/IKE S2S VPN tunnel to your on-premises VPN device specified in your local network gateway and connection. Both VPN tunnels are actually part of the same connection. You'll still need to configure your on-premises VPN device to accept or establish two S2S VPN tunnels to those two Azure VPN gateway public IP addresses.
+
+Because the Azure gateway instances are in an active-active configuration, the traffic from your Azure virtual network to your on-premises network is routed through both tunnels simultaneously, even if your on-premises VPN device might favor one tunnel over the other. For a single TCP or UDP flow, Azure attempts to use the same tunnel when sending packets to your on-premises network. However, your on-premises network could use a different tunnel to send packets to Azure.
+
+When planned maintenance or an unplanned event happens to one gateway instance, the IPsec tunnel from that instance to your on-premises VPN device is disconnected. The corresponding routes on your VPN devices should be removed or withdrawn automatically so that the traffic switches over to the other active IPsec tunnel. On the Azure side, the switchover happens automatically from the affected instance to the other active instance.
+
+> [!NOTE]
+> If only one tunnel is connected, or both the tunnels are connected to one instance in active-active mode, the tunnel will go down during maintenance.
+
+### Dual-redundancy active-active design
+
+The most reliable design option is to combine the active-active gateways on both your network and Azure, as shown in the following diagram.
++
+In this configuration, you create and set up the Azure VPN gateway in an active-active configuration, and create two local network gateways and two connections for your two on-premises VPN devices. The result is a full mesh connectivity of four IPsec tunnels between your Azure virtual network and your on-premises network.
+
+All gateways and tunnels are active from the Azure side, so traffic is spread across all four tunnels simultaneously, although each TCP or UDP flow follows the same tunnel or path from the Azure side. Although spreading the traffic might yield slightly better throughput over the IPsec tunnels, the primary goal of this configuration is high availability. Because of the statistical nature of the spreading, it's difficult to measure how different application traffic conditions affect the aggregate throughput.
+
+This topology requires two local network gateways and two connections to support the pair of on-premises VPN devices. For more information, see [About highly available connectivity](vpn-gateway-highlyavailable.md).
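
To make the Azure side of this topology concrete, here's a minimal Azure PowerShell sketch of the two local network gateways and two connections. It assumes an existing active-active virtual network gateway named myVpnGateway; the names, IP addresses, address prefixes, and shared key are illustrative placeholders:

```azurepowershell-interactive
# Existing active-active VPN gateway (created separately).
$gw = Get-AzVirtualNetworkGateway -Name 'myVpnGateway' -ResourceGroupName 'myResourceGroup'

# One local network gateway per on-premises VPN device.
$lng1 = New-AzLocalNetworkGateway -Name 'onprem-device1' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -GatewayIpAddress '203.0.113.10' -AddressPrefix '10.10.0.0/16'
$lng2 = New-AzLocalNetworkGateway -Name 'onprem-device2' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -GatewayIpAddress '203.0.113.20' -AddressPrefix '10.10.0.0/16'

# One connection per on-premises device. Combined with the active-active gateway,
# this yields the full mesh of four IPsec tunnels.
New-AzVirtualNetworkGatewayConnection -Name 'to-device1' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng1 `
    -ConnectionType IPsec -SharedKey 'abc123'
New-AzVirtualNetworkGatewayConnection -Name 'to-device2' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng2 `
    -ConnectionType IPsec -SharedKey 'abc123'
```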
+
+## Configure an active-active gateway
+
+You can configure an active-active gateway using the [Azure portal](tutorial-create-gateway-portal.md), PowerShell, or CLI. You can also change an active-standby gateway to active-active mode. For steps, see [Change a gateway to active-active](gateway-change-active-active.md).
+
+An active-active gateway has slightly different configuration requirements than an active-standby gateway, as illustrated in the sketch after the following list.
+
+* You can't configure an active-active gateway using the Basic gateway SKU.
+* The VPN must be route based. It can't be policy based.
+* Two public IP addresses are required. Both must be **Standard SKU** public IP addresses that are assigned as **Static**.
+* An active-active gateway configuration costs the same as an active-standby configuration. However, active-active configurations require two public IP addresses instead of one. See [IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
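
As a rough illustration of these requirements, the following Azure PowerShell sketch creates a route-based gateway in active-active mode. It assumes an existing virtual network named myVNet in myResourceGroup that already contains a subnet named GatewaySubnet; the names, region, and VpnGw2 SKU are illustrative:

```azurepowershell-interactive
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet

# Two Standard SKU public IP addresses with static allocation - one per gateway instance.
$pip1 = New-AzPublicIpAddress -Name 'gw-pip1' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -Sku Standard -AllocationMethod Static
$pip2 = New-AzPublicIpAddress -Name 'gw-pip2' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus2' -Sku Standard -AllocationMethod Static

$ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconf1' -SubnetId $subnet.Id -PublicIpAddressId $pip1.Id
$ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconf2' -SubnetId $subnet.Id -PublicIpAddressId $pip2.Id

# Route-based VPN gateway with active-active mode enabled.
New-AzVirtualNetworkGateway -Name 'myVpnGateway' -ResourceGroupName 'myResourceGroup' -Location 'eastus2' `
    -IpConfigurations $ipconf1,$ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2 `
    -EnableActiveActiveFeature
```

Gateway creation can take 45 minutes or more to complete.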
+
+## Reset an active-active gateway
+
+If you need to reset an active-active gateway, you can reset both instances by using the portal. You can also use PowerShell or CLI to reset each gateway instance separately by using the instance VIPs. See [Reset a connection or a gateway](reset-gateway.md#ps).
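
For example, with Azure PowerShell you might reset a single instance by passing its instance VIP; the gateway name and VIP below are illustrative:

```azurepowershell-interactive
# Reset only the gateway instance that owns this VIP.
$gw = Get-AzVirtualNetworkGateway -Name 'myVpnGateway' -ResourceGroupName 'myResourceGroup'
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewayVip '203.0.113.10'
```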
+
+## Next steps
+
+* [Configure an active-active gateway - Azure portal](tutorial-create-gateway-portal.md)
+* [Change a gateway to active-active mode](gateway-change-active-active.md)
+* [Reset an active-active gateway](reset-gateway.md#ps)
vpn-gateway Point To Site Entra Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-gateway.md
Previously updated : 05/28/2024 Last updated : 07/24/2024 # Customer intent: As an VPN Gateway administrator, I want to configure point-to-site to allow Microsoft Entra ID authentication using the Microsoft-registered Azure VPN Client APP ID.
-# Configure P2S VPN Gateway for Microsoft Entra ID authentication ΓÇô Microsoft-registered app (Preview)
+# Configure P2S VPN Gateway for Microsoft Entra ID authentication ΓÇô Microsoft-registered app
This article helps you configure your point-to-site (P2S) VPN gateway for Microsoft Entra ID authentication using the new Microsoft-registered Azure VPN Client App ID.
vpn-gateway Point To Site Entra Vpn Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-vpn-client-mac.md
description: Learn how to configure macOS client computers to connect to Azure u
Previously updated : 05/28/2024 Last updated : 07/24/2024
This article helps you configure your macOS client computer to connect to an Azu
## Prerequisites
-Configure your VPN gateway for point-to-site VPN connections that specify Microsoft Entra ID authentication. See [Configure a P2S VPN gateway for Microsoft Entra ID authentication](point-to-site-entra-gateway.md).
+Make sure you have the following prerequisites before you proceed with the steps in this article:
+
+* Configure your VPN gateway for point-to-site VPN connections that specify Microsoft Entra ID authentication. See [Configure a P2S VPN gateway for Microsoft Entra ID authentication](point-to-site-entra-gateway.md).
+* If your device runs macOS on an Apple M1 or M2 chip, you must install Rosetta software if it isn't already installed on the device. For more information, see the [Apple support article](https://support.apple.com/en-us/HT211861). A sample install command follows this list.
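
If Rosetta isn't installed yet, one way to add it is from the Terminal; this is a general macOS command rather than an Azure-specific step, so defer to the Apple article above if they differ:

```bash
# Install Rosetta 2 and accept the license non-interactively.
softwareupdate --install-rosetta --agree-to-license
```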
## Workflow
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
To help configure your VPN device, refer to the links that correspond to the app
| Citrix |NetScaler MPX, SDX, VPX |10.1 and later |[Configuration guide](https://docs.citrix.com/en-us/netscaler/11-1/system/cloudbridge-connector-introduction/cloudbridge-connector-azure.html) |Not compatible | | F5 |BIG-IP series |12.0 |[Configuration guide](https://community.f5.com/t5/technical-articles/connecting-to-windows-azure-with-the-big-ip/ta-p/282476) |[Configuration guide](https://community.f5.com/t5/technical-articles/big-ip-to-azure-dynamic-ipsec-tunneling/ta-p/282665) | | Fortinet |FortiGate |FortiOS 5.6 | Not tested |[Configuration guide](https://docs.fortinet.com/document/fortigate/5.6.0/cookbook/255100/ipsec-vpn-to-azure) |
-| Fujitsu | Si-R G series | V04: V04.12<br>V20: V20.14 | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) |
+| Fsas Technologies | Si-R G series | V04: V04.12<br>V20: V20.14 | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) | [Configuration guide](https://www.fujitsu.com/jp/products/network/router/sir/example/#cloud00) |
| Hillstone Networks | Next-Gen Firewalls (NGFW) | 5.5R7 | Not tested | [Configuration guide](https://www.hillstonenet.com/wp-content/uploads/How-to-setup-Site-to-Site-VPN-between-Microsoft-Azure-and-an-on-premise-Hillstone-Networks-Security-Gateway.pdf) | | HPE Aruba | EdgeConnect SDWAN Gateway | ECOS Release v9.2<br>Orchestrator OS v9.2 | [Configuration guide](https://www.arubanetworks.com/website/techdocs/sdwan-PDFs/integrations/int_Azure-EC-IPSec_latest.pdf) | [Configuration guide](https://www.arubanetworks.com/website/techdocs/sdwan-PDFs/integrations/int_Azure-EC-IPSec_latest.pdf)| | Internet Initiative Japan (IIJ) |SEIL Series |SEIL/X 4.60<br>SEIL/B1 4.60<br>SEIL/x86 3.20 |[Configuration guide](https://www.iij.ad.jp/biz/seil/ConfigAzureSEILVPN.pdf) |Not compatible |