Updates from: 01/11/2024 02:13:17
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
+
+ Title: GPT-4 Turbo with Vision concepts
+
+description: Learn about vision chats enabled by GPT-4 Turbo with Vision.
+Last updated: 01/02/2024
+keywords:
++
+# GPT-4 Turbo with Vision concepts
+
+GPT-4 Turbo with Vision is a large multimodal model (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. It incorporates both natural language processing and visual understanding. This guide provides details on the capabilities and limitations of GPT-4 Turbo with Vision.
+
+To try out GPT-4 Turbo with Vision, see the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).
+
+## Chats with vision
+
+The GPT-4 Turbo with Vision model answers general questions about what's present in the images or videos you upload.
++
+## Enhancements
+
+Enhancements let you incorporate other Azure AI services (such as Azure AI Vision) to add new functionality to the chat-with-vision experience.
+
+**Object grounding**: Azure AI Vision complements GPT-4 Turbo with Vision's text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image.
+++
+**Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text.
+++
+**Video prompt**: The **video prompt** enhancement lets you use video clips as input for AI chat, enabling the model to generate summaries and answers about video content. It uses Azure AI Vision Video Retrieval to sample a set of frames from a video and create a transcript of the speech in the video.
+
+In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in addition to your Azure OpenAI resource.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf]
++
+## Special pricing information
+
+> [!IMPORTANT]
+> Pricing details are subject to change in the future.
+
+GPT-4 Turbo with Vision accrues charges like other Azure OpenAI chat models. You pay a per-token rate for the prompts and completions, detailed on the [Pricing page](/pricing/details/cognitive-services/openai-service/). The base charges and additional features are outlined here:
+
+Base Pricing for GPT-4 Turbo with Vision is:
+- Input: $0.01 per 1000 tokens
+- Output: $0.03 per 1000 tokens
+
+See the [Tokens section of the overview](/azure/ai-services/openai/overview#tokens) for information on how text and images translate to tokens.
+
+Additionally, if you use video prompt integration with the Video Retrieval add-on, it accrues other costs:
+- Ingestion: $0.05 per minute of video
+- Transactions: $0.25 per 1000 queries of the Video Retrieval index
+
+Processing videos uses extra tokens to identify key frames for analysis. The number of these additional tokens is roughly equal to the number of tokens in the text input plus 700.
+
+### Example price calculation
+
+> [!IMPORTANT]
+> The following content is an example only, and prices are subject to change in the future.
+
+For a typical use case, take a 3-minute video with a 100-token prompt input. The video has a transcript that's 100 tokens long, and when the service processes the prompt, it generates 100 tokens of output. The pricing for this transaction would be:
+
+| Item | Detail | Total Cost |
+|--|--|--|
+| GPT-4 Turbo with Vision input tokens | 100 text tokens | $0.001 |
+| Additional Cost to identify frames | 100 input tokens + 700 tokens + 1 Video Retrieval transaction | $0.00825 |
+| Image Inputs and Transcript Input | 20 images (85 tokens each) + 100 transcript tokens | $0.018 |
+| Output Tokens | 100 tokens (assumed) | $0.003 |
+| **Total Cost** | | **$0.03025** |
+
+Additionally, there's a one-time indexing cost of $0.15 to generate the Video Retrieval index for this 3-minute video. This index can be reused across any number of Video Retrieval and GPT-4 Turbo with Vision API calls.
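+
+The table's arithmetic can be reproduced in a few lines of code. The following Python sketch is an example only: the rates are the sample prices listed above and are subject to change.
+
+```python
+# Example cost estimate for the 3-minute video scenario above.
+# Rates are the sample prices from this article and may change.
+INPUT_RATE = 0.01 / 1000            # $ per input token
+OUTPUT_RATE = 0.03 / 1000           # $ per output token
+VIDEO_RETRIEVAL_RATE = 0.25 / 1000  # $ per Video Retrieval transaction
+
+prompt_tokens = 100
+transcript_tokens = 100
+output_tokens = 100
+frames = 20             # frames sampled from the video
+tokens_per_frame = 85
+
+input_cost = prompt_tokens * INPUT_RATE
+# Identifying key frames uses roughly (text input tokens + 700) extra tokens,
+# plus one Video Retrieval transaction.
+frame_id_cost = (prompt_tokens + 700) * INPUT_RATE + VIDEO_RETRIEVAL_RATE
+image_and_transcript_cost = (frames * tokens_per_frame + transcript_tokens) * INPUT_RATE
+output_cost = output_tokens * OUTPUT_RATE
+
+total = input_cost + frame_id_cost + image_and_transcript_cost + output_cost
+print(f"${total:.5f}")  # $0.03025, matching the table above
+```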
+
+## Limitations
+
+This section describes the limitations of GPT-4 Turbo with Vision.
+
+### Image support
+
+- **Limitation on image enhancements per chat session**: Enhancements cannot be applied to multiple images within a single chat call.
+- **Maximum input image size**: The maximum size for input images is restricted to 20 MB.
+- **Object grounding in enhancement API**: When the enhancement API is used for object grounding, and the model detects duplicates of an object, it will generate one bounding box and label for all the duplicates instead of separate ones for each.
+- **Low resolution accuracy**: Analyzing images with the "low resolution" setting allows for faster responses and uses fewer input tokens for certain use cases, but it could affect the accuracy of object and text recognition within the image.
+- **Image chat restriction**: When you upload images in Azure OpenAI Studio or the API, there is a limit of 10 images per chat call.
+
+### Video support
+
+- **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
+- **Video file limits**: Both MP4 and MOV file types are supported. In Azure OpenAI Studio, videos must be less than 3 minutes long. When you use the API there is no such limitation.
+- **Prompt limits**: Video prompts only contain one video and no images. In Azure OpenAI Studio, you can clear the session to try another video or images.
+- **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
+- **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
+
+## Next steps
+
+- Get started using GPT-4 Turbo with Vision by following the [quickstart](/azure/ai-services/openai/gpt-v-quickstart).
+- For a more in-depth look at the APIs, and to use video prompts in chat, follow the [how-to guide](../how-to/gpt-with-vision.md).
+- See the [completions and embeddings API reference](../reference.md).
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
We introduced a new deployment type called **ProvisionedManaged** which provides
Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level, meaning that it can be consumed by different resources within that subscription.
-Quota is specific to a (deployment type, mode, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move the quota across deployment types, models, or regions but we can't guarantee that it will be possible.
+Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable, meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move quota across deployment types, models, or regions, but we can't guarantee that it will be possible.
While we make every attempt to ensure that quota is always deployable, quota does not represent a guarantee that the underlying capacity is available for the customer to use. The service assigns capacity to the customer at deployment time and if capacity is unavailable the deployment will fail with an out of capacity error.
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
The format is similar to that of the chat completions API for GPT-4, but the message content can be an array containing strings and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
-You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which each have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource. Remember to set a `"max_tokens"` value, or the return output will be cut off.
+You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource. Remember to set a `"max_tokens"` value, or the return output will be cut off.
```json {
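
As a rough illustration of the request described above, the following Python sketch assembles the payload. The URL path, the api-version value, and the exact shape of the content items are assumptions here, not confirmed by this article; check the REST reference for the current contract.

```python
import os
import requests

# Assumption: the extensions chat completions path and a preview api-version.
resource = os.environ["RESOURCE_NAME"]
deployment = os.environ["DEPLOYMENT_NAME"]
url = (
    f"https://{resource}.openai.azure.com/openai/deployments/{deployment}"
    "/extensions/chat/completions?api-version=2023-12-01-preview"
)

body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture:"},
                # An image can be an HTTP(S) URL or a base-64-encoded image.
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    "enhancements": {
        "grounding": {"enabled": True},
        "ocr": {"enabled": True},
    },
    "dataSources": [
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": os.environ["COMPUTER_VISION_ENDPOINT"],
                "key": os.environ["COMPUTER_VISION_KEY"],
            },
        }
    ],
    "max_tokens": 500,  # set this, or the returned output may be cut off
}

response = requests.post(url, headers={"api-key": os.environ["AZURE_OPENAI_KEY"]}, json=body)
print(response.json())
```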
GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored
> To use Vision enhancement, you need a Computer Vision resource, and it must be in the same Azure region as your GPT-4 Turbo with Vision resource.

> [!CAUTION]
-> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges.
+> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
Follow these steps to set up a video retrieval system and integrate it with your AI chat model:

1. Get an Azure AI Vision resource in the same region as the Azure OpenAI resource you're using.
Base Pricing for GPT-4 Turbo with Vision is:
Video prompt integration with Video Retrieval Add-on:
- Ingestion: $0.05 per minute of video
-- Transactions: $0.25 per 1000 queries of the Video Retrieval index
-
-Processing videos will involve the use of extra tokens to identify key frames for analysis. The number of these additional tokens will be roughly equivalent to the sum of the tokens in the text input plus 700 tokens.
-
-#### Calculation
-For a typical use case let's imagine that I have use a 3-minute video with a 100-token prompt input. The section of video has a transcript that's 100-tokens long and when I process the prompt, I generate 100-tokens of output. The pricing for this transaction would be as follows:
-
-| Item | Detail | Total Cost |
-|-||--|
-| GPT-4 Turbo with Vision Input Tokens | 100 text tokens | $0.001 |
-| Additional Cost to identify frames | 100 input tokens + 700 tokens + 1 Video Retrieval txn | $0.00825 |
-| Image Inputs and Transcript Input | 20 images (85 tokens each) + 100 transcript tokens | $0.018 |
-| Output Tokens | 100 tokens (assumed) | $0.003 |
-| **Total Cost** | | **$0.03025** |
-
-Additionally, there's a one-time indexing cost of $0.15 to generate the Video Retrieval index for this 3-minute segment of video. This index can be reused across any number of Video Retrieval and GPT-4 Turbo with Vision calls.
-
-## Limitations
-
-### Image support
-
-- **Limitation on image enhancements per chat session**: Enhancements cannot be applied to multiple images within a single chat call.
-- **Maximum input image size**: The maximum size for input images is restricted to 20 MB.
-- **Object grounding in enhancement API**: When the enhancement API is used for object grounding, and the model detects duplicates of an object, it will generate one bounding box and label for all the duplicates instead of separate ones for each.
-- **Low resolution accuracy**: When images are analyzed using the "low resolution" setting, it allows for faster responses and uses fewer input tokens for certain use cases. However, this could impact the accuracy of object and text recognition within the image.
-- **Image chat restriction**: When uploading images in the chat playground or the API, there is a limit of 10 images per chat call.
-
-### Video support
-
-- **Low resolution**: Video frames are analyzed using GPT-4 Turbo with Vision's "low resolution" setting, which may affect the accuracy of small object and text recognition in the video.
-- **Video file limits**: Both MP4 and MOV file types are supported. In the Azure AI Playground, videos must be less than 3 minutes long. When you use the API there is no such limitation.
-- **Prompt limits**: Video prompts only contain one video and no images. In Playground, you can clear the session to try another video or images.
-- **Limited frame selection**: The service selects 20 frames from the entire video, which might not capture all the critical moments or details. Frame selection can be approximately evenly spread through the video or focused by a specific video retrieval query, depending on the prompt.
-- **Language support**: The service primarily supports English for grounding with transcripts. Transcripts don't provide accurate information on lyrics in songs.
+- Transactions: $0.25 per 1000 queries of the Video Retrieval indexer
## Next steps
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
+
+ Title: 'Quickstart - Get started using Provisioned Deployments with Azure OpenAI Service'
+
+description: Walkthrough on how to get started with provisioned deployments on Azure OpenAI Service.
+Last updated: 12/15/2023
+recommendations: false
++
+# Get started using Provisioned Deployments on the Azure OpenAI Service
+
+The following guide walks you through setting up a provisioned deployment with your Azure OpenAI Service resource.
+
+## Prerequisites
+
+- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
+- Access granted to Azure OpenAI in the desired Azure subscription.
+ Currently, access to this service is by application. You can apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access?azure-portal=true).
+- Obtained quota for a provisioned deployment and purchased a commitment.
+
+> [!NOTE]
+> Provisioned Throughput Units (PTU) are different from standard quota in Azure OpenAI and are not available by default. To learn more about this offering contact your Microsoft Account Team.
++
+## Create your provisioned deployment
+
+After you purchase a commitment on your quota, you can create a deployment. To create a provisioned deployment, you can follow these steps; the choices described reflect the entries shown in the screenshot.
++++
+1. Sign in to [Azure OpenAI Studio](https://oai.azure.com).
+2. Choose the subscription that was enabled for provisioned deployments, and select the desired resource in a region where you have the quota.
+3. Under **Management** in the left-nav select **Deployments**.
+4. Select **Create new deployment** and configure the following fields. Expand the 'advanced options' drop-down.
+5. Fill out the values in each field. Here's an example:
+
+| Field | Description | Example |
+|--|--|--|
+| Select a model| Choose the specific model you wish to deploy. | GPT-4 |
+| Model version | Choose the version of the model to deploy. | 0613 |
+| Deployment Name | The deployment name is used in your code to call the model by using the client libraries and the REST APIs. | gpt-4|
| Content filter | Specify the filtering policy to apply to the deployment. Learn more in our [Content Filtering](../concepts/content-filter.md) how-to. | Default |
| Deployment Type | This impacts the throughput and performance. Choose Provisioned-Managed for your provisioned deployment. | Provisioned-Managed |
+| Provisioned Throughput Units | Choose the amount of throughput you wish to include in the deployment. | 100 |
++
+If you wish to create your deployment programmatically, you can do so with the following Azure CLI command. Update the `sku-capacity` with the desired number of provisioned throughput units.
+
+```cli
+az cognitiveservices account deployment create \
+--name <myResourceName> \
+--resource-group <myResourceGroupName> \
+--deployment-name MyModel \
+--model-name GPT-4 \
+--model-version 0613 \
+--model-format OpenAI \
+--sku-capacity 100 \
+--sku-name Provisioned-Managed
+```
+
+REST, ARM template, Bicep and Terraform can also be used to create deployments. See the section on automating deployments in the [Managing Quota](https://learn.microsoft.com/azure/ai-services/openai/how-to/quota?tabs=rest#automate-deployment) how-to guide and replace the `sku.name` with "Provisioned-Managed" rather than "Standard."
+
+## Make your first calls
+The inferencing code for provisioned deployments is the same as for the standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. If this is your first time using these models programmatically, we recommend starting with our [quickstart guide](../quickstart.md). We also recommend using the OpenAI library version 1.0 or greater, because it includes retry logic within the library.
++
+```python
+ #Note: The openai-python library support for Azure OpenAI is in preview.
+ import os
+ from openai import AzureOpenAI
+
+ client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-05-15"
+ )
+
+ response = client.chat.completions.create(
+ model="gpt-4", # model = "deployment_name".
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+ )
+
+ print(response.choices[0].message.content)
+```
+
+> [!IMPORTANT]
+> For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). For more information about credential security, see the Azure AI services [security](../../security-features.md) article.
++
+## Understanding expected throughput
+The amount of throughput that you can achieve on the endpoint is a function of the number of PTUs deployed, input size, output size, and call rate. The number of concurrent calls and total tokens processed can vary based on these values. Our recommended way for determining the throughput for your deployment is as follows:
+1. Use the Capacity calculator for a sizing estimate. You can find the capacity calculator in the Azure OpenAI Studio under the quotas page and Provisioned tab.
+2. Benchmark the load using real traffic workload. For more information about benchmarking, see the [benchmarking](#run-a-benchmark) section.
++
+## Measuring your deployment utilization
+When you deploy a specified number of provisioned throughput units (PTUs), a set amount of inference throughput is made available to that endpoint. Utilization of this throughput is a complex formula based on the model, model version, call rate, prompt size, and generation size. To simplify this calculation, we provide a utilization metric in Azure Monitor. Your deployment returns a 429 on any new calls after the utilization rises above 100%. The provisioned utilization is defined as follows:
+
+PTU deployment utilization = (PTUs consumed in the time period) / (PTUs deployed in the time period)
+
+You can find the utilization measure in the Azure Monitor section for your resource. To access the monitoring dashboards, sign in to [https://portal.azure.com](https://portal.azure.com), go to your Azure OpenAI resource, and select the Metrics page from the left nav. On the metrics page, select the 'Provisioned-managed utilization' measure. If you have more than one deployment in the resource, you should also split the values by deployment by selecting the 'Apply Splitting' button.
++
+For more information about monitoring your deployments, see the [Monitoring Azure OpenAI Service](./monitoring.md) page.
++
+## Handling high utilization
+Provisioned deployments provide you with an allocated amount of compute capacity to run a given model. The 'Provisioned-Managed Utilization' metric in Azure Monitor measures the utilization of the deployment in one-minute increments. Provisioned-Managed deployments are also optimized so that calls accepted are processed with a consistent per-call max latency. When the workload exceeds its allocated capacity, the service returns a 429 HTTP status code until the utilization drops below 100%. The time before retrying is provided in the `retry-after` and `retry-after-ms` response headers, which provide the time in seconds and milliseconds respectively. This approach maintains the per-call latency targets while giving the developer control over how to handle high-load situations, for example retry or divert to another experience/endpoint.
+
+### What should I do when I receive a 429 response?
+A 429 response indicates that the allocated PTUs are fully consumed at the time of the call. The response includes the `retry-after-ms` and `retry-after` headers that tell you the time to wait before the next call will be accepted. How you choose to handle a 429 response depends on your application requirements. Here are some considerations:
+- If you're okay with longer per-call latencies, implement client-side retry logic to wait the `retry-after-ms` time and retry; a minimal sketch follows this list. This approach lets you maximize the throughput on the deployment. Microsoft-supplied client SDKs already handle it with reasonable defaults, but you might still need further tuning for your use cases.
+- Consider redirecting the traffic to other models, deployments or experiences. This approach is the lowest-latency solution because this action can be taken as soon as you receive the 429 signal.
+The 429 signal isn't an unexpected error response when pushing to high utilization but instead part of the design for managing queuing and high load for provisioned deployments.
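+
+The following Python sketch shows one way to implement this pattern by hand. It's a minimal illustration, not the SDKs' own logic: it assumes the openai Python library version 1.x, disables the built-in retries, and reads the retry headers from the error's response.
+
+```python
+import os
+import time
+
+from openai import AzureOpenAI, RateLimitError
+
+client = AzureOpenAI(
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-05-15",
+    max_retries=0,  # disable built-in retries so this sketch can handle 429s itself
+)
+
+def chat_with_retry(messages, attempts=3):
+    for _ in range(attempts):
+        try:
+            return client.chat.completions.create(model="gpt-4", messages=messages)
+        except RateLimitError as err:
+            # Wait as long as the service suggests before retrying.
+            retry_ms = err.response.headers.get("retry-after-ms")
+            retry_s = err.response.headers.get("retry-after")
+            wait = int(retry_ms) / 1000 if retry_ms else float(retry_s or 1)
+            time.sleep(wait)
+    raise RuntimeError("Deployment still at full utilization after retries")
+```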
+
+### Modifying retry logic within the client libraries
+The Azure OpenAI SDKs retry 429 responses by default and behind the scenes in the client (up to the maximum retries). The libraries respect the `retry-after` time. You can also modify the retry behavior to better suit your scenario. Here's an example with the Python library.
++
+You can use the `max_retries` option to configure or disable retry settings:
+
+```python
+from openai import AzureOpenAI
+
+# Configure the default for all requests:
+client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-05-15",
+ max_retries=5,  # default is 2
+)
+
+# Or, configure per-request:
+client.with_options(max_retries=5).chat.completions.create(
+ model="gpt-4", # model = "deployment_name".
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+```
++
+## Run a benchmark
+The exact performance and throughput capabilities of your instance depend on the kind of requests you make and the exact workload. The best way to determine the throughput for your workload is to run a benchmark on your own data.
+
+To assist you in this work, the benchmarking tool provides a way to easily run benchmarks on your deployment. The tool comes with several possible preconfigured workload shapes and outputs key performance metrics. Learn more about the tool and configuration settings in our GitHub Repo: [https://aka.ms/aoai/benchmarking](https://aka.ms/aoai/benchmarking).
+
+We recommend the following workflow:
+1. Estimate your throughput PTUs using the capacity calculator.
+1. Run a benchmark with this traffic shape for an extended period of time (10+ min) to observe the results in a steady state.
+1. Observe the utilization, tokens processed, and call rate values from the benchmark tool and Azure Monitor.
+1. Run a benchmark with your own traffic shape and workloads using your client implementation. Be sure to implement retry logic using either an Azure OpenAI client library or custom logic.
+++
+## Next Steps
+
+* For more information on cloud application best practices, check out [Best practices in cloud applications](https://learn.microsoft.com/azure/architecture/best-practices/index-best-practices)
+* For more information on provisioned deployments, check out [What is provisioned throughput?](../concepts/provisioned-throughput.md)
+* For more information on retry logic within each SDK, check out:
+ * [Python reference documentation](https://github.com/openai/openai-python?tab=readme-ov-file#retries)
+ * [.NET reference documentation](https://learn.microsoft.com/dotnet/api/azure.ai.openai.openaiclientoptions?view=azure-dotnet-preview)
+ * [Java reference documentation](https://learn.microsoft.com/java/api/com.azure.ai.openai.openaiclientbuilder?view=azure-java-preview#com-azure-ai-openai-openaiclientbuilder-retryoptions(com-azure-core-http-policy-retryoptions))
+ * [JavaScript reference documentation](https://learn.microsoft.com/javascript/api/@azure/openai/openaiclientoptions?view=azure-node-preview#@azure-openai-openaiclientoptions-retryoptions)
+ * [Go reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai#ChatCompletionsOptions)
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
We recommend using environment variables. If you haven't done this before our [P
from openai import OpenAI

client = OpenAI(
- api_key=os.environ['OPENAI_API_KEY']
+ api_key=os.environ["OPENAI_API_KEY"]
)
client = AzureOpenAI(
from openai import OpenAI

client = OpenAI(
- api_key=os.environ['OPENAI_API_KEY']
+ api_key=os.environ["OPENAI_API_KEY"]
)
client = AzureOpenAI(
## Keyword argument for model
-OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of unique model [deployments](create-resource.md?pivots=web-portal#deploy-a-model). When using Azure OpenAI `model` should refer to the underling deployment name you chose when you deployed the model.
+OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of unique model [deployments](create-resource.md?pivots=web-portal#deploy-a-model). When using Azure OpenAI `model` should refer to the underlying deployment name you chose when you deployed the model.
+
+> [!IMPORTANT]
+> When you access the model via the API in Azure OpenAI, you need to refer to the deployment name rather than the underlying model name in API calls. This is one of the [key differences](../how-to/switching-endpoints.md) between OpenAI and Azure OpenAI. OpenAI requires only the model name; Azure OpenAI always requires the deployment name, even when using the model parameter. In our docs, we often show deployment names that are identical to model names to help indicate which model works with a particular API endpoint. Ultimately, your deployment names can follow whatever naming convention is best for your use case.
<table> <tr>
OpenAI uses the `model` keyword argument to specify what model to use. Azure Ope
```python
completion = client.completions.create(
- model='gpt-3.5-turbo-instruct',
+ model="gpt-3.5-turbo-instruct",
prompt="<prompt>") )
embedding = client.embeddings.create(
```python
completion = client.completions.create(
- model=gpt-35-turbo-instruct, # This must match the custom deployment name you chose for your model.
+ model="gpt-35-turbo-instruct", # This must match the custom deployment name you chose for your model.
prompt=<"prompt"> )
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
The batch transcription API supports a number of different formats and codecs, s
- WAV
- MP3
- OPUS/OGG
-- AAC
- FLAC
- WMA
- ALAW in WAV container
ai-services How To Custom Speech Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-display-text-format.md
Previously updated: 12/14/2023
Last updated: 1/10/2024
Here are the grammar punctuation rules:
#### Spelling correction
-The name `CVOID-19` might be recognized as `covered 19`. To make sure that `COVID-19 is a virus` is displayed instead of `covered 19 is a virus`, use the following rewrite rule:
+The name `COVID-19` might be recognized as `covered 19`. To make sure that `COVID-19 is a virus` is displayed instead of `covered 19 is a virus`, use the following rewrite rule:
```text #rewrite
ai-services Batch Synthesis Avatar Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar-properties.md
The following table describes the avatar properties.
| properties.talkingAvatarStyle | The style name of the talking avatar.<br/><br/>The supported avatar styles can be found [here](avatar-gestures-with-ssml.md#supported-pre-built-avatar-characters-styles-and-gestures).<br/><br/>This property is required for prebuilt avatar, and optional for customized avatar.|
| properties.customized | A bool value indicating whether the avatar to be used is customized avatar or not. True for customized avatar, and false for prebuilt avatar.<br/><br/>This property is optional, and the default value is `false`.|
| properties.videoFormat | The format for output video file, could be mp4 or webm.<br/><br/>The `webm` format is required for transparent background.<br/><br/>This property is optional, and the default value is mp4.|
-| properties.videoCodec | The codec for output video, could be h264, hevc or vp9.<br/><br/>Vp9 is required for transparent background.<br/><br/>This property is optional, and the default value is hevc.|
+| properties.videoCodec | The codec for output video, could be h264, hevc or vp9.<br/><br/>Vp9 is required for transparent background. The synthesis speed will be slower with vp9 codec, as vp9 encoding is slower.<br/><br/>This property is optional, and the default value is hevc.|
| properties.kBitrate (bitrateKbps) | The bitrate for output video, which is integer value, with unit kbps.<br/><br/>This property is optional, and the default value is 2000.|
| properties.videoCrop | This property allows you to crop the video output, which means, to output a rectangle subarea of the original video. This property has two fields, which define the top-left vertex and bottom-right vertex of the rectangle.<br/><br/>This property is optional, and the default behavior is to output the full video.|
| properties.videoCrop.topLeft |The top-left vertex of the rectangle for video crop. This property has two fields x and y, to define the horizontal and vertical position of the vertex.<br/><br/>This property is required when properties.videoCrop is set.|
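
Taken together, these fields describe the avatar and video output of a batch synthesis request. The following Python sketch shows how the properties from this table might be combined. The values are illustrative only, and the `bottomRight` field name is an assumption based on the description of `properties.videoCrop`.

```python
# Illustrative avatar/video properties for a batch synthesis request.
# Values are examples only; "bottomRight" is an assumed field name.
avatar_properties = {
    "talkingAvatarStyle": "graceful-sitting",
    "customized": False,     # prebuilt avatar
    "videoFormat": "webm",   # webm is required for transparent background
    "videoCodec": "vp9",     # vp9 is required for transparent background (slower to encode)
    "kBitrate": 2000,        # output bitrate in kbps (default 2000)
    "videoCrop": {
        "topLeft": {"x": 0, "y": 0},
        "bottomRight": {"x": 1280, "y": 720},  # assumption: name of the second vertex field
    },
}
```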
ai-studio Ai Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md
- ignite-2023
Last updated: 12/14/2023

# Azure AI resources
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
Title: Role-based access control in Azure AI Studio
description: This article introduces role-based access control in Azure AI Studio
- ignite-2023
Last updated: 11/15/2023

# Role-based access control in Azure AI Studio
Here's a table of the built-in roles and their permissions for the Azure AI reso
| Role | Description |
| | |
| Owner | Full access to the Azure AI resource, including the ability to manage and create new Azure AI resources and assign permissions. This role is automatically assigned to the Azure AI resource creator|
-| Contributor | User has full access to the Azure AI resource, including the ability to create new Azure AI resources, but isn't able to manage Azure AI resource permissions on the existing resource. |
-| Azure AI Developer | Perform all actions except create new Azure AI resources and manage the Azure AI resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. |
-| Reader | Read only access to the Azure AI resource. This role is automatically assigned to all project members within the Azure AI resource. |
+| Contributor | User has full access to the Azure AI resource, including the ability to create new Azure AI resources, but isn't able to manage Azure AI resource permissions on the existing resource. |
+| Azure AI Developer | Perform all actions except create new Azure AI resources and manage the Azure AI resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. |
+| Reader | Read only access to the Azure AI resource. This role is automatically assigned to all project members within the Azure AI resource. |
The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI resources. If you don't want users to make new Azure AI resources (due to quota, cost, or just managing how many Azure AI resources you have), assign the AI Developer role.
Here's a table of the built-in roles and their permissions for the Azure AI proj
| Role | Description |
| | |
| Owner | Full access to the Azure AI project, including the ability to assign permissions to project users. |
-| Contributor | User has full access to the Azure AI project but can't assign permissions to project users. |
-| Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. |
-| Reader | Read only access to the Azure AI project. |
+| Contributor | User has full access to the Azure AI project but can't assign permissions to project users. |
+| Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. |
+| Reader | Read only access to the Azure AI project. |
When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```.
ai-studio Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/autoscale.md
Title: Autoscale Azure AI limits
description: Learn how you can manage and increase quotas for resources with Azure AI Studio.
- ignite-2023
Last updated: 11/15/2023

# Autoscale Azure AI limits
ai-studio Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md
Title: Commitment tier pricing for Azure AI
description: Learn how to sign up for commitment tier pricing instead of pay-as-you-go pricing.
- ignite-2023
Last updated: 11/15/2023

# Commitment tier pricing for Azure AI
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
Title: How to configure a managed network for Azure AI
description: Learn how to configure a managed network for Azure AI
- ignite-2023
Last updated: 11/15/2023

# How to configure a managed network for Azure AI
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
Title: How to configure a private link for Azure AI
description: Learn how to configure a private link for Azure AI
Last updated: 11/15/2023

# How to configure a private link for Azure AI
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
- ignite-2023
Last updated: 11/15/2023

# How to add a new connection in Azure AI Studio
ai-studio Costs Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md
Title: Plan and manage costs for Azure AI Studio
description: Learn how to plan for and manage costs for Azure AI Studio by using cost analysis in the Azure portal.
- ignite-2023
Last updated: 11/15/2023

# Plan and manage costs for Azure AI Studio
ai-studio Create Azure Ai Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md
Title: How to create and manage an Azure AI resource
description: This article describes how to create and manage an Azure AI resource
- ignite-2023
Last updated: 11/15/2023

# How to create and manage an Azure AI resource
Follow these steps to create a new Azure AI resource in AI Studio.
If your organization is using [Azure Policy](../../governance/policy/overview.md), set up a resource that meets your organization's requirements instead of using AI Studio for resource creation.

1. From the Azure portal, search for `Azure AI Studio` and create a new resource by selecting **+ New Azure AI**
-1. Fill in **Subscription**, **Resource group**, and **Region**. **Name** your new Azure AI resource.
+1. Fill in **Subscription**, **Resource group**, and **Region**. **Name** your new Azure AI resource.
 - For advanced settings, select **Next: Resources** to specify resources, networking, encryption, identity, and tags.
 - Your subscription must have access to Azure AI to create this resource.
- :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI resource basic information." lightbox="../media/how-to/resource-create-basics.png":::
+ :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI resource basic information." lightbox="../media/how-to/resource-create-basics.png":::
-1. Select an existing **Azure AI services** or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio.
+1. Select an existing **Azure AI services** or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio.
:::image type="content" source="../media/how-to/resource-create-resources.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set resource information." lightbox="../media/how-to/resource-create-resources.png":::
-1. Set up Network isolation. Read more on [network isolation](configure-managed-network.md).
+1. Set up Network isolation. Read more on [network isolation](configure-managed-network.md).
:::image type="content" source="../media/how-to/resource-create-networking.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set network isolation information." lightbox="../media/how-to/resource-create-networking.png":::
-1. Set up data encryption. You can either use **Microsoft-managed keys** or enable **Customer-managed keys**.
+1. Set up data encryption. You can either use **Microsoft-managed keys** or enable **Customer-managed keys**.
:::image type="content" source="../media/how-to/resource-create-encryption.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select your encryption type." lightbox="../media/how-to/resource-create-encryption.png":::
-1. By default, **System assigned identity** is enabled, but you can switch to **User assigned identity** if existing storage, key vault, and container registry are selected in Resources.
+1. By default, **System assigned identity** is enabled, but you can switch to **User assigned identity** if existing storage, key vault, and container registry are selected in Resources.
:::image type="content" source="../media/how-to/resource-create-identity.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select a managed identity." lightbox="../media/how-to/resource-create-identity.png":::

> [!Note]
> If you select **User assigned identity**, your identity needs to have the `Cognitive Services Contributor` role in order to successfully create a new Azure AI resource.
-1. Add tags.
+1. Add tags.
:::image type="content" source="../media/how-to/resource-create-tags.png" alt-text="Screenshot of the Create an Azure AI resource with the option to add tags." lightbox="../media/how-to/resource-create-tags.png":::
-1. Select **Review + create**
+1. Select **Review + create**
## Manage your Azure AI resource from the Azure portal
View your keys and endpoints for your Azure AI resource from the overview page w
Manage role assignments from **Access control (IAM)** within the Azure portal. Learn more about Azure AI resource [role-based access control](../concepts/rbac-ai-studio.md). To grant users permissions:
-1. Select **+ Add** to add users to your Azure AI resource
+1. Select **+ Add** to add users to your Azure AI resource
-1. Select the **Role** you want to assign.
+1. Select the **Role** you want to assign.
:::image type="content" source="../media/how-to/resource-rbac-role.png" alt-text="Screenshot of the page to add a role within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-role.png":::
-1. Select the **Members** you want to give the role to.
+1. Select the **Members** you want to give the role to.
:::image type="content" source="../media/how-to/resource-rbac-members.png" alt-text="Screenshot of the add members page within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-members.png":::
-1. **Review + assign**. It can take up to an hour for permissions to be applied to users.
+1. **Review + assign**. It can take up to an hour for permissions to be applied to users.
### Networking

Azure AI resource networking settings can be set during resource creation or changed in the Networking tab in the Azure portal view. Creating a new Azure AI resource invokes a Managed Virtual Network. This streamlines and automates your network isolation configuration with a built-in Managed Virtual Network. The Managed Virtual Network settings are applied to all projects created within an Azure AI resource.
You can view all Projects that use this Azure AI resource. Be linked to the Azur
### Permissions

Within Permissions you can view who has access to the Azure AI resource and also manage permissions. Learn more about [permissions](../concepts/rbac-ai-studio.md). To add members:
-1. Select **+ Add member**
-1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI resource. If you wish to only grant access to a specific Project, manage permissions in the [Project](create-projects.md)
+1. Select **+ Add member**
+1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI resource. If you wish to only grant access to a specific Project, manage permissions in the [Project](create-projects.md)
### Compute instances

View and manage computes for your Azure AI resource. Create computes, delete computes, and review all compute resources you have in one place.
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
Title: How to create and manage compute instances in Azure AI Studio
description: This article provides instructions on how to create and manage compute instances in Azure AI Studio.
- ignite-2023
Last updated: 11/15/2023

# How to create and manage compute instances in Azure AI Studio
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Title: How to create and manage prompt flow runtimes
+ Title: Create and manage prompt flow runtimes
description: Learn how to create and manage prompt flow runtimes in Azure AI Studio.
-# How to create and manage prompt flow runtimes in Azure AI Studio
+# Create and manage prompt flow runtimes in Azure AI Studio
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
-In Azure AI Studio, you can create and manage prompt flow runtimes. You need a runtime to use prompt flow.
+In Azure AI Studio, you can create and manage prompt flow runtimes. You need a runtime to use prompt flow.
-Prompt flow runtime has the computing resources required for the application to run, including a Docker image that contains all necessary dependency packages. In addition to flow execution, the runtime is also utilized to validate and ensure the accuracy and functionality of the tools incorporated within the flow, when you make updates to the prompt or code content.
+A prompt flow runtime has computing resources that are required for the application to run, including a Docker image that contains all necessary dependency packages. In addition to flow execution, Azure AI Studio uses the runtime to ensure the accuracy and functionality of the tools incorporated within the flow when you make updates to the prompt or code content.
-We support following types of runtimes:
+Azure AI Studio supports the following types of runtimes:
|Runtime type|Underlying compute type|Life cycle management| Customize packages |
||-|||
-|automatic runtime |Serverless compute| Automatically | Easily customize python packages|
-|Compute instance runtime | Compute instance | Manually | |
+|Automatic runtime |Serverless compute| Automatic | Easily customize Python packages|
+|Compute instance runtime | Compute instance | Manual | |
-If you're a new user, we recommend using the automatic runtime that can be used out of box. You can easily customize the environment for this runtime.
+If you're a new user, we recommend that you use an automatic runtime. You can easily customize the environment for this runtime.
If you have a compute instance, you can use it to build your compute instance runtime.

## Create a runtime
-### Create automatic runtime in flow page
+### Create an automatic runtime on a flow page
-Automatic is the default option for runtime, you can start automatic runtime in runtime dropdown in flow page.
+Automatic is the default option for a runtime. You can start an automatic runtime by selecting an option from the runtime dropdown list on a flow page:
+- Select **Start**. Start creating an automatic runtime by using the environment defined in `flow.dag.yaml` in the flow folder on the virtual machine (VM) size where you have a quota in the project.
-- **Start** creates automatic runtime using the environment defined in `flow.dag.yaml` in flow folder on the VM size you have quota in the project.
+ :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-init.png" alt-text="Screenshot of prompt flow with default settings for starting an automatic runtime on a flow page." lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-init.png":::
- :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-init.png" alt-text="Screenshot of prompt flow on the start automatic with default settings on flow page. " lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-init.png":::
+- Select **Start with advanced settings**. In the advanced settings, you can:
-- **Start with advanced settings**, you can customize the VM size used by the runtime. You can also customize the idle time, which will delete runtime automatically if it isn't in use to save code. Meanwhile, you can set the user assigned manage identity used by automatic runtime, it's used to pull base image (please make sure user assigned manage identity have ACR pull permission) and install packages. If you don't set it, we use user identity as default. Learn more about [how to create update user assigned identities to project](../../machine-learning/how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+ - Customize the VM size that the runtime uses.
+ - Customize the idle time, which saves code by deleting the runtime automatically if it isn't in use.
+ - Set the user-assigned managed identity. The automatic runtime uses this identity to pull a base image and install packages. Make sure that the user-assigned managed identity has Azure Container Registry pull permission.
- :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow on the start automatic with advanced setting on flow page. " lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
+ If you don't set this identity, you use the user identity by default. [Learn more about how to create and update user-assigned identities for a project](../../machine-learning/how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
-### Create compute instance runtime in runtime page
+ :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow with advanced settings for starting an automatic runtime on a flow page." lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
-A runtime requires a compute instance. If you don't have a compute instance, you can [create one in Azure AI Studio](./create-manage-compute.md).
+### Create a compute instance runtime on a runtime page
-To create a prompt flow runtime in Azure AI Studio:
+1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project from the **Build** page. If you don't have a project, create one.
-1. Sign in to [Azure AI Studio](https://ai.azure.com) and select your project from the **Build** page. If you don't have a project already, first create a project.
+1. On the collapsible left menu, select **Settings**.
-1. From the collapsible left menu, select **Settings**.
1. In the **Compute instances** section, select **View all**.

   :::image type="content" source="../media/compute/compute-view-settings.png" alt-text="Screenshot of project settings with the option to view all compute instances." lightbox="../media/compute/compute-view-settings.png":::
-1. Make sure that you have a compute instance available and running. If you don't have a compute instance, you can [create one in Azure AI Studio](./create-manage-compute.md).
+1. Make sure that a compute instance is available and running. If you don't have a compute instance, you can [create one in Azure AI Studio](./create-manage-compute.md).
+
1. Select the **Prompt flow runtimes** tab.

   :::image type="content" source="../media/compute/compute-runtime.png" alt-text="Screenshot of where to select prompt flow runtimes from the compute instances page." lightbox="../media/compute/compute-runtime.png":::

1. Select **Create**.
- :::image type="content" source="../media/compute/runtime-create.png" alt-text="Screenshot of the create runtime button." lightbox="../media/compute/runtime-create.png":::
+ :::image type="content" source="../media/compute/runtime-create.png" alt-text="Screenshot of the button for creating a runtime." lightbox="../media/compute/runtime-create.png":::
-1. Select a compute instance for the runtime and then select **Create**.
+1. Select the compute instance for the runtime, and then select **Create**.
:::image type="content" source="../media/compute/runtime-select-compute.png" alt-text="Screenshot of the option to select a compute instance during runtime creation." lightbox="../media/compute/runtime-select-compute.png":::
-1. Acknowledge the warning that the compute instance will be restarted and select **Confirm**.
-
- :::image type="content" source="../media/compute/runtime-create-confirm.png" alt-text="Screenshot of the option to confirm auto restart via the runtime creation." lightbox="../media/compute/runtime-create-confirm.png":::
+1. Acknowledge the warning that the compute instance will be restarted by selecting **Confirm**.
-1. You'll be taken to the runtime details page. The runtime will be in the **Not available** status until the runtime is ready. This can take a few minutes.
+ :::image type="content" source="../media/compute/runtime-create-confirm.png" alt-text="Screenshot of the option to confirm automatic restart via the runtime creation." lightbox="../media/compute/runtime-create-confirm.png":::
- :::image type="content" source="../media/compute/runtime-creation-in-progress.png" alt-text="Screenshot of the runtime not yet available status." lightbox="../media/compute/runtime-creation-in-progress.png":::
+1. On the page for runtime details, monitor the status of the runtime. The runtime has a status of **Not available** until it's ready. This process can take a few minutes.
-1. When the runtime is ready, the status will change to **Running**. You might need to select **Refresh** to see the updated status.
+ :::image type="content" source="../media/compute/runtime-creation-in-progress.png" alt-text="Screenshot of a runtime with a status that shows it's not yet available." lightbox="../media/compute/runtime-creation-in-progress.png":::
- :::image type="content" source="../media/compute/runtime-running.png" alt-text="Screenshot of the runtime is running status." lightbox="../media/compute/runtime-running.png":::
+1. When the runtime is ready, the status changes to **Running**. You might need to select **Refresh** to see the updated status.
-1. Select the runtime from the **Prompt flow runtimes** tab to see the runtime details.
+ :::image type="content" source="../media/compute/runtime-running.png" alt-text="Screenshot of a runtime with a running status." lightbox="../media/compute/runtime-running.png":::
- :::image type="content" source="../media/compute/runtime-details.png" alt-text="Screenshot of the runtime details including environment." lightbox="../media/compute/runtime-details.png":::
+1. Select the runtime on the **Prompt flow runtimes** tab to see its details.
+ :::image type="content" source="../media/compute/runtime-details.png" alt-text="Screenshot of runtime details, including environment." lightbox="../media/compute/runtime-details.png":::
-## Update runtime from UI
+## Update a runtime on the UI
-### Update automatic runtime in flow page
+### Update an automatic runtime on a flow page
-You can manage automatic runtime in the flow page. Here are options you can use:
+On a flow page, you can use the following options to manage an automatic runtime:
-- **Install packages** triggers the `pip install -r requirements.txt` in the flow folder. It takes minutes depending on the packages you install.
-- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter package conflict issue, you can try this option.
-- **Edit** opens the runtime config page where you can define the VM side and idle time for the runtime.
-- **Stop** deletes the current runtime. If there's no active runtime on the underlining compute, the compute resource will also be deleted.
+- **Install packages** triggers `pip install -r requirements.txt` in the flow folder. The process can take a few minutes, depending on the packages that you install.
+- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter a package conflict, you can try this option.
+- **Edit** opens the runtime configuration page, where you can define the VM size and the idle time for the runtime.
+- **Stop** deletes the current runtime. If there's no active runtime on the underlying compute, the compute resource is also deleted.
-You can also customize the environment used to run this flow.
+You can also customize the environment that you use to run this flow by adding packages in the `requirements.txt` file in the flow folder. After you add more packages in this file, you can choose either of these options:
-- You can easily customize the environment by adding packages in `requirements.txt` file in the flow folder. After you add more packages in this file, you can choose either save and install or save only. Save and install will trigger the `pip install -r requirements.txt` in flow folder. It takes minutes depends on the packages you install. Save only will only save the `requirements.txt` file, you can install the packages later by yourself.
+- **Save and install** triggers `pip install -r requirements.txt` in the flow folder. The process can take a few minutes, depending on the packages that you install.
+- **Save only** just saves the `requirements.txt` file. You can install the packages later yourself.
- :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-save-install.png" alt-text="Screenshot of save and install packages for automatic runtime on flow page. " lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-create-automatic-save-install.png":::
> [!NOTE]
-> You can change the location and even file name of `requirements.txt` by change it in `flow.dag.yaml` file in flow folder as well.
-> Please don't pin version of promptflow and promptflow-tools in `requirements.txt`, as we already include them in runtime base image.
+> You can change the location and even the file name of `requirements.txt`, but be sure to also change it in the `flow.dag.yaml` file in the flow folder.
+>
+> Don't pin the version of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image.
-#### Add packages in private feed in Azure DevOps
+#### Add packages in a private feed in Azure DevOps
-If you want to use a private feed in Azure DevOps, you need follow these steps:
+If you want to use a private feed in Azure DevOps, follow these steps:
-1. Create user assigned managed identity and add this user assigned managed identity in the Azure DevOps organization. To learn more, see [Use service principals & managed identities](/azure/devops/integrate/get-started/authentication/service-principal-managed-identity).
+1. Create a user-assigned managed identity and add this identity to the Azure DevOps organization; a CLI sketch of creating the identity follows these steps. To learn more, see [Use service principals and managed identities](/azure/devops/integrate/get-started/authentication/service-principal-managed-identity).
- > [!NOTE]
- > If the 'Add Users' button isn't visible, it's likely you don't have the necessary permissions to perform this action.
-
-1. [Add or update user assigned identities to project](../../machine-learning/how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+ > [!NOTE]
+ > If the **Add Users** button isn't visible, you probably don't have the necessary permissions to perform this action.
+1. [Add or update user-assigned identities to your project](../../machine-learning/how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
-1. You need to add `{private}` to your private feed URL. For example, if you want to install `test_package` from `test_feed` in Azure devops, add `-i https://{private}@{test_feed_url_in_azure_devops}` in `requirements.txt`.
+1. Add `{private}` to your private feed URL. For example, if you want to install `test_package` from `test_feed` in Azure DevOps, add `-i https://{private}@{test_feed_url_in_azure_devops}` in `requirements.txt`:
   ```txt
   -i https://{private}@{test_feed_url_in_azure_devops}
   test_package
- ```
-
-1. Specify the user assigned managed identity in `start with advanced setting` if automatic runtime is not running or `edit` button if automatic runtime is running.
+ ```
- :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot of specify user assigned managed identity. " lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
+1. Specify the user-assigned managed identity in **Start with advanced settings** if automatic runtime isn't running, or use the **Edit** button if automatic runtime is running.
-### Update compute instance runtime in runtime page
+ :::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot that shows the toggle for using a workspace user-assigned managed identity." lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
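As a minimal Azure CLI sketch of the first step, you can create the identity as follows; the identity and resource group names are placeholders, and you still add the identity to your Azure DevOps organization in the DevOps portal:

```azurecli-interactive
# Create a user-assigned managed identity (names are illustrative).
az identity create \
    --name promptflow-feed-identity \
    --resource-group my-resource-group

# Show the principal ID that you add to your Azure DevOps organization.
az identity show \
    --name promptflow-feed-identity \
    --resource-group my-resource-group \
    --query principalId \
    --output tsv
```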
-Azure AI Studio gets regular updates to the base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable`) to include the latest features and bug fixes. You should periodically update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list) to get the best experience and performance.
+### Update a compute instance runtime on a runtime page
-Go to the runtime details page and select **Update**. Here you can update the runtime environment. If you select **use default environment**, system will attempt to update your runtime to the latest version.
+Azure AI Studio gets regular updates to the base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable`) to include the latest features and bug fixes. To get the best experience and performance, periodically update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list).
-Every time you view the runtime details page, AI Studio will check whether there are new versions of the runtime. If there are new versions available, you'll see a notification at the top of the page. You can also manually check the latest version by selecting the **check version** button.
+Go to the page for runtime details and select **Update**. On the **Edit compute instance runtime** pane, you can update the runtime environment. If you select **Use default environment**, the system tries to update your runtime to the latest version.
+Every time you open the page for runtime details, AI Studio checks whether there are new versions of the runtime. If new versions are available, a notification appears at the top of the page. You can also manually check the latest version by selecting the **Check version** button.
## Next steps

- [Learn more about prompt flow](./prompt-flow.md)
- [Develop a flow](./flow-develop.md)
-- [Develop an evaluation flow](./flow-develop-evaluation.md)
+- [Develop an evaluation flow](./flow-develop-evaluation.md)
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
Title: Create an Azure AI project in Azure AI Studio description: This article describes how to create an Azure AI Studio project.-+ - ignite-2023 Last updated 11/15/2023---+++ # Create an Azure AI project in Azure AI Studio
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
Title: How to create vector indexes description: Learn how to create and use a vector index for performing Retrieval Augmented Generation (RAG).-+ - ignite-2023 Last updated 11/15/2023-+
ai-studio Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md
Title: Manage and increase quotas for resources with Azure AI Studio description: This article provides instructions on how to manage and increase quotas for resources with Azure AI Studio.-+ - ignite-2023 Last updated 11/15/2023---+++ # Manage and increase quotas for resources with Azure AI Studio
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
You can provide outbound (egress) connectivity to the internet for Overlay pods
You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure App Gateway. For details see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).
-## Limitations
-
-Azure CNI Overlay networking in AKS currently has the following limitations:
-
-* In case you are using your own subnet to deploy the cluster, the names of the subnet, VNET and resource group which contains the VNET, must be 63 characters or less. This comes from the fact that these names will be used as labels in AKS worker nodes, and are therefore subjected to [Kubernetes label syntax rules](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
-
-## Regional availability for ARM64 node pools
-
-Azure CNI Overlay is currently unavailable for ARM64 node pools in the following regions:
-- East US 2
-- France Central
-- Southeast Asia
-- South Central US
-- West Europe
-- West US 3

## Differences between Kubenet and Azure CNI Overlay

Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The following table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods because of an IP shortage, we recommend using Azure CNI Overlay.
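For context, enabling Azure CNI Overlay is a single flag at cluster creation time. The following is a minimal sketch, assuming placeholder resource names and an example pod CIDR:

```azurecli-interactive
# Create an AKS cluster that uses Azure CNI Overlay networking.
# Pods get IP addresses from --pod-cidr, which is logically separate from the VNet.
az aks create \
    --resource-group myResourceGroup \
    --name myOverlayCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```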
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Azure CNI powered by Cilium currently has the following limitations:
* Hubble is disabled.
-* Network policies cannot use `ipBlock` to allow access to node or pod IPs ([Cilium issue #9209](https://github.com/cilium/cilium/issues/9209) and [#12277](https://github.com/cilium/cilium/issues/12277)).
+* Network policies cannot use `ipBlock` to allow access to node or pod IPs. See [frequently asked questions](#frequently-asked-questions) for details and recommended workaround.
* Kubernetes services with `internalTrafficPolicy=Local` aren't supported ([Cilium issue #17796](https://github.com/cilium/cilium/issues/17796)).
az aks update -n <clusterName> -g <resourceGroupName> \
`CiliumNetworkPolicy` custom resources aren't officially supported. We recommend that customers use Kubernetes `NetworkPolicy` resources to configure network policies.
+- **Why is traffic being blocked when the `NetworkPolicy` has an `ipBlock` that allows the IP address?**
+
+ A limitation of Azure CNI Powered by Cilium is that a `NetworkPolicy`'s `ipBlock` cannot select pod or node IPs.
+
+ For example, this `NetworkPolicy` has an `ipBlock` that allows all egress to `0.0.0.0/0`:
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: NetworkPolicy
+ metadata:
+ name: example-ipblock
+ spec:
+ podSelector: {}
+ policyTypes:
+ - Egress
+ egress:
+ - to:
+ - ipBlock:
+ cidr: 0.0.0.0/0 # This will still block pod and node IPs.
+ ```
+
+ However, when this `NetworkPolicy` is applied, Cilium will block egress to pod and node IPs even though the IPs are within the `ipBlock` CIDR.
+
+ As a workaround, you can add `namespaceSelector` and `podSelector` to select pods. The example below selects all pods in all namespaces:
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: NetworkPolicy
+ metadata:
+ name: example-ipblock
+ spec:
+ podSelector: {}
+ policyTypes:
+ - Egress
+ egress:
+ - to:
+ - ipBlock:
+ cidr: 0.0.0.0/0
+ - namespaceSelector: {}
+ - podSelector: {}
+ ```
+
+ > [!NOTE]
+ > It is not currently possible to specify a `NetworkPolicy` with an `ipBlock` to allow traffic to node IPs.
+ - **Does AKS configure CPU or memory limits on the Cilium `daemonset`?** No, AKS doesn't configure CPU or memory limits on the Cilium `daemonset` because Cilium is a critical system component for pod networking and network policy enforcement.
aks Cis Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-kubernetes.md
Title: Center for Internet Security (CIS) Kubernetes benchmark description: Learn how AKS applies the CIS Kubernetes benchmark Previously updated : 12/20/2022 Last updated : 01/10/2024 # Center for Internet Security (CIS) Kubernetes benchmark
As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI
## Kubernetes CIS benchmark
-The following are the results from the [CIS Kubernetes V1.24 Benchmark v1.0.0][cis-benchmark-kubernetes] recommendations on AKS. These are applicable to AKS 1.21.x through AKS 1.24.x.
+The following are the results from the [CIS Kubernetes V1.27 Benchmark v1.8.0][cis-benchmark-kubernetes] recommendations on AKS. The results are applicable to AKS 1.21.x through AKS 1.27.x.
-*Scored* recommendations affect the benchmark score if they are not applied, while *Not Scored* recommendations don't.
+*Scored* recommendations affect the benchmark score if they aren't applied, while *Not Scored* recommendations don't.
CIS benchmarks provide two levels of security settings:
CIS benchmarks provide two levels of security settings:
Recommendations can have one of the following statuses: * *Pass* - The recommendation has been applied.
-* *Fail* - The recommendation has not been applied.
+* *Fail* - The recommendation hasn't been applied.
* *N/A* - The recommendation relates to manifest file permission requirements that are not relevant to AKS. Kubernetes clusters by default use a manifest model to deploy the control plane pods, which rely on files from the node VM. The CIS Kubernetes benchmark recommends these files must have certain permission requirements. AKS clusters use a Helm chart to deploy control plane pods and don't rely on files in the node VM.
-* *Depends on Environment* - The recommendation is applied in the user's specific environment and is not controlled by AKS. *Scored* recommendations affect the benchmark score whether the recommendation applies to the user's specific environment or not.
+* *Depends on Environment* - The recommendation is applied in the user's specific environment and isn't controlled by AKS. *Scored* recommendations affect the benchmark score whether the recommendation applies to the user's specific environment or not.
* *Equivalent Control* - The recommendation has been implemented in a different, equivalent manner.

| CIS ID | Recommendation description|Scoring Type|Level|Status|
Recommendations can have one of the following statuses:
|1.2|API Server||||
|1.2.1|Ensure that the `--anonymous-auth` argument is set to false|Not Scored|L1|Pass|
|1.2.2|Ensure that the `--token-auth-file` parameter is not set|Scored|L1|Fail|
-|1.2.3|Ensure that `--DenyServiceExternalIPs` is not set|Scored|L1|Pass|
+|1.2.3|Ensure that `--DenyServiceExternalIPs` is not set|Scored|L1|Fail|
|1.2.4|Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate|Scored|L1|Pass|
|1.2.5|Ensure that the `--kubelet-certificate-authority` argument is set as appropriate|Scored|L1|Fail|
|1.2.6|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
Recommendations can have one of the following statuses:
|1.2.15|Ensure that the admission control plugin NodeRestriction is set|Scored|L1|Pass|
|1.2.16|Ensure that the `--secure-port` argument is not set to 0|Scored|L1|Pass|
|1.2.17|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
-|1.2.18|Ensure that the `--audit-log-path` argument is set|Scored|L1|Pass|
+|1.2.18|Ensure that the `--audit-log-path` argument is set|Scored|L1|Equivalent Control|
|1.2.19|Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate|Scored|L1|Equivalent Control|
-|1.2.20|Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate|Scored|L1|Equivalent Control|
+|1.2.20|Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate|Scored|L1|Pass|
|1.2.21|Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate|Scored|L1|Pass|
|1.2.22|Ensure that the `--request-timeout` argument is set as appropriate|Scored|L1|Pass|
|1.2.23|Ensure that the `--service-account-lookup` argument is set to true|Scored|L1|Pass|
Recommendations can have one of the following statuses:
|1.2.25|Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate|Scored|L1|Pass|
|1.2.26|Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Pass|
|1.2.27|Ensure that the `--client-ca-file` argument is set as appropriate|Scored|L1|Pass|
-|1.2.28|Ensure that the `--etcd-cafile` argument is set as appropriate|Scored|L1|Pass|
+|1.2.28|Ensure that the `--etcd-cafile` argument is set as appropriate|Scored|L1|Depends on Environment|
|1.2.29|Ensure that the `--encryption-provider-config` argument is set as appropriate|Scored|L1|Depends on Environment|
|1.2.30|Ensure that encryption providers are appropriately configured|Scored|L1|Depends on Environment|
|1.2.31|Ensure that the API Server only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass|
Recommendations can have one of the following statuses:
|3|Control Plane Configuration||||
|3.1|Authentication and Authorization||||
|3.1.1|Client certificate authentication should not be used for users|Not Scored|L2|Pass|
+|3.1.2|Service account token authentication should not be used for users|Not Scored|L2|Pass|
+|3.1.3|Bootstrap token authentication should not be used for users|Not Scored|L2|Pass|
|3.2|Logging||||
|3.2.1|Ensure that a minimal audit policy is created|Scored|L1|Pass|
|3.2.2|Ensure that the audit policy covers key security concerns|Not Scored|L2|Pass|
Recommendations can have one of the following statuses:
|5.1.4|Minimize access to create pods|Not Scored|L1|Depends on Environment|
|5.1.5|Ensure that default service accounts are not actively used|Scored|L1|Depends on Environment|
|5.1.6|Ensure that Service Account Tokens are only mounted where necessary|Not Scored|L1|Depends on Environment|
+|5.1.7|Avoid use of system:masters group|Not Scored|L1|Depends on Environment|
+|5.1.8|Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster|Not Scored|L1|Depends on Environment|
+|5.1.9|Minimize access to create persistent volumes|Not Scored|L1|Depends on Environment|
+|5.1.10|Minimize access to the proxy sub-resource of nodes|Not Scored|L1|Depends on Environment|
+|5.1.11|Minimize access to the approval sub-resource of certificatesigningrequests objects|Not Scored|L1|Depends on Environment|
+|5.1.12|Minimize access to webhook configuration objects|Not Scored|L1|Depends on Environment|
+|5.1.13|Minimize access to the service account token creation|Not Scored|L1|Depends on Environment|
|5.2|Pod Security Policies||||
-|5.2.1|Minimize the admission of privileged containers|Not Scored|L1|Depends on Environment|
-|5.2.2|Minimize the admission of containers wishing to share the host process ID namespace|Scored|L1|Depends on Environment|
-|5.2.3|Minimize the admission of containers wishing to share the host IPC namespace|Scored|L1|Depends on Environment|
-|5.2.4|Minimize the admission of containers wishing to share the host network namespace|Scored|L1|Depends on Environment|
-|5.2.5|Minimize the admission of containers with allowPrivilegeEscalation|Scored|L1|Depends on Environment|
+|5.2.1|Ensure that the cluster has at least one active policy control mechanism in place|Not Scored|L1|Depends on Environment|
+|5.2.2|Minimize the admission of privileged containers|Not Scored|L1|Depends on Environment|
+|5.2.3|Minimize the admission of containers wishing to share the host process ID namespace|Scored|L1|Depends on Environment|
+|5.2.4|Minimize the admission of containers wishing to share the host IPC namespace|Scored|L1|Depends on Environment|
+|5.2.5|Minimize the admission of containers wishing to share the host network namespace|Scored|L1|Depends on Environment|
+|5.2.6|Minimize the admission of containers with allowPrivilegeEscalation|Scored|L1|Depends on Environment|
|5.2.6|Minimize the admission of root containers|Not Scored|L2|Depends on Environment|
|5.2.7|Minimize the admission of containers with the NET_RAW capability|Not Scored|L1|Depends on Environment|
|5.2.8|Minimize the admission of containers with added capabilities|Not Scored|L1|Depends on Environment|
-|5.2.9|Minimize the admission of containers with capabilities assigned|Not Scored|L2|Depends on Environment|
+|5.2.9|Minimize the admission of containers with capabilities assigned|Not Scored|L1|Depends on Environment|
+|5.2.10|Minimize the admission of containers with capabilities assigned|Not Scored|L2||
+|5.2.11|Minimize the admission of Windows HostProcess Containers|Not Scored|L1|Depends on Environment|
+|5.2.12|Minimize the admission of HostPath volumes|Not Scored|L1|Depends on Environment|
+|5.2.13|Minimize the admission of containers which use HostPorts|Not Scored|L1|Depends on Environment|
|5.3|Network Policies and CNI||||
|5.3.1|Ensure that the CNI in use supports Network Policies|Not Scored|L1|Pass|
|5.3.2|Ensure that all Namespaces have Network Policies defined|Scored|L2|Depends on Environment|
Recommendations can have one of the following statuses:
|5.4.1|Prefer using secrets as files over secrets as environment variables|Not Scored|L1|Depends on Environment|
|5.4.2|Consider external secret storage|Not Scored|L2|Depends on Environment|
|5.5|Extensible Admission Control||||
-|5.5.1|Configure Image Provenance using ImagePolicyWebhook admission controller|Not Scored|L2|Depends on Environment|
-|5.6|General Policies||||
-|5.6.1|Create administrative boundaries between resources using namespaces|Not Scored|L1|Depends on Environment|
-|5.6.2|Ensure that the seccomp profile is set to docker/default in your pod definitions|Not Scored|L2|Depends on Environment|
-|5.6.3|Apply Security Context to Your Pods and Containers|Not Scored|L2|Depends on Environment|
-|5.6.4|The default namespace should not be used|Scored|L2|Depends on Environment|
+|5.5.1|Configure Image Provenance using ImagePolicyWebhook admission controller|Not Scored|L2|Fail|
+|5.7|General Policies||||
+|5.7.1|Create administrative boundaries between resources using namespaces|Not Scored|L1|Depends on Environment|
+|5.7.2|Ensure that the seccomp profile is set to docker/default in your pod definitions|Not Scored|L2|Depends on Environment|
+|5.7.3|Apply Security Context to Your Pods and Containers|Not Scored|L2|Depends on Environment|
+|5.7.4|The default namespace should not be used|Scored|L2|Depends on Environment|
> [!NOTE]
> In addition to the Kubernetes CIS benchmark, there is an [AKS CIS benchmark][cis-benchmark-aks] available as well.
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
You can also configure more granular details of the cluster autoscaler by changi
| daemonset-eviction-for-occupied-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes | true |
| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down | 0.5 |
| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
-| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
| skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |
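As a rough sketch, several of these values can be tuned through the cluster autoscaler profile on an existing cluster; the values below are examples, not recommendations:

```azurecli-interactive
# Tune cluster autoscaler behavior (example values only).
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile balance-similar-node-groups=true expander=least-waste scale-down-utilization-threshold=0.5
```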
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
Title: Deploy an AKS cluster with Confidential Containers (preview) description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Confidential Containers (preview) and a default security policy by using the Azure CLI. Previously updated : 11/13/2023 Last updated : 01/10/2024
In general, getting started with AKS Confidential Containers involves the follow
- The `aks-preview` Azure CLI extension version 0.5.169 or later.

-- The `confcom` Confidential Container Azure CLI extension 0.3.0 or later. `confcom` is required to generate a [security policy][confidential-containers-security-policy].
+- The `confcom` Confidential Container Azure CLI extension 0.3.3 or later. `confcom` is required to generate a [security policy][confidential-containers-security-policy].
- Register the `Preview` feature in your Azure subscription.
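Taken together, these prerequisites translate to roughly the following sketch. The feature flag name is an assumption based on the Confidential Containers preview; verify it against the current article before use:

```azurecli-interactive
# Install or update the prerequisite CLI extensions.
az extension add --name aks-preview --upgrade
az extension add --name confcom --upgrade

# Register the preview feature (flag name assumed; confirm before use).
az feature register --namespace "Microsoft.ContainerService" --name "KataCcIsolationPreview"
az provider register --namespace "Microsoft.ContainerService"
```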
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
## Create an AKS cluster with a managed NAT gateway
-* Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of a specific availability zone, specify the zones using `--zones`.
-* A managed NAT gateway resource cannot be used across multiple availability zones. When you deploy a managed NAT gateway instance, it is deployed to "no zone". No zone NAT gateway resources are deployed to a single availability zone for you by Azure. For more information on non-zonal deployment model, see [non-zonal NAT gateway](/azure/nat-gateway/nat-availability-zones#non-zonal).
+* Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of a specific availability zone, specify the zone using `--zones`.
+* If no zone is specified when creating a managed NAT gateway, the NAT gateway is deployed to "no zone" by default. No zone NAT gateway resources are deployed to a single availability zone for you by Azure. For more information on the non-zonal deployment model, see [non-zonal NAT gateway](/azure/nat-gateway/nat-availability-zones#non-zonal).
+* A managed NAT gateway resource can't be used across multiple availability zones.
```azurecli-interactive
az aks create \
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
    --nat-gateway-idle-timeout 4
```
- > [!IMPORTANT]
- > Zonal configuration for your NAT gateway resource can be done with user-assigned NAT gateway resources. See [Create an AKS cluster with a user-assigned NAT gateway](#create-an-aks-cluster-with-a-user-assigned-nat-gateway) for more details.
- > If no value for the outbound IP address is specified, the default value is one.
-
### Update the number of outbound IP addresses

* Update the outbound IP address or idle timeout using the [`az aks update`][az-aks-update] command with the `--nat-gateway-managed-outbound-ip-count` or `--nat-gateway-idle-timeout` parameter.
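A minimal sketch of such an update, assuming placeholder names and example values:

```azurecli-interactive
# Scale outbound IPs and adjust the idle timeout on an existing cluster.
az aks update \
    --resource-group myResourceGroup \
    --name myNatCluster \
    --nat-gateway-managed-outbound-ip-count 5 \
    --nat-gateway-idle-timeout 4
```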
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-kubenet] or [Azure CNI][byo-vnet-azure-cni]) and that the NAT gateway is preconfigured on the subnet. The following commands create the required resources for this scenario.
+> [!IMPORTANT]
+> Zonal configuration for your NAT gateway resource can be done with managed or user-assigned NAT gateway resources.
+> If no value for the outbound IP address is specified, the default value is one.
+ 1. Create a resource group using the [`az group create`][az-group-create] command. ```azurecli-interactive
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0
4. In **Kubernetes version**, select **Upgrade version**. This redirects you to a new page. 5. In **Kubernetes version**, select the version to check for available upgrades.
+ :::image type="content" source="media/tutorial-kubernetes-upgrade-cluster/upgrade-kubernetes-version.png" alt-text="Screenshot of the Upgrade version screen.":::
+ If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Upgrading a cluster to a newer Kubernetes version when no upgrades are available isn't supported.
You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [co
3. In **Kubernetes version**, select **Upgrade version**. This redirects you to a new page. 4. In **Kubernetes version**, select your desired version and then select **Save**.
+ :::image type="content" source="media/tutorial-kubernetes-upgrade-cluster/available-upgrade-versions.png" alt-text="Screenshot of the Upgrade version screen with available upgrade versions.":::
+ It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
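If you prefer the Azure CLI to the portal, a rough equivalent of checking for and applying an upgrade is the following sketch; the resource names and target version are placeholders:

```azurecli-interactive
# List the Kubernetes versions available for this cluster.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and nodes to a chosen available version.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <new-version>
```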
It takes a few minutes to upgrade the cluster, depending on how many nodes you h
3. In **Kubernetes version**, select **Upgrade version**. 4. For **Automatic upgrade**, select **Enabled with patch (recommended)** > **Save**.
+ :::image type="content" source="media/tutorial-kubernetes-upgrade-cluster/automatic-upgrade-kubernetes-version.png" alt-text="Screenshot of the Upgrade version screen with the Automatic upgrade option set to Enabled with patch (recommended).":::
+ For more information, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-auto-upgrade].
AKS regularly provides new node images. Linux node images are updated weekly, an
1. In the Azure portal, navigate to your AKS cluster. 2. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
+ :::image type="content" source="media/tutorial-kubernetes-upgrade-cluster/validate-kubernetes-upgrade.png" alt-text="Screenshot of the Upgrade version screen with the current updated Kubernetes version.":::
+ ## Delete the cluster
As this tutorial is the last part of the series, you might want to delete your A
1. In the Azure portal, navigate to your AKS cluster. 2. On the **Overview** page, select **Delete**.
-3. On the popup that asks you to confirm the deletion of the cluster, select **Yes**.
+3. On the **Delete cluster confirmation** page, select **Delete**.
+
+ :::image type="content" source="media/tutorial-kubernetes-upgrade-cluster/delete-cluster-confirmation.png" alt-text="Screenshot of the Delete cluster confirmation screen.":::
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Usage notes * You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Microsoft Entra authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.
+* [Microsoft Entra ID for customers (preview)](/entra/external-id/customers/concept-supported-features-customers) is not supported.
## Examples
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
The `validate-jwt` policy enforces existence and validity of a supported JSON we
* The policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512. * To configure the policy with one or more OpenID configuration endpoints for use with a self-hosted gateway, the OpenID configuration endpoints URLs must also be reachable by the cloud gateway. * You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Microsoft Entra authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.-
+* When using a custom header (`header-name`), the configured required scheme (`require-scheme`) is ignored. To use a required scheme, JWT tokens must be provided in the `Authorization` header.
## Examples
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
The free certificate comes with the following limitations:
- Isn't exportable.
- Isn't supported in an App Service Environment (ASE).
- Only supports alphanumeric characters, dashes (-), and periods (.).
+- Only supports custom domains of up to 64 characters.
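If you script your deployments, the free managed certificate can also be created from the CLI. This is a sketch with placeholder names, and it assumes the custom domain is already mapped to the app:

```azurecli-interactive
# Create the free App Service managed certificate for a mapped custom domain.
az webapp config ssl create \
    --resource-group myResourceGroup \
    --name myWebApp \
    --hostname www.contoso.com
```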
### [Apex domain](#tab/apex)

- Must have an A record pointing to your web app's IP address.
app-service Deploy Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-best-practices.md
App Service has [built-in continuous delivery](deploy-continuous-deployment.md)
### Use GitHub Actions
-You can also automate your container deployment [with GitHub Actions](./deploy-ci-cd-custom-container.md). The workflow file below will build and tag the container with the commit ID, push it to a container registry, and update the specified site slot with the new image tag.
+You can also automate your container deployment [with GitHub Actions](https://github.com/Azure/webapps-deploy). The workflow file below will build and tag the container with the commit ID, push it to a container registry, and update the specified web app with the new image tag.
```yaml
-name: Build and deploy a container image to Azure Web Apps
+name: Linux_Container_Node_Workflow

on:
  push:
    branches:
      - <your-branch-name>

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
-
steps:
- - uses: actions/checkout@main
-
- -name: Authenticate using a Service Principal
- uses: azure/actions/login@v1
- with:
- creds: ${{ secrets.AZURE_SP }}
+ # checkout the repo
+ - name: 'Checkout Github Action'
+ uses: actions/checkout@main
- - uses: azure/container-actions/docker-login@v1
+ - uses: azure/docker-login@v1
with:
- username: ${{ secrets.DOCKER_USERNAME }}
- password: ${{ secrets.DOCKER_PASSWORD }}
+ login-server: contoso.azurecr.io
+ username: ${{ secrets.REGISTRY_USERNAME }}
+ password: ${{ secrets.REGISTRY_PASSWORD }}
- - name: Build and push the image tagged with the git commit hash
- run: |
- docker build . -t contoso/demo:${{ github.sha }}
- docker push contoso/demo:${{ github.sha }}
+ - run: |
+ docker build . -t contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
+ docker push contoso.azurecr.io/nodejssampleapp:${{ github.sha }}
- - name: Update image tag on the Azure Web App
- uses: azure/webapps-container-deploy@v1
+ - uses: azure/webapps-deploy@v2
with:
- app-name: '<your-webapp-name>'
- slot-name: '<your-slot-name>'
- images: 'contoso/demo:${{ github.sha }}'
+ app-name: 'node-rnc'
+ publish-profile: ${{ secrets.azureWebAppPublishProfile }}
+ images: 'contoso.azurecr.io/nodejssampleapp:${{ github.sha }}'
``` ### Use other automation providers
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
Note that _/api/health_ is just an example added for illustration purposes. We d
> - The Health check path should check critical components of your application. For example, if your application depends on a database and a messaging system, the Health check endpoint should connect to those components. If the application can't connect to a critical component, then the path should return a 500-level response code to indicate the app is unhealthy. Also, if the path does not return a response within 1 minute, the health check ping is considered unhealthy.
> - When selecting the Health check path, make sure you're selecting a path that returns a 200 status code, only when the app is fully warmed up.
> - In order to use Health check on your Function App, you must use a [premium or dedicated hosting plan](../azure-functions/functions-scale.md#overview-of-plans).
+> - For details about Health check on Function Apps, see [Monitor function apps using Health check](/azure/azure-functions/configure-monitoring?tabs=v2#monitor-function-apps-using-health-check).
> [!CAUTION]
> Health check configuration changes restart your app. To minimize impact to production apps, we recommend [configuring staging slots](deploy-staging-slots.md) and swapping to production.
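Where the portal isn't convenient, the same setting can be applied from the CLI. This sketch uses the illustrative `/api/health` path; `healthCheckPath` is the underlying site-config property:

```azurecli-interactive
# Set the health check path on an existing app (names are placeholders).
az webapp config set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --generic-configurations '{"healthCheckPath": "/api/health"}'
```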
Health check integrates with App Service's [authentication and authorization fea
If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. Once you have those features in-place, you can authenticate the health check request by inspecting the header, `x-ms-auth-internal-token`, and validating that it matches the SHA256 hash of the environment variable `WEBSITE_AUTH_ENCRYPTION_KEY`. If they match, then the health check request is valid and originating from App Service.
+> [!NOTE]
+> Specifically for [Azure Functions authentication](/azure/azure-functions/security-concepts?tabs=v4#function-access-keys), the function that serves as the Health check endpoint needs to allow anonymous access.
+ ##### [.NET](#tab/dotnet) ```C#
app-service Terraform Secure Backend Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/terraform-secure-backend-frontend.md
This article illustrates an example use of [Private Endpoint](../networking/priv
- Deploy a VNet
- Create the first subnet for the integration
- Create the second subnet for the private endpoint, you have to set a specific parameter to disable network policies
-- Deploy one App Service plan of type PremiumV2 or PremiumV3, required for Private Endpoint feature
+- Deploy one App Service plan of type Basic, Standard, PremiumV2, PremiumV3, IsolatedV2, or Functions Premium (sometimes referred to as the Elastic Premium plan), which is required for the Private Endpoint feature
- Create the frontend web app with specific app settings to consume the private DNS zone, [more details](../overview-vnet-integration.md#azure-dns-private-zones) - Connect the frontend web app to the integration subnet - Create the backend web app
application-gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/diagnostics.md
Previously updated : 10/11/2023 Last updated : 1/10/2024
Here an example of the access log emitted in JSON format to a storage account.
"location": "northcentralus" } ```-
-### Limitations
-- Although it's possible to configure logging to log analytics, logs are currently not emitted to a log analytics workspace or event hub. Log analytics and event hub streaming will be supported in a future update.
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 11/15/2023 Last updated : 01/10/2024
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 11/09/2023 Last updated : 01/10/2024
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
The default _target executions per instance_ values come from the SDKs used by t
The following considerations apply when using target-based scaling:

+ Target-based scaling is enabled by default for function apps on the Consumption plan or for Premium plans, but you can [opt-out](#opting-out). Event-driven scaling isn't supported when running on Dedicated (App Service) plans.
-+ Your [function app runtime version](set-runtime-version.md) must be 4.3.0 or a later version.
+ Target-based scaling is enabled by default on function app runtime version 4.19.0 or later.
+ When using target-based scaling, the `functionAppScaleLimit` site setting is still honored. For more information, see [Limit scale out](event-driven-scaling.md#limit-scale-out).
+ To achieve the most accurate scaling based on metrics, use only one target-based triggered function per function app.
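As a sketch, opting out for a single app is an app-setting change. The `TARGET_BASED_SCALING_ENABLED` name is an assumption here; verify it in the opt-out section linked above:

```azurecli-interactive
# Disable target-based scaling for one function app (setting name assumed; verify before use).
az functionapp config appsettings set \
    --resource-group myResourceGroup \
    --name myFunctionApp \
    --settings TARGET_BASED_SCALING_ENABLED=0
```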
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
You can remove CORS:
## Understand billing transactions
-Azure Maps doesn't count billing transactions for:
-- 5xx HTTP Status Codes
-- 401 (Unauthorized)
-- 403 (Forbidden)
-- 408 (Timeout)
-- 429 (TooManyRequests)
-- CORS preflight requests
-
-For more information on billing transactions and other Azure Maps pricing information, see [Azure Maps pricing].
## Next steps
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md
Title: React to Azure Maps events by using Event Grid
description: Find out how to react to Azure Maps events involving geofences. See how to listen to map events and how to use Event Grid to reroute events to event handlers. Previously updated : 07/16/2020 Last updated : 01/08/2024
Azure Maps integrates with Azure Event Grid, so that users can send event notifications to other services and trigger downstream processes. The purpose of this article is to help you configure your business applications to listen to Azure Maps events. This allows users to react to critical events in a reliable, scalable, and secure manner. For example, users can build an application to update a database, create a ticket, and deliver an email notification, every time a device enters a geofence.
-> [!NOTE]
-> The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. When creating an Azure Maps account in the Azure portal, this isn't given as an option. For more information, see [Create an Azure Maps account with a global region].
- Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions] and [Azure Logic Apps]. It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid]. ![Azure Event Grid functional model](./media/azure-maps-event-grid-integration/azure-event-grid-functional-model.png)
The following example shows the schema for GeofenceResult:
Applications that handle Azure Maps geofence events should follow a few recommended practices:
-* The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. When creating an Azure Maps account in the Azure portal, this isn't given as an option. For more information, see [Create an Azure Maps account with a global region].
* Configure multiple subscriptions to route events to the same event handler. It's important not to assume that events are from a particular source. Always check the message topic to ensure that the message came from the source that you expect. * Use the `X-Correlation-id` field in the response header to understand if your information about objects is up to date. Messages can arrive out of order or after a delay. * When a GET or a POST request in the Geofence API is called with the mode parameter set to `EnterAndExit`, then an Enter or Exit event is generated for each geometry in the geofence for which the status has changed from the previous Geofence API call.
To learn more about how to use geofencing to control operations at a constructio
[Azure Functions]: ../azure-functions/functions-overview.md [Azure Logic Apps]: ../azure-functions/functions-overview.md [Azure Maps as an Event Grid source]: ../event-grid/event-schema-azure-maps.md
-[Create an Azure Maps account with a global region]: tutorial-geofence.md#create-an-azure-maps-account-with-a-global-region
[event subscriptions]: ../event-grid/concepts.md#event-subscriptions [Set up a geofence by using Azure Maps]: tutorial-geofence.md
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Azure Maps provides services to support the tracking of equipment entering and e
> [!div class="checklist"] >
-> * Create an Azure Maps account with a global region.
> * Upload [Geofencing GeoJSON data] that defines the construction site areas you want to monitor. You'll upload geofences as polygon coordinates to your Azure storage account, then use the [data registry] service to register that data with your Azure Maps account. > * Set up two [logic apps] that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area. > * Use [Azure Event Grid] to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence.
Azure Maps provides services to support the tracking of equipment entering and e
## Prerequisites
-* This tutorial uses the [Postman] application, but you can use a different API development environment.
+* An [Azure Maps account]
+* A [subscription key]
+* An [Azure storage account]
+
+This tutorial uses the [Postman] application, but you can use a different API development environment.
>[!IMPORTANT] >
-> * In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-
-## Create an Azure Maps account with a global region
-
-The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. This setting isn't given as an option when creating an Azure Maps account in the Azure portal, however you do have several other options for creating a new Azure Maps account with the *global* region setting. This section lists the three methods that can be used to create an Azure Maps account with the region set to *global*.
-
-> [!NOTE]
-> The `location` property in both the ARM template and PowerShell `New-AzMapsAccount` command refer to the same property as the `Region` field in the Azure portal.
-
-### Use an ARM template to create an Azure Maps account with a global region
-
-[Create your Azure Maps account using an ARM template], making sure to set `location` to `global` in the `resources` section of the ARM template.
-
-### Use PowerShell to create an Azure Maps account with a global region
-
-```powershell
-New-AzMapsAccount -ResourceGroupName your-Resource-Group -Name name-of-maps-account -SkuName g2 -Location global
-```
-
-### Use Azure CLI to create an Azure Maps account with a global region
-
-The Azure CLI command [az maps account create] doesnΓÇÖt have a location property, but defaults to `global`, making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event.
+> In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
## Upload geofencing GeoJSON data
Create the geofence JSON file using the following geofence data. You'll upload t
} ```
-Follow the steps outlined in the [How to create data registry] article to upload the geofence JSON file into your Azure storage account then register it in your Azure Maps account.
+Follow the steps outlined in the [How to create data registry] article to upload the geofence JSON file into your Azure storage account and register it in your Azure Maps account.
> [!IMPORTANT]
> Make sure to make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the geofence you uploaded into your Azure storage account from your source code and HTTP requests.
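For a rough sense of how the `udid` is used in requests, a geofence query looks something like the following sketch. The API version and parameter values are illustrative; check the [Spatial Geofence Get API] reference before relying on them:

```bash
# Illustrative geofence query (verify api-version and parameters against the API reference).
curl "https://atlas.microsoft.com/spatial/geofence/json?api-version=2022-08-01&deviceId=device_1&udid={udid}&lat=47.638&lon=-122.132&searchBuffer=5&isAsync=True&mode=EnterAndExit&subscription-key={Your-Azure-Maps-Subscription-key}"
```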
To create the logic apps:
2. In the upper-left corner of the Azure portal, select **Create a resource**.
-3. In the **Search the Marketplace** box, type **Logic App**.
+3. In the **Search services and marketplace** box, type **Logic App**.
4. From the results, select **Logic App**. Then, select **Create**.
To create the logic apps:
* The **Subscription** that you want to use for this logic app. * The **Resource group** name for this logic app. You can choose to **Create new** or **Use existing** resource group. * The **Logic App name** of your logic app. In this case, use `Equipment-Enter` as the name.
+ * Select **Consumption** as the **Plan type**. For more information, see [Billing and pricing models] in the Logic App documentation.
For the purposes of this tutorial, keep all other values on their default settings.
- :::image type="content" source="./media/tutorial-geofence/logic-app-create.png" alt-text="Screenshot of create a logic app.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/logic-app-create.png" alt-text="Screenshot of create a logic app.":::
-6. Select **Review + Create**. Review your settings and select **Create**.
+6. When you're done, select **Review + Create**. After Azure validates the information about your logic app resource, select **Create**.
7. When the deployment completes successfully, select **Go to resource**.
-8. In the **Logic App Designer**, scroll down to the **Start with a common trigger** section. Select **When an HTTP request is received**.
+8. Select **Logic app designer** in the **Development Tools** section of the menu on the left, and scroll down to the **Start with a common trigger** section. Select **When an HTTP request is received**.
- :::image type="content" source="./media/tutorial-geofence/logic-app-trigger.png" alt-text="Screenshot of create a logic app HTTP trigger.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/logic-app-trigger.png" alt-text="Screenshot of create a logic app HTTP trigger.":::
9. In the upper-right corner of Logic App Designer, select **Save**. The **HTTP POST URL** is automatically generated. Save the URL. You need it in the next section to create an event endpoint.
- :::image type="content" source="./media/tutorial-geofence/logic-app-httprequest.png" alt-text="Screenshot of Logic App HTTP Request URL and JSON.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/logic-app-httprequest.png" alt-text="Screenshot of Logic App HTTP Request URL and JSON.":::
10. Select **+ New Step**. 11. In the search box, type `outlook.com email`. In the **Actions** list, scroll down and select **Send an email (V2)**.
- :::image type="content" source="./media/tutorial-geofence/logic-app-designer.png" alt-text="Screenshot of create a logic app designer.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/logic-app-designer.png" alt-text="Screenshot of create a logic app designer.":::
12. Sign in to your Outlook account. Make sure to select **Yes** to allow the logic app to access the account. Fill in the fields for sending an email.
- :::image type="content" source="./media/tutorial-geofence/logic-app-email.png" alt-text="Screenshot of create a logic app send email step.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/logic-app-email.png" alt-text="Screenshot of create a logic app send email step.":::
>[!TIP] > You can retrieve GeoJSON response data, such as `geometryId` or `deviceId`, in your email notifications. You can configure Logic Apps to read the data sent by Event Grid. For information on how to configure Logic Apps to consume and pass event data into email notifications, see [Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps].
Create geofence exit and enter event subscriptions:
* For **Endpoint Type**, choose `Web Hook`. * For **Endpoint**, copy the HTTP POST URL for the logic app enter endpoint that you created in the previous section. If you forgot to save it, you can just go back into Logic App Designer and copy it from the HTTP trigger step.
- :::image type="content" source="./media/tutorial-geofence/events-subscription.png" alt-text="Screenshot of Azure Maps events subscription details.":::
+ :::image type="content" border="false" source="./media/tutorial-geofence/events-subscription.png" alt-text="Screenshot of Azure Maps events subscription details.":::
6. Select **Create**.
In the preceding GeoJSON response, the equipment has remained in the main site g
In the preceding GeoJSON response, the equipment has exited the main site geofence. As a result, the `isEventPublished` parameter is set to `true`, and the Operations Manager receives an email notification indicating that the equipment has exited a geofence.
-You can also [Send email notifications using Event Grid and Logic Apps] and check [Supported Events Handlers in Event Grid] using Azure Maps.
+You can also [Send email notifications using Event Grid and Logic Apps]. For more information, see [Event handlers in Azure Event Grid].
## Clean up resources
There are no resources that require cleanup.
> [!div class="nextstepaction"] > [Handle content types in Azure Logic Apps]
-[az maps account create]: /cli/azure/maps/account?view=azure-cli-latest&preserve-view=true#az-maps-account-create
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[Azure Event Grid]: ../event-grid/overview.md [Azure Maps service geographic scope]: geographic-scope.md [Azure portal]: https://portal.azure.com
-[Create your Azure Maps account using an ARM template]: how-to-create-template.md
+[Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
+[Billing and pricing models]: /azure/logic-apps/logic-apps-pricing#standard-pricing
[data registry]: /rest/api/maps/data-registry [Geofencing GeoJSON data]: geofence-geojson.md [Handle content types in Azure Logic Apps]: ../logic-apps/logic-apps-content-type.md
[Search Geofence Get API]: /rest/api/maps/spatial/getgeofence
[Send email notifications using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md
[Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence
-[Supported Events Handlers in Event Grid]: ../event-grid/event-handlers.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Event handlers in Azure Event Grid]: ../event-grid/event-handlers.md
[three event types]: ../event-grid/event-schema-azure-maps.md
[Tutorial: Send email notifications about Azure IoT Hub events using Event Grid and Logic Apps]: ../event-grid/publish-iot-hub-events-to-logic-apps.md
[Upload Geofencing GeoJSON data section]: #upload-geofencing-geojson-data
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transactions.
| Web feature | 1k transactions | $21 | -->
+## Understand billing transactions
+
+## Next steps
+> [!div class="nextstepaction"]
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
+
+ Title: Azure Monitor Agent MMA legacy agent removal tool
+description: This article describes a PowerShell script used to remove MMA agent from systems that users have migrated to AMA.
++++ Last updated : 01/09/2024+
+# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
++
+# MMA Discovery and Removal Tool
+After you migrate your machines to AMA, you need to remove the MMA agent to avoid duplication of logs. The AzTS MMA Discovery and Removal Utility can centrally remove the MMA extension from Azure virtual machines (VMs), Azure Virtual Machine Scale Sets, and Azure Arc-enabled servers across a tenant.
+The utility works in two steps:
+1. Discovery: First, the utility creates an inventory of all machines that have the MMA agent installed. We recommend that you don't create any new VMs, Virtual Machine Scale Sets, or Azure Arc-enabled servers with the MMA extension while the utility is running.
+2. Removal: Second, the utility selects machines that have both MMA and AMA and removes the MMA extension. You can disable this step and run it later, after you validate the list of machines. There's also an option to remove the extension from machines that have only the MMA agent, but we recommend that you first migrate all dependencies to AMA and then remove MMA.
+
+## Prerequisites
+Perform all of the setup steps in [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell) installed. You also need:
+ - Windows 10+ or Windows Server 2019+.
+ - PowerShell 5.0 or higher. Check the version by running `$PSVersionTable` and checking the `PSVersion` value.
+ - PowerShell language mode set to `FullLanguage`. Check the mode by running `$ExecutionContext.SessionState.LanguageMode` in PowerShell. For more information, see [about_Language_Modes](https://learn.microsoft.com/powershell/module/microsoft.powershell.core/about/about_language_modes?source=recommendations).
+ - Bicep. The setup scripts use Bicep to automate the installation. Check the installation by running `bicep --version`. See [Install Bicep with Azure PowerShell](https://learn.microsoft.com/azure/azure-resource-manager/bicep/install#azure-powershell).
+ - A [user-assigned managed identity (MI)](/azure/active-directory/managed-identities-azure-resources/overview) that has 'Reader', 'Virtual Machine Contributor', and 'Azure Arc ScVmm VM Contributor' access on the configured target scopes.
+ - A new resource group to contain all of the Azure resources that the setup automation creates.
+ - Permissions to grant the remediation user-assigned MI the above roles on the target scopes: you must have the User Access Administrator (UAA) or Owner role on the configured scopes. For example, if setup is being configured for subscription 'x', you must have a UAA role assignment on subscription 'x' so that the script can grant the remediation user-assigned MI its permissions.
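+
+To sanity-check these prerequisites before you start, you can run the following read-only commands (all taken from the checks described above); they make no changes to your system.
+
+``` PowerShell
+# Check the PowerShell version (5.0 or higher is required).
+$PSVersionTable.PSVersion
+
+# Confirm the language mode is FullLanguage.
+$ExecutionContext.SessionState.LanguageMode
+
+# Confirm Bicep is installed and on the path.
+bicep --version
+```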
++
+## Download the deployment package
+ The package contains:
+- Bicep templates, which contain the resource configuration details that you create as part of setup.
+- Deployment setup scripts, which provide the cmdlets to run the installation.
+
+To download the package:
+1. Download the deployment package zip from [here](https://github.com/azsk/AzTS-docs/raw/main/TemplateFiles/AzTSMMARemovalUtilityDeploymentFiles.zip) to your local machine.
+2. Extract the zip to a local folder.
+3. Unblock the files with this script:
+
+ ``` PowerShell
+ Get-ChildItem -Path "<Extracted folder path>" -Recurse | Unblock-File
+ ```
+
+## Set up the tool
+
+### [Single Tenant](#tab/Single)
+
+You perform setup in two steps:
+1. Go to the deployment folder and load the consolidated setup script. You must have **Owner** access on the subscription.
+
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilitySetupConsolidated.ps1"
+ ```
+
+2. Run the setup command. `Install-AzTSMMARemovalUtilitySolutionConsolidated` performs the following operations:
+ - Installs the required Az modules.
+ - Sets up the remediation user-assigned managed identity.
+ - Prompts for and collects onboarding details for usage telemetry collection, based on user preference.
+ - Creates or updates the resource group.
+ - Creates or updates the resources with MIs assigned.
+ - Creates or updates the monitoring dashboard.
+ - Configures target scopes.
+
+You must sign in to your Azure account by using the following PowerShell command.
+
+``` PowerShell
+$TenantId = "<TenantId>"
+Connect-AzAccount -Tenant $TenantId
+```
+Run the setup script:
+``` PowerShell
+$SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
+ -RemediationIdentityHostSubId <MIHostingSubId> `
+ -RemediationIdentityHostRGName <MIHostingRGName> `
+ -RemediationIdentityName <MIName> `
+ -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
+ -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") `
+ -TenantScope `
+ -SubscriptionId <HostingSubId> `
+ -HostRGName <HostingRGName> `
+ -Location <Location> `
+ -AzureEnvironmentName <AzureEnvironmentName>
+```
+
+Parameters
+
+|Param Name | Description | Required |
+|:-|:-|:-:|
+|RemediationIdentityHostSubId| Subscription ID in which to create the remediation resources | Yes |
+|RemediationIdentityHostRGName| Name of the new resource group in which to create the remediation identity. Defaults to 'AzTS-MMARemovalUtility-RG'| No |
+|RemediationIdentityName| Name of the remediation MI| Yes |
+|TargetSubscriptionIds| List of target subscription ID(s) to run on | No |
+|TargetManagementGroupNames| List of target management group name(s) to run on | No|
+|TenantScope| Switch to activate tenant scope and assign roles by using your tenant ID| No|
+|SubscriptionId| Subscription ID where the setup is installed| Yes|
+|HostRGName| Name of the new resource group where the remediation MI is created. Default value is 'AzTS-MMARemovalUtility-Host-RG'| No|
+|Location| Azure region where the setup is created. Default value is 'EastUS2'| No|
+|AzureEnvironmentName| Azure environment where the solution is to be installed: AzureCloud or AzureGovernmentCloud. Default value is 'AzureCloud'| No|
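+
+For illustration, here's a hypothetical invocation that targets a single subscription and relies on the default resource group names; every ID and name below is an example placeholder, not a real resource.
+
+``` PowerShell
+# Hypothetical IDs and names, for illustration only.
+$SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
+    -RemediationIdentityHostSubId "6b052e15-03d3-4f17-b2e1-be7f07588291" `
+    -RemediationIdentityName "MMARemovalUtility-MI" `
+    -TargetSubscriptionIds @("6b052e15-03d3-4f17-b2e1-be7f07588291") `
+    -SubscriptionId "6b052e15-03d3-4f17-b2e1-be7f07588291" `
+    -Location "EastUS2" `
+    -AzureEnvironmentName "AzureCloud"
+```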
+
+### [MultiTenant](#tab/MultiTenant)
+
+In this section, we walk you through the steps for setting up the multitenant AzTS MMA Removal Utility. This setup may take up to 30 minutes and has nine steps.
+
+1. Load the setup script.
+Point the current path to the folder containing the extracted deployment package and run the setup script.
+
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilitySetup.ps1"
+    ```
+
+2. Install the required Az modules.
+Az modules contain cmdlets to deploy Azure resources and are used to create the setup resources. Install the required Az PowerShell modules by using the following command. For more information about Az modules, see [Install Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps). You must point the current path to the extracted folder location.
+
+``` PowerShell
+Set-Prerequisites
+```
+
+3. Set up the multitenant identity.
+This step creates a Microsoft Entra ID (MEI) application identity, which is associated with each tenant through a service principal. Sign in to the Microsoft Entra ID account where you want to install the Removal Utility by using the following PowerShell commands. The setup command performs the following operations:
+ - Creates a new multitenant MEI application, if you don't provide a pre-existing MEI application object ID.
+ - Creates password credentials for the MEI application.
+
+``` PowerShell
+Disconnect-AzAccount
+Disconnect-AzureAD
+$TenantId = "<TenantId>"
+Connect-AzAccount -Tenant $TenantId
+Connect-AzureAD -TenantId $TenantId
+```
+
+``` PowerShell
+$Identity = Set-AzTSMMARemovalUtilitySolutionMultiTenantRemediationIdentity `
+ -DisplayName <AADAppDisplayName> `
+ -ObjectId <PreExistingAADAppId> `
+ -AdditionalOwnerUPNs @("<OwnerUPN1>","<OwnerUPN2>")
+$Identity.ApplicationId
+$Identity.ObjectId
+$Identity.Secret
+```
+
+Parameters
+
+|Param Name| Description | Required |
+|:-|:-|:-:|
+| DisplayName | Display Name of the Remediation Identity| Yes |
+| ObjectId | Object Id of the Remediation Identity | No |
+| AdditionalOwnerUPNs | User Principal Names (UPNs) of the owners for the App to be created | No |
+
+4. Set up secrets storage.
+In this step you create storage for secrets. You must have Owner access on the subscription to create a new resource group. The command performs the following operations:
+ - Creates or updates the resource group for the Key Vault.
+ - Creates or updates the Key Vault.
+ - Stores the secret.
+
+``` PowerShell
+$KeyVault = Set-AzTSMMARemovalUtilitySolutionSecretStorage `
+ -SubscriptionId <KVHostingSubId> `
+ -ResourceGroupName <KVHostingRGName> `
+ -Location <Location> `
+ -KeyVaultName <KeyVaultName> `
+ -AADAppPasswordCredential $Identity.Secret
+$KeyVault.Outputs.keyVaultResourceId.Value
+$KeyVault.Outputs.secretURI.Value
+$KeyVault.Outputs.logAnalyticsResourceId.Value
+```
+
+Parameters
+
+|Param Name|Description|Required|
+|:-|:-|:-|
+| SubscriptionId | Subscription ID where the Key Vault is created| Yes |
+| ResourceGroupName | Resource group name where the Key Vault is created. This should be a different resource group from the setup resource group | Yes |
+|Location| Azure region where the Key Vault is created. For better performance, we recommend creating all of the setup resources in one region. Default value is 'EastUS2'| No |
+|KeyVaultName| Name of the Key Vault to create| Yes |
+|AADAppPasswordCredential| Removal Utility MEI application password credentials| Yes |
+
+5. Set up the installation.
+This step installs the MMA Removal Utility, which discovers and removes MMA agents installed on virtual machines. You must have Owner access on the subscription where the setup is created. We recommend that you use a new resource group for the tool. The installation command performs the following operations:
+ - Prompts and collects onboarding details for usage telemetry collection based on user preference.
+ - Creates the RG if it doesn't exist.
+ - Creates or updates the resources with MIs.
+ - Creates or updates the monitoring dashboard.
+
+``` PowerShell
+$Solution = Install-AzTSMMARemovalUtilitySolution `
+ -SubscriptionId <HostingSubId> `
+ -HostRGName <HostingRGName> `
+ -Location <Location> `
+ -SupportMultipleTenant `
+ -IdentityApplicationId $Identity.ApplicationId `
+ -IdentitySecretUri ('@Microsoft.KeyVault(SecretUri={0})' -f $KeyVault.Outputs.secretURI.Value)
+$Solution.Outputs.internalMIObjectId.Value
+```
+
+Parameters
+
+| Param Name | Description | Required |
+|:-|:-|:-|
+| SubscriptionId | Subscription ID where setup is created | Yes |
+| HostRGName | Resource group name where the setup is created. Default value is 'AzTS-MMARemovalUtility-Host-RG'| No |
+| Location | Azure region where the setup is created. For better performance, we recommend hosting the MI and the Removal Utility in the same region. Default value is 'EastUS2'| No |
+| SupportMultipleTenant | Switch to support multitenant setup | No |
+| IdentityApplicationId | MEI application ID| Yes |
+| IdentitySecretUri | MEI application secret URI| No |
+
+6. Grant the internal remediation identity access to the Key Vault.
+In this step, a user-assigned managed identity is created to enable the function apps to read the Key Vault for authentication. You must have Owner access on the resource group.
+
+``` PowerShell
+Grant-AzTSMMARemediationIdentityAccessOnKeyVault `
+ -SubscriptionId <HostingSubId> `
+ -ResourceId $KeyVault.Outputs.keyVaultResourceId.Value `
+ -UserAssignedIdentityObjectId $Solution.Outputs.internalMIObjectId.Value `
+ -SendAlertsToEmailIds @("<EmailId1>","<EmailId2>") `
+ -IdentitySecretUri $KeyVault.Outputs.secretURI.Value `
+ -LAWorkspaceResourceId $KeyVault.Outputs.logAnalyticsResourceId.Value `
+ -DeployMonitoringAlert
+```
+
+Parameters
+
+| Param Name | Description | Required |
+|:-|:-|:-:|
+|SubscriptionId| Subscription ID where setup is created | Yes |
+|ResourceId| Resource Id of existing key vault | Yes |
+|UserAssignedIdentityObjectId| Object ID of your managed identity | Yes |
+|SendAlertsToEmailIds| User email IDs to whom alerts should be sent| No; Yes if the DeployMonitoringAlert switch is enabled |
+| IdentitySecretUri | Key Vault secret URI of the Removal Utility app's credentials | No; Yes if the DeployMonitoringAlert switch is enabled |
+| LAWorkspaceResourceId | Resource ID of the Log Analytics workspace associated with the Key Vault| No; Yes if the DeployMonitoringAlert switch is enabled |
+| DeployMonitoringAlert | Switch to create alerts on top of the Key Vault auditing logs | No |
+
+7. Set up the runbook for managing Key Vault IP ranges.
+This step creates a secure Key Vault with public network access disabled. The IP ranges for the function apps must be allowed access to the Key Vault. You must have Owner access on the resource group. The command performs the following operations:
+ - Creates or updates the automation account.
+ - Grants the automation account access on the Key Vault by using its system-assigned managed identity.
+ - Sets up the runbook with a script that fetches the IP ranges published by Azure every week.
+ - Runs the runbook once at setup time and schedules it to run every week.
+
+``` PowerShell
+Set-AzTSMMARemovalUtilityRunbook `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ -Location <Location> `
+ -FunctionAppUsageRegion <FunctionAppUsageRegion> `
+ -KeyVaultResourceId $KeyVault.Outputs.keyVaultResourceId.Value
+```
+
+Parameters
+
+|Param Name |Description | Required|
+|:-|:-|:-|
+|SubscriptionId| Subscription ID where the automation account and Key Vault are present| Yes|
+|ResourceGroupName| Name of the resource group where the automation account and Key Vault are present| Yes|
+|Location| Azure region where your automation account is created. For better performance, we recommend creating all of the setup resources in the same region. Default value is 'EastUS2'| No|
+|FunctionAppUsageRegion| Region of the dynamic IP addresses that are allowed on the Key Vault. Default value is 'EastUS2'| Yes|
+|KeyVaultResourceId| Resource ID of the Key Vault on which the IP addresses are allowed| Yes|
+
+8. Set up SPNs and grant required roles for each tenant.
+In this step you create service principals (SPNs) for each tenant and grant permissions on each tenant. Setup requires Reader, Virtual Machine Contributor, and Azure Arc ScVmm VM Contributor access on your scopes. The configured scopes can be a tenant, management group(s), subscription(s), or both management group(s) and subscription(s).
+For each tenant, perform these steps, and make sure you have enough permissions on the other tenant to create SPNs. You must have the **User Access Administrator (UAA) or Owner** role on the configured scopes. For example, to run setup on subscription 'X', you must have a UAA role assignment on subscription 'X' so that you can grant the SPN the required permissions.
+
+``` PowerShell
+$TenantId = "<TenantId>"
+Disconnect-AzureAD
+Connect-AzureAD -TenantId $TenantId
+$SPN = Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN -AppId $Identity.ApplicationId
+Grant-AzSKAzureRoleToMultiTenantIdentitySPN -AADIdentityObjectId $SPN.ObjectId `
+ -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
+ -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>")
+```
+
+Parameters
+For Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN,
+
+|Param Name | Description | Required |
+|:-|:-|:-:|
+|AppId| Your application Id that is created| Yes |
+
+For Grant-AzSKAzureRoleToMultiTenantIdentitySPN,
+
+|Param Name | Description | Required|
+|:-|:-|:-:|
+| AADIdentityObjectId | Your identity object| Yes|
+| TargetSubscriptionIds| Your list of target subscription ID(s) to run set up on | No |
+| TargetManagementGroupNames | Your list of target management group name(s) to run set up on | No|
+
+9. Configure target scopes.
+You can configure target scopes by using the `Set-AzTSMMARemovalUtilitySolutionScopes` cmdlet:
+
+``` PowerShell
+$ConfiguredTargetScopes = Set-AzTSMMARemovalUtilitySolutionScopes `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ -ScopesFilePath <ScopesFilePath>
+```
+Parameters
+
+|Param Name|Description|Required|
+|:-|:-|:-:|
+|SubscriptionId| Your subscription ID where setup is installed | Yes |
+|ResourceGroupName| Your resource group name where setup is installed| Yes|
+|ScopesFilePath| File path of the target scope configuration file. See the scope configuration format that follows| Yes |
+
+The scope configuration file is a CSV file with a header row and three columns:
+
+| ScopeType | ScopeId | TenantId |
+|:|:|:|
+| Subscription | /subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8 | 72f988bf-86f1-41af-91ab-2d7cd011db47 |
+| Subscription | /subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35 | e60f12c0-e1dc-4be1-8d86-e979a5527830 |
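+
+As a convenience, here's a minimal sketch that writes such a scope configuration file from PowerShell; the subscription and tenant IDs are the example values from the table above.
+
+``` PowerShell
+# Build the scope configuration rows (example IDs from the table above).
+$scopes = @(
+    [pscustomobject]@{ ScopeType = "Subscription"; ScopeId = "/subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8"; TenantId = "72f988bf-86f1-41af-91ab-2d7cd011db47" }
+    [pscustomobject]@{ ScopeType = "Subscription"; ScopeId = "/subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35"; TenantId = "e60f12c0-e1dc-4be1-8d86-e979a5527830" }
+)
+
+# Write the CSV with a header row and three columns.
+$scopes | Export-Csv -Path ".\Scopes.csv" -NoTypeInformation
+```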
+
+## Run the tool
+
+### [Discovery](#tab/Discovery)
+
+``` PowerShell
+Update-AzTSMMARemovalUtilityDiscoveryTrigger `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ -StartScopeResolverAfterMinutes 60 `
+ -StartExtensionDiscoveryAfterMinutes 30
+```
+
+Parameters
+
+|Param Name|Description|Required?
+|:-|:-|:-:|
+|SubscriptionId| Subscription ID where you installed the Utility | Yes|
+|ResourceGroupName| ResourceGroup name where you installed the Utility | Yes|
+|StartScopeResolverAfterMinutes| Time in minutes to wait before running resolver | Yes (Mutually exclusive with param '-StartScopeResolverImmediatley')|
+|StartScopeResolverImmediatley | Run resolver immediately | Yes (Mutually exclusive with param '-StartScopeResolverAfterMinutes') |
+|StartExtensionDiscoveryAfterMinutes | Time in minutes to wait to run discovery (should be after resolver is done) | Yes (Mutually exclusive with param '-StartExtensionDiscoveryImmediatley')|
+|StartExtensionDiscoveryImmediatley | Run extensions discovery immediately | Yes (Mutually exclusive with param '-StartExtensionDiscoveryAfterMinutes')|
+
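+For example, the following sketch starts the scope resolver immediately and schedules extension discovery to run 30 minutes later (leaving time for the resolver to finish); the parameter names are used exactly as documented in the preceding table:
+
+``` PowerShell
+# Trigger the resolver right away; discovery follows after the resolver has had time to finish.
+Update-AzTSMMARemovalUtilityDiscoveryTrigger `
+    -SubscriptionId <HostingSubId> `
+    -ResourceGroupName <HostingRGName> `
+    -StartScopeResolverImmediatley `
+    -StartExtensionDiscoveryAfterMinutes 30
+```
+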
+### [Removal](#tab/Removal)
+
+By default, the removal phase is disabled. We recommend that you run it after validating the inventory of machines from the discovery step.
+``` PowerShell
+Update-AzTSMMARemovalUtilityRemovalTrigger `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ -StartAfterMinutes 60 `
+ -EnableRemovalPhase `
+ -RemovalCondition 'CheckForAMAPresence'
+```
+
+Parameters
+
+| Param Name | Description | Required?
+|:-|:-|:-:|
+| SubscriptionId | Subscription ID where you installed the Utility | Yes |
+| ResourceGroupName | ResourceGroup name where you installed the Utility| Yes|
+| StartAfterMinutes | Time in minutes to wait before starting removal | Yes (Mutually exclusive with param '-StartImmediately')|
+| StartImmediately | Run removal phase immediately | Yes (Mutually exclusive with param '-StartAfterMinutes') |
+| EnableRemovalPhase | Enable removal phase | Yes (Mutually exclusive with param '-DisableRemovalPhase')|
+| RemovalCondition | Condition under which the MMA extension should be removed: `CheckForAMAPresence` removes MMA only when the AMA extension is present; `SkipAMAPresenceCheck` removes MMA in all cases, whether or not the AMA extension is present | No |
+| DisableRemovalPhase | Disable removal phase | Yes (Mutually exclusive with param '-EnableRemovalPhase')|
+
+**Known issues**
+- Removal of the MMA agent in Virtual Machine Scale Sets (VMSS) where the orchestration mode is 'Uniform' depends on the scale set's upgrade policy. We recommend that you manually upgrade the instances if the policy is set to 'Manual'.
+- If you get an error message like "The deployment MMARemovalenvironmentsetup-20233029T103026 failed with error(s). Showing 1 out of 1 error(s). Status Message: (Code:BadRequest) - We observed intermittent issue with App service deployment.", rerun the installation command with the same parameter values. The command should proceed without any error on the next attempt.
+- The extension removal progress tile on the monitoring dashboard shows some failures. The progress tile groups failures by error code; some known error codes, their reasons, and next steps to resolve them are listed here:
+
+| Error Code | Description/Reason | Next steps
+|:-|:-|:-|
+| AuthorizationFailed | The remediation identity doesn't have permission to perform the 'Extension delete' operation on the VM(s), VMSS, or Azure Arc-enabled servers.| Grant the 'VM Contributor' role to the remediation identity on the VM(s), grant the 'Azure Arc ScVmm VM Contributor' role to the remediation identity on the VMSS, and rerun the removal phase.|
+| OperationNotAllowed | The resource(s) are in a deallocated state, or a lock is applied on the resource(s). | Turn on the failed resource(s) and/or remove the lock, and rerun the removal phase. |
+
+The utility collects error details in the Log Analytics workspace that was used during setup. Go to the Log Analytics workspace, select **Logs**, and run the following query:
+
+``` KQL
+let timeago = timespan(7d);
+InventoryProcessingStatus_CL
+| where TimeGenerated > ago(timeago) and Source_s == "AzTS_07_ExtensionRemovalProcessor"
+| where ProcessingStatus_s !~ "Initiated"
+| summarize arg_max(TimeGenerated,*) by tolower(ResourceId)
+| project ResourceId, ProcessingStatus_s, ProcessErrorDetails_s
+```
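+
+If you prefer to run the same query from PowerShell instead of the portal, here's a minimal sketch that uses the `Az.OperationalInsights` module; the workspace GUID is a placeholder you fill in from your Log Analytics workspace overview.
+
+``` PowerShell
+# Requires the Az.OperationalInsights module: Install-Module Az.OperationalInsights
+$query = @"
+let timeago = timespan(7d);
+InventoryProcessingStatus_CL
+| where TimeGenerated > ago(timeago) and Source_s == "AzTS_07_ExtensionRemovalProcessor"
+| where ProcessingStatus_s !~ "Initiated"
+| summarize arg_max(TimeGenerated,*) by tolower(ResourceId)
+| project ResourceId, ProcessingStatus_s, ProcessErrorDetails_s
+"@
+
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<LogAnalyticsWorkspaceGuid>" -Query $query
+$results.Results | Format-Table
+```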
+
+### [CleanUp](#tab/CleanUp)
+
+The utility creates resources that you should clean up once you've removed MMA from your infrastructure. Perform the following steps to clean up:
+ 1. Go to the folder containing the deployment package and load the cleanup script:
+
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilityCleanUpScript.ps1"
+    ```
+
+2. Run the cleanup script:
+
+``` PowerShell
+Remove-AzTSMMARemovalUtilitySolutionResources `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ [-DeleteResourceGroup] `
+ [-KeepInventoryAndProcessLogs]
+```
+
+Parameters
+
+|Param Name|Description|Required|
+|:-|:-|:-:|
+|SubscriptionId| Subscription ID where the utility is installed| Yes|
+|ResourceGroupName| Name of the resource group where the utility is installed, which the cleanup deletes| Yes|
+|DeleteResourceGroup| Switch to delete the entire resource group| No|
+|KeepInventoryAndProcessLogs| Switch to exclude the Log Analytics workspace and Application Insights from deletion. Can't be used with DeleteResourceGroup.| No|
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
If any other type is used as a scope, it's stored under the property `Scope` in
`ApplicationInsightsLoggerProvider` captures `ILogger` logs and creates `TraceTelemetry` from them. If an `Exception` object is passed to the `Log` method on `ILogger`, `ExceptionTelemetry` is created instead of `TraceTelemetry`.
-These telemetry items can be found in the same places as any other `TraceTelemetry` or `ExceptionTelemetry` items for Application Insights, including the Azure portal, analytics, or the Visual Studio local debugger.
+**Viewing ILogger telemetry**
+
+In the Azure portal:
+1. Go to the Azure portal and open your Application Insights resource.
+2. Select the **Logs** section inside Application Insights.
+3. Use Kusto Query Language (KQL) to query ILogger messages, which are usually stored in the `traces` table.
+   - Example query: `traces | where message contains "YourSearchTerm"`.
+4. Refine your queries to filter ILogger data by severity, time range, or specific message content.
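+
+Outside the portal, you can run an equivalent query programmatically. The following is a minimal sketch, assuming a workspace-based Application Insights resource and the `Az.OperationalInsights` PowerShell module; note that in the Log Analytics workspace the Application Insights traces table surfaces as `AppTraces`, and the workspace GUID and search term are placeholders.
+
+``` PowerShell
+# Requires Az.OperationalInsights. "AppTraces" is the workspace-based table for Application Insights traces.
+$query = 'AppTraces | where Message contains "YourSearchTerm" | project TimeGenerated, Message, SeverityLevel'
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<WorkspaceGuid>" -Query $query
+$results.Results | Format-Table
+```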
+
+In Visual Studio (local debugger):
+1. Start your application in debug mode within Visual Studio.
+2. Open the **Diagnostic Tools** window while the application runs.
+3. On the **Events** tab, ILogger logs appear along with other telemetry data.
+4. Use the search and filter features in the **Diagnostic Tools** window to locate specific ILogger messages.
If you prefer to always send `TraceTelemetry`, use this snippet:
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
The `mask` action requires the following settings:
* `replace` * `action`: `mask`
-`pattern` can contain a named group placed betwen `?<` and `>:`. Example: `(?<userGroupName>[a-zA-Z.:\/]+)\d+`? The group is `(?<userGroupName>[a-zA-Z.:\/]+)` and `userGroupName` is the name of the group. `pattern` can then contain the same named group placed between `${` and `}` followed by the mask. Example where the mask is **: `${userGroupName}**`.
+`pattern` can contain a named group placed between `?<` and `>:`. Example: `(?<userGroupName>[a-zA-Z.:\/]+)\d+`. The group is `(?<userGroupName>[a-zA-Z.:\/]+)`, and `userGroupName` is the name of the group. `replace` can then contain the same named group placed between `${` and `}`, followed by the mask. Example, where the mask is `**`: `${userGroupName}**`.
See [Telemetry processor examples](./java-standalone-telemetry-processors-examples.md) for masking examples.
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Examples of using the Python logging library can be found on [GitHub](https://gi
Telemetry emitted by Azure SDKs is automatically [collected](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md#officially-supported-instrumentations) by default.

**Footnotes**
- ¹: Supports automatic reporting of *unhandled/uncaught* exceptions
- ²: Supports OpenTelemetry Metrics
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Dependencies | Other Span Types (Client, Internal, etc.)
[!INCLUDE [azure-monitor-app-insights-opentelemetry-support](../includes/azure-monitor-app-insights-opentelemetry-support.md)]
+## Frequently asked questions
+
+#### Where can I find a list of Application Insights SDK versions and their names?
+
+A list of SDK versions and names is hosted on GitHub. For more information, see [SDK Version](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/docs/versions_and_names.md).
+
+## Next steps
+
+Select your enablement approach:
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 08/14/2023 Last updated : 01/10/2024
This article describes the *simplified* communication mode and the associated ne
Simplified compute node communication in Azure Batch is currently available for the following regions: -- **Public**: all public regions where Batch is present except for West India and France South.
+- **Public**: all public regions where Batch is present except for West India.
- **Government**: USGov Arizona, USGov Virginia, USGov Texas. - **China**: all China regions where Batch is present except for China North 1 and China East 1.
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
Title: Use the REST APIs to manage Azure Chaos Studio experiments
-description: Run and manage a chaos experiment with Azure Chaos Studio by using REST APIs.
+ Title: Use REST APIs to interact with Chaos Studio
+description: Create, view, and manage Azure Chaos Studio experiments, targets, and capabilities with REST APIs.
-# Use the Chaos Studio REST APIs to run and manage chaos experiments
+# Use REST APIs to interact with Chaos Studio
-> [!WARNING]
-> Injecting faults can affect your application or service. Be careful not to disrupt customers.
+If you're integrating Azure Chaos Studio into your CI/CD pipelines, or you simply prefer to use direct API calls to interact with your Azure resources, you can use Chaos Studio's REST API. For the full API reference, visit the [Azure Chaos Studio REST API reference](/rest/api/chaosstudio/). This page provides samples for using the REST API effectively, and is not intended as a comprehensive reference.
-The Azure Chaos Studio REST API provides support for starting experiments programmatically. You can also use the Azure Resource Manager client and the Azure CLI to execute these commands from the console. The examples in this article are for the Azure CLI.
+This article assumes you're using [Azure CLI](/cli/azure/install-azure-cli) to execute these commands, but you can adapt them to other standard REST clients.
-> [!Warning]
-> These APIs are still under development and subject to change.
-
-## REST APIs
You can use the Chaos Studio REST APIs to:
-* Start, stop, and manage experiments.
-* View and manage targets.
-* Query experiment status.
-* Query and delete subscription configurations.
+* Create, modify, and delete experiments
+* View, start, and stop experiment executions
+* View and manage targets
+* Register and unregister your subscription with the Chaos Studio resource provider
+* View available resource provider operations
-Use the `AZ CLI` utility to perform these actions from the command line.
+Use the Azure CLI (`az`) utility to perform these actions from the command line.
> [!TIP]
-> To get more verbose output with the AZ CLI, append `--verbose` to the end of each command. This variable returns more metadata when commands execute, including `x-ms-correlation-request-id`, which aids in debugging.
+> To get more verbose output with Azure CLI, append `--verbose` to the end of each command. This flag returns more metadata when commands execute, including `x-ms-correlation-request-id`, which aids in debugging.
+
+These examples have been reviewed with the generally available Chaos Studio API version `2023-11-01`.
-### Chaos Studio provider commands
+## Resource provider commands
-This section lists the Chaos Studio provider commands.
+This section lists the Chaos Studio provider commands, which help you understand the resource provider's status and available operations.
-#### List details about the Microsoft.Chaos resource provider
+### List details about the Microsoft.Chaos resource provider
+
+This shows information such as available API versions for the Chaos resource provider and region availability. The most recent `api-version` required for this may differ from the `api-version` for Chaos resource provider operations.
```azurecli
-az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos?api-version={apiVersion}"
```
-#### List all the operations of the Microsoft.Chaos resource provider
+### List all the operations of the Microsoft.Chaos resource provider
```azurecli
-az rest --method get --url "https://management.azure.com/providers/Microsoft.Chaos/operations?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/providers/Microsoft.Chaos/operations?api-version={apiVersion}"
```
-#### List Chaos provider configurations
+## Targets and capabilities
+
+These operations help you see what [targets and capabilities](chaos-studio-targets-capabilities.md) are available, and add them to a target.
+
+### List all target types available in a region
```azurecli
-az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/?api-version={apiVersion}" --resource "https://management.azure.com" --verbose
+az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/locations/{locationName}/targetTypes?api-version={apiVersion}"
```
-#### Create Chaos provider configuration
+### List all capabilities available for a target type
```azurecli
-az rest --method put --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/microsoft.chaos/chaosProviderConfigurations/{chaosProviderType}?api-version={apiVersion}" --body @{providerSettings.json} --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/locations/{locationName}/targetTypes/{targetType}/capabilityTypes?api-version={apiVersion}"
```
-### Chaos Studio target and agent commands
+### Enable a resource as a target
-This section lists the Chaos Studio target and agent commands.
+To use a resource in an experiment, you need to enable it as a target.
-#### List all the targets or agents under a subscription
+```azurecli
+az rest --method put --url "https://management.azure.com/{resourceId}/providers/Microsoft.Chaos/targets/{targetType}?api-version={apiVersion}" --body "{'properties':{}}"
+```
+
+### Enable capabilities for a target
+
+Once a resource has been enabled as a target, you need to specify what capabilities (corresponding to faults) are allowed.
```azurecli
-az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/chaosTargets/?api-version={apiVersion}" --url-parameter "chaosProviderType={chaosProviderType}" --resource "https://management.azure.com"
+az rest --method put --url "https://management.azure.com/{resourceId}/providers/Microsoft.Chaos/targets/{targetType}/capabilities/{capabilityName}?api-version={apiVersion}" --body "{'properties':{}}"
```
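+
+For illustration, here's a hypothetical pair of calls (shown in PowerShell syntax, with example values drawn from the parameter table at the end of this article) that enables a virtual machine as a target and then enables its Shutdown capability; the VM name and IDs are placeholders.
+
+``` PowerShell
+# Hypothetical resource ID; substitute your own VM's resource ID.
+$resourceId = "/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
+
+# Enable the VM as a Chaos Studio target.
+az rest --method put --url "https://management.azure.com$resourceId/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine?api-version=2023-11-01" --body "{'properties':{}}"
+
+# Enable the Shutdown-1.0 capability on the target.
+az rest --method put --url "https://management.azure.com$resourceId/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/Shutdown-1.0?api-version=2023-11-01" --body "{'properties':{}}"
+```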
-### Chaos Studio experiment commands
+### See what capabilities are enabled for a target
+
+Once a target and capabilities have been enabled, you can view the enabled capabilities. This is useful for constructing your chaos experiment, since it includes the parameter schema for each fault.
+
+```azurecli
+az rest --method get --url "https://management.azure.com/{resourceId}/providers/Microsoft.Chaos/targets/{targetType}/capabilities?api-version={apiVersion}"
+```
-This section lists the Chaos Studio experiment commands.
+## Experiments
-#### List all the experiments in a resource group
+These operations help you view, run, and manage experiments.
+
+### List all the experiments in a resource group
```azurecli
-az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Chaos/chaosExperiments?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Chaos/experiments?api-version={apiVersion}"
```
-#### Get an experiment's configuration details by name
+### Get an experiment's configuration details by name
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/{experimentId}?api-version={apiVersion}"
```
-#### Create or update an experiment
+### Create or update an experiment
```azurecli
-az rest --method put --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --body @{experimentName.json} --resource "https://management.azure.com"
+az rest --method put --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --body @{experimentName.json}
```
-#### Delete an experiment
+### Delete an experiment
```azurecli
-az rest --method delete --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --resource "https://management.azure.com" --verbose
+az rest --method delete --url "https://management.azure.com/{experimentId}?api-version={apiVersion}"
```
-#### Start an experiment
+### Start an experiment
```azurecli az rest --method post --url "https://management.azure.com/{experimentId}/start?api-version={apiVersion}" ```
-#### Get past statuses of an experiment
+### Get all executions of an experiment
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/statuses?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/{experimentId}/executions?api-version={apiVersion}"
```
-#### Get the status of an experiment
+### List the details of a specific experiment execution
+
+If an experiment has failed, this can be used to find error messages and specific targets, branches, steps, or actions that failed.
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/status?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method post --url "https://management.azure.com/{experimentId}/executions/{executionDetailsId}/getExecutionDetails?api-version={apiVersion}"
```
-#### Cancel (stop) an experiment
+### Cancel (stop) an experiment
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/cancel?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method post --url "https://management.azure.com/{experimentId}/cancel?api-version={apiVersion}"
```
-#### List the details of the last two experiment executions
+## Other helpful commands and tips
+
+While these commands don't use the Chaos Studio API specifically, they can be helpful for using Chaos Studio effectively.
+
+### View Chaos Studio resources with Azure Resource Graph
+
+You can use the Azure Resource Graph [REST API](../governance/resource-graph/first-query-rest-api.md) to query resources associated with Chaos Studio, like targets and capabilities.
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/executiondetails?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method post --url "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01" --body "{'query':'chaosresources'}"
+```
+
+Alternatively, you can use Azure Resource Graph's `az cli` [extension](../governance/resource-graph/first-query-azurecli.md).
+
+```azurecli-interactive
+az graph query -q "chaosresources | summarize count() by type"
+```
+
+For example, if you want a summary of all the Chaos Studio targets active in your subscription by resource group, you can use:
+
+```azurecli-interactive
+az graph query -q "chaosresources | where type == 'microsoft.chaos/targets' | summarize count() by resourceGroup"
```
-#### List the details of a specific experiment execution
+### Filtering and querying
+
+Like other Azure CLI commands, you can use the `--query` and `--filter` parameters with the Azure CLI `rest` commands. For example, to see a table of available capability types for a specific target type, use the following command:
```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/executiondetails/{executionDetailsId}?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method get --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Chaos/locations/{locationName}/targetTypes/{targetType}/capabilityTypes?api-version=2023-11-01" --output table --query 'value[].{name:name, faultType:properties.runtimeProperties.kind, urn:properties.urn}'
``` ## Parameter definitions
-| Parameter name | Definition | Lookup |
-| | | |
-| {apiVersion} | Version of the API to use when you execute the command provided | Can be found in the [API documentation](/rest/api/chaosstudio/) |
-| {experimentId} | Azure Resource ID for the experiment | Can be found on the [Chaos Studio Experiment page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
-| {chaosProviderType} | Type or Name of Chaos Studio provider | Available providers can be found in the [List of current Provider Config Types](chaos-studio-fault-providers.md) |
-| {experimentName.json} | JSON that contains the configuration of the chaos experiment | Generated by the user |
-| {subscriptionId} | Subscription ID where the target resource is located | Can be found on the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) |
-| {resourceGroupName} | Name of the resource group where the target resource is located | Can be found on the [Resource groups page](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) |
-| {executionDetailsId} | Execution ID of an experiment execution | Can be found on the [Chaos Studio Experiment page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
+This section describes the parameters used throughout this document and how you can fill them in.
+
+| Parameter name | Definition | Lookup | Example |
+| | | | |
+| {apiVersion} | Version of the API to use when you execute the command provided | Can be found in the [API documentation](/rest/api/chaosstudio/) | `2023-11-01` |
+| {experimentId} | Azure Resource ID for the experiment | Can be found on the [Chaos Studio Experiment page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) or with a [GET call](#list-all-the-experiments-in-a-resource-group) to the `/experiments` endpoint | `/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/my-resource-group/providers/Microsoft.Chaos/experiments/my-chaos-experiment` |
+| {experimentName.json} | JSON that contains the configuration of the chaos experiment | Generated by the user | `experiment.json` (See [a CLI tutorial](chaos-studio-tutorial-service-direct-cli.md) for a full example file) |
+| {subscriptionId} | Subscription ID where the target resource is located | Find in the [Azure portal Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or by running `az account list --output table` | `6b052e15-03d3-4f17-b2e1-be7f07588291` |
+| {resourceGroupName} | Name of the resource group where the target resource is located | Find in the [Resource Groups page](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) or by running `az group list --output table` | `my-resource-group` |
+| {executionDetailsId} | Execution ID of an experiment execution | Find in the [Chaos Studio Experiment page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) or with a [GET call](#get-all-executions-of-an-experiment) to the `/executions` endpoint | `C69E7FCD-1548-47E5-9DDA-92A5DD60E610` |
+| {targetType} | Type of target for the corresponding resource | Find in the [Fault providers list](chaos-studio-fault-providers.md) or a [GET call](#list-all-target-types-available-in-a-region) to the `/locations/{locationName}/targetTypes` endpoint | `Microsoft-VirtualMachine` |
+| {capabilityName} | Name of an individual capability resource, extending a target resource | Find in the [fault reference documentation](chaos-studio-fault-library.md) or with a [GET call](#list-all-capabilities-available-for-a-target-type) to the `capabilityTypes` endpoint | `Shutdown-1.0` |
+| {locationName} | Azure region for a resource or regional endpoint | Find all possible regions for your account with `az account list-locations --output table` | `eastus` |
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
For each endpoint within a call, a distinct call diagnostic log is created for o
| `RecvResolutionHeight` | The average of vertical size of the incoming video stream that is transmitted during a video/screensharing call. It's measured in pixels and is one of the factors that determines the overall resolution and quality of the video stream. The specific resolution used may depend on the capabilities of the devices and network conditions involved in the call. <br><br> The stream quality is considered poor when this value is less than 240 for video stream, or less than 768 for screensharing stream. | `RecvFreezeDurationPerMinuteInMs` | The average freeze duration in milliseconds per minute for incoming video/screensharing stream. Freezes are typically due to bad network condition and can degrade the stream quality. <br><br> The stream quality is considered poor when this value is greater than 6,000 ms for video stream, or greater than 25,000 ms for screensharing stream.
+### Call client operations log schema
++
+The **call client operations** log provides client-side information about the calling endpoints and participants involved in a call. These logs are currently in preview and show the client events that occurred in a call and the actions a customer took during the call.
+
+This log provides detailed information on the actions taken during a call. You can use it to visualize and investigate call issues by using Call Diagnostics for your Azure Communication Services resource. [Learn more about Call Diagnostics](../../voice-video-calling/call-diagnostics.md).
+
+| Property | Description |
+||-|
+| `CallClientTimeStamp` | The timestamp, in UTC, for when an operation occurred on the SDK. |
+| `OperationName` | The name of the operation triggered on the Calling SDK. |
+| `CallId` | The unique ID for a call. It identifies correlated events from all of the participants and endpoints that connect during a single call, and you can use it to join data from different logs. It's similar to the `correlationId` value in the call summary and call diagnostic logs. |
+| `ParticipantId` | The unique identifier for each call leg (in group calls) or call participant (in peer-to-peer calls). This ID is the main correlation point between the CallSummary, CallDiagnostic, CallClientOperations, and CallClientMediaStats logs. |
+| `OperationType` | Call client operation. |
+| `OperationId` | A unique GUID identifying an SDK operation. |
+| `DurationMs` | The time taken by a Calling SDK operation to succeed or fail. |
+| `ResultType` | Field describing the success or failure of an operation. |
+| `ResultSignature` | HTTP-like failure or success code (for example, 200 or 500). |
+| `SdkVersion` | The version of the Calling SDK being used. |
+| `UserAgent` | The standard user agent string, based on the browser or the platform where the Calling SDK is used. |
+| `ClientInstanceId` | A unique GUID identifying the CallClient object. |
+| `EndpointId` | The unique ID that represents each endpoint connected to the call, where endpointType defines the endpoint type. When the value is null, the connected entity is the Communication Services server (endpointType = "Server"). <BR><BR> The endpointId value can sometimes persist for the same user across multiple calls (correlationId) for native clients. The number of endpointId values determines the number of call summary logs. A distinct summary log is created for each endpointId value. |
+| `OperationPayload` | A dynamic payload that varies based on the operation, providing more operation-specific details. |
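+
+Once these logs flow into a Log Analytics workspace through diagnostic settings, you can query them programmatically. The following is a minimal sketch using the `Az.OperationalInsights` PowerShell module; the table name `ACSCallClientOperations` and the workspace GUID are assumptions you should confirm against your own workspace schema, and the `CallId` value is the example from the sample log later in this article.
+
+``` PowerShell
+# Table name is an assumption; confirm it in your workspace before relying on it.
+$query = 'ACSCallClientOperations | where CallId == "92d800c4-abde-40be-91e9-3814ee786b19" | project CallClientTimeStamp, OperationName, ResultType, DurationMs'
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId "<WorkspaceGuid>" -Query $query
+$results.Results | Format-Table
+```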
+
+<!-- ### Call client media stats time series log schema
+
+The **call client media statistics time series** log provides
+client-side information about the media streams between individual
+participants involved in a call. These logs are currently in limited preview and provide detailed time series
+data on the audio, video, and screenshare media steams between
+participants with a default 10-seconds aggregation interval. The logs contain granular time series information about media stream type, direction, codec as well as bitrate properties (for example, max, min, average).
++
+This log provides more detailed information than the Call Diagnostic log
+to understand the quality of media steams between participants. It can be used to
+visualize and investigate quality issues for your calls through Call
+Diagnostics for your Azure Communication Services Resource. [Learn more about Call Diagnostics](../../voice-video-calling/call-diagnostics.md)
++++
+| Property | Description |
+|-|-|
+| `OperationName` | The operation associated with the log record. |
+| `CallId` | The unique ID for a call. It identifies correlated events from all of the participants and endpoints that connect during a single call, and you can use it to join data from different logs. It is similar to the correlationId in call summary log and call diagnostic log. |
+| `CallClientTimeStamp` | The timestamp when the media stats is recorded. |
+| `MetricName` | The name of the media statistics, such as Bitrate, JitterInMs, PacketsPerSecond etc. |
+| `Count` | The number of data points sampled at a given timestamp. |
+| `Sum` | The sum of metric values of all the data points sampled. |
+| `Average` | The average metric value of the data points sampled. Average = Sum / Count |
+| `Minimum` | The minimum of metric values of all the data points sampled. |
+| `Maximum` | The maximum of metric values of all the data points sampled. |
+| `MediaStreamDirection` | The direction of the media stream. It can be send or receive |
+| `MediaStreamType` | The type of the media stream. It can be video, audio or screen. |
+| `MediaStreamCodec` | The codec used to encode/decode the media stream, such as H264, OPUS, VP8 etc. |
+| `ParticipantId` | The unique ID that is generated to represent each endpoint in the call. |
+| `ClientInstanceId` | The unique ID that represents the Call Client object created in the calling SDK. |
+| `EndpointId` | The unique ID that represents each endpoint that is connected to the call. EndpointId can persist for the same user across multiple calls (callIds) for native clients but is unique for every call when the client is a web browser. Note that EndpointId is not currently instrumented in this log. When implemented in future, it will match the values in CallSummary/Diagnostics logs |
+| `RemoteParticipantId` | The unique ID that represents the remote endpoint in the media stream. For example, a user can render multiple video streams for the other users in the same call. Each video stream has a different RemoteParticipantId. |
+| `RemoteEndpointId` | Same as EndpointId, but it represents the user on the remote side of the stream. |
+| `MediaStreamId` | A unique ID that represents each media stream in the call. MediaStreamId is not currently instrumented in clients. When implemented, it will match the streamId column in CallDiagnostics logs. |
+| `AggregationIntervalSeconds` | The time interval for aggregating the media statistics. Currently in calling SDK, the media metrics are sampled every 1 second, and when we report in the log we aggregate all samples every 10 seconds. So each row in this table at most have 10 sampling points. -->
++ ### P2P vs. group calls There are two types of calls, as represented by `callType`: -- **P2P call**: A connection between only two endpoints, with no server endpoint. P2P calls are initiated as a call between those endpoints and are not created as a group call event before the connection.
+- **Peer to Peer (P2P) call**: A connection between only two endpoints, with no server endpoint. P2P calls are initiated as a call between those endpoints and are not created as a group call event before the connection.
:::image type="content" source="../media/call-logs-azure-monitor/p2p-diagram.png" alt-text="Diagram that shows a P2P call across two endpoints.":::
There are two types of calls, as represented by `callType`:
:::image type="content" source="../media/call-logs-azure-monitor/group-call-version-a.png" alt-text="Diagram that shows a group call across multiple endpoints.":::
-## Log structure
+## Log structure
-Azure Communication Services creates two types of logs:
+Azure Communication Services creates the following types of logs:
- **Call summary logs**: Contain basic information about the call, including all the relevant IDs, time stamps, endpoints, and SDK information. For each participant within a call, Communication Services creates a distinct call summary log. If someone rejoins a call, that participant has the same `EndpointId` value but a different `ParticipantId` value. That endpoint can then have two call summary logs. -- **Call diagnostic logs**: Contain information about the stream, along with a set of metrics that indicate quality of experience measurements. For each endpoint within a call (including the server), Communication Services creates a distinct call diagnostic log for each media stream (audio or video, for example) between endpoints.
+- **Call diagnostic logs**: Contain information about the stream, along with a set of metrics that indicate quality of experience measurements. For each `EndpointId` within a call (including the server), Azure Communication Services creates a distinct call diagnostic log for each media stream (audio or video, for example) between endpoints.
++
+- **Call client operations logs**: Contain detailed call client events. These log events are generated for each `EndpointId` in a call, and the number of event logs generated depends on the operations the participant performed during the call.
-In a P2P call, each log contains data that relates to each of the outbound streams associated with each endpoint. In a group call, each stream associated with `endpointType` = `"Server"` creates a log that contains data for the inbound streams. All other streams create logs that contain data for the outbound streams for all non-server endpoints. In group calls, use the `participantId` value as the key to join the related inbound and outbound logs into a distinct participant connection.
+<!--
+
+In a P2P call, each log contains data that relates to each of the outbound streams associated with each endpoint. In a group call, each stream associated with `endpointType` = `"Server"` creates a log that contains data for the inbound streams. All other streams create logs that contain data for the outbound streams for all non-server endpoints. In group calls, use the `participantId` value as the key to join the related inbound and outbound logs into a distinct participant connection. -->
### Example: P2P call
-The following diagram represents two endpoints connected directly in a P2P call. In this example, Communication Services creates two call summary logs (one for each `participantID` value) and four call diagnostic logs (one for each media stream). Each log contains data that relates to the outbound stream of `participantID`.
+The following diagram represents two endpoints connected directly in a P2P call. In this example, Communication Services creates two call summary logs (one for each `participantID` value) and four call diagnostic logs (one for each media stream).
+
+<!-- For Azure Communication Services (ACS) call client participants there will also be a series of call client operations logs and call client media stats time series logs. The exact number of these logs depend on what kind of SDK operations are called and how long the call is. -->
:::image type="content" source="../media/call-logs-azure-monitor/example-1-p2p-call-same-tenant.png" alt-text="Diagram that shows a P2P call within the same tenant."::: ### Example: Group call
-The following diagram represents a group call example with three `participantID` values (which means three participants) and a server endpoint. Values for `endpointId` can potentially appear in multiple participants--for example, when they rejoin a call from the same device. Communication Services creates one call summary log for each `participantID` value. It creates four call diagnostic logs: one for each media stream per `participantID`.
+The following diagram represents a group call example with three `participantId` values (which means three participants) and a server endpoint. The same `endpointId` value can potentially appear for multiple participants, for example, when they rejoin a call from the same device. Communication Services creates one call summary log for each `participantId` value. It creates four call diagnostic logs: one for each media stream per `participantId`.
+
+For Azure Communication Services (ACS) call client participants, the call client operations logs are the same as for P2P calls. For each participant who uses the Calling SDK, there's a series of call client operations logs.
+
+<!-- For Azure Communication Services (ACS) call client participants the call client operations logs and call client media statistics time series logs are the same as P2P calls. For each participant using calling SDK, there will be a series of call client operations logs and call client media statistics time series logs. -->
:::image type="content" source="../media/call-logs-azure-monitor/example-2-group-call-same-tenant.png" alt-text="Diagram that shows a group call within the same tenant.":::
-### Example: Cross-tenant P2P call
+### Example: Cross-tenant P2P call
The following diagram represents two participants across multiple tenants that are connected directly in a P2P call. In this example, Communication Services creates one call summary log for each participant, with redacted OS and SDK versions. Communication Services also creates four call diagnostic logs (one for each media stream). Each log contains data that relates to the outbound stream of `participantId`.

:::image type="content" source="../media/call-logs-azure-monitor/example-3-p2p-call-cross-tenant.png" alt-text="Diagram that shows a cross-tenant P2P call.":::
-### Example: Cross-tenant group call
+### Example: Cross-tenant group call
The following diagram represents a group call example with three `participantId` values across multiple tenants. Communication Services creates one call summary log for each participant with redacted OS and SDK versions. Communication Services also creates four call diagnostic logs that relate to each `participantId` value (one for each media stream).
## Sample data
-### P2P call
+### P2P call
Here are shared fields for all logs in a P2P call:
"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae", ```
-#### Call summary logs
+#### Call summary logs
Call summary logs have shared operation and category information:
Here's a call summary for a PSTN call:
} ```
-#### Call diagnostic logs
+#### Call diagnostic logs
Call diagnostic logs share operation information:
Here's a diagnostic log for an audio stream from a server endpoint to a VoIP endpoint:
"jitterMax": "4", "packetLossRateAvg": "0", ```
+### Call client operations logs for P2P and group calls
+
+For call client operations logs, there's no difference between P2P and group call scenarios, and the number of logs depends on the SDK operations and call duration. The following generic samples show the schema of these logs.
++
+<!-- ### Call client operations log and call client media statistics logs for P2P and group calls
+
+For call client operations log and call client media stats time series log, there is no difference between P2P and group call scenarios and the number of logs depends on the SDK operations and call duration. The following provide some generic samples that show the schema of these logs. -->
-### Error codes
+#### Call client operations log
+
+Here's a call client operations log for the "CreateView" operation:
+
+```json
+"properties": {
+ "TenantId": "4e7403f8-515a-4df5-8e13-59f0e2b76e3a",
+ "TimeGenerated": "2024-01-09T17:06:50.3Z",
+ "CallClientTimeStamp": "2024-01-09T15:07:56.066Z",
+  "OperationName": "CreateView",
+ "CallId": "92d800c4-abde-40be-91e9-3814ee786b19",
+ "ParticipantId": "2656fd6c-6d4a-451d-a1a5-ce1baefc4d5c",
+ "OperationType": "client-api-request",
+ "OperationId": "0d987336-37e0-4acc-aba3-e48741d88103",
+ "DurationMs": "577",
+ "ResultType": "Succeeded",
+ "ResultSignature": "200",
+ "SdkVersion": "1.19.2.2_beta",
+ "UserAgent": "azure-communication-services/1.3.1-beta.1 azsdk-js-communication-calling/1.19.2-beta.2 (javascript_calling_sdk;#clientTag:904f667c-5f25-4729-9ee8-6968b0eaa40b). Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
+ "ClientInstanceId": "d08a3d05-db90-415f-88a7-87ae74edc1dd",
+  "OperationPayload": "{\"StreamType\":\"Video\",\"StreamId\":\"2.0\",\"Source\":\"remote\",\"RemoteParticipantId\":\"remote\"}",
+ "Type": "ACSCallClientOperations"
+}
+```
+Each participant can have many different metrics for a call. The following query can be run in Log Analytics in the Azure portal to list all the possible operations in the call client operations log:
+
+`ACSCallClientOperations | distinct OperationName`
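+
+For example, an illustrative follow-up query (using the `OperationName`, `ResultType`, and `ResultSignature` fields shown in the preceding sample, and assuming failed operations are recorded with the result type `Failed`) counts failed client operations by type: `ACSCallClientOperations | where ResultType == "Failed" | summarize count() by OperationName, ResultSignature`.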
+
+<!-- #### Call client media statistics time series log
+
+The following is an example of media statistics time series log. It shows the participant's Jitter metric for receiving an audio stream at a specific timestamp.
+
+```json
+"properties": {
+ "TenantId": "4e7403f8-515a-4df5-8e13-59f0e2b76e3a",
+ "TimeGenerated": "2024-01-10T07:36:51.771Z",
+ "OperationName": "CallClientMediaStatsTimeSeries" ,
+ "CallId": "92d800c4-abde-40be-91e9-3814ee786b19",
+ "CallClientTimeStamp": "2024-01-09T15:07:56.066Z",
+ "MetricName": "JitterInMs",
+ "Count": "2",
+ "Sum": "34",
+ "Average": "17",
+ "Minimum": "10",
+ "Maximum": "25",
+ "MediaStreamDirection": "recv",
+ "MediaStreamType": "audio",
+ "MediaStreamCodec": "OPUS",
+ "ParticipantId": "2656fd6c-6d4a-451d-a1a5-ce1baefc4d5c",
+ "ClientInstanceId": "d08a3d05-db90-415f-88a7-87ae74edc1dd",
+ "AggregationIntervalSeconds": "10",
+ "Type": "ACSCallClientMediaStatsTimeSeries"
+}
+```
+
+Each participant can have many different media statistics metrics for a call. The following query can be run in Log Analytics in Azure Portal to show all possible metrics in this log:
+
+`ACSCallClientMediaStatsTimeSeries | distinct MetricName` -->
+
+### Error codes
The `participantEndReason` property contains a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, for each endpoint. See [Troubleshooting in Azure Communication Services](../../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes).

## Next steps

- Learn about the [insights dashboard to monitor Voice Calling and Video Calling logs and metrics](/azure/communication-services/concepts/analytics/insights/voice-and-video-insights).
+- To learn best practices for managing your call quality and reliability, see [Improve and manage call quality](../../voice-video-calling/manage-call-quality.md).
+
+
+- To learn how to use call logs to diagnose call quality and reliability issues with Call Diagnostics, see [Call Diagnostics](../../voice-video-calling/call-diagnostics.md).
communication-services Manage Call Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md
The call may have fired a User Facing Diagnostic indicating a severe problem wit
### Request support
-If you encounter quality or reliability issues you are unable to resolve and need support, you can submit a request for technical support. See: [How to create azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request)
+If you encounter quality or reliability issues that you're unable to resolve and need support, you can submit a request for technical support. The more information you can provide in your request, the better. However, you can still submit requests with partial information to start your inquiry. See [How to create Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+ - If you're notified of license requirements while attempting to request technical support, you might need to choose a paid Azure support plan that best aligns to your needs. See [Compare Support Plans](https://azure.microsoft.com/support/plans).
+ - If you prefer not to purchase support, you can use community support. See [Community Support](https://azure.microsoft.com/support/community/).
container-apps Azure Arc Create Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-create-container-app.md
The following example creates a Node.js app.
--name $myContainerApp \ --environment $myConnectedEnvironment \ --environment-type connected \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \ --ingress 'external'
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
New-AzContainerApp @ContainerAppArgs
-Before you run this command, replace `<REGISTRY_CONTAINER_NAME>` with the full name the public container registry location, including the registry path and tag. For example, a valid container name is `mcr.microsoft.com/azuredocs/containerapps-helloworld:latest`.
+Before you run this command, replace `<REGISTRY_CONTAINER_NAME>` with the full name of the public container registry location, including the registry path and tag. For example, a valid container name is `mcr.microsoft.com/k8se/quickstart:latest`.
::: zone-end
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Previously updated : 03/29/2023 Last updated : 01/10/2024 ms.devlang: azurecli
Now that your Azure CLI setup is complete, you can define the environment variab
## Create a resource group

```azurepowershell
-az group create --location centralus --resource-group name my-container-apps
+az group create --location centralus --resource-group my-container-apps
```

## Create and deploy the container app
az containerapp up \
--resource-group my-container-apps \ --location centralus \ --environment 'my-container-apps' \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \ --ingress external \ --query properties.configuration.ingress.fqdn
az containerapp up `
--resource-group my-container-apps ` --location centralus ` --environment my-container-apps `
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
+ --image mcr.microsoft.com/k8se/quickstart:latest `
--target-port 80 ` --ingress external ` --query properties.configuration.ingress.fqdn
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
az containerapp create \
--name $CONTAINERAPP_NAME \ --resource-group $RESOURCE_GROUP \ --environment $CONTAINERAPPS_ENVIRONMENT \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \ --ingress external ```
az containerapp create \
```powershell
$ImageParams = @{
    Name = "my-container-app"
- Image = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
+ Image = "mcr.microsoft.com/k8se/quickstart:latest"
} $TemplateObj = New-AzContainerAppTemplateObject @ImageParams $EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Previously updated : 12/13/2021 Last updated : 01/10/2024
In this quickstart, you create a secure Container Apps environment and deploy yo
## Prerequisites
-An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Also, please make sure to have the Resource Provider "Microsoft.App" registered.
+- An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Register the `Microsoft.App` resource provider.
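+
+  One way to register the provider, assuming you use the Azure CLI, is to run `az provider register --namespace Microsoft.App`.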
## Setup

<!-- Create -->
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
-7. Select the **Create** button at the bottom of the *Create Container App Environment* page.
+7. Select the **Container** tab.
+
+8. Select the *Use quickstart image* checkbox.
+
+9. Select the **Create** button at the bottom of the *Create Container Apps Environment* page.
<!-- Deploy the container app -->
[!INCLUDE [container-apps-create-portal-deploy.md](../../includes/container-apps-create-portal-deploy.md)]

### Verify deployment
-Select **Go to resource** to view your new container app. Select the link next to *Application URL* to view your application. You'll see the following message in your browser.
+Select **Go to resource** to view your new container app.
+
+Select the link next to *Application URL* to view your application. The following message appears in your browser.
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Your first Azure Container Apps deployment.":::
If you're not going to continue to use this application, you can delete the Azur
1. Select the **my-container-apps** resource group from the *Overview* section.
1. Select the **Delete resource group** button at the top of the resource group *Overview*.
1. Enter the resource group name **my-container-apps** in the *Are you sure you want to delete "my-container-apps"* confirmation dialog.
-1. Select **Delete**.
- The process to delete the resource group may take a few minutes to complete.
+1. Select **Delete**.
+
+ The process to delete the resource group could take a few minutes to complete.
> [!TIP]
> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Replace the \<Placeholders\> with your values.
```azurepowershell
$ImageParams = @{
    Name = '<ContainerName>'
- Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
+ Image = 'mcr.microsoft.com/k8se/quickstart'
} $TemplateObj = New-AzContainerAppTemplateObject @ImageParams
container-apps Tutorial Deploy First App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-deploy-first-app-cli.md
az containerapp create \
--name my-container-app \ --resource-group $RESOURCE_GROUP \ --environment $CONTAINERAPPS_ENVIRONMENT \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \ --ingress 'external' \ --query properties.configuration.ingress.fqdn
By setting `--ingress` to `external`, you make the container app available to pu
```azurepowershell
$ImageParams = @{
    Name = 'my-container-app'
- Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
+ Image = 'mcr.microsoft.com/k8se/quickstart:latest'
} $TemplateObj = New-AzContainerAppTemplateObject @ImageParams $EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
container-apps Tutorial Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-scaling.md
az containerapp up \
--resource-group my-container-apps \ --location centralus \ --environment 'my-container-apps' \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \ --ingress external \ --query properties.configuration.ingress.fqdn \
az containerapp up `
--resource-group my-container-apps ` --location centralus ` --environment my-container-apps `
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
+ --image mcr.microsoft.com/k8se/quickstart:latest `
--target-port 80 ` --ingress external ` --query properties.configuration.ingress.fqdn `
container-apps Workload Profiles Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-cli.md
Use the following commands to create a workload profiles environment.
--name "<CONTAINER_APP_NAME>" \ --target-port 80 \ --ingress external \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--environment "<ENVIRONMENT_NAME>" \ --workload-profile-name "Consumption" ```
Use the following commands to create a workload profiles environment.
--name "<CONTAINER_APP_NAME>" \ --target-port 80 \ --ingress internal \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --image mcr.microsoft.com/k8se/quickstart:latest \
--environment "<ENVIRONMENT_NAME>" \ --workload-profile-name "Consumption" ```
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
The following steps show how to create a self-signed certificate for testing pur
"keyType": "RSA", "reuseKey": true },
+ "secretProperties": {
+ "contentType": "application/x-pem-file"
+ },
"x509CertificateProperties": { "ekus": [ "1.3.6.1.5.5.7.3.3"
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 01/02/2024 Last updated : 01/08/2024 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.

### December 2023
+* Preview: [PgBouncer](./concepts-connection-pool.md) is now supported with [Microsoft Entra ID authentication](./concepts-authentication.md#microsoft-entra-id-authentication-preview).
* General availability: Azure Cosmos DB for PostgreSQL is now available in Poland Central and South India.
  * See [all supported regions](./resources-regions.md).
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
Previously updated : 01/03/2024 Last updated : 01/08/2024 # Azure Cosmos DB for PostgreSQL limits and limitations
currently **not supported**:
If [Microsoft Entra ID](./concepts-authentication.md#azure-active-directory-authentication-preview) is enabled on an Azure Cosmos DB for PostgreSQL cluster, the following is currently **not supported**:

* PostgreSQL 11, 12, and 13
-* PgBouncer
* Microsoft Entra groups

### Database creation
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
Azure Enterprise users can convert from a Microsoft Account (MSA or Live ID) to
To begin:

1. Add the work or school account to the Azure EA Portal in the role(s) needed.
-1. If you get errors, the account may not be valid in the active directory. Azure uses User Principal Name (UPN), which isn't always identical to the email address.
+1. If you get errors, the account may not be valid in Microsoft Entra ID. Azure uses User Principal Name (UPN), which isn't always identical to the email address.
1. Authenticate to the Azure EA portal using the work or school account.

### To convert subscriptions from Microsoft accounts to work or school accounts:
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-troubleshoot.md
You use the Azure EA portal to grant access to users with different authenticati
### Authentication level types

- Microsoft Account Only - For organizations that want to use, create, and manage users through Microsoft accounts.
-- Work or School Account - For organizations that have set up Active Directory with Federation to the Cloud and all accounts are on a single tenant.
-- Work or School Account Cross Tenant - For organizations that have set up Active Directory with Federation to the Cloud and will have accounts in multiple tenants.
+- Work or School Account - For organizations that have set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.
+- Work or School Account Cross Tenant - For organizations that have set up Microsoft Entra ID with federation to the cloud and will have accounts in multiple tenants.
- Mixed Account - Allows you to add users with Microsoft Account and/or with a Work or School Account. The first work or school account added to the enrollment determines the _default_ domain. To add a work or school account with another tenant, you must change the authentication level under the enrollment to cross-tenant authentication.
To update the Authentication Level:
Microsoft accounts must have an associated ID created at [https://signup.live.com](https://signup.live.com/).
-Work or school accounts are available to organizations that have set up Active Directory with federation and where all accounts are on a single tenant. Users can be added with work or school federated user authentication if the company's internal Active Directory is federated.
+Work or school accounts are available to organizations that have set up Microsoft Entra ID with federation and where all accounts are on a single tenant. Users can be added with work or school federated user authentication if the company's internal Microsoft Entra ID is federated.
-If your organization doesn't use Active Directory federation, you can't use your work or school email address. Instead, register or create a new email address and register it as a Microsoft account.
+If your organization doesn't use Microsoft Entra ID federation, you can't use your work or school email address. Instead, register or create a new email address and register it as a Microsoft account.
## Unable to access the Azure EA portal
If you get an error message when you try to sign in to the Azure EA portal, use
- Ensure that you're using the correct Azure EA portal URL, which is https://ea.azure.com.
- Determine if your access to the Azure EA portal was added as a work or school account or as a Microsoft account.
- - If you're using your work account, enter your work email and work password. Your work password is provided by your organization. You can check with your IT department about how to reset the password if you've issues with it.
+ - If you're using your work account, enter your work email and work password. Your work password is provided by your organization. You can check with your IT department about how to reset the password if you have issues with it.
  - If you're using a Microsoft account, enter your Microsoft account email address and password. If you've forgotten your Microsoft account password, you can reset it at [https://account.live.com/password/reset](https://account.live.com/password/reset).
- Use an in-private or incognito browser session to sign in so that no cookies or cached information from previous or existing sessions are kept. Clear your browser's cache and use an in-private or incognito window to open https://ea.azure.com.
- If you get an _Invalid User_ error when using a Microsoft account, it might be because you have multiple Microsoft accounts. The one that you're trying to sign in with isn't the primary email address. Or, if you get an _Invalid User_ error, it might be because the wrong account type was used when the user was added to the enrollment. For example, a work or school account instead of a Microsoft account. In this example, you have another EA admin add the correct account or you need to contact [support](https://support.microsoft.com/supportforbusiness/productselection?sapId=cf791efa-485b-95a3-6fad-3daf9cd4027c).
- If you need to check the primary alias, go to [https://account.live.com](https://account.live.com). Then, select **Your Info** and then select **Manage how to sign in to Microsoft**. Follow the prompts to verify an alternate email address and obtain a code to access sensitive information. Enter the security code. Select **Set it up later** if you don't want to set up two-factor authentication.
- - You'll see the **Manage how to sign in to Microsoft** page where you can view your account aliases. Check that the primary alias is the one that you're using to sign in to the Azure EA portal. If it isn't, you can make it your primary alias. Or, you can use the primary alias for Azure EA portal instead.
+ - You see the **Manage how to sign in to Microsoft** page where you can view your account aliases. Check that the primary alias is the one that you're using to sign in to the Azure EA portal. If it isn't, you can make it your primary alias. Or, you can use the primary alias for Azure EA portal instead.
## Next steps
cost-management-billing Manage Azure Subscription Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-azure-subscription-policy.md
This article helps you configure Azure subscription policies for subscription op
Use the following policy settings to control the movement of Azure subscriptions from and into directories.
-### Subscriptions leaving AAD directory
+### Subscriptions leaving a Microsoft Entra ID directory
The policy allows or stops users from moving subscriptions out of the current directory. [Subscription owners](../../role-based-access-control/built-in-roles.md#owner) can [change the directory of an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) to another one where they're a member. It poses governance challenges, so global administrators can allow or disallow directory users from changing the directory.
-### Subscriptions entering AAD directory
+### Subscriptions entering a Microsoft Entra ID directory
The policy allows or stops users from other directories, who have access in the current directory, to move subscriptions into the current directory. [Subscription owners](../../role-based-access-control/built-in-roles.md#owner) can [change the directory of an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) to another one where they're a member. It poses governance challenges, so global administrators can allow or disallow directory users from changing the directory.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You have four options to scope a reservation, depending on your needs:
- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription is moved to different billing context, the benefit no longer applies to the subscription. It continues to apply to other subscriptions in the billing context.
- - For Enterprise Agreement customers, the billing context is the enrollment. The reservation shared scope would include multiple Active Directory tenants in an enrollment.
+ - For Enterprise Agreement customers, the billing context is the enrollment. The reservation shared scope would include multiple Microsoft Entra tenants in an enrollment.
  - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
  - For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.
- **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. The management group scope applies to all subscriptions throughout the entire management group hierarchy. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription.
cost-management-billing Prepay Sql Data Warehouse Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-sql-data-warehouse-charges.md
For pricing information, see the [Azure Synapse Analytics reserved capacity offe
You can buy Azure Synapse Analytics reserved capacity in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). Pay for the reservation [up front or with monthly payments](./prepare-buy-reservation.md). To buy reserved capacity: - You must have the owner role for at least one enterprise, Pay-As-You-Go, or Microsoft Customer Agreement subscription.-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin to enable it. Direct Enterprise customers can update the **Reserved Instances** setting ine the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the Policies menu to change settings.
+- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin to enable it. Direct Enterprise customers can update the **Reserved Instances** setting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the Policies menu to change settings.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Synapse Analytics reserved capacity.
For example, assume your total consumption of Azure Synapse Analytics is DW3000c
- **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
- **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. For Enterprise Agreement customers, the billing context is the enrollment. For individual subscriptions with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.
- - For enterprise customers, the billing context is the EA enrollment. The reservation shared scope would include multiple Active Directory Microsoft Entra tenants in an enrollment.
+ - For enterprise customers, the billing context is the EA enrollment. The reservation shared scope would include multiple Microsoft Entra tenants in an enrollment.
  - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
  - For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.
- **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
cost-management-billing Scope Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/scope-savings-plan.md
You have the following options to scope a savings plan, depending on your needs:
- **Single resource group scope** - Applies the savings plan benefit to the eligible resources in the selected resource group only.
- **Single subscription scope** - Applies the savings plan benefit to the eligible resources in the selected subscription.
- **Shared scope** - Applies the savings plan benefit to eligible resources within subscriptions that are in the billing context. If a subscription was moved to different billing context, the benefit will no longer be applied to this subscription and will continue to apply to other subscriptions in the billing context.
- - For Enterprise Agreement customers, the billing context is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment.
+ - For Enterprise Agreement customers, the billing context is the enrollment. The savings plan shared scope would include multiple Microsoft Entra tenants in an enrollment.
  - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
- **Management group** - Applies the savings plan benefit to eligible resources in the list of subscriptions that are a part of both the management group and billing scope. To buy a savings plan for a management group, you must have at least read permission on the management group and be a savings plan owner on the billing subscription.
data-factory Airflow Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-configurations.md
Title: Apache Airflow configuration options on Azure Managed Airflow
-description: Apache Airflow configuration options can be attached to your Azure Managed Integration Runtimes for Apache Airflow environment as key value pairs
+ Title: Apache Airflow configuration options on Managed Airflow
+description: This article explains how Apache Airflow configuration options can be attached to your Managed Airflow integration runtimes for an Apache Airflow environment as key-value pairs.
Last updated 12/11/2023
-# Apache Airflow configuration options on Azure Managed Airflow
+# Apache Airflow configuration options on Managed Airflow
-Apache Airflow configuration options can be attached to your Azure Managed Integration runtime as key value pairs. While we don't expose the `airflow.cfg` in Managed Airflow UI, users can override the Apache Airflow configurations directly on UI as key value pairs under `Airflow Configuration overrides` section and continue using all other settings in `airflow.cfg`. In Azure Managed Airflow, developers can override any of the Airflow configurations provided by Apache Airflow except the following shown in the table.
+Apache Airflow configuration options can be attached to your Azure Data Factory Managed Airflow integration runtime as key-value pairs. We don't expose the `airflow.cfg` in the Managed Airflow UI. However, users can override the Apache Airflow configurations directly on the UI as key-value pairs under the `Airflow Configuration overrides` section. They can continue using all other settings in `airflow.cfg`. In Managed Airflow, developers can override any of the Airflow configurations provided by Apache Airflow except the configurations shown in the table.
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-For the Airflow configurations reference, see [Airflow Configurations](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html)
+For more information on Apache Airflow configurations, see [Configuration Reference](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html).
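+
+For example (an illustrative key-value pair, not one taken from this article), to change how often the scheduler parses DAG files, you could add the key `AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL` with the value `60` in the `Airflow Configuration overrides` section. The key naming follows the same `AIRFLOW__{SECTION}__{KEY}` convention used in the following table.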
-**The following table contains the list of configurations does not support overrides.**
+The following table contains the list of configurations that don't support overrides.
-|Configuration |Description | Default Value
+|Configuration |Description | Default value
||||
|[AIRFLOW__CELERY__FLOWER_URL_PREFIX](https://airflow.apache.org/docs/apache-airflow-providers-celery/stable/configurations-ref.html#flower-url-prefix) |The root URL for Flower. |"" |
|[AIRFLOW__CORE__DAGS_FOLDER](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dags-folder) |The path of the folder where Airflow pipelines live.|AIRFLOW_DAGS_FOLDER |
-|[AIRFLOW__CORE__DONOT_PICKLE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#donot-pickle) |Whether to disable pickling dags. |false |
-|[AIRFLOW__CORE__ENABLE_XCOM_PICKLING](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#enable-xcom-pickling) |Whether to enable pickling for xcom. |false |
-|[AIRFLOW__CORE__EXECUTOR](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#executor) |The executor class that airflow should use. |CeleryExecutor |
-|[AIRFLOW__CORE__FERNET_KEY](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#fernet-key) |Secret key to save connection passwords in the db. |AIRFLOW_FERNET_KEY |
-|[AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dags-are-paused-at-creation) |Are DAGs paused by default at creation |False |
-|[AIRFLOW__CORE__PLUGINS_FOLDER](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#plugins-folder) |Path to the folder containing Airflow plugins. |AIRFLOW_PLUGINS_FOLDER |
-|[AIRFLOW__LOGGING__BASE_LOG_FOLDER](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#base-log-folder) |The folder where airflow should store its log files.|/opt/airflow/logs |
-|[AIRFLOW__LOGGING__LOG_FILENAME_TEMPLATE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#log-filename-template) |Formatting for how airflow generates file names/paths for each task run. |{{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log |
-|[AIRFLOW__LOGGING__DAG_PROCESSOR_MANAGER_LOG_LOCATION](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-processor-manager-log-location) |Full path of dag_processor_manager logfile. |/opt/airflow/logs/dag_processor_manager/dag_processor_manager.log |
-|[AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#logging-config-class) |Logging config class specifies the logging configuration. This class has to be on the python classpath. |log_config.LOGGING_CONFIG |
+|[AIRFLOW__CORE__DONOT_PICKLE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#donot-pickle) |Whether to disable pickling DAGs. |False |
+|[AIRFLOW__CORE__ENABLE_XCOM_PICKLING](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#enable-xcom-pickling) |Whether to enable pickling for xcom. |False |
+|[AIRFLOW__CORE__EXECUTOR](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#executor) |The executor class that Airflow should use. |CeleryExecutor |
+|[AIRFLOW__CORE__FERNET_KEY](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#fernet-key) |Secret key to save connection passwords in the database. |AIRFLOW_FERNET_KEY |
+|[AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dags-are-paused-at-creation) |Are DAGs paused by default at creation? |False |
+|[AIRFLOW__CORE__PLUGINS_FOLDER](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#plugins-folder) |Path to the folder that contains Airflow plugins. |AIRFLOW_PLUGINS_FOLDER |
+|[AIRFLOW__LOGGING__BASE_LOG_FOLDER](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#base-log-folder) |The folder where Airflow should store its log files.|/opt/airflow/logs |
+|[AIRFLOW__LOGGING__LOG_FILENAME_TEMPLATE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#log-filename-template) |Formatting for how Airflow generates file names or paths for each task run. |{{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log |
+|[AIRFLOW__LOGGING__DAG_PROCESSOR_MANAGER_LOG_LOCATION](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-processor-manager-log-location) |Full path of the `dag_processor_manager` log file. |/opt/airflow/logs/dag_processor_manager/dag_processor_manager.log |
+|[AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#logging-config-class) |Logging config class specifies the logging configuration. This class has to be on the Python class path. |log_config.LOGGING_CONFIG |
|[AIRFLOW__LOGGING__COLORED_LOG_FORMAT](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#colored-log-format) |Log format for when Colored logs is enabled. |[%(asctime)s] {{%(filename)s:%(lineno)d}} %(levelname)s - %(message)s |
|[AIRFLOW__LOGGING__LOGGING_LEVEL](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#logging-level) |Logging level. |INFO |
|[AIRFLOW__METRICS__STATSD_ON](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#statsd-on) |Enables sending metrics to StatsD. |True |
For the Airflow configurations reference, see [Airflow Configurations](https://a
|[AIRFLOW__METRICS__STATSD_PORT](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#statsd-port) |Port number of the StatsD server. |8125 |
|AIRFLOW__METRICS__STATSD_PREFIX |Prefix for all Airflow metrics sent to StatsD. |AirflowMetrics|
|[AIRFLOW__SCHEDULER__CHILD_PROCESS_LOG_DIRECTORY](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#child-process-log-directory) |Path of the directory where the Airflow scheduler writes its child process logs. |/opt/airflow/logs/scheduler |
-|[AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-dir-list-interval) |How often (in seconds) to scan the DAGs'directory for new files. Default to 5 minutes. |5|
-|[AIRFLOW__WEBSERVER__BASE_URL](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#webserver) |The base url of your website as airflow cannot guess what domain or cname you are using. This url is used in automated emails that airflow sends to point links to the right web server. |https://localhost:8080 |
-|[AIRFLOW__WEBSERVER__COOKIE_SAMESITE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#cookie-samesite) |Set samesite policy on session cookie |None |
-|[AIRFLOW__WEBSERVER__COOKIE_SECURE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#cookie-secure) |Set secure flag on session cookie |True |
+|[AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#dag-dir-list-interval) |How often (in seconds) to scan the DAGs' directory for new files. The default is 5 minutes. |5|
+|[AIRFLOW__WEBSERVER__BASE_URL](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#webserver) |The base URL of your website because Airflow can't guess what domain or cname you're using. This URL is used in automated emails that Airflow sends to point links to the right web server. |https://localhost:8080 |
+|[AIRFLOW__WEBSERVER__COOKIE_SAMESITE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#cookie-samesite) |Set samesite policy on session cookie. |None |
+|[AIRFLOW__WEBSERVER__COOKIE_SECURE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#cookie-secure) |Set secure flag on session cookie. |True |
|[AIRFLOW__WEBSERVER__EXPOSE_CONFIG](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#expose-config) |Expose the configuration file in the web server. |False |
-|AIRFLOW__WEBSERVER__AUTHENTICATE |Authenticate user to login into Airflow UI. |True |
+|AIRFLOW__WEBSERVER__AUTHENTICATE |Authenticate user to sign in to the Airflow UI. |True |
|AIRFLOW__WEBSERVER__AUTH_BACKEND ||airflow.api.auth.backend.basic_auth |
-|[AIRFLOW__WEBSERVER__RELOAD_ON_PLUGIN_CHANGE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#reload-on-plugin-change) |If set to True, Airflow tracks files in plugins_folder directory. When it detects changes, then reload the gunicorn. |True |
+|[AIRFLOW__WEBSERVER__RELOAD_ON_PLUGIN_CHANGE](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#reload-on-plugin-change) |If set to True, Airflow tracks files in the `plugins_folder` directory. When it detects changes, it reloads gunicorn. |True |
+|[AIRFLOW__WEBSERVER__SECRET_KEY](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key) |Secret key used to run your Flask app. |AIRFLOW_FERNET_KEY |
-|[AIRFLOW__API__AUTH_BACKEND](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#auth-backends) |Comma separated list of auth backends to authenticate users of the API. |airflow.api.auth.backend.basic_auth |
+|[AIRFLOW__API__AUTH_BACKEND](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#auth-backends) |Comma-separated list of auth backends to authenticate users of the API. |airflow.api.auth.backend.basic_auth |
|[AIRFLOW__API__ENABLE_EXPERIMENTAL_API](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#enable-experimental-api) ||True |
data-factory Airflow Install Private Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-install-private-package.md
+
+ Title: Install a private package
+description: This article provides step-by-step instructions on how to install a private package in a Managed Airflow environment.
+++++ Last updated : 09/23/2023++
+# Install a private package
++
+A Python package is a way to organize related Python modules into a single directory hierarchy. A package is typically represented as a directory that contains a special file called `__init__.py`. Inside a package directory, you can have multiple Python module files (.py files) that define functions, classes, and variables.
+In the context of Managed Airflow, you can create packages to add your custom code.
+
+This guide provides step-by-step instructions on installing a `.whl` (wheel) file, which serves as a binary distribution format for a Python package, in your Managed Airflow runtime.
+
+For illustration purposes, this guide creates a simple custom operator as a Python package that can be imported as a module inside a DAG file.
+
+### Step 1: Develop a custom operator and a file to test it.
+- Create a file `sample_operator.py`
+```python
+from airflow.models.baseoperator import BaseOperator
++
+class SampleOperator(BaseOperator):
+ def __init__(self, name: str, **kwargs) -> None:
+ super().__init__(**kwargs)
+ self.name = name
+
+ def execute(self, context):
+ message = f"Hello {self.name}"
+ return message
+```
+
+- To create a Python package for this file, refer to the guide [Creating a package in Python](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/modules_management.html#creating-a-package-in-python). A minimal packaging sketch also follows the DAG example below.
+
+- Create a DAG file, `sample_dag.py`, to test the operator defined in step 1.
+```python
+from datetime import datetime
+from airflow import DAG
+
+from airflow_operator.sample_operator import SampleOperator
++
+with DAG(
+ "test-custom-package",
+    tags=["example"],
+ description="A simple tutorial DAG",
+ schedule_interval=None,
+ start_date=datetime(2021, 1, 1),
+) as dag:
+ task = SampleOperator(task_id="sample-task", name="foo_bar")
+
+ task
+```
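+
+The following is a minimal packaging sketch, not part of the original walkthrough, that shows one way to produce the `.whl` file for the `airflow_operator` package imported in `sample_dag.py`. It assumes the layout `airflow_operator/__init__.py` and `airflow_operator/sample_operator.py`, and it uses standard `setuptools`; the distribution name and version are placeholders.
+
+```python
+# setup.py - illustrative packaging script (assumed layout:
+#   airflow_operator/
+#       __init__.py
+#       sample_operator.py)
+from setuptools import setup, find_packages
+
+setup(
+    name="airflow-operator",   # placeholder distribution name
+    version="0.0.1",           # placeholder version
+    packages=find_packages(),  # discovers the airflow_operator package
+)
+```
+
+Running `pip wheel . --no-deps` (or `python -m build --wheel`, if the `build` package is installed) in the project root then produces a wheel file such as `airflow_operator-0.0.1-py3-none-any.whl`, which you can upload in the following steps.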
+
+### Step 2: Create a storage container.
+
+Use the steps described in [Manage blob containers using the Azure portal](/azure/storage/blobs/blob-containers-portal) to create a storage account where you upload your DAG and package files.
+
+### Step 3: Upload the private package into your storage account.
+
+1. Navigate to the designated container where you intend to store your Airflow DAGs and Plugins files.
+1. Upload your private package file to the container. Common file formats include `.zip`, `.whl`, or `.tar.gz`. Place the file within either the 'Dags' or 'Plugins' folder, as appropriate.
+
+### Step 4: Add your private package as a requirement.
+
+Add your private package as a requirement in the requirements.txt file. Add this file if it doesn't already exist. For Git-sync, you need to add all the requirements in the UI itself.
+
+- **Blob Storage -**
+Be sure to prepend the prefix "**/opt/airflow/**" to the package path. For instance, if your private package resides at "**/dags/test/private.whl**", your requirements.txt file should contain the requirement "**/opt/airflow/dags/test/private.whl**".
+
+- **Git Sync -**
+For all the Git services, prepend the "**/opt/airflow/git/`<repoName>`.git/**" to the package path. For example, if your private package is in "**/dags/test/private.whl**" in a GitHub repo, then you should add the requirement "**/opt/airflow/git/`<repoName>`.git/dags/test/private.whl**" to the Airflow environment.
+
+- **ADO -**
+For ADO, prepend the "**/opt/airflow/git/`<repoName>`/**" to the package path.
+
+### Step 5: Import your folder to an Airflow integration runtime (IR) environment.
+
+When you import your folder into an Airflow IR environment, make sure you select the import requirements checkbox to load your requirements inside your Airflow environment.
+++
+### Step 6: In the Airflow UI, run the DAG file created in step 1 to check whether the import is successful.
++
+## Next steps
+
+- [What is Azure Data Factory Managed Airflow?](concept-managed-airflow.md)
+- [Run an existing pipeline with Airflow](tutorial-run-existing-pipeline-with-airflow.md)
data-factory Ci Cd Pattern With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-pattern-with-airflow.md
Title: CI/CD Patterns with Azure Managed Airflow
-description: This document talks about recommended deployment patterns with Azure Managed Airflow
+ Title: CI/CD patterns with Managed Airflow
+description: This article talks about recommended deployment patterns with Managed Airflow.
Last updated 10/17/2023
-# CI/CD Patterns with Azure Managed Airflow
+# CI/CD patterns with Managed Airflow
-Azure Data Factory's Managed Airflow service is a simple and efficient way to create and manage Apache Airflow environments, enabling you to run data pipelines at scale with ease. There are two primary methods to run directed acyclic graphs (DAGs) in Azure Managed Airflow. You can either upload the DAG files in your blob storage and link them with the Airflow environment.
-Alternatively, you can use the Git-sync feature to automatically sync your Git repository with the Airflow environment.
+Azure Data Factory Managed Airflow provides a simple and efficient way to create and manage Apache Airflow environments. The service enables you to run data pipelines at scale with ease. There are two primary methods to run directed acyclic graphs (DAGs) in Managed Airflow. You can upload the DAG files in your blob storage and link them with the Airflow environment. Alternatively, you can use the Git-sync feature to automatically sync your Git repository with the Airflow environment.
-Working with data pipelines in Airflow requires you to create or update your DAGs, plugins and requirement files frequently, based upon your workflow needs. While developers can manually upload or edit DAG files in blob storage, many organizations prefer to use a CI/CD approach for code deployment. Therefore, this guide walks you through the recommended deployment patterns to seamlessly integrate and deploy your Apache Airflow DAGs with the Azure Managed Airflow service. 
+Working with data pipelines in Airflow requires you to create or update your DAGs, plugins, and requirement files frequently, based on your workflow needs. Although developers can manually upload or edit DAG files in blob storage, many organizations prefer to use a continuous integration and continuous delivery (CI/CD) approach for code deployment. This article walks you through the recommended deployment patterns to seamlessly integrate and deploy your Apache Airflow DAGs with Managed Airflow.
-## Understanding CI/CD 
+## Understand CI/CD
-### Continuous Integration (CI) 
+To integrate and deploy your Apache Airflow DAGs by using Managed Airflow, it's important to understand continuous integration and continuous delivery.
-Continuous Integration (CI) is a software development practice that emphasizes frequent and automated integration of code changes into a shared repository. It involves developers regularly committing their code, and upon each commit, an automated CI pipeline builds the code, runs tests, and performs validation checks. The primary goal is to detect and address integration issues early in the development process, providing rapid feedback to developers. CI ensures that the codebase remains in a constantly testable and deployable state. This practice leads to enhanced code quality, collaboration, and the ability to catch and fix bugs before they become significant problems. 
+### Continuous integration
-### Continuous Deployment (CD)
+Continuous integration is a software development practice that emphasizes frequent and automated integration of code changes into a shared repository. It involves developers regularly committing their code, and upon each commit, an automated CI pipeline builds the code, runs tests, and performs validation checks. The primary goals are to detect and address integration issues early in the development process and provide rapid feedback to developers.
-Continuous Deployment (CD) is an extension of CI that takes the automation one step further. While CI focuses on automating the integration and testing phases, CD automates the deployment of code changes to production or other target environments. This practice helps organizations release software updates quickly and reliably. It reduces mistakes in manual deployment and ensures that approved code changes are delivered to end-users swiftly. 
+CI ensures that the codebase remains in a constantly testable and deployable state. This practice leads to enhanced code quality, collaboration, and the ability to catch and fix bugs before they become significant problems.
-## CI/CD Workflow Within Azure Managed Airflow: 
+### Continuous delivery
-#### Git-sync with Dev/QA IR: Map your Managed Airflow environment with your Git repository’s Development/QA branch. 
+Continuous delivery is an extension of CI that takes the automation one step further. While CI focuses on automating the integration and testing phases, CD automates the deployment of code changes to production or other target environments. This practice helps organizations release software updates quickly and reliably. It reduces mistakes in manual deployment and ensures that approved code changes are delivered to users swiftly.
-**CI Pipeline with Dev/QA IR:**
+## CI/CD workflow within Managed Airflow
-When a pull request (PR) is made from a feature branch to the Development branch, it triggers a PR pipeline. This pipeline is designed to efficiently perform quality checks on your feature branches, ensuring code integrity and reliability. The following types of checks can be included in the pipeline: 
-- **Python Dependencies Testing**: These tests install and verify the correctness of Python dependencies to ensure that the project's dependencies are properly configured. -- **Code Analysis and Linting:** Tools for static code analysis and linting are applied to evaluate code quality and adherence to coding standards. -- **Airflow DAG’s Tests:** These tests execute validation tests, including tests for the DAG definition and unit tests designed for Airflow DAGs. -- **Unit Tests for Airflow custom operators, hooks, sensors and triggers.**
-If any of these checks fail, the pipeline terminates, signaling that the developer needs to address the issues identified. 
+### Git sync with Dev/QA integration runtime
-#### Git-sync with Production IR: Map your Managed Airflow environment with your Git repository’s Production branch. 
+Map your Managed Airflow environment with your Git repository's development/QA branch.
-**PR pipeline with Prod IR:** 
+#### CI pipeline with Dev/QA integration runtime
-It's considered a best practice to maintain a separate production environment to prevent every development feature from becoming publicly accessible. 
-Once the Feature branch successfully merges into Development branch, you can create a pull request to the production branch in order to make your newly merged feature public. This pull request triggers the PR pipeline that conducts rapid quality checks on the Development branch to ensure that all features have been integrated correctly and there are no errors in the production environment.
+When a pull request (PR) is made from a feature branch to the development branch, it triggers a PR pipeline. This pipeline is designed to efficiently perform quality checks on your feature branches, ensuring code integrity and reliability. You can include the following types of checks in the pipeline:
-### Benefits of using CI/CD workflow in Managed Airflow 
+- **Python dependencies testing:** These tests install and verify the correctness of Python dependencies to ensure that the project's dependencies are properly configured.
+- **Code analysis and linting:** Tools for static code analysis and linting are applied to evaluate code quality and adherence to coding standards.
+- **Airflow DAG tests:** These tests execute validation tests, including tests for the DAG definition and unit tests designed for Airflow DAGs.
+- **Unit tests for Airflow custom operators, hooks, sensors, and triggers:** An example sketch appears after this list.
-- **Fail-fast approach:** Without the integration of CI/CD process, the first time you know DAG contains errors is likely when it's pushed to GitHub, synchronized with Managed Airflow and throws an `Import Error`. Meanwhile the other developer can unknowingly pull the faulty code from the repository, potentially leading to inefficiencies down the line. 
+If any of these checks fail, the pipeline terminates. You then need to address the issues identified.
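+
+As an illustration of the last type of check, the following is a minimal sketch of a unit test for a custom operator. `GreetOperator` is an invented stand-in for your own operator; the pattern is to instantiate the operator and call `execute` directly, without scheduling a DAG run:
+
+```python
+from airflow.models.baseoperator import BaseOperator
+
+
+class GreetOperator(BaseOperator):
+    """A tiny example operator that stands in for your own custom operator."""
+
+    def __init__(self, name: str, **kwargs):
+        super().__init__(**kwargs)
+        self.name = name
+
+    def execute(self, context):
+        return f"Hello, {self.name}!"
+
+
+def test_greet_operator_execute():
+    # Call execute directly with an empty context; no scheduler is needed.
+    op = GreetOperator(task_id="greet", name="Airflow")
+    assert op.execute(context={}) == "Hello, Airflow!"
+```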
-- **Code quality improvement:** Neglecting fundamental checks like syntax verification, necessary imports, and checks for other best coding practices, can increase the likelihood of delivering subpar code. 
+### Git sync with production integration runtime
-## Deployment Patterns in Azure Managed Airflow: 
+Map your Managed Airflow environment with your Git repository's production branch.
-### Pattern 1: Develop data pipelines directly in Azure Managed Airflow. 
+#### PR pipeline with production integration runtime
-### Prerequisites: 
+A best practice is to maintain a separate production environment to prevent every development feature from becoming publicly accessible.
-- **Azure subscription:** If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Create or select an existing Data Factory in the region where the managed airflow preview is supported. 
+After the feature branch successfully merges into the development branch, you can create a pull request to the production branch to make your newly merged feature public. This pull request triggers the PR pipeline that conducts rapid quality checks on the development branch. Quality checks ensure that all features were integrated correctly and there are no errors in the production environment.
-- **Access to GitHub Repository:** [https://github.com/join](https://github.com/join) 
+### Benefits of using the CI/CD workflow in Managed Airflow
-### Advantages: 
+- **Fail-fast approach:** Without the integration of the CI/CD process, the first time you know a DAG contains errors is likely when it's pushed to GitHub, synchronized with Managed Airflow, and throws an `Import Error`. Meanwhile, another developer can unknowingly pull the faulty code from the repository, which potentially leads to inefficiencies down the line.
+- **Code quality improvement:** If you neglect fundamental checks like syntax verification, necessary imports, and checks for other best coding practices, you increase the likelihood of delivering subpar code.
-- **No Local Development Environment Required:** Managed Airflow handles the underlying infrastructure, updates, and maintenance, reducing the operational overhead of managing Airflow clusters. The service allows you to focus on building and managing workflows rather than managing infrastructure. 
+## Deployment patterns in Managed Airflow
-- **Scalability:** Managed Airflow provides auto scaling capability to scale resources as needed, ensuring that your data pipelines can handle increasing workloads or bursts of activity without manual intervention. 
+We recommend two deployment patterns.
-- **Monitoring and Logging:** Managed Airflow includes Diagnostic logs and monitoring, making it easier to track the execution of your workflows, diagnose issues, setting up alerts and optimize performance. 
+### Pattern 1: Develop data pipelines directly in Managed Airflow
+You can develop data pipelines directly in Managed Airflow when you use pattern 1.
-### Workflow: 
+### Prerequisites
-1. **Leverage Git-sync feature:** 
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Create or select an existing Data Factory instance in the region where the Managed Airflow preview is supported.
+- You need access to a [GitHub repository](https://github.com/join).
-In this workflow, there's no requirement to establish your own local environment. Instead, you can start by using the Git-sync feature offered by the Managed Airflow service. This feature automatically synchronizes your DAG files with Airflow webservers, schedulers, and workers, allowing you to develop, test, and execute your data pipelines directly through the Managed Airflow UI. 
+### Advantages
-Learn more about how to use Azure Managed Airflow's [Git-sync feature](airflow-sync-github-repository.md).
+- **No local development environment required:** Managed Airflow handles the underlying infrastructure, updates, and maintenance, reducing the operational overhead of managing Airflow clusters. The service allows you to focus on building and managing workflows rather than managing infrastructure.
+- **Scalability:** Managed Airflow provides autoscaling capability to scale resources as needed, ensuring that your data pipelines can handle increasing workloads or bursts of activity without manual intervention.
+- **Monitoring and logging:** Managed Airflow includes diagnostic logs and monitoring to help you track the execution of your workflows, diagnose issues, set up alerts, and optimize performance.
-2. **Individual Feature branch Environment:** 
+### Workflow
-You can choose the branch from your repository to sync with Azure Managed Airflow. This capability lets you create individual Airflow Environment for each feature branch, allowing developers to work on specific tasks for data pipelines. 
+1. Use the Git-sync feature.
-3. **Create a Pull Request:** 
+ In this workflow, there's no requirement to establish your own local environment. Instead, you can start by using the Git-sync feature offered by Managed Airflow. This feature automatically synchronizes your DAG files with Airflow web servers, schedulers, and workers. Now you can develop, test, and execute your data pipelines directly through the Managed Airflow UI.
+
+ Learn more about how to use the Managed Airflow [Git-sync feature](airflow-sync-github-repository.md).
-Proceed to submit a Pull Request (PR) to the Airflow Development Environment (DEV IR), once you have thoroughly developed and tested your features within your dedicated Airflow Environment.
+1. Create individual feature branch environments.
-### Pattern 2: Develop DAGs Locally and Deploy on Managed Airflow
+ You can choose the branch from your repository to sync with Managed Airflow. This capability lets you create an individual Airflow environment for each feature branch. In this way, developers can work on specific tasks for data pipelines.
-### Prerequisites: 
+1. Create a pull request.
-- GitHub Repository: [https://github.com/join](https://github.com/join) 
+ Proceed to submit a pull request to the Airflow development environment integration runtime after you thoroughly develop and test your features within your dedicated Airflow environment.
-- Ensure that at least a single branch of your code repository is synchronized with the Managed Airflow to see the code changes on the service. 
+### Pattern 2: Develop DAGs locally and deploy on Managed Airflow
-### Advantages: 
+You can develop DAGs locally and deploy them on Managed Airflow when you use pattern 2.
-**Limited Access:** You can limit access to Azure resources to admin only. 
+### Prerequisites
-### Workflow: 
+- You need access to a [GitHub repository](https://github.com/join).
+- Ensure that at least a single branch of your code repository is synchronized with Managed Airflow to see the code changes on the service.
-1. **Local Environment Setup** 
+### Advantages
-Begin by setting up a local development environment for Apache Airflow on your development machine. In this environment, you can develop and test your Airflow code, including DAGs and tasks. This approach allows you to develop pipelines without relying on direct access to Azure resources.
+You can limit access to Azure resources to admins only.
-2. **Leverage Git-sync feature:** 
+### Workflow
-Synchronize your GitHub repository’s branch with Azure Managed Airflow Service. 
+1. Set up a local environment.
-Learn more about how to use Azure Managed Airflow's [Git-sync feature](airflow-sync-github-repository.md).
+ Begin by setting up a local development environment for Apache Airflow on your development machine. In this environment, you can develop and test your Airflow code, including DAGs and tasks. This approach allows you to develop pipelines without relying on direct access to Azure resources.
-3. **Utilize Managed Airflow Service as Production environment:** 
+1. Use the Git-sync feature.
-After successfully developing and testing data pipelines on your local setup, you can raise a Pull Request (PR) to the branch synchronized with the Managed Airflow Service. Once the branch is merged, utilize the Managed Airflow service's features like autoscaling and monitoring and logging at production level. 
+ Synchronize your GitHub repository's branch with Managed Airflow.
-## Sample CI/CD Pipeline
-- [Azure Devops](https://azure.microsoft.com/products/devops)
-- [GitHub Actions](https://github.com/features/actions)
+ Learn more about how to use the Managed Airflow [Git-sync feature](airflow-sync-github-repository.md).
-**Step 1:** Copy the code of a DAG deployed in Managed Airflow IR using the Git-sync feature.
-```python
-from datetime import datetime
-from airflow import DAG
-from airflow.operators.bash import BashOperator
+1. Use Managed Airflow as a production environment.
-with DAG(
-    dag_id="airflow-ci-cd-tutorial",
-    start_date=datetime(2023, 8, 15),
-    schedule="0 0 * * *",
-    tags=["tutorial", "CI/CD"]
-) as dag:
-    # Tasks are represented as operators
-    task1 = BashOperator(task_id="task1", bash_command="echo task1")
-    task2 = BashOperator(task_id="task2", bash_command="echo task2")
-    task3 = BashOperator(task_id="task3", bash_command="echo task3")
-    task4 = BashOperator(task_id="task4", bash_command="echo task4")
+ After you successfully develop and test data pipelines on your local setup, you can raise a pull request to the branch synchronized with Managed Airflow. After the branch is merged, use Managed Airflow features like autoscaling and monitoring and logging at the production level.
-    # Set dependencies between tasks
-    task1 >> task2 >> task3 >> task4
-```
+## Sample CI/CD pipeline
-**Step 2:** Create a CI/CD pipeline.
+For more information, see:
-### Using Azure Devops
-**Step 2.1:** Create a file `azure-devops-ci-cd.yaml` and copy the following code. The pipeline triggers on pull request or push request to dev branch:
-```python
-trigger:
-- dev
+- [Azure DevOps](https://azure.microsoft.com/products/devops)
+- [GitHub Actions](https://github.com/features/actions)
-pr:
-- dev
-
-pool:
-  vmImage: ubuntu-latest
-strategy:
-  matrix:
-    Python3.11:
-      python.version: '3.11.5'
-
-steps:
-- task: UsePythonVersion@0
-  inputs:
-    versionSpec: '$(python.version)'
-  displayName: 'Use Python $(python.version)'
-
-- script: |
-    python -m pip install --upgrade pip
-    pip install -r requirements.txt
-  displayName: 'Install dependencies'
-
-- script: |
-    airflow webserver &
-    airflow db init
-    airflow scheduler &
-    pytest
-  displayName: 'Pytest'
-```
-
-For more information, See [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up)
-
-### Using GitHub Actions
-
-**Step 2.1:** Create a `.github/workflows` directory in your GitHub repository. 
-
-**Step 2.2:** In the `.github/workflows` directory, create a file named `github-actions-ci-cd.yml` 
-
-**Step 2.3:** Copy the following code: The pipeline triggers whenever there's pull request or push request to dev branch:
-```python
-name: GitHub Actions CI/CD
-
-on:
-  pull_request:
-    branches:
-      - "dev"
-  push:
-    branches:
-      - "dev"
-
-jobs:
-  flake8:
-    strategy:
-      matrix:
-        python-version: [3.11.5]
-    runs-on: ubuntu-latest
-    steps:
-      - name: Check out source repository
-        uses: actions/checkout@v4
-      - name: Setup Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{matrix.python-version}}
-      - name: flake8 Lint
-        uses: py-actions/flake8@v1
-        with:
-          max-line-length: 120
-  tests:
-    strategy:
-      matrix:
-        python-version: [3.11.5]
-    runs-on: ubuntu-latest
-    needs: [flake8]
-    steps:
-      - uses: actions/checkout@v4
-      - name: Setup Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{matrix.python-version}}
-      - name: Install dependencies
-        run: |
-          python -m pip install --upgrade pip
-          pip install -r requirements.txt
-      - name: Pytest
-        run: |
-          airflow webserver &
-          airflow db init
-          airflow scheduler &
-          pytest tests/
-```
-
-**Step 3:** In the tests folder, create the tests for Airflow DAGs. Following are the few examples: 
-
-* At the least, it's crucial to conduct initial testing using `import_errors` to ensure the DAG's integrity and correctness. 
-This test ensures: 
-- **Your DAG does not contain cyclicity:** Cyclicity, where a task forms a loop or circular dependency within the workflow, can lead to unexpected and infinite execution loops.
-- **There are no import errors:** Import errors can arise due to issues like missing dependencies, incorrect module paths, or coding errors.
-- **Tasks are defined correctly:** Confirm that the tasks within your DAG are correctly defined.
-```python
-@pytest.fixture()
-
-def dagbag():
-    return DagBag(dag_folder="dags")
-
-def test_no_import_errors(dagbag):
-    """
-    Test Dags to contain no import errors.
-    """
-    assert not dagbag.import_errors
-```
-
-* Test to ensure specific Dag IDs to be present in your feature branch before merging it into the development (dev) branch. 
-
-```python
-def test_expected_dags(dagbag):
-ΓÇ» ΓÇ» """
-ΓÇ» ΓÇ» Test whether expected dag Ids are present.
-ΓÇ» ΓÇ» """
-ΓÇ» ΓÇ» expected_dag_ids = ["airflow-ci-cd-tutorial"]
-
-ΓÇ» ΓÇ» for dag_id in expected_dag_ids:
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» dag = dagbag.get_dag(dag_id)
-
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» assert dag is not None
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» assert dag_id == dag.dag_id
-```
-
-* Test to ensure only approved tags are associated with your DAGs. This test helps to enforce the approved tag usage. 
-
-```python
-def test_requires_approved_tag(dagbag):
-ΓÇ» ΓÇ» """
-ΓÇ» ΓÇ» Test if DAGS contain one or more tags from list of approved tags only.
-ΓÇ» ΓÇ» """
-ΓÇ» ΓÇ» Expected_tags = {"tutorial", "CI/CD"}
-ΓÇ» ΓÇ» dagIds = dagbag.dag_ids
-
-ΓÇ» ΓÇ» for id in dagIds:
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» dag = dagbag.get_dag(id)
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» assert dag.tags
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» if Expected_tags:
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» assert not set(dag.tags) - Expected_tags
-```
-
-**Step 4:** Now, when you raise pull request to dev branch, GitHub Actions triggers the CI pipeline to run all the tests. 
-
-#### For more information:
-- [https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/models/dagbag.html](https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/models/dagbag.html)
-- https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#unit-tests
+1. Copy the code of a DAG deployed in the Managed Airflow integration runtime by using the Git-sync feature.
+ ```python
+ from datetime import datetime
+ from airflow import DAG
+ from airflow.operators.bash import BashOperator
+
+ with DAG(
+     dag_id="airflow-ci-cd-tutorial",
+     start_date=datetime(2023, 8, 15),
+     schedule="0 0 * * *",
+     tags=["tutorial", "CI/CD"]
+ ) as dag:
+     # Tasks are represented as operators
+     task1 = BashOperator(task_id="task1", bash_command="echo task1")
+     task2 = BashOperator(task_id="task2", bash_command="echo task2")
+     task3 = BashOperator(task_id="task3", bash_command="echo task3")
+     task4 = BashOperator(task_id="task4", bash_command="echo task4")
+
+     # Set dependencies between tasks
+     task1 >> task2 >> task3 >> task4
+ ```
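+
+ As an optional quick local check before you commit, you can run the whole DAG in-process. This sketch assumes Airflow 2.5 or later, where `DAG.test()` is available; append it to the end of the same file:
+
+ ```python
+ if __name__ == "__main__":
+     # Run a single in-process DagRun as a fast local smoke test.
+     dag.test()
+ ```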
+
+1. Create a CI/CD pipeline. You have two options: Azure DevOps or GitHub Actions.
+
+ 1. **Azure DevOps option**: Create the file `azure-devops-ci-cd.yaml` and copy the following code. The pipeline triggers on a pull request or push request to the development branch:
+
+ ```yaml
+ trigger:
+ - dev
+
+ pr:
+ - dev
+
+ pool:
+   vmImage: ubuntu-latest
+ strategy:
+   matrix:
+     Python3.11:
+       python.version: '3.11.5'
+
+ steps:
+ - task: UsePythonVersion@0
+   inputs:
+     versionSpec: '$(python.version)'
+   displayName: 'Use Python $(python.version)'
+
+ - script: |
+     python -m pip install --upgrade pip
+     pip install -r requirements.txt
+   displayName: 'Install dependencies'
+
+ - script: |
+     airflow db init
+     airflow webserver &
+     airflow scheduler &
+     pytest
+   displayName: 'Pytest'
+ ```
+
+ For more information, see [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up).
+
+ 1. **GitHub Actions option**: Create a `.github/workflows` directory in your GitHub repository.
+
+ 1. In the `.github/workflows` directory, create a file named `github-actions-ci-cd.yml`.
+
+ 1. Copy the following code. The pipeline triggers whenever there's a pull request or push request to the development branch:
+
+ ```yaml
+ name: GitHub Actions CI/CD
+
+ on:
+   pull_request:
+     branches:
+       - "dev"
+   push:
+     branches:
+       - "dev"
+
+ jobs:
+   flake8:
+     strategy:
+       matrix:
+         python-version: [3.11.5]
+     runs-on: ubuntu-latest
+     steps:
+       - name: Check out source repository
+         uses: actions/checkout@v4
+       - name: Setup Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: ${{matrix.python-version}}
+       - name: flake8 Lint
+         uses: py-actions/flake8@v1
+         with:
+           max-line-length: 120
+   tests:
+     strategy:
+       matrix:
+         python-version: [3.11.5]
+     runs-on: ubuntu-latest
+     needs: [flake8]
+     steps:
+       - uses: actions/checkout@v4
+       - name: Setup Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: ${{matrix.python-version}}
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install -r requirements.txt
+       - name: Pytest
+         run: |
+           airflow db init
+           airflow webserver &
+           airflow scheduler &
+           pytest tests/
+ ```
+
+1. In the tests folder, create the tests for Airflow DAGs. Here are a few examples:
+
+ 1. At a minimum, it's crucial to conduct initial testing by using `import_errors` to ensure the DAG's integrity and correctness. This test ensures:
+
+ - **Your DAG doesn't contain cyclicity:** Cyclicity, where a task forms a loop or circular dependency within the workflow, can lead to unexpected and infinite execution loops.
+ - **There are no import errors:** Import errors can arise because of issues like missing dependencies, incorrect module paths, or coding errors.  
+ - **Tasks are defined correctly:** Confirm that the tasks within your DAG are correctly defined.
+
+ ```python
+ import pytest
+ from airflow.models import DagBag
+
+
+ @pytest.fixture()
+ def dagbag():
+     # Load every DAG file from the dags folder.
+     return DagBag(dag_folder="dags")
+
+
+ def test_no_import_errors(dagbag):
+     """
+     Test that DAGs contain no import errors.
+     """
+     assert not dagbag.import_errors
+ ```
+
+ 1. Test to ensure specific DAG IDs are present in your feature branch before you merge it into the development branch.
+
+ ```python
+ def test_expected_dags(dagbag):
+     """
+     Test whether the expected DAG IDs are present.
+     """
+     expected_dag_ids = ["airflow-ci-cd-tutorial"]
+
+     for dag_id in expected_dag_ids:
+         dag = dagbag.get_dag(dag_id)
+
+         assert dag is not None
+         assert dag_id == dag.dag_id
+ ```
+
+ 1. Test to ensure only approved tags are associated with your DAGs. This test helps to enforce approved tag usage.
+
+ ```python
+ def test_requires_approved_tag(dagbag):
+     """
+     Test that DAGs contain one or more tags from the list of approved tags only.
+     """
+     approved_tags = {"tutorial", "CI/CD"}
+
+     for dag_id in dagbag.dag_ids:
+         dag = dagbag.get_dag(dag_id)
+         # Each DAG must be tagged, and only approved tags are allowed.
+         assert dag.tags
+         assert not set(dag.tags) - approved_tags
+ ```
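+
+ These snippets share the `dagbag` fixture defined in the first example. A common layout, sketched here as an assumption about your repository, is to move the fixture into `tests/conftest.py` so that every test module can use it without redefining it:
+
+ ```python
+ # tests/conftest.py
+ import pytest
+ from airflow.models import DagBag
+
+
+ @pytest.fixture()
+ def dagbag():
+     # Parse every DAG file in the dags folder once per test.
+     return DagBag(dag_folder="dags")
+ ```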
+
+1. Now when you raise a pull request to the development branch, you can see that GitHub Actions triggers the CI pipeline to run all the tests.
+
+## Related content
+
+- [Source code for airflow.models.dagbag](https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/models/dagbag.html)
+- [Apache Airflow unit tests](https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#unit-tests)
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
Title: Copy and transform data in Microsoft Fabric Lakehouse (Preview)
+ Title: Copy and transform data in Microsoft Fabric Lakehouse
-description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
+description: Learn how to copy and transform data in Microsoft Fabric Lakehouse using Azure Data Factory or Azure Synapse Analytics pipelines.
Previously updated : 12/08/2023 Last updated : 01/08/2024
-# Copy and transform data in Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics
+# Copy and transform data in Microsoft Fabric Lakehouse using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]

Microsoft Fabric Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. To learn how to achieve seamless data access across all compute engines in Microsoft Fabric, see [Lakehouse and Delta Tables](/fabric/data-engineering/lakehouse-and-delta-tables).
-This article outlines how to use Copy activity to copy data from and to Microsoft Fabric Lakehouse (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+This article outlines how to use Copy activity to copy data from and to Microsoft Fabric Lakehouse and use Data Flow to transform data in Microsoft Fabric Lakehouse. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
-> [!IMPORTANT]
-> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
## Supported capabilities

This Microsoft Fabric Lakehouse connector is supported for the following capabilities:
-| Supported capabilities|IR | Managed private endpoint|
-|| --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |- |
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
The Microsoft Fabric Lakehouse connector supports the following authentication t
To use service principal authentication, follow these steps.
-1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service:
+1. [Register an application with the Microsoft Identity platform](../active-directory/develop/quickstart-register-app.md) and [add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret). Afterwards, make note of these values, which you use to define the linked service:
- - Application ID
- - Application key
+ - Application (client) ID, which is the service principal ID in the linked service.
+ - Client secret value, which is the service principal key in the linked service.
- Tenant ID

2. Grant the service principal at least the **Contributor** role in the Microsoft Fabric workspace. Follow these steps:
These properties are supported for the linked service:
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
| servicePrincipalId | Specify the application's client ID. | Yes |
| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's client secret value. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |

**Example: using service principal key authentication**
Assuming you have the following source folder structure and want to copy the fil
| Sample source structure | Content in FileListToCopy.txt | ADF configuration |
| --- | --- | --- |
-| filesystem<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- File system: `filesystem`<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `filesystem/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
+| filesystem<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
#### Some recursive and copyBehavior examples
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 11/06/2023 Last updated : 01/09/2024
data-factory How To Diagnostic Logs And Metrics For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-diagnostic-logs-and-metrics-for-managed-airflow.md
Title: Diagnostics logs and metrics for Managed Airflow
-description: This article explains how to use diagnostic logs and metrics to monitor Airflow IR.
+description: This article explains how to use diagnostic logs and metrics to monitor the Managed Airflow integration runtime.
Last updated 09/28/2023
# Diagnostics logs and metrics for Managed Airflow
-This guide walks you through the following:
+This article walks you through the steps to:
-1. How to enable diagnostics logs and metrics for the Managed Airflow.
-2. How to view logs and metrics.
-3. How to run a query.
-4. How to monitor metrics and set the alert system in Dag failure.
+- Enable diagnostics logs and metrics for Managed Airflow in Azure Data Factory.
+- View logs and metrics.
+- Run a query.
+- Monitor metrics and set up an alert system for directed acyclic graph (DAG) failures.
-## How to enable Diagnostics logs and metrics for the Managed Airflow
+## Prerequisites
-1. Open your Azure Data Factory resource -> Select **Diagnostic settings** on the left navigation pane -> Select "Add Diagnostic setting."
+You need an Azure subscription. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png" alt-text="Screenshot that shows where diagnostic logs tab is located in data factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png":::
+## Enable diagnostics logs and metrics for Managed Airflow
-2. Fill out the Diagnostic settings name -> Select the following categories for the Airflow Logs
+1. Open your Data Factory resource and select **Diagnostic settings** on the leftmost pane. Then select **Add diagnostic setting**.
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png" alt-text="Screenshot that shows where the Diagnostic logs tab is located in Data Factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png":::
+
+1. Fill out the **Diagnostic settings** name. Select the following categories for the Airflow logs:
- Airflow task execution logs
- Airflow worker logs
- - Airflow dag processing logs
+ - Airflow DAG processing logs
- Airflow scheduler logs
- Airflow web logs
- - If you select **AllMetrics**, various Data Factory metrics are made available for you to monitor or raise alerts on. These metrics include the metrics for Data Factory activity and Managed Airflow IR such as AirflowIntegrationRuntimeCpuUsage, AirflowIntegrationRuntimeMemory.
+ - If you select **AllMetrics**, various Data Factory metrics are made available for you to monitor or raise alerts on. These metrics include the metrics for Data Factory activity and the Managed Airflow integration runtime, such as `AirflowIntegrationRuntimeCpuUsage` and `AirflowIntegrationRuntimeMemory`.
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png" alt-text="Screenshot that shows which logs to select for the Airflow environment." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png":::
+
+1. Under **Destination details**, select the **Send to Log Analytics workspace** checkbox.
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png" alt-text="Screenshot that shows selecting Log Analytics workspace as the destination for diagnostic logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png":::
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png" alt-text="Screenshot that shows which logs to select for Airflow environment." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png":::
+1. Select **Save**.
-3. Select the destination details, Log Analytics workspace:
+## View logs
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png" alt-text="Screenshot that shows select log analytics workspace as destination for diagnostic logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png":::
+1. After you add diagnostic settings, you can find them listed in the **Diagnostic setting** section. To access and view logs, select the Log Analytics workspace that you configured.
-4. Click on Save.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png" alt-text="Screenshot that shows selecting the Log Analytics workspace URL." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png":::
-## How to view logs
+1. Under the section **Maximize your Log Analytics experience**, select **View logs**.
-1. After adding Diagnostic settings, you can find them listed in the "**Diagnostic settings**" section. To access and view logs, simply click on the Log Analytics workspace that you've configured.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png" alt-text="Screenshot that shows click on log analytics workspace url." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png":::
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png" alt-text="Screenshot that shows selecting View logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png":::
-2. Click on **View Logs**, under the section "Maximize your Log Analytics experience".
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png" alt-text="Screenshot that shows click on view logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png":::
+1. You're directed to your Log Analytics workspace where you can see that the tables you selected were imported into the workspace automatically.
-3. You are directed to your log analytics workspace, where the chosen tables are imported into the workspace automatically.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png" alt-text="Screenshot that shows logs analytics workspace." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png":::
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png" alt-text="Screenshot that shows the Log Analytics workspace." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png":::
Other useful links for the schema:
-1. [Azure Monitor Logs reference - ADFAirflowSchedulerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/ADFAirflowSchedulerLogs)
-2. [Azure Monitor Logs reference - ADFAirflowTaskLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowtasklogs)
-3. [Azure Monitor Logs reference - ADFAirflowWebLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowweblogs)
-4. [Azure Monitor Logs reference - ADFAirflowWorkerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowworkerlogs)
-5. [Azure Monitor Logs reference - AirflowDagProcessingLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/AirflowDagProcessingLogs)
+- [Azure Monitor Logs reference - ADFAirflowSchedulerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/ADFAirflowSchedulerLogs)
+- [Azure Monitor Logs reference - ADFAirflowTaskLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowtasklogs)
+- [Azure Monitor Logs reference - ADFAirflowWebLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowweblogs)
+- [Azure Monitor Logs reference - ADFAirflowWorkerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowworkerlogs)
+- [Azure Monitor Logs reference - AirflowDagProcessingLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/AirflowDagProcessingLogs)
-## How to write a query
+## Write a query
-1. Let's start with simplest query that returns all the records in the ADFAirflowTaskLogs.
- You can double click on the table name to add it to query window, or you can directly type table name in window.
+1. Let's start with the simplest query that returns all the records in `ADFAirflowTaskLogs`. You can double-click the table name to add it to a query window. You can also enter the table name directly in the window.
-2. To narrow down your search results, such as filtering them based on a specific task ID, you can use the following query:
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/simple-query.png" alt-text="Screenshot that shows a Kusto query to retrieve all logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/simple-query.png":::
-```kusto
-ADFAirflowTaskLogs
-| where DagId == "<your_dag_id>"
-and TaskId == "<your_task_id>"
-```
+1. To narrow down your search results, such as filtering them based on a specific task ID, you can use the following query:
+
+ ```kusto
+ ADFAirflowTaskLogs
+ | where DagId == "<your_dag_id>"
+ and TaskId == "<your_task_id>"
+ ```
-Similarly, you can create custom queries according to your needs using any tables available in LogManagement.
+Similarly, you can create custom queries according to your needs by using any tables available in `LogManagement`.
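+
+If you prefer to run the same query from code, the following is a minimal sketch that assumes the `azure-monitor-query` and `azure-identity` packages are installed and that your identity has read access to the workspace. The workspace ID and the DAG and task IDs are placeholders:
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+query = """
+ADFAirflowTaskLogs
+| where DagId == "<your_dag_id>" and TaskId == "<your_task_id>"
+"""
+
+# Query the last 24 hours of logs from your Log Analytics workspace.
+response = client.query_workspace(
+    workspace_id="<your_workspace_id>",
+    query=query,
+    timespan=timedelta(days=1),
+)
+
+for table in response.tables:
+    for row in table.rows:
+        print(row)
+```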
-For more information:
+For more information, see:
-1. [Log Analytics Tutorial](../azure-monitor/logs/log-analytics-tutorial.md)
+- [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md)
+- [Kusto Query Language (KQL) overview - Azure Data Explorer | Microsoft Learn](/azure/data-explorer/kusto/query/)
-2. [Kusto Query Language (KQL) overview - Azure Data Explorer | Microsoft Learn](/azure/data-explorer/kusto/query/)
+## Monitor metrics
-## How to monitor metrics.
+Data Factory offers comprehensive metrics for Airflow integration runtimes, allowing you to effectively monitor the performance of your Airflow integration runtime and establish alerting mechanisms as needed.
-Azure Data Factory offers comprehensive metrics for Airflow Integration Runtimes (IR), allowing you to effectively monitor the performance of your Airflow IR and establish alerting mechanisms as needed.
+1. Open your Data Factory resource.
-1. Open your Azure Data Factory Resource.
+1. In the leftmost pane, under the **Monitoring** section, select **Metrics**.
-2. In the left navigation pane, Click **Metrics** under Monitoring section.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png" alt-text="Screenshot that shows where metrics tab is located in data factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png":::
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png" alt-text="Screenshot that shows where the Metrics tab is located in Data Factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png":::
-3. Select the scope -> Metric Namespace -> Metric you want to monitor.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png" alt-text="Screenshot that shows metrics to select." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png":::
+1. Select the **Scope** > **Metric Namespace** > **Metric** you want to monitor.
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png" alt-text="Screenshot that shows the metrics to select." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png":::
+
+1. Review the multiline chart that visualizes the **Integration Runtime CPU Percentage** and **Integration Runtime Dag Bag Size**.
-4. For example, we created the multi-line chart, to visualize the Integration Runtime CPU Percentage and Airflow Integration Runtime Dag Bag Size.
:::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/multi-line-chart.png" alt-text="Screenshot that shows multiline chart of metrics." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/multi-line-chart.png":::
-5. You can set up an alert rule that triggers when specific conditions are met by your metrics.
- Refer to guide: [Overview of Azure Monitor alerts - Azure Monitor | Microsoft Learn](/azure/azure-monitor/alerts/alerts-overview)
+1. You can set up an alert rule that triggers when your metrics meet specific conditions.
+ For more information, see [Overview of Azure Monitor alerts](/azure/azure-monitor/alerts/alerts-overview).
-6. Click on Save to Dashboard, once your chart is complete, else your chart disappears.
- :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png" alt-text="Screenshot that shows save to dashboard." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png":::
+1. After your chart is finished, select **Save to dashboard**; otherwise, your chart disappears.
-## Airflow Metrics
-The following table lists the metrics available for the Managed Airflow.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png" alt-text="Screenshot that shows Save to dashboard." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png":::
-Table headings
+## Airflow metrics
-Metric - The metric display name as it appears in the Azure portal.
-Name in Rest API - Metric name as referred to in the REST API.
-Unit - Unit of measure.
-Aggregation - The default aggregation type. Valid values: Average, Minimum, Maximum, Total, Count.
-Dimensions - Dimensions available for the metric.
-Time Grains - Intervals at which the metric is sampled. For example, PT1M indicates that the metric is sampled every minute, PT30M every 30 minutes, PT1H every hour, and so on.
-DS Export- Whether the metric is exportable to Azure Monitor Logs via Diagnostic Settings.
+The following table lists the metrics available for Managed Airflow. The table headings are:
-|Metric|Name in REST API|Description|Unit|Aggregation|Dimensions|Time Grains|DS Export|
+- **Metric**: The metric display name as it appears in the Azure portal.
+- **Name in REST API**: The metric name as referred to in the REST API.
+- **Description**: A description of the metric.
+- **Unit**: Unit of measure.
+- **Aggregation**: The default aggregation type. Valid values are Average, Minimum, Maximum, Total, and Count.
+- **Dimensions**: Dimensions available for the metric.
+- **Time grains**: Intervals at which the metric is sampled. For example, PT1M indicates that the metric is sampled every minute, PT30M every 30 minutes, PT1H every hour, and so on.
+- **DS export**: Whether the metric is exportable to Azure Monitor Logs via diagnostic settings.
+
+|Metric|Name in REST API|Description|Unit|Aggregation|Dimensions|Time grains|DS export|
|---|---|---|---|---|---|---|---|
|**Airflow Integration Runtime Celery Task Timeout Error** |`AirflowIntegrationRuntimeCeleryTaskTimeoutError` |Number of `AirflowTaskTimeout` errors raised when publishing Task to Celery Broker. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Collect DB Dags** |`AirflowIntegrationRuntimeCollectDBDags` |Milliseconds taken for fetching all Serialized Dags from DB. |Milliseconds |Average |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Collect DB Dags** |`AirflowIntegrationRuntimeCollectDBDags` |Milliseconds taken for fetching all Serialized DAGs from database. |Milliseconds |Average |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Cpu Percentage** |`AirflowIntegrationRuntimeCpuPercentage` |CPU usage percentage of the Airflow integration runtime. |Percent |Average |`IntegrationRuntimeName`, `ContainerName`|PT1M |No|
-|**Airflow Integration Runtime Memory Usage** |`AirflowIntegrationRuntimeCpuUsage` |Millicores consumed by Airflow Integration Runtime, indicating the CPU resources used in thousandths of a CPU core. |Millicores |Average |`IntegrationRuntimeName`, `ContainerName`|PT1M |Yes|
+|**Airflow Integration Runtime Memory Usage** |`AirflowIntegrationRuntimeCpuUsage` |Millicores consumed by Airflow integration runtime, indicating the CPU resources used in thousandths of a CPU core. |Millicores |Average |`IntegrationRuntimeName`, `ContainerName`|PT1M |Yes|
|**Airflow Integration Runtime Dag Bag Size** |`AirflowIntegrationRuntimeDagBagSize` |Number of DAGs found when the scheduler ran a scan based on its configuration. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Dag Callback Exceptions** |`AirflowIntegrationRuntimeDagCallbackExceptions` |Number of exceptions raised from DAG callbacks. When this happens, it means DAG callback is not working. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Dag Callback Exceptions** |`AirflowIntegrationRuntimeDagCallbackExceptions` |Number of exceptions raised from DAG callbacks. When exceptions occur, it means DAG callback isn't working. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime DAG File Refresh Error** |`AirflowIntegrationRuntimeDAGFileRefreshError` |Number of failures loading any DAG files. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime DAG Processing Import Errors** |`AirflowIntegrationRuntimeDAGProcessingImportErrors` |Number of errors from trying to parse DAG files. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime DAG Processing Last Duration** |`AirflowIntegrationRuntimeDAGProcessingLastDuration` |Seconds taken to load the given DAG file. |Milliseconds |Average |`IntegrationRuntimeName`, `DagFile`|PT1M |No|
+|**Airflow Integration Runtime DAG Processing Last Duration** |`AirflowIntegrationRuntimeDAGProcessingLastDuration` |Seconds taken to load the specific DAG file. |Milliseconds |Average |`IntegrationRuntimeName`, `DagFile`|PT1M |No|
|**Airflow Integration Runtime DAG Processing Last Run Seconds Ago** |`AirflowIntegrationRuntimeDAGProcessingLastRunSecondsAgo` |Seconds since <dag_file> was last processed. |Seconds |Average |`IntegrationRuntimeName`, `DagFile`|PT1M |No|
-|**Airflow Integration Runtime DAG ProcessingManager Stalls** |`AirflowIntegrationRuntimeDAGProcessingManagerStalls` |Number of stalled DagFileProcessorManager. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime DAG Processing Processes** |`AirflowIntegrationRuntimeDAGProcessingProcesses` |Relative number of currently running DAG parsing processes (ie this delta is negative when, since the last metric was sent, processes have completed). |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime DAG Processing Processor Timeouts** |`AirflowIntegrationRuntimeDAGProcessingProcessorTimeouts` |Number of file processors that have been killed due to taking too long. |Seconds |Average |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime DAG Processing Total Parse Time** |`AirflowIntegrationRuntimeDAGProcessingTotalParseTime` |Seconds taken to scan and import dag_processing.file_path_queue_size DAG files. |Seconds |Average |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime DAG ProcessingManager Stalls** |`AirflowIntegrationRuntimeDAGProcessingManagerStalls` |Number of stalled `DagFileProcessorManager`. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime DAG Processing Processes** |`AirflowIntegrationRuntimeDAGProcessingProcesses` |Relative number of currently running DAG parsing processes. (For example, this delta is negative when, since the last metric was sent, processes were completed.) |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime DAG Processing Processor Timeouts** |`AirflowIntegrationRuntimeDAGProcessingProcessorTimeouts` |Number of file processors that were killed because they took too long. |Seconds |Average |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime DAG Processing Total Parse Time** |`AirflowIntegrationRuntimeDAGProcessingTotalParseTime` |Seconds taken to scan and import `dag_processing.file_path_queue_size` DAG files. |Seconds |Average |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime DAG Run Dependency Check** |`AirflowIntegrationRuntimeDAGRunDependencyCheck` |Milliseconds taken to check DAG dependencies. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime DAG Run Duration Failed** |`AirflowIntegrationRuntimeDAGRunDurationFailed` |Seconds taken for a DagRun to reach failed state. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime DAG Run Duration Success** |`AirflowIntegrationRuntimeDAGRunDurationSuccess` |Seconds taken for a DagRun to reach success state. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime DAG Run First Task Scheduling Delay** |`AirflowIntegrationRuntimeDAGRunFirstTaskSchedulingDelay` |Seconds elapsed between first task start_date and dagrun expected start. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime DAG Run Schedule Delay** |`AirflowIntegrationRuntimeDAGRunScheduleDelay` |Seconds of delay between the scheduled DagRun start date and the actual DagRun start date. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime Executor Open Slots** |`AirflowIntegrationRuntimeExecutorOpenSlots` |Number of open slots on executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Executor Queued Tasks** |`AirflowIntegrationRuntimeExecutorQueuedTasks` |Number of queued tasks on executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Executor Running Tasks** |`AirflowIntegrationRuntimeExecutorRunningTasks` |Number of running tasks on executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Job End** |`AirflowIntegrationRuntimeJobEnd` |Number of ended <job_name> job, ex. SchedulerJob, LocalTaskJob. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
-|**Airflow Integration Runtime Heartbeat Failure** |`AirflowIntegrationRuntimeJobHeartbeatFailure` |Number of failed Heartbeats for a <job_name> job, ex. SchedulerJob, LocalTaskJob. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
-|**Airflow Integration Runtime Job Start** |`AirflowIntegrationRuntimeJobStart` |Number of started <job_name> job, ex. SchedulerJob, LocalTaskJob. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
-|**Airflow Integration Runtime Memory Percentage** |`AirflowIntegrationRuntimeMemoryPercentage` |Memory Percentage used by Airflow Integration Runtime environments. |Percent |Average |`IntegrationRuntimeName`, `ContainerName`|PT1M |Yes|
+|**Airflow Integration Runtime DAG Run Duration Failed** |`AirflowIntegrationRuntimeDAGRunDurationFailed` |Seconds taken for a `DagRun` to reach failed state. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime DAG Run Duration Success** |`AirflowIntegrationRuntimeDAGRunDurationSuccess` |Seconds taken for a `DagRun` to reach success state. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime DAG Run First Task Scheduling Delay** |`AirflowIntegrationRuntimeDAGRunFirstTaskSchedulingDelay` |Seconds elapsed between the first task `start_date` and the `DagRun` expected start. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime DAG Run Schedule Delay** |`AirflowIntegrationRuntimeDAGRunScheduleDelay` |Seconds of delay between the scheduled `DagRun` start date and the actual `DagRun` start date. |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime Executor Open Slots** |`AirflowIntegrationRuntimeExecutorOpenSlots` |Number of open slots on the executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Executor Queued Tasks** |`AirflowIntegrationRuntimeExecutorQueuedTasks` |Number of queued tasks on the executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Executor Running Tasks** |`AirflowIntegrationRuntimeExecutorRunningTasks` |Number of running tasks on the executor. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Job End** |`AirflowIntegrationRuntimeJobEnd` |Number of ended <job_name> jobs, for example, `SchedulerJob` and `LocalTaskJob`. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
+|**Airflow Integration Runtime Heartbeat Failure** |`AirflowIntegrationRuntimeJobHeartbeatFailure` |Number of failed Heartbeats for a <job_name> job, for example, `SchedulerJob` and `LocalTaskJob`. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
+|**Airflow Integration Runtime Job Start** |`AirflowIntegrationRuntimeJobStart` |Number of started <job_name> jobs, for example, `SchedulerJob` and `LocalTaskJob`. |Count |Total |`IntegrationRuntimeName`, `Job`|PT1M |No|
+|**Airflow Integration Runtime Memory Percentage** |`AirflowIntegrationRuntimeMemoryPercentage` |Memory Percentage used by Airflow integration runtime environments. |Percent |Average |`IntegrationRuntimeName`, `ContainerName`|PT1M |Yes|
|**Airflow Integration Runtime Node Count** |`AirflowIntegrationRuntimeNodeCount` | |Count |Average |`IntegrationRuntimeName`, `ComputeNodeSize`|PT1M |Yes|
-|**Airflow Integration Runtime Operator Failures** |`AirflowIntegrationRuntimeOperatorFailures` |Total Operator failures. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
-|**Airflow Integration Runtime Operator Successes** |`AirflowIntegrationRuntimeOperatorSuccesses` |Total Operator successes. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
+|**Airflow Integration Runtime Operator Failures** |`AirflowIntegrationRuntimeOperatorFailures` |Total operator failures. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
+|**Airflow Integration Runtime Operator Successes** |`AirflowIntegrationRuntimeOperatorSuccesses` |Total operator successes. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
|**Airflow Integration Runtime Pool Open Slots** |`AirflowIntegrationRuntimePoolOpenSlots` |Number of open slots in the pool. |Count |Total |`IntegrationRuntimeName`, `Pool`|PT1M |No|
|**Airflow Integration Runtime Pool Queued Slots** |`AirflowIntegrationRuntimePoolQueuedSlots` |Number of queued slots in the pool. |Count |Total |`IntegrationRuntimeName`, `Pool`|PT1M |No|
|**Airflow Integration Runtime Pool Running Slots** |`AirflowIntegrationRuntimePoolRunningSlots` |Number of running slots in the pool. |Count |Total |`IntegrationRuntimeName`, `Pool`|PT1M |No|
|**Airflow Integration Runtime Pool Starving Tasks** |`AirflowIntegrationRuntimePoolStarvingTasks` |Number of starving tasks in the pool. |Count |Total |`IntegrationRuntimeName`, `Pool`|PT1M |No|
|**Airflow Integration Runtime Scheduler Critical Section Busy** |`AirflowIntegrationRuntimeSchedulerCriticalSectionBusy` |Count of times a scheduler process tried to get a lock on the critical section (needed to send tasks to the executor) and found it locked by another process. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Scheduler Critical Section Duration** |`AirflowIntegrationRuntimeSchedulerCriticalSectionDuration` |Milliseconds spent in the critical section of scheduler loop ΓÇô only a single scheduler can enter this loop at a time. |Milliseconds |Average |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Scheduler Critical Section Duration** |`AirflowIntegrationRuntimeSchedulerCriticalSectionDuration` |Milliseconds spent in the critical section of a scheduler loop. Only a single scheduler can enter this loop at a time. |Milliseconds |Average |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Scheduler Failed SLA Email Attempts** |`AirflowIntegrationRuntimeSchedulerFailedSLAEmailAttempts` |Number of failed SLA miss email notification attempts. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Scheduler Heartbeats** |`AirflowIntegrationRuntimeSchedulerHeartbeat` |Scheduler heartbeats. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Scheduler Orphaned Tasks Adopted** |`AirflowIntegrationRuntimeSchedulerOrphanedTasksAdopted` |Number of Orphaned tasks adopted by the Scheduler. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Scheduler Orphaned Tasks Cleared** |`AirflowIntegrationRuntimeSchedulerOrphanedTasksCleared` |Number of Orphaned tasks cleared by the Scheduler. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Scheduler Orphaned Tasks Adopted** |`AirflowIntegrationRuntimeSchedulerOrphanedTasksAdopted` |Number of orphaned tasks adopted by the Scheduler. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Scheduler Orphaned Tasks Cleared** |`AirflowIntegrationRuntimeSchedulerOrphanedTasksCleared` |Number of orphaned tasks cleared by the Scheduler. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Scheduler Tasks Executable** |`AirflowIntegrationRuntimeSchedulerTasksExecutable` |Number of tasks that are ready for execution (set to queued) with respect to pool limits, DAG concurrency, executor state, and priority. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Scheduler Tasks Killed Externally** |`AirflowIntegrationRuntimeSchedulerTasksKilledExternally` |Number of tasks killed externally. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Scheduler Tasks Running** |`AirflowIntegrationRuntimeSchedulerTasksRunning` | |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Scheduler Tasks Starving** |`AirflowIntegrationRuntimeSchedulerTasksStarving` |Number of tasks that cannot be scheduled because of no open slot in pool. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Scheduler Tasks Starving** |`AirflowIntegrationRuntimeSchedulerTasksStarving` |Number of tasks that can't be scheduled because of no open slot in the pool. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Started Task Instances** |`AirflowIntegrationRuntimeStartedTaskInstances` | |Count |Total |`IntegrationRuntimeName`, `DagId`, `TaskId`|PT1M |No|
-|**Airflow Integration Runtime Task Instance Created Using Operator** |`AirflowIntegrationRuntimeTaskInstanceCreatedUsingOperator` |Number of tasks instances created for a given Operator. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
+|**Airflow Integration Runtime Task Instance Created Using Operator** |`AirflowIntegrationRuntimeTaskInstanceCreatedUsingOperator` |Number of task instances created for a specific operator. |Count |Total |`IntegrationRuntimeName`, `Operator`|PT1M |No|
|**Airflow Integration Runtime Task Instance Duration** |`AirflowIntegrationRuntimeTaskInstanceDuration` | |Milliseconds |Average |`IntegrationRuntimeName`, `DagId`, `TaskID`|PT1M |No|
-|**Airflow Integration Runtime Task Instance Failures** |`AirflowIntegrationRuntimeTaskInstanceFailures` |Overall task instances failures |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Task Instance Failures** |`AirflowIntegrationRuntimeTaskInstanceFailures` |Overall task instances failures. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Task Instance Finished** |`AirflowIntegrationRuntimeTaskInstanceFinished` |Overall task instances finished. |Count |Total |`IntegrationRuntimeName`, `DagId`, `TaskId`, `State`|PT1M |No|
|**Airflow Integration Runtime Task Instance Previously Succeeded** |`AirflowIntegrationRuntimeTaskInstancePreviouslySucceeded` |Number of previously succeeded task instances. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Task Instance Successes** |`AirflowIntegrationRuntimeTaskInstanceSuccesses` |Overall task instances successes. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Task Removed From DAG** |`AirflowIntegrationRuntimeTaskRemovedFromDAG` |Number of tasks removed for a given dag (i.e. task no longer exists in DAG). |Count |Total |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime Task Restored To DAG** |`AirflowIntegrationRuntimeTaskRestoredToDAG` |Number of tasks restored for a given dag (i.e. task instance which was previously in REMOVED state in the DB is added to DAG file). |Count |Total |`IntegrationRuntimeName`, `DagId`|PT1M |No|
-|**Airflow Integration Runtime Triggers Blocked Main Thread** |`AirflowIntegrationRuntimeTriggersBlockedMainThread` |Number of triggers that blocked the main thread (likely due to not being fully asynchronous). |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Task Instance Successes** |`AirflowIntegrationRuntimeTaskInstanceSuccesses` |Overall task instance successes. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Task Removed From DAG** |`AirflowIntegrationRuntimeTaskRemovedFromDAG` |Number of tasks removed for a specific DAG. (That is, the task no longer exists in DAG.) |Count |Total |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime Task Restored To DAG** |`AirflowIntegrationRuntimeTaskRestoredToDAG` |Number of tasks restored for a specific DAG. (That is, a task instance that was previously in a REMOVED state in the database is added to a DAG file.) |Count |Total |`IntegrationRuntimeName`, `DagId`|PT1M |No|
+|**Airflow Integration Runtime Triggers Blocked Main Thread** |`AirflowIntegrationRuntimeTriggersBlockedMainThread` |Number of triggers that blocked the main thread (likely because they weren't fully asynchronous). |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Triggers Failed** |`AirflowIntegrationRuntimeTriggersFailed` |Number of triggers that errored before they could fire an event. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
|**Airflow Integration Runtime Triggers Running** |`AirflowIntegrationRuntimeTriggersRunning` |Number of triggers currently running for a triggerer (described by hostname). |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Triggers Succeeded** |`AirflowIntegrationRuntimeTriggersSucceeded` |Number of triggers that have fired at least one event. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-|**Airflow Integration Runtime Zombie Tasks Killed** |`AirflowIntegrationRuntimeZombiesKilled` |Zombie tasks killed |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-
+|**Airflow Integration Runtime Triggers Succeeded** |`AirflowIntegrationRuntimeTriggersSucceeded` |Number of triggers that fired at least one event. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
+|**Airflow Integration Runtime Zombie Tasks Killed** |`AirflowIntegrationRuntimeZombiesKilled` |Zombie tasks killed. |Count |Total |`IntegrationRuntimeName`|PT1M |No|
-For more information: [https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics](/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics)
+For more information, see [Supported metrics for Microsoft.DataFactory/factories](/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics).
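Beyond the portal, these metrics can also be read programmatically. The following is a minimal sketch, not part of the article, assuming the `azure-identity` and `azure-monitor-query` Python packages and a hypothetical Data Factory resource ID; the metric name, `PT1M` grain, and Total aggregation come from the table above.

```python
# Minimal sketch: read one Airflow integration runtime metric via azure-monitor-query.
# The resource ID below is a placeholder; substitute your own Data Factory resource.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataFactory/factories/<factory-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["AirflowIntegrationRuntimeExecutorOpenSlots"],
    timespan=timedelta(hours=1),        # look back one hour
    granularity=timedelta(minutes=1),   # PT1M, per the table above
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```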
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Unusual data exploration in a storage account**<br>(Storage.Blob_DataExplorationAnomaly<br>Storage.Files_DataExplorationAnomaly) | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Execution | High/Medium |
| **Unusual deletion in a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that one or more unexpected delete operations has occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | High/Medium |
| **Unusual unauthenticated public access to a sensitive blob container (Preview)**<br>Storage.Blob_AnonymousAccessAnomaly.Sensitive | The alert indicates that someone accessed a blob container with sensitive data in the storage account without authentication, using an external (public) IP address. This access is suspicious since the blob container is open to public access and is typically only accessed with authentication from internal networks (private IP addresses). This access could indicate that the blob container's access level is misconfigured, and a malicious actor may have exploited the public access. The security alert includes the discovered sensitive information context (scanning time, classification label, information types, and file types). Learn more on sensitive data threat detection. <br> Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Initial Access | High |
-| **Unusual amount of data extracted from a sensitive blob container (Preview)**<br>Storage.Blob_DataExfiltration.AmountOfDataAnomaly.Sensitive | The alert indicates that someone has extracted an unusually large number of blobs from a blob container with sensitive data in the storage account.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | Medium |
-| **Unusual number of blobs extracted from a sensitive blob container (Preview)**<br>Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly.Sensitive | The alert indicates that someone has extracted an unusually large amount of data from a blob container with sensitive data in the storage account. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | |
+| **Unusual amount of data extracted from a sensitive blob container (Preview)**<br>Storage.Blob_DataExfiltration.AmountOfDataAnomaly.Sensitive | The alert indicates that someone has extracted an unusually large amount of data from a blob container with sensitive data in the storage account.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | Medium |
+| **Unusual number of blobs extracted from a sensitive blob container (Preview)**<br>Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly.Sensitive | The alert indicates that someone has extracted an unusually large number of blobs from a blob container with sensitive data in the storage account.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | |
| **Access from a known suspicious application to a sensitive blob container (Preview)**<br>Storage.Blob_SuspiciousApp.Sensitive | The alert indicates that someone with a known suspicious application accessed a blob container with sensitive data in the storage account and performed authenticated operations. <br>The access may indicate that a threat actor obtained credentials to access the storage account by using a known suspicious application. However, the access could also indicate a penetration test carried out in the organization. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Initial Access | High |
| **Access from a known suspicious IP address to a sensitive blob container (Preview)**<br>Storage.Blob_SuspiciousIp.Sensitive | The alert indicates that someone accessed a blob container with sensitive data in the storage account from a known suspicious IP address associated with threat intel by Microsoft Threat Intelligence. Since the access was authenticated, it's possible that the credentials allowing access to this storage account were compromised. <br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Pre-Attack | High |
| **Access from a Tor exit node to a sensitive blob container (Preview)**<br>Storage.Blob_TorAnomaly.Sensitive | The alert indicates that someone with an IP address known to be a Tor exit node accessed a blob container with sensitive data in the storage account with authenticated access. Authenticated access from a Tor exit node strongly indicates that the actor is attempting to remain anonymous for possible malicious intent. Since the access was authenticated, it's possible that the credentials allowing access to this storage account were compromised. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Pre-Attack | High |
defender-for-cloud Common Questions Microsoft Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/common-questions-microsoft-defender-vulnerability-management.md
There's no difference for coverage of language specific packages between the Qua
- [Full list of supported packages and their versions for Microsoft Defender Vulnerability Management](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)
-- [Full list of supported packages and their versions for Qualys](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys)
+- [Full list of supported packages and their versions for Qualys](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated)
## Are there any other capabilities that are unique to the Microsoft Defender Vulnerability Management powered offering?
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Vulnerability assessment for Azure powered by Qualys (Deprecated)
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 12/25/2023 Last updated : 01/10/2024
In every subscription where this capability is enabled, all images stored in ACR
Container vulnerability assessment powered by Qualys has the following capabilities:
-- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys).
+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated).
-- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys).
+- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated).
- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
AWS Systems Manager manages autoprovisioning by using the SSM Agent. Some Amazon
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service.
-**You must have the SSM Agent for auto provisioning Arc agent on EC2 machines. If the SSM doesn't exist, or is removed from the EC2, the Arc provisioning won’t be able to proceed.**
+**You must have the SSM Agent for auto provisioning Arc agent on EC2 machines. If the SSM doesn't exist, or is removed from the EC2, the Arc provisioning won't be able to proceed.**
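As a hedged illustration of this prerequisite, the sketch below uses boto3 to attach the `AmazonSSMManagedInstanceCore` managed policy to a hypothetical instance-profile role and then lists the instances currently reporting to Systems Manager. The role name is an assumption; substitute the role your EC2 instances actually use.

```python
# Minimal sketch (boto3): attach the SSM core policy and verify agent check-in.
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Attach the managed policy that core Systems Manager functionality requires.
iam.attach_role_policy(
    RoleName="<your-ec2-instance-role>",  # hypothetical role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# EC2 machines that don't appear here aren't reporting to Systems Manager,
# so Arc provisioning can't proceed for them.
for info in ssm.describe_instance_information()["InstanceInformationList"]:
    print(info["InstanceId"], info["PingStatus"])
```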
> [!NOTE]
> As part of the cloud formation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the cloud formation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the cloud formation.
Connecting your AWS account is part of the multicloud experience available in Mi
- [Protect all of your resources with Defender for Cloud](enable-all-plans.md).
- Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md).
- Get answers to [common questions](faq-general.yml) about onboarding your AWS account.
-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshoot-connectors).
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
The **DevOps security** blade shows your onboarded repositories grouped by Organ
- Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
- Configure the [Microsoft Security DevOps task in your Azure Pipelines](azure-devops-extension.md).
-- [Troubleshoot your Azure DevOps connector](troubleshooting-guide.md#troubleshoot-azure-devops-organization-connector-issues)
+- [Troubleshoot your Azure DevOps connector](troubleshooting-guide.md#troubleshoot-connector-problems-for-the-azure-devops-organization)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Connecting your GCP project is part of the multicloud experience available in Mi
- [Protect all of your resources with Defender for Cloud](enable-all-plans.md).
- Set up your [on-premises machines](quickstart-onboard-machines.md) and [AWS account](quickstart-onboard-aws.md).
-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshoot-connectors).
- Get answers to [common questions](faq-general.yml) about connecting your GCP project.
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for Container capabilities in Micros
> [!NOTE]
> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Azure
-
-| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Plans | Azure clouds availability |
-|--|--|--|--|--|--|--|--|
-| Security posture management | [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#security-posture-management) | AKS | GA | GA | Agentless | Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
-| Security posture management | Comprehensive inventory capabilities | ACR, AKS | GA | GA | Agentless| Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
-| Security posture management | Attack path analysis | ACR, AKS | GA | - | Agentless | Defender CSPM | Azure commercial clouds |
-| Security posture management | Enhanced risk-hunting | ACR, AKS | GA | - | Agentless | Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
-| Security posture management | [Control plane hardening](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Security posture management | [Kubernetes data plane hardening](kubernetes-workload-protections.md) |AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Security posture management | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) | Agentless registry scan (powered by Qualys) <BR> [Supported OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) | Agentless registry scan (powered by Qualys) <BR> [Supported language packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) | Agentless/agent-based runtime scan(powered by Qualys) [OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds |
-| [Vulnerability assessment](agentless-vulnerability-assessment-azure.md) | Agentless registry scan (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| ACR, Private ACR | GA | Preview | Agentless | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment](agentless-vulnerability-assessment-azure.md) | Agentless/agent-based runtime (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| AKS | GA | Preview | Agentless **OR/AND** Defender agent | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet |
-| Runtime threat protection | [Control plane](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Runtime threat protection | Workload | AKS | GA | - | Defender agent | Defender for Containers | Commercial clouds |
-| Deployment & monitoring | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Deployment & monitoring | Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Deployment & monitoring | Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-
-### Registries and images support for Azure - vulnerability assessment powered by Qualys
+## Azure
+
+The following tables describe the features for each domain in Defender for Containers:
+
+### Security posture management
+
+| Feature | Description | Supported resources | Linux release state | Windows release state | Enablement method | Agent | Plans | Azure clouds availability |
+|--|--|--|--|--|--|--|--|--|
+| [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#security-posture-management) | Provides zero-footprint, API-based discovery of Kubernetes clusters, their configurations, and their deployments. | AKS | GA | GA | Enable **Agentless discovery on Kubernetes** toggle | Agentless | Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
+| Comprehensive inventory capabilities | Enables you to explore resources, pods, services, repositories, images, and configurations through [security explorer](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) to easily monitor and manage your assets. | ACR, AKS | GA | GA | Enable **Agentless discovery on Kubernetes** toggle | Agentless| Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
+| Attack path analysis | A graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment. | ACR, AKS | GA | - | Activated with plan | Agentless | Defender CSPM (requires Agentless discovery for Kubernetes to be enabled) | Azure commercial clouds |
+| Enhanced risk-hunting | Enables security admins to actively hunt for posture issues in their containerized assets through queries (built-in and custom) and [security insights](attack-path-reference.md#insights) in the [security explorer](how-to-manage-cloud-security-explorer.md). | ACR, AKS | GA | - | Enable **Agentless discovery on Kubernetes** toggle | Agentless | Defender for Containers **OR** Defender CSPM | Azure commercial clouds |
+| [Control plane hardening](defender-for-containers-architecture.md) | Continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations that are available on Defender for Cloud's Recommendations page. The recommendations let you investigate and remediate issues. | ACR, AKS | GA | Preview | Activated with plan | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| [Kubernetes data plane hardening](kubernetes-workload-protections.md) |Protect workloads of your Kubernetes containers with best practice recommendations. |AKS | GA | - | Enable **Azure Policy for Kubernetes** toggle | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Docker CIS | Docker CIS benchmark | VM, Virtual Machine Scale Set | GA | - | Enabled with plan | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |
+
+### Vulnerability assessment
+
+| Feature | Description | Supported resources | Linux release state | Windows release state | Enablement method | Agent | Plans | Azure clouds availability |
+|--|--|--|--|--|--|--|--|--|
+| Agentless registry scan (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| Vulnerability assessment for images in ACR | ACR, Private ACR | GA | Preview | Enable **Agentless container vulnerability assessment** toggle | Agentless | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet |
+| Agentless/agent-based runtime (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| Vulnerability assessment for running images in AKS | AKS | GA | Preview | Enable **Agentless container vulnerability assessment** toggle | Agentless (Requires Agentless discovery for Kubernetes) **OR/AND** Defender agent | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet |
+| Deprecated: Agentless/agent-based runtime scan (powered by Qualys) [OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for running images in AKS | AKS | GA | Preview | Activated with plan | Defender agent | Defender for Containers | Commercial clouds |
+| Deprecated: Agentless registry scan (powered by Qualys) <BR>[Supported OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for images in ACR | ACR, Private ACR | GA | Preview | Activated with plan | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Deprecated: Agentless registry scan (powered by Qualys) <BR>[Supported language packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for images in ACR | ACR, Private ACR | Preview | - | Activated with plan | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+
+### Runtime threat protection
+
+| Feature | Description | Supported resources | Linux release state | Windows release state | Enablement method | Agent | Plans | Azure clouds availability |
+|--|--|--|--|--|--|--|--|--|
+| [Control plane](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) | Detection of suspicious Kubernetes activity, based on the Kubernetes audit trail | AKS | GA | GA | Enabled with plan | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Workload | Detection of suspicious Kubernetes activity at the cluster, node, and workload levels | AKS | GA | - | Enable **Defender Agent in Azure** toggle **OR** deploy Defender agent on individual clusters | Defender agent | Defender for Containers | Commercial clouds |
+
+### Deployment & monitoring
+
+| Feature | Description | Supported resources | Linux release state | Windows release state | Enablement method | Agent | Plans | Azure clouds availability |
+|--|--|--|--|--|--|--|--|--|
+| Discovery of unprotected clusters | Discovering Kubernetes clusters missing Defender agents | AKS | GA | GA | Enabled with plan | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Defender agent auto provisioning | Automatic deployment of Defender agent | AKS | GA | - | Enable **Defender Agent in Azure** toggle | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Azure Policy for Kubernetes auto provisioning | Automatic deployment of the Azure Policy agent for Kubernetes | AKS | GA | - | Enable **Azure Policy for Kubernetes** toggle | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+
+### Registries and images support for Azure - vulnerability assessment powered by Qualys (Deprecated)
| Aspect | Details |
|--|--|
Allowing data ingestion to occur only through Private Link Scope on your workspa
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
-## AWS
+## AWS
| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
+### Outbound proxy support - AWS
Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
-## GCP
+## GCP
| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
Outbound proxy without authentication and outbound proxy with basic authenticati
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
+### Outbound proxy support - GCP
Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Title: Troubleshooting guide
-description: This guide is for IT professionals, security analysts, and cloud admins who need to troubleshoot Microsoft Defender for Cloud related issues.
+ Title: Microsoft Defender for Cloud troubleshooting guide
+description: This guide is for IT professionals, security analysts, and cloud admins who need to troubleshoot problems related to Microsoft Defender for Cloud.
Last updated 06/18/2023
-# Microsoft Defender for Cloud Troubleshooting Guide
+# Microsoft Defender for Cloud troubleshooting guide
-This guide is for information technology (IT) professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot Defender for Cloud related issues.
+This guide is for IT professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot problems related to Microsoft Defender for Cloud.
> [!TIP]
-> When you're facing an issue or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is good place to look for solutions:
+> When you're facing a problem or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is a good place to look for solutions.
>
-> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Defender for Cloud's 'Diagnose and solve problems' page":::
+> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Screenshot of the Azure portal that shows the page for diagnosing and solving problems in Defender for Cloud.":::
-## Use the Audit Log to investigate issues
+## Use the audit log to investigate problems
-The first place to look for troubleshooting information is the [Audit Log records](../azure-monitor/essentials/platform-logs-overview.md) records for the failed component. In the audit logs, you can see details including:
+The first place to look for troubleshooting information is the [audit log](../azure-monitor/essentials/platform-logs-overview.md) for the failed component. In the audit log, you can see details like:
-- Which operations were performed
-- Who initiated the operation
-- When the operation occurred
-- The status of the operation
+- Which operations were performed.
+- Who initiated the operation.
+- When the operation occurred.
+- The status of the operation.
-The audit log contains all write operations (PUT, POST, DELETE) performed on your resources, but not read operations (GET).
+The audit log contains all write operations (`PUT`, `POST`, `DELETE`) performed on your resources, but not read operations (`GET`).
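If you prefer to pull these records from code rather than the portal, the following is a minimal sketch, not part of the article, using the `azure-mgmt-monitor` Python package; the subscription ID, resource group, and start date are placeholders.

```python
# Minimal sketch: list recent write operations from the activity (audit) log.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The activity log records write operations (PUT, POST, DELETE), not reads (GET).
odata_filter = (
    "eventTimestamp ge '2024-01-01T00:00:00Z' "
    "and resourceGroupName eq '<resource-group>'"
)

for event in client.activity_logs.list(filter=odata_filter):
    print(
        event.event_timestamp,      # when the operation occurred
        event.caller,               # who initiated it
        event.operation_name.value, # which operation was performed
        event.status.value,         # the status of the operation
    )
```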
-## Troubleshooting the native multicloud connector
+## Troubleshoot connectors
-Defender for Cloud uses connectors to collect monitoring data from AWS accounts and GCP projects. If you’re experiencing issues with the connector or you don't see data from AWS or GCP, we recommend that you review these troubleshooting tips:
+Defender for Cloud uses connectors to collect monitoring data from Amazon Web Services (AWS) accounts and Google Cloud Platform (GCP) projects. If you're experiencing problems with the connectors or you don't see data from AWS or GCP, review the following troubleshooting tips.
-Common connector issues:
+### Tips for common connector problems
-- Make sure that the subscription associated with the connector is selected in the **subscriptions filter**, located in the **Directories + subscriptions** section of the Azure portal.
-- Standards should be assigned on the security connector. To check, go to the **Environment settings** in the Defender for Cloud left menu, select the connector, and select **Settings**. There should be standards assigned. You can select the three dots to check if you have permissions to assign standards.
-- Connector resource should be present in Azure Resource Graph (ARG). Use the following ARG query to check: `resources | where ['type'] =~ "microsoft.security/securityconnectors"`
+- Make sure that the subscription associated with the connector is selected in the subscription filter located in the **Directories + subscriptions** section of the Azure portal.
+- Standards should be assigned on the security connector. To check, go to **Environment settings** on the Defender for Cloud left menu, select the connector, and then select **Settings**. If no standards are assigned, select the three dots to check if you have permissions to assign standards.
+- A connector resource should be present in Azure Resource Graph. Use the following Resource Graph query to check: `resources | where ['type'] =~ "microsoft.security/securityconnectors"`. (A scripted version of this check appears after this list.)
- Make sure that sending Kubernetes audit logs is enabled on the AWS or GCP connector so that you can get [threat detection alerts for the control plane](alerts-reference.md#alerts-k8scluster).
-- Make sure that The Defender agent and the Azure Policy for Kubernetes Arc extensions were installed successfully to your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install the agent with the following Defender for Cloud recommendations:
+- Make sure that the Microsoft Defender agent and the Azure Policy for Azure Arc-enabled Kubernetes extensions were installed successfully to your Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install the agent with the following Defender for Cloud recommendations:
- **EKS clusters should have Microsoft Defender's extension for Azure Arc installed**
- **GKE clusters should have Microsoft Defender's extension for Azure Arc installed**
- **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed**
- **GKE clusters should have the Azure Policy extension installed**
-- If you’re experiencing issues with deleting the AWS or GCP connector, check if you have a lock (in this case there might be an error in the Azure Activity log, hinting at the presence of a lock).
+- If you're experiencing problems with deleting the AWS or GCP connector, check if you have a lock. An error in the Azure activity log might hint at the presence of a lock.
- Check that workloads exist in the AWS account or GCP project.
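The Resource Graph check mentioned in the list can also be scripted. This is a minimal sketch, not from the article, assuming the `azure-mgmt-resourcegraph` Python package and a hypothetical subscription ID:

```python
# Minimal sketch: confirm a securityconnectors resource exists via Resource Graph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder subscription
    query="resources | where ['type'] =~ 'microsoft.security/securityconnectors'",
)

result = client.resources(request)
print(f"{result.total_records} security connector(s) found")
for row in result.data:
    print(row["name"], row["location"])
```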
-AWS connector issues:
+### Tips for AWS connector problems
-- Make sure that the CloudFormation template deployment completed successfully.
-- You need to wait at least 12 hours since the AWS root account was created.
-- Make sure that EKS clusters are successfully connected to Arc-enabled Kubernetes.
-- If you don't see AWS data in Defender for Cloud, make sure that the AWS resources required to send data to Defender for Cloud exist in the AWS account.
+- Make sure that the CloudFormation template deployment finished successfully.
+- Wait at least 12 hours after creation of the AWS root account.
+- Make sure that EKS clusters are successfully connected to Azure Arc-enabled Kubernetes.
+- If you don't see AWS data in Defender for Cloud, make sure that the required AWS resources for sending data to Defender for Cloud exist in the AWS account.
-Defender API calls to AWS:
+#### Cost impact of API calls to AWS
-Cost impact: When you onboard your AWS single or management account, our Discovery service initiates an immediate scan of your environment by executing API calls to various service endpoints in order to retrieve all resources that we secure.
+When you onboard your AWS single or management account, the discovery service in Defender for Cloud starts an immediate scan of your environment. The discovery service executes API calls to various service endpoints in order to retrieve all resources that Azure helps secure.
-Following this initial scan, the service will continue to periodically scan your environment at the interval that you configured during onboarding. It's important to note that in AWS, each API call to the account generates a lookup event that is recorded in the CloudTrail resource.
+After this initial scan, the service continues to periodically scan your environment at the interval that you configured during onboarding. In AWS, each API call to the account generates a lookup event that's recorded in the CloudTrail resource. The CloudTrail resource incurs costs. For pricing details, see the [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) page on the Amazon AWS site.
-The CloudTrail resource incurs costs, and the pricing details can be found in [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).
+If you connected your CloudTrail to GuardDuty, you're also responsible for associated costs. You can find these costs in the [GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html) on the Amazon AWS site.
-Furthermore, if you have connected your CloudTrail to GuardDuty, you're also responsible for associated costs, which can be found in the [GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html).
+#### Getting the number of native API calls
-**Getting the number of native API calls executed by Defender for Cloud**:
+There are two ways to get the number of calls that Defender for Cloud made:
-There are two ways to get the number of calls made by Defender for Cloud and both rely on querying AWS CloudTrail logs:
+- Use an existing Athena table or create a new one. For more information, see [Querying AWS CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html) on the Amazon AWS site.
+- Use an existing event data store or create a new one. For more information, see [Working with AWS CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html) on the Amazon AWS site.
-- **CloudTrail and Athena tables**:
+Both methods rely on querying AWS CloudTrail logs.
-1. Use an existing or create a new *Athena table*. For more information, see [Querying AWS CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html).
+To get the number of calls, go to the Athena table or the event data store and use one of the following predefined queries, according to your needs. Replace `<TABLE-NAME>` with the name of the Athena table or the ID of the event data store. (A sketch of running one of these queries programmatically appears after the list.)
-1. Navigate to the above Athena table and use one of the below predefined queries per your needs.
+- List the number of overall API calls by Defender for Cloud:
-- **CloudTrail lake**:
+ ```sql
+ SELECT COUNT(*) AS overallApiCallsCount FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>'
+ ```
-1. Use an existing or create a new *Event Data Store*. For more information, see [Working with AWS CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html).
+- List the number of overall API calls by Defender for Cloud aggregated by day:
-1. Navigate to the above lake and use one of the below predefined queries per your needs.
+ ```sql
+ SELECT DATE(eventTime) AS apiCallsDate, COUNT(*) AS apiCallsCountByRegion FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY DATE(eventTime)
+ ```
- Sample Queries:
+- List the number of overall API calls by Defender for Cloud aggregated by event name:
- - List the number of overall API calls by Defender for Cloud:
+ ```sql
+ SELECT eventName, COUNT(*) AS apiCallsCountByEventName FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY eventName
+ ```
- ```sql
- SELECT COUNT(*) AS overallApiCallsCount FROM <TABLE-NAME>
- WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
- AND eventTime > TIMESTAMP '<DATETIME>'
- ```
+- List the number of overall API calls by Defender for Cloud aggregated by region:
- - List the number of overall API calls by Defender for Cloud aggregated by day:
+ ```sql
+ SELECT awsRegion, COUNT(*) AS apiCallsCountByRegion FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY awsRegion
+ ```
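These queries can also be run programmatically against Athena. The following is a minimal sketch, not part of the article, assuming boto3, a hypothetical CloudTrail table named `cloudtrail_logs` in the `default` database, and an S3 bucket for results; the account and tenant placeholders still need your real values.

```python
# Minimal sketch: run the first query above with boto3's Athena client.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # assumed region

query = """
SELECT COUNT(*) AS overallApiCallsCount FROM cloudtrail_logs
WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
AND eventTime > TIMESTAMP '2024-01-01 00:00:00'
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://<your-results-bucket>/athena/"},
)
query_id = run["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header
```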
- ```sql
- SELECT DATE(eventTime) AS apiCallsDate, COUNT(*) AS apiCallsCountByRegion FROM <TABLE-NAME>
- WHERE userIdentity.arn LIKE 'arn:aws:sts:: <YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
- AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY DATE(eventTime)
- ```
+### Tips for GCP connector problems
- - List the number of overall API calls by Defender for Cloud aggregated by event name:
-
- ```sql
- SELECT eventName, COUNT(*) AS apiCallsCountByEventName FROM <TABLE-NAME>
- WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
- AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY eventName
- ```
-
- - List the number of overall API calls by Defender for Cloud aggregated by region:
-
- ```sql
- SELECT awsRegion, COUNT(*) AS apiCallsCountByRegion FROM <TABLE-NAME>
- WHERE userIdentity.arn LIKE 'arn:aws:sts::120589537074:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
- AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY awsRegion
- ```
-
- - The TABLE-NAME is Athena table or Event data store ID
-
-GCP connector issues:
-
-- Make sure that the GCP Cloud Shell script completed successfully.
-- Make sure that GKE clusters are successfully connected to Arc-enabled Kubernetes.
+- Make sure that the GCP Cloud Shell script finished successfully.
+- Make sure that GKE clusters are successfully connected to Azure Arc-enabled Kubernetes.
- Make sure that Azure Arc endpoints are in the firewall allowlist. The GCP connector makes API calls to these endpoints to fetch the necessary onboarding files.
-- If the onboarding of GCP projects failed, make sure you have “compute.regions.list” permission and Microsoft Entra permission to create the service principle used as part of the onboarding process. Make sure that the GCP resources `WorkloadIdentityPoolId`, `WorkloadIdentityProviderId`, and `ServiceAccountEmail` are created in the GCP project.
+- If the onboarding of GCP projects fails, make sure you have `compute.regions.list` permission and Microsoft Entra permission to create the service principal for the onboarding process. Make sure that the GCP resources `WorkloadIdentityPoolId`, `WorkloadIdentityProviderId`, and `ServiceAccountEmail` are created in the GCP project.
-Defender API calls to GCP:
+#### Defender API calls to GCP
-When you onboard your GCP single project or organization, our Discovery service initiates an immediate scan of your environment by executing API calls to various service endpoints in order to retrieve all resources that we secure.
+When you onboard your GCP single project or organization, the discovery service in Defender for Cloud starts an immediate scan of your environment. The discovery service executes API calls to various service endpoints in order to retrieve all resources that Azure helps secure.
-Following this initial scan, the service will continue to periodically scan your environment at the interval that you configured during onboarding.
+After this initial scan, the service continues to periodically scan your environment at the interval that you configured during onboarding.
-**Getting the number of native API calls executed by Defender for Cloud**:
+To get the number of native API calls that Defender for Cloud executed:
- 1. Go to **Logging** -> **Log Explorer**
+1. Go to **Logging** > **Log Explorer**.
- 1. Filter the dates as you wish (for example, 1d)
+1. Filter the dates as you want (for example, **1d**).
- 1. To show API calls executed by Defender for Cloud, run this query:
+1. To show API calls that Defender for Cloud executed, run this query:
- ```json
- protoPayload.authenticationInfo.principalEmail : "microsoft-defender"
- ```
+ ```json
+ protoPayload.authenticationInfo.principalEmail : "microsoft-defender"
+ ```
Refer to the histogram to see the number of calls over time.
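The same filter can be run from code instead of the Log Explorer UI. This is a minimal sketch, not part of the article, assuming the `google-cloud-logging` Python package and a hypothetical project ID:

```python
# Minimal sketch: count Defender for Cloud API calls in the last day.
from datetime import datetime, timedelta, timezone

from google.cloud import logging

client = logging.Client(project="<your-gcp-project>")  # placeholder project

since = (datetime.now(timezone.utc) - timedelta(days=1)).isoformat()
log_filter = (
    'protoPayload.authenticationInfo.principalEmail:"microsoft-defender" '
    f'AND timestamp>="{since}"'
)

count = sum(1 for _ in client.list_entries(filter_=log_filter))
print(f"{count} Defender for Cloud API calls in the last day")
```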
-## Troubleshooting the Log Analytics agent
+## Troubleshoot the Log Analytics agent
Defender for Cloud uses the Log Analytics agent to [collect and store data](./monitoring-components.md#log-analytics-agent). The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
-Alert types:
+The alert types are:
- Virtual Machine Behavioral Analysis (VMBA)
-- Network Analysis
-- SQL Database and Azure Synapse Analytics Analysis
-- Contextual Information
+- Network analysis
+- Azure SQL Database and Azure Synapse Analytics analysis
+- Contextual information
-Depending on the alert types, customers can gather the necessary information to investigate the alert by using the following resources:
+Depending on the alert type, you can gather the necessary information to investigate an alert by using the following resources:
-- Security logs in the Virtual Machine (VM) event viewer in Windows
-- AuditD in Linux
-- The Azure activity logs and the enable diagnostic logs on the attack resource.
+- Security logs in the virtual machine (VM) event viewer in Windows
+- The audit daemon (`auditd`) in Linux
+- The Azure activity logs and the enabled diagnostic logs on the attack resource
-Customers can share feedback for the alert description and relevance. Navigate to the alert itself, select the **Was This Useful** button, select the reason, and then enter a comment to explain the feedback. We consistently monitor this feedback channel to improve our alerts.
+You can share feedback for the alert description and relevance. Go to the alert, select the **Was This Useful** button, select the reason, and then enter a comment to explain the feedback. We consistently monitor this feedback channel to improve our alerts.
### Check the Log Analytics agent processes and versions
-Just like the Azure Monitor, Defender for Cloud uses the Log Analytics agent to collect security data from your Azure virtual machines. After data collection is enabled and the agent is correctly installed in the target machine, the `HealthService.exe` process should be running.
+Just like Azure Monitor, Defender for Cloud uses the Log Analytics agent to collect security data from your Azure virtual machines. After you enable data collection and correctly install the agent in the target machine, the `HealthService.exe` process should be running.
-Open the services management console (services.msc), to make sure that the Log Analytics agent service running as shown:
+Open the services management console (*services.msc*) to make sure that the Log Analytics agent service is running.
:::image type="content" source="./media/troubleshooting-guide/troubleshooting-guide-fig5.png" alt-text="Screenshot of the Log Analytics agent service in Task Manager.":::
-To see which version of the agent you have, open **Task Manager**, in the **Processes** tab locate the **Log Analytics agent Service**, right-click on it and select **Properties**. In the **Details** tab, look the file version as shown:
+To see which version of the agent you have, open Task Manager. On the **Processes** tab, locate the Log Analytics agent service, right-click it, and then select **Properties**. On the **Details** tab, look for the file version.
-### Log Analytics agent installation scenarios
+### Check installation scenarios for the Log Analytics agent
-There are two installation scenarios that can produce different results when installing the Log Analytics agent on your computer. The supported scenarios are:
+There are two installation scenarios that can produce different results when you're installing the Log Analytics agent on your computer. The supported scenarios are:
-- **Agent installed automatically by Defender for Cloud**: You can view the alerts in Defender for Cloud and Log search. You'll receive email notifications to the email address that was configured in the security policy for the subscription the resource belongs to.
+- **Agent installed automatically by Defender for Cloud**: You can view the alerts in Defender for Cloud and log search. You receive email notifications at the email address that you configured in the security policy for the subscription that the resource belongs to.
-- **Agent manually installed on a VM located in Azure**: in this scenario, if you're using agents downloaded and installed manually prior to February 2017, you can view the alerts in the Defender for Cloud portal only if you filter on the subscription the workspace belongs to. If you filter on the subscription the resource belongs to, you won't see any alerts. You'll receive email notifications to the email address that was configured in the security policy for the subscription the workspace belongs to.
+- **Agent manually installed on a VM located in Azure**: In this scenario, if you're using agents downloaded and installed manually before February 2017, you can view the alerts in the Defender for Cloud portal only if you filter on the subscription that the *workspace* belongs to. If you filter on the subscription that the *resource* belongs to, you won't see any alerts. You receive email notifications at the email address that you configured in the security policy for the subscription that the workspace belongs to.
-> [!NOTE]
-> To avoid the behavior explained in the second scenario, make sure you download the latest version of the agent.
+ To avoid the filtering problem, be sure to download the latest version of the agent.
<a name="mon-network-req"></a>
-### Monitoring agent network connectivity issues
+### Monitor network connectivity problems for the agent
-For agents to connect to and register with Defender for Cloud, they must have access to the DNS addresses and network ports for Azure network resources.
+For agents to connect to and register with Defender for Cloud, they must have access to the DNS addresses and network ports for Azure network resources. To enable this access, take these actions:
-- When you use proxy servers, you need to make sure that the appropriate proxy server resources are configured correctly in the [agent settings](../azure-monitor/agents/agent-windows.md).
-- You need to configure your network firewalls to permit access to Log Analytics.
+- When you use proxy servers, make sure that the appropriate proxy server resources are configured correctly in the [agent settings](../azure-monitor/agents/agent-windows.md).
+- Configure your network firewalls to permit access to Log Analytics.
The Azure network resources are:
-| Agent Resource | Ports | Bypass HTTPS inspection |
+| Agent resource | Port | Bypass HTTPS inspection |
|---|---|---|
-| *.ods.opinsights.azure.com | 443 | Yes |
-| *.oms.opinsights.azure.com | 443 | Yes |
-| *.blob.core.windows.net | 443 | Yes |
-| *.azure-automation.net | 443 | Yes |
+| `*.ods.opinsights.azure.com` | 443 | Yes |
+| `*.oms.opinsights.azure.com` | 443 | Yes |
+| `*.blob.core.windows.net` | 443 | Yes |
+| `*.azure-automation.net` | 443 | Yes |
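A quick way to test reachability from the agent machine is to probe the endpoints on port 443. A minimal sketch; `<workspace-id>` is a placeholder for your Log Analytics workspace ID:

```bash
# Verify TCP connectivity to the Log Analytics endpoints on port 443.
for host in "<workspace-id>.ods.opinsights.azure.com" \
            "<workspace-id>.oms.opinsights.azure.com"; do
  nc -vz -w 5 "$host" 443
done
```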
-If you're having trouble onboarding the Log Analytics agent, make sure to read [how to troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).
+If you're having trouble onboarding the Log Analytics agent, read [Troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).
-## Antimalware protection isn't working properly
+## Troubleshoot improperly working antimalware protection
-The guest agent is the parent process of everything the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware protection that runs as a child process of the guest agent might also fail.
+The guest agent is the parent process of everything that the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware protection that runs as a child process of the guest agent might also fail.
-Here are some other troubleshooting tips:
+Here are some troubleshooting tips:
-- If the target VM was created from a custom image, make sure that the creator of the VM installed guest agent.
-- If the target is a Linux VM, then installing the Windows version of the antimalware extension will fail. The Linux guest agent has specific OS and package requirements.
-- If the VM was created with an old version of guest agent, the old agents might not have the ability to autoupdate to the newer version. Always use the latest version of guest agent when you create your own images.
-- Some third-party administration software might disable the guest agent, or block access to certain file locations. If third-party administration software is installed on your VM, make sure that the antimalware agent is on the exclusion list.
-- Make sure that firewall settings and Network Security Group (NSG) aren't blocking network traffic to and from guest agent.
-- Make sure that there are no Access Control Lists (ACLs) that prevent disk access.
-- The guest agent requires sufficient disk space in order to function properly.
+- If the target VM was created from a custom image, make sure that the creator of the VM installed a guest agent.
+- If the target is a Linux VM, installing the Windows version of the antimalware extension will fail. The Linux guest agent has specific OS and package requirements.
+- If the VM was created with an old version of the guest agent, the old agent might not have the ability to automatically update to the newer version. Always use the latest version of the guest agent when you create your own images.
+- Some third-party administration software might disable the guest agent or block access to certain file locations. If third-party administration software is installed on your VM, make sure that the antimalware agent is on the exclusion list.
+- Make sure that firewall settings and a network security group aren't blocking network traffic to and from the guest agent.
+- Make sure that no access control lists are preventing disk access.
+- The guest agent needs sufficient disk space to function properly (see the checks after this list).
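On a Linux VM, you can spot-check several of these conditions at once. A minimal sketch; the service name `walinuxagent` is common on Ubuntu images but varies by distribution:

```bash
# Confirm the guest agent is running, report its version, and check disk space.
sudo systemctl status walinuxagent --no-pager
waagent --version
df -h /var
```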
-By default the Microsoft Antimalware user interface is disabled, but you can [enable the Microsoft Antimalware user interface](/archive/blogs/azuresecurity/enabling-microsoft-antimalware-user-interface-post-deployment) on Azure Resource Manager VMs.
+By default, the Microsoft Antimalware user interface is disabled. But you can [enable the Microsoft Antimalware user interface](/archive/blogs/azuresecurity/enabling-microsoft-antimalware-user-interface-post-deployment) on Azure Resource Manager VMs.
-## Troubleshooting problems loading the dashboard
+## Troubleshoot problems with loading the dashboard
-If you experience issues loading the workload protection dashboard, make sure that the user that first enabled Defender for Cloud on the subscription and the user that want to turn on data collection have the *Owner* or *Contributor* role on the subscription. If that is the case, users with the *Reader* role on the subscription can see the dashboard, alerts, recommendations, and policy.
+If you experience problems with loading the workload protection dashboard, make sure that the user who first enabled Defender for Cloud on the subscription and the user who wants to turn on data collection have the *Owner* or *Contributor* role on the subscription. If so, users with the *Reader* role on the subscription can see the dashboard, alerts, recommendations, and policy.
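To confirm which roles a user holds on the subscription, you can list their role assignments. A minimal sketch with placeholder values:

```bash
# List the role names assigned to a user at the subscription scope.
az role assignment list \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>" \
  --query "[].roleDefinitionName" --output tsv
```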
-## Troubleshoot Azure DevOps Organization connector issues
+## Troubleshoot connector problems for the Azure DevOps organization
-If you are not able to onboard your Azure DevOps organization, follow the following troubleshooting tips:
+If you can't onboard your Azure DevOps organization, try the following troubleshooting tips:
-- It is important to know which account you are logged in to when you authorize the access, as that will be the account that is used. Your account can be associated with the same email address but also associated with different tenants. You should [check which account](https://app.vssps.visualstudio.com/profile/view) you are currently logged in on and ensure that the right account and tenant combination is selected.
+- It's important to know which account you're signed in to when you authorize the access, because that will be the account that the system uses for onboarding. Your account can be associated with the same email address but also associated with different tenants. Make sure that you select the right account/tenant combination (a quick CLI check appears after this list). If you need to change the combination:
- 1. On your profile page, select the drop-down menu to select another account.
+ 1. On your [Azure DevOps profile page](https://app.vssps.visualstudio.com/profile/view), use the dropdown menu to select another account.
- :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that is used to select an account.":::
+ :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that's used to select an account.":::
- 1. After selecting the correct account/tenant combination, navigate to **Environment settings** in Defender for Cloud and edit your Azure DevOps connector. You will have the option to Re-authorize the connector, which will update the connector with the correct account/tenant combination. You should then see the correct list of organizations from the drop-down selection menu.
+ 1. After you select the correct account/tenant combination, go to **Environment settings** in Defender for Cloud and edit your Azure DevOps connector. Reauthorize the connector to update it with the correct account/tenant combination. You should then see the correct list of organizations on the dropdown menu.
-- Ensure you have **Project Collection Administrator** role on the Azure DevOps organization you wish to onboard.
+- Ensure that you have the *Project Collection Administrator* role on the Azure DevOps organization that you want to onboard.
-- Ensure **Third-party application access via OAuth** is toggled **On** for the Azure DevOps organization. [Learn more about enabling OAuth access](/azure/devops/organizations/accounts/change-application-access-policies)
+- Ensure that the **Third-party application access via OAuth** toggle is **On** for the Azure DevOps organization. [Learn more about enabling OAuth access](/azure/devops/organizations/accounts/change-application-access-policies).
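Before you authorize the connector, you can confirm the identity and tenant of your current CLI session. A minimal sketch, assuming the Azure CLI is signed in:

```bash
# Show the signed-in user and tenant for the current session.
az account show --query "{user:user.name, tenant:tenantId}" --output table
```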
-## Contacting Microsoft Support
+## Contact Microsoft support
-You can also find troubleshooting information for Defender for Cloud at the [Defender for Cloud Q&A page](/answers/topics/azure-security-center.html). If you need further troubleshooting, you can open a new support request using **Azure portal** as shown:
+You can also find troubleshooting information for Defender for Cloud at the [Defender for Cloud Q&A page](/answers/topics/azure-security-center.html).
+If you need more assistance, you can open a new support request on the Azure portal. On the **Help + support** page, select **Create a support request**.
-## See also
-In this page, you learned about troubleshooting steps for Defender for Cloud. To learn more about Microsoft Defender for Cloud:
+## See also
-- Learn how to [manage and respond to security alerts](managing-and-responding-alerts.md) in Microsoft Defender for Cloud
-- [Alert validation](alert-validation.md) in Microsoft Defender for Cloud
-- Review [common questions](faq-general.yml) about using Microsoft Defender for Cloud
+- Learn how to [manage and respond to security alerts](managing-and-responding-alerts.md) in Defender for Cloud.
+- Learn about [alert validation](alert-validation.md) in Defender for Cloud.
+- Review [common questions](faq-general.yml) about using Defender for Cloud.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
The following table explains how each capability will be provided after the Log
| Defender for Endpoint/Defender for Cloud integration for down level machines (Windows Server 2012 R2, 2016) | Defender for Endpoint integration that uses the legacy Defender for Endpoint sensor and the Log Analytics agent (for Windows Server 2016 and Windows Server 2012 R2 machines) won't be supported after August 2024. | Enable the GA [unified agent](/microsoft-365/security/defender-endpoint/configure-server-endpoints#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) integration to maintain support for machines, and receive the full extended feature set. For more information, see [Enable the Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md#windows). |
| OS-level threat detection (agent-based) | OS-level threat detection based on the Log Analytics agent won't be available after August 2024. A full list of deprecated detections will be provided soon. | OS-level detections are provided by Defender for Endpoint integration and are already GA. |
| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure monitoring agent. | The Adaptive Application Controls feature as it is today will be discontinued, and new capabilities in the application control space (on top of what Defender for Endpoint and Windows Defender Application Control offer today) will be considered as part of the future Defender for Servers roadmap. |
-| Endpoint protection discovery recommendations | The current [GA recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. The preview recommendations available today over Azure Monitor agent (AMA) will be deprecated when the alternative is provided over Agentless Disk Scanning capability. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender CSPM, and won't cover on-premises or Arc-connected machines. |
+| Endpoint protection discovery recommendations | The current [GA recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. The preview recommendations available today over the Log Analytics agent will be deprecated when the alternative is provided over the Agentless Disk Scanning capability. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender CSPM, and won't cover on-premises or Arc-connected machines. |
| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent won't be available after August 2024. The preview version available today over Guest Configuration agent will be deprecated when the alternative is provided over Microsoft Defender Vulnerability Management premium capabilities. Support of this feature for Docker-hub and VMMS will be deprecated in Aug 2024 and will be considered as part of the future Defender for Servers roadmap.| [New recommendations](release-notes-archive.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Manager, are already in GA, with no agent dependencies. |
| OS misconfigurations (Azure Security Benchmark recommendations) | The [current GA version](apply-security-baseline.md) based on the Log Analytics agent won't be available after August 2024. The current preview version that uses the Guest Configuration agent will be deprecated as the Microsoft Defender Vulnerability Management integration becomes available. | A new version, based on integration with Premium Microsoft Defender Vulnerability Management, will be available early in 2024, as part of Defender for Servers plan 2. |
| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. The FIM [Public Preview version](file-integrity-monitoring-enable-ama.md) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over Defender for Endpoint.| A new version of this feature will be provided based on Microsoft Defender for Endpoint integration by April 2024. |
defender-for-iot Back Up Restore Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-restore-sensor.md
We recommend saving your OT sensor backup files on your internal network. To do
1. Create a shared folder on the external SMB server, and make sure that you have the folder's path and the credentials required to access the SMB server.
-1. Sign into your OT sensor via SFTP and create a directory for your backup files. Run:
+1. Sign into your OT sensor via SSH using the [*admin*](roles-on-premises.md#access-per-privileged-user) user.
+ If you're using a sensor version earlier than 23.2.0, use the [*cyberx_host*](roles-on-premises.md#legacy-users) user instead. Skip the next step for running `system shell` and jump directly to creating a directory for your backup files.
+
+1. Access the host by running the `system shell` command. Enter the admin user's password when prompted and press **ENTER**.
+
+1. Create a directory for your backup files. Run:
    ```bash
    sudo mkdir /<backup_folder_name>
We recommend saving your OT sensor backup files on your internal network. To do
    ```

1. Edit the `fstab` file with details about your backup folder. Run:
-
+
    ```bash
    sudo nano /etc/fstab
- add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifsrw,credentials=/etc/samba/user,vers=X.X,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0
+ add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=X.X,file_mode=0777,dir_mode=0777
```
+ Make sure you replace `vers=X.X` with the correct version of your external SMB server. For example, `vers=3.0`.
1. Edit and create credentials to share for the SMB server. Run:
We recommend saving your OT sensor backup files on your internal network. To do
1. Configure your backup directory on the SMB server to use the shared file on the OT sensor. Run:

    ```bash
- sudo nano /var/cyberx/properties/backup.properties`
+ sudo dpkg-reconfigure iot-sensor
```
- Set the `backup_directory_path` to the folder on your OT sensor where you want to save your backup files.
+ Follow the instructions on screen and validate that the settings are correct on each step.
+
+ To move to the next step without making changes, press **ENTER**.
+
+ You'll be prompted to `Enter path to the mounted backups folder`. For example:
+
+ ![Screenshot of the Enter path to the mounted backups folder prompt.](media/back-up-restore-sensor/screenshot-of-enter-path-to-mounted-backups-folder-prompt.png)
++
+ The factory default value is `/opt/sensor/persist/backups`.
+
+ Set the value to the folder you created in the first few steps, using the following syntax: `/<backup_folder_name>`. For example:
+
+ ![Screenshot of the Enter path to the mounted backups folder with an updated value.](media/back-up-restore-sensor/screenshot-of-enter-path-to-mounted-backups-folder-with-updated-value.png)
++
+ Confirm the change by pressing **ENTER** and continue with the rest of the steps until the end.
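Before you rely on the configuration, it's worth confirming that the SMB share actually mounts and is visible. A minimal sketch, using the backup folder name you created earlier:

```bash
# Mount everything declared in /etc/fstab, then confirm the share is present.
sudo mount -a
df -h | grep "<backup_folder_name>"
```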
## Restore an OT sensor
The following procedures describe how to restore your sensor using a backup file
### Restore an OT sensor from the sensor GUI

1. Sign into the OT sensor via SFTP and download the backup file you want to use to a location accessible from the OT sensor GUI.

    Backup files are saved on your OT sensor machine, at `/var/cyberx/backups`, and are named using the following syntax: `<sensor name>-backup-version-<version>-<date>.tar`. For example: `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`

    > [!IMPORTANT]
- > - Make sure that the backup file you select uses the same OT sensor software version that's currently installed on your OT sensor.
+ > Make sure that the backup file you select uses the same OT sensor software version that's currently installed on your OT sensor.
>
- > - Your backup file must be one that had been generated automatically or manually via the CLI. If you're using a backup file generated manually by the GUI, you'll need to contact support to use it to restore your sensor.
-
+ > Your backup file must be one that had been generated automatically or manually via the CLI. If you're using a backup file generated manually by the GUI, contact support to use it to restore your sensor.
+
+
+
1. Sign into the OT sensor GUI and select **System settings** > **Sensor management** > **Health and troubleshooting** > **Backup & restore** > **Restore**.

1. Select **Browse** to select your downloaded backup file. The sensor will start to restore from the selected backup file.
For more information, see the [OT sensor CLI reference](cli-ot-sensor.md#start-a
## Next steps
-For more information, see [Maintain OT network sensors from the GUI](how-to-manage-individual-sensors.md).
+For more information, see [Maintain OT network sensors from the GUI](how-to-manage-individual-sensors.md).
devtest Concepts Gitops Azure Devtest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/concepts-gitops-azure-devtest.md
Title: GitOps & Azure Dev/Test offer
description: Use GitOps in association with Azure Dev/Test ++ Last updated 10/18/2023
devtest Concepts Security Governance Devtest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/concepts-security-governance-devtest.md
Title: Security, governance, and Azure Dev/Test subscriptions
description: Manage security and governance within your organization's Dev/Test subscriptions. ++ Last updated 10/18/2023
devtest How To Add Users Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-add-users-directory.md
Title: Add users to your Azure Dev/Test developer directory tenant
description: A how-to guide for adding users to your Azure credit subscription and managing their access with role-based controls. ++ Last updated 10/18/2023
devtest How To Change Directory Tenants Visual Studio Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-change-directory-tenants-visual-studio-azure.md
Title: Change directory tenants with your individual VSS Azure subscriptions
description: Change directory tenants with your Azure subscriptions. ++ Last updated 10/18/2023
devtest How To Manage Monitor Devtest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-monitor-devtest.md
Title: Managing and monitoring your Azure Dev/Test subscriptions
description: Manage your Azure Dev/Test subscriptions with the flexibility of Azure's cloud environment. This guide also covers Azure Monitor to help maximize availability and performance for applications and services. ++ Last updated 10/18/2023
devtest How To Manage Reliability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-reliability-performance.md
Title: Manage reliability and performance with Azure Dev/Test subscriptions
description: Build reliability into your applications with Dev/Test subscriptions. ++ Last updated 10/18/2023
devtest How To Remove Credit Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-remove-credit-limits.md
Title: Removing credit limits and changing Azure Dev/Test offers
description: How to remove credit limits and change Azure Dev/Test offers. Switch from pay-as-you-go to another offer. ++ Last updated 10/18/2023
devtest How To Sign Into Azure With Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-sign-into-azure-with-github.md
Last updated 10/18/2023 ++
devtest Overview What Is Devtest Offer Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/overview-what-is-devtest-offer-visual-studio.md
Title: What is Azure Dev/Test offer? description: Use the Azure Dev/Test offer to get Azure credits for Visual Studio subscribers. ++ Last updated 10/18/2023
devtest Quickstart Create Enterprise Devtest Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/quickstart-create-enterprise-devtest-subscriptions.md
Title: Creating Enterprise Azure Dev/Test subscriptions
description: Create Enterprise and Organizational Azure Dev/Test subscriptions for teams and large organizations. ++ Last updated 10/18/2023
devtest Quickstart Individual Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/quickstart-individual-credit.md
Last updated 10/18/2023 ++
devtest Troubleshoot Expired Removed Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/troubleshoot-expired-removed-subscription.md
Last updated 10/18/2023 ++
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
na Previously updated : 11/30/2023 Last updated : 01/10/2024
This scenario differs from the ability to [host the reverse DNS lookup zones](dn
Before reading this article, you should familiarize yourself with [reverse DNS in Azure DNS](dns-reverse-dns-overview.md).
-In Azure DNS, compute resources such as virtual machines, virtual machine scale sets, and Service Fabric clusters have Public IP addresses. Reverse DNS lookups are configured using the 'ReverseFqdn' property of the Public IP address.
+In Azure DNS, compute resources such as virtual machines, virtual machine scale sets, and Service Fabric clusters have public IP addresses. Reverse DNS lookups are configured using the 'ReverseFqdn' property of the public IP address.
Reverse DNS is currently not supported for the Azure App Service and Application Gateway. ## Validation of reverse DNS records
-A third party shouldn't have access to create reverse DNS records for Azure service mapping to your DNS domains. That's why Azure only allows you to create a reverse DNS record if the domain name is the same or resolves to a Public IP address in the same subscription. This restriction also applies to Cloud Service.
+A third party shouldn't have access to create reverse DNS records for Azure service mapping to your DNS domains. That's why Azure only allows you to create a reverse DNS record if a forward DNS lookup resolves to the same public IP address, or to names that are defined in your subscription. See the following example. This restriction also applies to Cloud Service.
-This validation is only done when the reverse DNS record is set or modified. Periodic revalidation isn't done.
+Validation is only done when the reverse DNS record is set or modified. Periodic revalidation isn't done.
-For example, suppose the Public IP address resource has the DNS name `contosoapp1.northus.cloudapp.azure.com` and IP address `23.96.52.53`. The reverse FQDN for the Public IP address can be specified as:
+For example, suppose the public IP address resource has the DNS name `contosoapp1.northus.cloudapp.azure.com` and IP address `23.96.52.53`. The reverse FQDN for the public IP address can be specified as:
-* The DNS name for the Public IP address: `contosoapp1.northus.cloudapp.azure.com`.
+* The DNS name for the public IP address: `contosoapp1.northus.cloudapp.azure.com`.
* The DNS name for a different PublicIpAddress in the same subscription, such as: `contosoapp2.westus.cloudapp.azure.com`.
-* A vanity DNS name, such as: `app1.contoso.com`. As long as the name is *first* configured as a CNAME pointing to `contosoapp1.northus.cloudapp.azure.com`. The name can also be pointed to a different Public IP address in the same subscription.
+* A vanity DNS name, such as: `app1.contoso.com`. As long as the name is *first* configured as a CNAME pointing to `contosoapp1.northus.cloudapp.azure.com`. The name can also be pointed to a different public IP address in the same subscription.
* A vanity DNS name, such as: `app1.contoso.com`. As long as this name is *first* configured as an A record pointing to the IP address 23.96.52.53. The name can also be pointed to another IP address in the same subscription.

The same constraints apply to reverse DNS for Cloud Services.
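You can confirm that the prerequisite forward records are in place before you set the reverse FQDN. A minimal sketch, reusing the example names and IP address above; `dig` must be available on your machine:

```bash
# Check the CNAME or A record, then the PTR once it's configured.
dig +short app1.contoso.com CNAME   # expect contosoapp1.northus.cloudapp.azure.com.
dig +short app1.contoso.com A
dig +short -x 23.96.52.53
```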
-## Reverse DNS for Public IP address resources
+## Reverse DNS for public IP address resources
-This section provides detailed instructions for how to configure reverse DNS for Public IP address resources in the Resource Manager deployment model. You can use either Azure PowerShell, Azure classic CLI, or Azure CLI to accomplish this task. Configuring reverse DNS for a Public IP address resource is currently not supported in the Azure portal.
+This section provides detailed instructions for how to configure reverse DNS for public IP address resources in the Resource Manager deployment model. You can use either Azure PowerShell, Azure classic CLI, or Azure CLI to accomplish this task. Configuring reverse DNS for a public IP address resource is currently not supported in the Azure portal.
-Azure currently supports reverse DNS only for Public IPv4 address resources.
+Azure currently supports reverse DNS only for public IPv4 address resources.
-### Add reverse DNS to an existing PublicIpAddresses
+> [!IMPORTANT]
+> New or updated PTR records must pass [validation](#validation-of-reverse-dns-records). If the PTR for a public IP address doesn't currently exist, you must specify the hostname using **DomainNameLabel** (Azure PowerShell), the **-d** parameter (Azure Classic CLI), or the **--dns-name** parameter (Azure CLI) as shown in the following examples.
+
+### Configure reverse DNS for a public IP address with an existing name
+
+Use the following procedures if a public IP address already has a [defined name](#validation-of-reverse-dns-records) in your subscription or via forward DNS lookup. After updating or adding a PTR to your existing public IP address, [view and verify that the correct PTR is configured](#view-reverse-dns-for-an-existing-public-ip-address).
#### Azure PowerShell
-To update reverse DNS to an existing PublicIpAddress:
+To update reverse DNS on a public IP address with an existing PTR:
```azurepowershell-interactive $pip = Get-AzPublicIpAddress -Name "PublicIp" -ResourceGroupName "MyResourceGroup"
$pip.DnsSettings.ReverseFqdn = "contosoapp1.westus.cloudapp.azure.com."
Set-AzPublicIpAddress -PublicIpAddress $pip ```
-To add reverse DNS to an existing PublicIpAddress that doesn't already have a DNS name, you must also specify a DNS name:
+To add reverse DNS to a public IP address that doesn't already have a PTR, you must specify the DomainNameLabel:
```azurepowershell-interactive $pip = Get-AzPublicIpAddress -Name "PublicIp" -ResourceGroupName "MyResourceGroup"
Set-AzPublicIpAddress -PublicIpAddress $pip
#### Azure Classic CLI
-To add reverse DNS to an existing PublicIpAddress:
+To update reverse DNS on a public IP address with an existing PTR:
```azurecli azure network public-ip set -n PublicIp -g MyResourceGroup -f contosoapp1.westus.cloudapp.azure.com. ```
-To add reverse DNS to an existing PublicIpAddress that doesn't already have a DNS name, you must also specify a DNS name:
+To add reverse DNS to a public IP address that doesn't already have a PTR, you must specify the DNS name (-d):
```azurecli-interactive azure network public-ip set -n PublicIp -g MyResourceGroup -d contosoapp1 -f contosoapp1.westus.cloudapp.azure.com.
azure network public-ip set -n PublicIp -g MyResourceGroup -d contosoapp1 -f con
#### Azure CLI
-To add reverse DNS to an existing PublicIpAddress:
+To update reverse DNS on a public IP address with an existing PTR:
```azurecli-interactive
-To add reverse DNS to an existing PublicIpAddress that doesn't already have a DNS name, you must also specify a DNS name:
+To add reverse DNS to a public IP address that doesn't already have a PTR, you must specify the DNS name (--dns-name):
```azurecli-interactive az network public-ip update --resource-group MyResourceGroup --name PublicIp --reverse-fqdn contosoapp1.westus.cloudapp.azure.com --dns-name contosoapp1 ```
-### Create a Public IP Address with reverse DNS
+### Create a public IP address with reverse DNS
+
+> [!NOTE]
+> If the public IP address already exists in your subscription, see [Configure reverse DNS for a public IP address with an existing name](#configure-reverse-dns-for-a-public-ip-address-with-an-existing-name).
To create a new PublicIpAddress with the reverse DNS property already specified:
azure network public-ip create -n PublicIp -g MyResourceGroup -l westus -d conto
az network public-ip create --name PublicIp --resource-group MyResourceGroup --location westcentralus --dns-name contosoapp1 --reverse-fqdn contosoapp1.westcentralus.cloudapp.azure.com ```
-### View reverse DNS for an existing PublicIpAddress
+### View reverse DNS for an existing public IP address
-To view the configured value for an existing PublicIpAddress:
+To view the configured reverse DNS value for an existing PublicIpAddress:
#### Azure PowerShell
azure network public-ip show -n PublicIp -g MyResourceGroup
az network public-ip show --name PublicIp --resource-group MyResourceGroup ```
-### Remove reverse DNS from existing Public IP Addresses
+### Remove reverse DNS from an existing public IP address
To remove a reverse DNS property from an existing PublicIpAddress:
Set-AzureService -ServiceName "contosoapp1" -Description "App1 with Reverse
They're free! There's no extra cost for reverse DNS records or queries.
-### Will my reverse DNS records resolve from the internet?
+### Do my reverse DNS records resolve from the internet?
Yes. Once you set the reverse DNS property for your Azure service, Azure manages all the DNS delegations and DNS zones needed to ensure it resolves for all internet users.
No. Reverse DNS is an opt-in feature. No default reverse DNS records are created
FQDNs are specified in forward order, and must be terminated by a dot (for example, "app1.contoso.com.").
-### What happens if the validation check for the reverse DNS I've specified fails?
+### What happens if the validation check for the specified reverse DNS entry fails?
-Where the reverse DNS validation check fails, the operation to configure the reverse DNS record fails. Correct the reverse DNS value as required, and retry.
+If the reverse DNS validation check fails, the operation to configure the reverse DNS record fails. Correct the reverse DNS value as required and retry.
### Can I configure reverse DNS for Azure App Service?
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart.md
Title: 'Quickstart: Send custom events with Event Grid and Azure CLI' description: 'Quickstart Use Azure Event Grid and Azure CLI to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 10/28/2022 Last updated : 01/05/2024
When you're finished, you see that the event data has been sent to the web app.
Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named *gridResourceGroup* in the *westus2* location. If you click **Try it**, you'll see the Azure Cloud Shell window in the right pane. Then, click **Copy** to copy the command and paste it in the Azure Cloud Shell window, and press ENTER to run the command. Change the name of the resource group and the location if you like.
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named *gridResourceGroup* in the *westus2* location. If you select **Try it**, you'll see the Azure Cloud Shell window in the right pane. Then, select **Copy** to copy the command and paste it in the Azure Cloud Shell window, and press ENTER to run the command. Change the name of the resource group and the location if you like.
```azurecli-interactive az group create --name gridResourceGroup --location westus2
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group using Bash in Azure Cloud Shell. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-"
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group using Bash in Azure Cloud Shell. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the Domain Name System (DNS) entry. Additionally, it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-"
1. Copy the following command, specify a name for the topic, and press ENTER to run the command.
An Event Grid topic provides a user-defined endpoint that you post your events t
## Create a message endpoint
-Before subscribing to the custom topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+Before subscribing to the custom topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [prebuilt web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
Before subscribing to the custom topic, let's create the endpoint for the event
--parameters siteName=$sitename hostingPlanName=viewerhost ```
-The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+The deployment might take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
`https://<your-site-name>.azurewebsites.net` You should see the site with no messages currently displayed.
You subscribe to an Event Grid topic to tell Event Grid which events you want to
The endpoint for your web app must include the suffix `/api/updates/`.
-```azurecli-interactive
-endpoint=https://$sitename.azurewebsites.net/api/updates
-
-az eventgrid event-subscription create \
- --source-resource-id "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/topics/$topicname" \
- --name demoViewerSub \
- --endpoint $endpoint
-
-```
+1. Copy the following command, replace `$sitename` with the name of the web app you created in the previous step, and press ENTER to run the command.
-View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+ ```azurecli-interactive
+ endpoint=https://$sitename.azurewebsites.net/api/updates
+ ```
+2. Run the following command to get the resource ID of the topic you created.
+
+ ```azurecli-interactive
+ topicresourceid=$(az eventgrid topic show --resource-group gridResourceGroup --name $topicname --query "id" --output tsv)
+ ```
+3. Run the following command to create a subscription to the custom topic using the endpoint.
+ ```azurecli-interactive
+ az eventgrid event-subscription create \
+ --source-resource-id $topicresourceid \
+ --name demoViewerSub \
+ --endpoint $endpoint
+ ```
-![View the subscription event in Azure Event Grid Viewer](./media/custom-event-quickstart/viewer-subscription-validation-event.png)
+ View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+
+ ![View the subscription event in Azure Event Grid Viewer](./media/custom-event-quickstart/viewer-subscription-validation-event.png)
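    Optionally, you can confirm that the subscription was created successfully. A minimal check, reusing the variables set in the previous steps:

    ```bash
    # Verify the event subscription's provisioning state (expect "Succeeded").
    az eventgrid event-subscription show \
        --name demoViewerSub \
        --source-resource-id $topicresourceid \
        --query "provisioningState" --output tsv
    ```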
## Send an event to your custom topic
governance Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/azure-management.md
+
+ Title: Azure Management Overview - Azure Governance
+description: Overview of the areas of management for Azure applications and resources with links to content on Azure management tools.
Last updated : 03/20/2022+++
+# What are the Azure Management areas?
+
+Governance in Azure is one aspect of Azure Management. This article covers the different areas of
+management for deploying and maintaining your resources in Azure.
+
+Management refers to the tasks and processes required to maintain your business applications and the
+resources that support them. Azure has many services and tools that work together to provide
+complete management. These services aren't only for resources in Azure, but also in other clouds and
+on-premises. Understanding the different tools and how they work together is the first step in
+designing a complete management environment.
+
+The following diagram illustrates the different areas of management that are required to maintain
+any application or resource. These different areas can be thought of as a lifecycle. Each area is
+required in continuous succession over the lifespan of a resource. This resource lifecycle starts
+with the initial deployment, through continued operation, and finally when retired.
+
+ Diagram that shows the Migrate, Secure, Protect, Monitor, Configure, and Govern elements of the wheel of services that support Management and Governance in Azure. Secure has Security management and Threat protection as sub items. Protect has Backup and Disaster recovery as sub items. Monitor has App, infrastructure and network monitoring, and Log Analytics and Diagnostics as sub items. Configure has Configuration, Update Management, Automation, and Scripting as sub items. And Govern has Policy management and Cost management as sub items.
+
+No single Azure service completely fills the requirements of a particular management area. Instead,
+each is realized by several services working together. Some services, such as Application Insights,
+provide targeted monitoring functionality for web applications. Others, like Azure Monitor logs,
+store management data for other services. This feature allows you to analyze data of different types
+collected by different services.
+
+The following sections briefly describe the different management areas and provide links to detailed
+content on the main Azure services intended to address them.
+
+## Monitor
+
+Monitoring is the act of collecting and analyzing data to audit the performance, health, and
+availability of your resources. An effective monitoring strategy helps you understand the operation
+of components and increase your uptime with notifications. For an overview of monitoring and the
+services it uses, see [Monitoring Azure applications and resources](../../azure-monitor/overview.md).
+
+## Configure
+
+Configure refers to the initial deployment and configuration of resources and ongoing maintenance.
+Automation of these tasks allows you to eliminate redundancy, minimizing your time and effort and
+increasing your accuracy and efficiency. [Azure Automation](../../automation/overview.md)
+provides the bulk of services for automating configuration tasks. While runbooks handle process
+automation, configuration and update management help manage configuration.
+
+## Govern
+
+Governance provides mechanisms and processes to maintain control over your applications and
+resources in Azure. It involves planning your initiatives and setting strategic priorities.
+Governance in Azure is primarily implemented with two services. [Azure Policy](../policy/overview.md) allows you to create, assign, and manage policy definitions to enforce rules for your resources.
+This feature keeps those resources in compliance with your corporate standards.
+[Azure Cost Management](../../cost-management-billing/cost-management-billing-overview.md) allows you to track cloud usage and expenditures for your Azure resources and other cloud providers.
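As a concrete illustration, the following sketch assigns a built-in policy definition at subscription scope with the Azure CLI. The definition name and parameter payload are illustrative assumptions tied to the built-in "Allowed locations" definition; look up the definition you actually need before assigning it.

```bash
# Assign a built-in policy definition at subscription scope. The definition
# ID and the locations below are placeholders; substitute your own values.
az policy assignment create \
  --name "enforce-allowed-locations" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<policy-definition-name-or-id>" \
  --params '{ "listOfAllowedLocations": { "value": ["eastus", "westus"] } }'
```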
+
+## Secure
+
+Manage the security of your resources and data. A security program involves assessing threats,
+collecting and analyzing data, and managing the compliance of your applications and resources. Security
+monitoring and threat analysis are provided by [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md), which includes unified security
+management and advanced threat protection across hybrid cloud workloads. See [Introduction to Azure Security](../../security/fundamentals/overview.md) for comprehensive information and guidance on
+securing Azure resources.
+
+## Protect
+
+Protection refers to keeping your applications and data available, even with outages that are beyond
+your control. Protection in Azure is provided by two services. [Azure Backup](../../backup/backup-overview.md) provides backup and recovery of your data, either in the cloud or on-premises. [Azure Site Recovery](../../site-recovery/site-recovery-overview.md) provides business continuity and immediate recovery during a disaster.
+
+## Migrate
+
+Migration refers to transitioning workloads currently running on-premises to the Azure cloud.
+[Azure Migrate](../../migrate/migrate-services-overview.md) is a service that helps you assess the
+migration suitability of on-premises virtual machines to Azure. Azure Site Recovery migrates virtual
+machines [from on-premises](../../site-recovery/migrate-tutorial-on-premises-azure.md) or [from Amazon Web Services](../../site-recovery/migrate-tutorial-aws-azure.md). [Azure Database Migration Service](../../dms/dms-overview.md) assists you in migrating database sources to Azure Data
+platforms.
+
+## Next steps
+
+To learn more about Azure Governance, go to the following articles:
+
+- [Azure Governance hub](../index.yml)
+- [Governance in the Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/govern/)
hdinsight Apache Hbase Migrate New Version New Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-new-version-new-storage-account.md
description: Learn how to migrate an Apache HBase cluster in Azure HDInsight to
Previously updated : 12/23/2022 Last updated : 01/10/2024 # Migrate Apache HBase to a new version and storage account
To upgrade and migrate your Apache HBase cluster on Azure HDInsight to a new sto
Prepare the source cluster: 1. Stop data ingestion.
-1. Flush memstore data.
+1. Flush `memstore` data.
1. Stop HBase from Ambari. 1. For clusters with accelerated writes, back up the Write Ahead Log (WAL) directory.
Use these detailed steps and commands to migrate your Apache HBase cluster with
1. Flush the source HBase cluster you're upgrading.
- HBase writes incoming data to an in-memory store called a *memstore*. After the memstore reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the memstores. To retain the data, manually flush each table's memstore to disk before upgrading.
+ HBase writes incoming data to an in-memory store called a `memstore`. After the `memstore` reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the `memstores`. To retain the data, manually flush each table's `memstore` to disk before upgrading.
- You can flush the memstore data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+ You can flush the `memstore` data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
- You can also flush the memstore data by running the following HBase shell command from inside the HDInsight cluster:
+ You can also flush the `memstore` data by running the following HBase shell command from inside the HDInsight cluster:
```bash hbase shell
hdinsight Hdinsight Operationalize Data Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-operationalize-data-pipeline.md
description: Set up and run an example data pipeline that is triggered by new da
Previously updated : 01/04/2024 Last updated : 01/10/2024 # Operationalize a data analytics pipeline
To use the Oozie Web Console to view the status of your coordinator and workflow
ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net ```
- 1. From you ssh session, use the HDFS command to copy the file from your head node local storage to Azure Storage.
+ 1. From your ssh session, use the HDFS command to copy the file from your head node local storage to Azure Storage.
```bash hadoop fs -mkdir /example/data/flights
As you can see, the majority of the coordinator is just passing configuration in
</dataset> ```
- The path to the data in HDFS is built dynamically according to the expression provided in the `uri-template` element. In this coordinator, a frequency of one day is also used with the dataset. While the start and end dates on the coordinator element control when the actions are scheduled (and defines their nominal times), the `initial-instance` and `frequency` on the dataset control the calculation of the date that is used in constructing the `uri-template`. In this case, set the initial instance to one day before the start of the coordinator to ensure that it picks up the first day's (1/1/2017) worth of data. The dataset's date calculation rolls forward from the value of `initial-instance` (12/31/2016) advancing in increments of dataset frequency (one day) until it finds the most recent date that doesn't pass the nominal time set by the coordinator (2017-01-01T00:00:00 GMT for the first action).
+ The path to the data in HDFS is built dynamically according to the expression provided in the `uri-template` element. In this coordinator, a frequency of one day is also used with the dataset. While the start and end dates on the coordinator element control when the actions are scheduled (and defines their nominal times), the `initial-instance` and `frequency` on the dataset control the calculation of the date that is used in constructing the `uri-template`. In this case, set the initial instance to one day before the start of the coordinator to ensure that it picks up the first day's (January 1, 2017) worth of data. The dataset's date calculation rolls forward from the value of `initial-instance` (12/31/2016) advancing in increments of dataset frequency (one day) until it finds the most recent date that doesn't pass the nominal time set by the coordinator (2017-01-01T00:00:00 GMT for the first action).
The empty `done-flag` element indicates that when Oozie checks for the presence of input data at the appointed time, Oozie determines whether data is available by the presence of a directory or file. In this case, it's the presence of a csv file. If a csv file is present, Oozie assumes the data is ready and launches a workflow instance to process the file. If there's no csv file present, Oozie assumes the data isn't yet ready and that run of the workflow goes into a waiting state.
The three preceding points combine to yield a situation where the coordinator sc
* Point 2: Oozie looks for data available in `sourceDataFolder/2017-01-FlightData.csv`.
-* Point 3: When Oozie finds that file, it schedules an instance of the workflow that will process the data for 2017-01-01. Oozie then continues processing for 2017-01-02. This evaluation repeats up to but not including 2017-01-05.
+* Point 3: When Oozie finds that file, it schedules an instance of the workflow that will process the data for January 1, 2017. Oozie then continues processing for 2017-01-02. This evaluation repeats up to but not including 2017-01-05.
As with workflows, the configuration of a coordinator is defined in a `job.properties` file, which has a superset of the settings used by the workflow.
To run the pipeline with a coordinator, proceed in a similar fashion as for the
:::image type="content" source="./media/hdinsight-operationalize-data-pipeline/hdi-oozie-web-console-coordinator-jobs.png" alt-text="Oozie Web Console Coordinator Jobs":::
-6. Select a coordinator instance to display the list of scheduled actions. In this case, you should see four actions with nominal times in the range from 1/1/2017 to 1/4/2017.
+6. Select a coordinator instance to display the list of scheduled actions. In this case, you should see four actions with nominal times in the range from January 1, 2017 to January 4, 2017.
:::image type="content" source="./media/hdinsight-operationalize-data-pipeline/hdi-oozie-web-console-coordinator-instance.png" alt-text="Oozie Web Console Coordinator Job":::
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Currently, the allowed actions for a given role are applied *globally* on the AP
* **Subscription limit** - By default, each subscription is limited to a maximum of 10 FHIR server instances. If you need more instances per subscription, open a support ticket and provide details about your needs.
+* **Resource size** - Individual resource size, including history, should not exceed 20 GB.
+ ## Next steps
+
+ In this article, you've read about the supported FHIR features in Azure API for FHIR. For information about deploying Azure API for FHIR, see
iot-operations Howto Configure Destination Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-destination-data-explorer.md
- ignite-2023 Previously updated : 10/09/2023 Last updated : 01/10/2024 #CustomerIntent: As an operator, I want to send data from a pipeline to Azure Data Explorer so that I can store and analyze my data in the cloud.
To grant admin access to your Azure Data Explorer database, run the following co
Data Processor writes to Azure Data Explorer in batches. Although Data Processor batches data before sending it, Azure Data Explorer also has its own default [ingestion batching policy](/azure/data-explorer/kusto/management/batchingpolicy). Therefore, you might not see your data in Azure Data Explorer immediately after Data Processor writes it to the Azure Data Explorer destination.
-To view data in Azure Data Explorer as soon as the pipeline sends it, you can set the ingestion batching policy `Count` to 1. To edit the ingestion batching policy, run the following command in your database query tab:
+To view data in Azure Data Explorer as soon as the pipeline sends it, you can set the ingestion batching policy count to `1`. To edit the ingestion batching policy, run the following command in your database query tab:
-```kusto
-.alter table <DatabaseName>.<TableName> policy ingestionbatching
+````kusto
+.alter database <YourDatabaseName> policy ingestionbatching
+```
{ "MaximumBatchingTimeSpan" : "00:00:30", "MaximumNumberOfItems" : 1, "MaximumRawDataSizeMB": 1024 } ```
+````
## Configure your secret
iot-operations Howto Prepare Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md
Last updated 12/07/2023
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure IoT Operations Preview. This article describes how to prepare an Azure Arc-enabled Kubernetes cluster before you deploy Azure IoT Operations. This article includes guidance for both Ubuntu, Windows, and cloud environments.
+An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure IoT Operations Preview - enabled by Azure Arc. This article describes how to prepare an Azure Arc-enabled Kubernetes cluster before you [Deploy Azure IoT Operations extensions to a Kubernetes cluster](../deploy-iot-ops/howto-deploy-iot-operations.md) to run your own workloads. This article includes guidance for Ubuntu, Windows, and cloud environments.
+
+> [!TIP]
+> If you want to deploy Azure IoT Operations and run a sample workload, see the [Quickstart: Deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster](../get-started/quickstart-deploy.md).
[!INCLUDE [validated-environments](../includes/validated-environments.md)]
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
The services deployed in this quickstart include:
* [Azure IoT Layered Network Management](../manage-layered-network/overview-layered-network.md)
* [Observability](../monitor/howto-configure-observability.md)
+The following quickstarts in this series build on this one to define sample assets, data processing pipelines, and visualizations. If you want to deploy Azure IoT Operations to run your own workloads, see [Prepare your Azure Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md) and [Deploy Azure IoT Operations extensions to a Kubernetes cluster](../deploy-iot-ops/howto-deploy-iot-operations.md).
+ ## Prerequisites
+
+ Review the prerequisites based on the environment you use to host the Kubernetes cluster.
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
# [Windows](#tab/windows)
-* In this quickstart, you use the `AksEdgeQuickStartForAio.ps1` script to set up an AKS Edge Essentials single-machine K3S Linux-only cluster. To learn more, see the [AKS Edge Essentials system requirements](/azure/aks/hybrid/aks-edge-system-requirements). For this quickstart, ensure that your machine has a minimum of 10 GB RAM, 4 vCPUs, and 40 GB free disk space.
+* You'll use the `AksEdgeQuickStartForAio.ps1` script to set up an AKS Edge Essentials single-machine K3S Linux-only cluster. Ensure that your machine has a minimum of 10 GB RAM, 4 vCPUs, and 40 GB free disk space. To learn more, see the [AKS Edge Essentials system requirements](/azure/aks/hybrid/aks-edge-system-requirements).
* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
iot-operations Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/glossary.md
+
+ Title: "Glossary for Azure IoT Operations"
+description: "List of terms with definitions and usage guidance related to Azure IoT Operations - enabled by Azure Arc."
++++ Last updated : 01/10/2024+
+#customer intent: As a user of Azure IoT Operations, I want to learn about the terminology associated with Azure IoT Operations so that I can use the terminology correctly.
+++
+# Glossary for Azure IoT Operations Preview - enabled by Azure Arc
+
+This article lists and defines some of the key terms associated with Azure IoT Operations. The article includes usage guidance to help you use the terms correctly if you're talking or writing about Azure IoT Operations.
+
+## Service and component names
+
+This section lists the names of the services and components that make up Azure IoT Operations.
+
+### Azure IoT Operations Preview - enabled by Azure Arc
+
+A unified data plane for the edge. It's a collection of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters. It enables data capture from various systems and integrates with data modeling applications such as Microsoft Fabric to help organizations deploy the industrial metaverse.
+
+On first mention in an article, use _Azure IoT Operations Preview - enabled by Azure Arc_. On subsequent mentions, you can use _Azure IoT Operations_. Never use an acronym.
+
+### Azure IoT Akri Preview
+
+This component helps you discover and connect to devices and assets.
+
+On first mention in an article, use _Azure IoT Akri Preview_. On subsequent mentions, you can use _Azure IoT Akri_. Never use an acronym.
+
+### Azure IoT Data Processor Preview
+
+This component lets you aggregate, enrich, normalize, and filter the data from your devices and assets. Data Processor is a pipeline-based data processing engine that lets you process data at the edge before you send it to other services, either at the edge or in the cloud.
+
+On first mention in an article, use _Azure IoT Data Processor Preview_. On subsequent mentions, you can use _Data Processor_. Never use an acronym.
+
+### Azure IoT Layered Network Management Preview
+
+This component lets you secure communication between devices and the cloud through isolated network environments based on the ISA-95/Purdue Network architecture.
+
+On first mention in an article, use _Azure IoT Layered Network Management Preview_. On subsequent mentions, you can use _Layered Network Management_. Never use an acronym.
+
+### Azure IoT MQ Preview
+
+An MQTT broker that runs on the edge. The component lets you publish and subscribe to MQTT topics. You can use MQ to build event-driven architectures that connect your devices and assets to the cloud.
+
+On first mention in an article, use _Azure IoT MQ Preview_. On subsequent mentions, you can use _MQ_.
+
+### Azure IoT OPC UA Broker Preview
+
+This component manages the connection to OPC UA servers and other leaf devices. The OPC UA Broker component publishes data from the OPC UA servers and the devices discovered by _Azure IoT Akri_ to Azure IoT MQ topics.
+
+On first mention in an article, use _Azure IoT OPC UA Broker Preview_. On subsequent mentions, you can use _OPC UA Broker_. Never use an acronym.
+
+### Azure IoT Orchestrator Preview
+
+This component manages the deployment, configuration, and update of the Azure IoT Operations components that run on your Arc-enabled Kubernetes cluster.
+
+On first mention in an article, use _Azure IoT Orchestrator Preview_. On subsequent mentions, you can use _Orchestrator_. Never use an acronym.
+
+### Azure IoT Operations Experience Preview
+
+This web UI provides a unified experience for operational technologists to manage assets and Data Processor pipelines in an Azure IoT Operations deployment.
+
+On first mention in an article, use _Azure IoT Operations Experience Preview_. On subsequent mentions, you can use _Operations Experience_. Never use an acronym.
+
+## Related content
+
+- [What is Azure IoT Operations?](../get-started/overview-iot-operations.md)
+- [Connect industrial assets using Azure IoT OPC UA Broker](../manage-devices-assets/overview-opcua-broker.md)
+- [Publish and subscribe MQTT messages using Azure IoT MQ](../manage-mqtt-connectivity/overview-iot-mq.md)
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
Only the following built-in roles have permission to perform a full backup:
- Managed HSM Administrator
- Managed HSM Backup
-There are 2 ways to execute a full backup. You must provide the following information to execute a full backup:
+There are two ways to execute a full backup or restore:
+1. Assigning a user-assigned managed identity (UAMI) to the Managed HSM service. You can back up and restore your Managed HSM using a user-assigned managed identity regardless of whether your storage account has public network access or private network access enabled. If the storage account is behind a private endpoint, the UAMI method works with trusted service bypass to allow for backup and restore.
+2. Using a storage container SAS token with permissions 'crdw'. Backing up and restoring using a storage container SAS token requires your storage account to have public network access enabled.
+
+You must provide the following information to execute a full backup:
- HSM name or URL
- Storage account name
- Storage account blob storage container
- User-assigned managed identity OR storage container SAS token with permissions 'crdw'
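For example, with the Azure CLI, a full backup using either method looks like the following sketch. The parameter values are placeholders, and the `--use-managed-identity` flag assumes a recent Azure CLI version:

```azurecli-interactive
# Full backup using a storage container SAS token (requires public network access):
az keyvault backup start --hsm-name <hsm-name> \
  --storage-account-name <storage-account-name> \
  --blob-container-name <blob-container-name> \
  --storage-container-SAS-token <sas-token>

# Full backup using a user-assigned managed identity instead of a SAS token:
az keyvault backup start --hsm-name <hsm-name> \
  --storage-account-name <storage-account-name> \
  --blob-container-name <blob-container-name> \
  --use-managed-identity true
```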
-> [!NOTE]
-> Backing up and restoring using storage container SAS token requires your storage account to have public network access enabled. You can backup and restore your MHSM using a user assigned managed identity regardless of whether your storage account has public network access or private network access enabled, including if the storage account is behind a private endpoint.
- [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] #### Prerequisites if backing up and restoring using user assigned managed identity:
load-balancer Configure Vm Scale Set Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-cli.md
- Title: Configure Virtual Machine Scale Set with an existing Azure Load Balancer - Azure CLI
-description: Learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer using the Azure CLI.
---- Previously updated : 12/15/2022---
-# Configure a Virtual Machine Scale Set with an existing Azure Load Balancer using the Azure CLI
-
-In this article, you'll learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- You need an existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed.--- You need an Azure Virtual Network for the Virtual Machine Scale Set.
-
--- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Deploy a Virtual Machine Scale Set with existing load balancer
-
-Deploy a Virtual Machine Scale Set with [`az vmss create`](/cli/azure/vmss#az-vmss-create).
-Replace the values in brackets with the names of the resources in your configuration.
-
-```azurecli-interactive
-az vmss create \
- --resource-group <resource-group> \
- --name <vmss-name>\
- --image <your-image> \
- --admin-username <admin-username> \
- --generate-ssh-keys \
- --upgrade-policy-mode Automatic \
- --instance-count 3 \
- --vnet-name <virtual-network-name> \
- --subnet <subnet-name> \
- --lb <load-balancer-name> \
- --backend-pool-name <backend-pool-name>
-```
-
-The below example deploys a Virtual Machine Scale Set with:
--- Virtual Machine Scale Set named **myVMSS**-- Azure Load Balancer named **myLoadBalancer**-- Load balancer backend pool named **myBackendPool**-- Azure Virtual Network named **myVnet**-- Subnet named **mySubnet**-- Resource group named **myResourceGroup**-- Ubuntu Server image for the Virtual Machine Scale Set-
-```azurecli-interactive
-az vmss create \
- --resource-group myResourceGroup \
- --name myVMSS \
- --image Canonical:UbuntuServer:18.04-LTS:latest \
- --admin-username adminuser \
- --generate-ssh-keys \
- --upgrade-policy-mode Automatic \
- --instance-count 3 \
- --vnet-name myVnet\
- --subnet mySubnet \
- --lb myLoadBalancer \
- --backend-pool-name myBackendPool
-```
-> [!NOTE]
-> After the scale set has been created, the backend port cannot be modified for a load balancing rule used by a health probe of the load balancer. To change the port, you can remove the health probe by updating the Azure virtual machine scale set, update the port and then configure the health probe again.
-
-## Next steps
-
-In this article, you deployed a Virtual Machine Scale Set with an existing Azure Load Balancer. To learn more about Virtual Machine Scale Sets and load balancer, see:
--- [What is Azure Load Balancer?](load-balancer-overview.md)-- [What are Virtual Machine Scale Sets?](../virtual-machine-scale-sets/overview.md)
-
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
Title: Configure Virtual Machine Scale Set with an existing Azure Load Balancer - Azure portal
-description: Learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer using the Azure portal.
+ Title: Configure Virtual Machine Scale Set with an existing Azure Load Balancer - Azure portal/CLI/PowerShell
+description: Learn to configure a Virtual Machine Scale Set with an existing Azure standard Load Balancer using the Azure portal, Azure CLI or Azure PowerShell.
Previously updated : 12/15/2022 Last updated : 01/11/2024
-# Configure a Virtual Machine Scale Set with an existing Azure Load Balancer using the Azure portal
+# Configure a Virtual Machine Scale Set with an existing Azure Standard Load Balancer
-In this article, you'll learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer.
+In this article, you'll learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer. With an existing virtual network and standard SKU load balancer, you can deploy a Virtual Machine Scale Set with a few clicks in the Azure portal, or with a few lines of code in the Azure CLI or Azure PowerShell using the tabs below.
+
+# [Azure Portal](#tab/portal)
## Prerequisites
In this article, you'll learn how to configure a Virtual Machine Scale Set with
Sign in to the [Azure portal](https://portal.azure.com).

## Deploy Virtual Machine Scale Set with existing load balancer

In this section, you'll create a Virtual Machine Scale Set in the Azure portal with an existing Azure load balancer.
In this section, you'll create a Virtual Machine Scale Set in the Azure portal w
9. Review the settings and select the **Create** button.
+# [Azure CLI](#tab/cli)
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- You need an existing standard SKU load balancer in the subscription where the Virtual Machine Scale Set will be deployed.
+
+- You need an Azure Virtual Network for the Virtual Machine Scale Set.
+
+
+- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Deploy a Virtual Machine Scale Set with existing load balancer
+
+Deploy a Virtual Machine Scale Set with [`az vmss create`](/cli/azure/vmss#az-vmss-create).
+Replace the values in brackets with the names of the resources in your configuration.
+
+```azurecli-interactive
+az vmss create \
+ --resource-group <resource-group> \
+    --name <vmss-name> \
+ --image <your-image> \
+ --admin-username <admin-username> \
+ --generate-ssh-keys \
+ --upgrade-policy-mode Automatic \
+ --instance-count 3 \
+ --vnet-name <virtual-network-name> \
+ --subnet <subnet-name> \
+ --lb <load-balancer-name> \
+ --backend-pool-name <backend-pool-name>
+```
+
+The following example deploys a Virtual Machine Scale Set with:
+
+- Virtual Machine Scale Set named **myVMSS**
+- Azure Load Balancer named **myLoadBalancer**
+- Load balancer backend pool named **myBackendPool**
+- Azure Virtual Network named **myVnet**
+- Subnet named **mySubnet**
+- Resource group named **myResourceGroup**
+- Ubuntu Server image for the Virtual Machine Scale Set
+
+```azurecli-interactive
+az vmss create \
+ --resource-group myResourceGroup \
+ --name myVMSS \
+ --image Canonical:UbuntuServer:18.04-LTS:latest \
+ --admin-username adminuser \
+ --generate-ssh-keys \
+ --upgrade-policy-mode Automatic \
+ --instance-count 3 \
+    --vnet-name myVnet \
+ --subnet mySubnet \
+ --lb myLoadBalancer \
+ --backend-pool-name myBackendPool
+```
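+
+As a quick optional check after the deployment completes, you can list the scale set instances with `az vmss list-instances`, using the example names from the preceding command:
+
+```azurecli-interactive
+az vmss list-instances \
+  --resource-group myResourceGroup \
+  --name myVMSS \
+  --output table
+```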
+> [!NOTE]
+> After the scale set has been created, the backend port cannot be modified for a load balancing rule used by a health probe of the load balancer. To change the port, you can remove the health probe by updating the Azure virtual machine scale set, update the port and then configure the health probe again.
++
+# [Azure PowerShell](#tab/powershell)
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing resource group for all resources.
+- An existing standard SKU load balancer in the subscription where the Virtual Machine Scale Set will be deployed.
+- An Azure Virtual Network for the Virtual Machine Scale Set.
+++
+## Sign in to Azure PowerShell
+
+Sign in to Azure with [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount#example-1-connect-to-an-azure-account).
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+## Deploy a Virtual Machine Scale Set with existing load balancer
+Deploy a Virtual Machine Scale Set with [`New-AzVmss`](/powershell/module/az.compute/new-azvmss). Replace the values in brackets with the names of the resources in your configuration.
+
+```azurepowershell-interactive
+
+# Quote the placeholder values; replace them with your own resource names.
+$rsg = "<resource-group>"
+$loc = "<location>"
+$vms = "<vm-scale-set-name>"
+$vnt = "<virtual-network>"
+$sub = "<subnet-name>"
+$lbn = "<load-balancer-name>"
+$pol = "<upgrade-policy-mode>"
+$img = "<image-name>"
+$bep = "<backend-pool-name>"
+
+New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lbn -UpgradePolicyMode $pol -ImageName $img -BackendPoolName $bep
+
+```
+
+The following example deploys a Virtual Machine Scale Set with the following values:
+
+- Virtual Machine Scale Set named **myVMSS**
+- Azure Load Balancer named **myLoadBalancer**
+- Load balancer backend pool named **myBackendPool**
+- Azure Virtual Network named **myVnet**
+- Subnet named **mySubnet**
+- Resource group named **myResourceGroup**
+
+```azurepowershell-interactive
+
+$rsg = "myResourceGroup"
+$loc = "East US 2"
+$vms = "myVMSS"
+$vnt = "myVnet"
+$sub = "mySubnet"
+$pol = "Automatic"
+$lbn = "myLoadBalancer"
+$bep = "myBackendPool"
+
+New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lbn -UpgradePolicyMode $pol -BackendPoolName $bep
+
+```
+> [!NOTE]
+> After the scale set has been created, the backend port cannot be modified for a load balancing rule used by a health probe of the load balancer. To change the port, you can remove the health probe by updating the Azure virtual machine scale set, update the port and then configure the health probe again.
+ ## Next steps
+
+ In this article, you deployed a Virtual Machine Scale Set with an existing Azure Load Balancer. To learn more about Virtual Machine Scale Sets and load balancer, see:
load-balancer Configure Vm Scale Set Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-powershell.md
- Title: Configure Virtual Machine Scale Set with an existing Azure Load Balancer - Azure PowerShell
-description: Learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer using Azure PowerShell.
---- Previously updated : 12/15/2022---
-# Configure a Virtual Machine Scale Set with an existing Azure Load Balancer using Azure PowerShell
-
-In this article, you'll learn how to configure a Virtual Machine Scale Set with an existing Azure Load Balancer.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing resource group for all resources.-- An existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed.-- An Azure Virtual Network for the Virtual Machine Scale Set.---
-## Sign in to Azure CLI
-
-Sign into Azure with [`Connect-AzAccount`](/powershell/module/az.accounts/connect-azaccount#example-1-connect-to-an-azure-account)
-
-```azurepowershell-interactive
-Connect-AzAccount
-```
-
-## Deploy a Virtual Machine Scale Set with existing load balancer
-Deploy a Virtual Machine Scale Set with [`New-AzVMss`](/powershell/module/az.compute/new-azvmss). Replace the values in brackets with the names of the resources in your configuration.
-
-```azurepowershell-interactive
-
-$rsg = <resource-group>
-$loc = <location>
-$vms = <vm-scale-set-name>
-$vnt = <virtual-network>
-$sub = <subnet-name>
-$lbn = <load-balancer-name>
-$pol = <upgrade-policy-mode>
-$img = <image-name>
-$bep = <backend-pool-name>
-
-$lb = Get-AzLoadBalancer -ResourceGroupName $rsg -Name $lbn
-
-New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lb -UpgradePolicyMode $pol
-
-```
-
-The below example deploys a Virtual Machine Scale Set with the following values:
--- Virtual Machine Scale Set named **myVMSS**-- Azure Load Balancer named **myLoadBalancer**-- Load balancer backend pool named **myBackendPool**-- Azure Virtual Network named **myVnet**-- Subnet named **mySubnet**-- Resource group named **myResourceGroup**-
-```azurepowershell-interactive
-
-$rsg = "myResourceGroup"
-$loc = "East US 2"
-$vms = "myVMSS"
-$vnt = "myVnet"
-$sub = "mySubnet"
-$pol = "Automatic"
-$lbn = "myLoadBalancer"
-$bep = "myBackendPool"
-
-$lb = Get-AzLoadBalancer -ResourceGroupName $rsg -Name $lbn
-
-New-AzVmss -ResourceGroupName $rsg -Location $loc -VMScaleSetName $vms -VirtualNetworkName $vnt -SubnetName $sub -LoadBalancerName $lb -UpgradePolicyMode $pol -BackendPoolName $bep
-
-```
-> [!NOTE]
-> After the scale set has been created, the backend port cannot be modified for a load balancing rule used by a health probe of the load balancer. To change the port, you can remove the health probe by updating the Azure virtual machine scale set, update the port and then configure the health probe again.
-
-## Next steps
-
-In this article, you deployed a Virtual Machine Scale Set with an existing Azure Load Balancer. To learn more about Virtual Machine Scale Sets and load balancer, see:
--- [What is Azure Load Balancer?](load-balancer-overview.md)-- [What are Virtual Machine Scale Sets?](../virtual-machine-scale-sets/overview.md)
logic-apps Block Connections Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-across-tenants.md
ms.suite: integration Previously updated : 08/01/2022 Last updated : 01/10/2024 # Customer intent: As a developer, I want to prevent access to and from other Microsoft Entra tenants.
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
ms.suite: integration Previously updated : 08/22/2022 Last updated : 01/10/2024 # Block connector usage in Azure Logic Apps
To block creating a connection altogether in a logic app workflow, follow these
| **Policy enforcement** | Yes | **Enabled** | This setting specifies whether to enable or disable the policy definition when you save your work. | |||||
-1. Under **POLICY RULE**, the JSON edit box is pre-populated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
+1. Under **POLICY RULE**, the JSON edit box is prepopulated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
```json {
When you create a connection in a logic app workflow, this connection exists as
| **Policy enforcement** | Yes | **Enabled** | This setting specifies whether to enable or disable the policy definition when you save your work. | |||||
-1. Under **POLICY RULE**, the JSON edit box is pre-populated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
+1. Under **POLICY RULE**, the JSON edit box is prepopulated with a policy definition template. Replace this template with your [policy definition](../governance/policy/concepts/definition-structure.md) based on the properties described in the table below and by following this syntax:
```json {
Next, you need to assign the policy definition where you want to enforce the pol
1. After the policy takes effect, you can [test your policy](#test-policy).
-For more information, see [Quickstart: Create a policy assignment to identify non-compliant resources](../governance/policy/assign-policy-portal.md).
+For more information, see [Quickstart: Create a policy assignment to identify noncompliant resources](../governance/policy/assign-policy-portal.md).
<a name="test-policy"></a>
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
ms.suite: integration Previously updated : 05/11/2023 Last updated : 01/10/2024 # Business continuity and disaster recovery for Azure Logic Apps
logic-apps Call From Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-apps.md
+
+ Title: Call logic apps from Power Apps
+description: Call logic apps from Microsoft Power Apps by exporting logic apps as custom connectors.
+
+ms.suite: integration
++ Last updated : 01/10/2024++
+# Call logic app workflows from Power Apps
++
+To call your logic app workflow from a Power Apps flow, you can export your logic app resource and workflow as a custom connector. You can then call your workflow from a flow in a Power Apps environment.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* A Power Apps license.
+
+* A Consumption logic app workflow with a request trigger to export.
+
+ > [!NOTE]
+ >
+ > The Export capability is available only for Consumption logic app workflows in multitenant Azure Logic Apps.
+
+* A Power Apps flow from where to call your logic app workflow.
+
+## Export your logic app as a custom connector
+
+Before you can call your workflow from Power Apps, you must first export your logic app resource as a custom connector.
+
+1. In the [Azure portal](https://portal.azure.com) search box, enter **logic apps**. From the results, select **Logic apps**.
+
+1. Select the logic app resource that you want to export.
+
+1. On your logic app menu, select **Overview**. On the **Overview** page toolbar, select **Export** > **Export to Power Apps**.
+
+ :::image type="content" source="./media/call-from-power-apps/export-logic-app.png" alt-text="Screenshot shows Azure portal and Overview toolbar with Export button selected.":::
+
+1. On the **Export to Power Apps** pane, provide the following information:
+
+ | Property | Description |
+ |-|-|
+ | **Name** | Provide a name for the custom connector to create from your logic app.
+ | **Environment** | Select the Power Apps environment from which you want to call your logic app.
+
+1. When you're done, select **OK**. To confirm that your logic app was successfully exported, check the notifications pane.
+
+### Export errors
+
+Here are errors that might happen when you export your logic app as a custom connector, along with suggested solutions:
+
+* **The current Logic App cannot be exported. To export, select a Logic App that has a request trigger.**: Check that your logic app workflow begins with a [Request trigger](../connectors/connectors-native-reqres.md).
+
+## Connect to your logic app workflow from Power Apps
+
+1. In [Power Apps](https://powerapps.microsoft.com/), on the **Power Apps** home page menu, select **Flows**.
+
+1. On the **Flows** page, select the flow from where you want to call your logic app workflow.
+
+1. On your flow page toolbar, select **Edit**.
+
+1. In the flow editor, select **&#43; New step**.
+
+1. In the **Choose an operation** search box, enter the name for your logic app custom connector.
+
+ Optionally, to see only custom connectors in your environment, filter the results using the **Custom** tab, for example:
+
+ :::image type="content" source="./media/call-from-power-apps/power-apps-custom-connector-action.png" alt-text="Screenshot shows Power Apps flow editor with a new operation added for custom connector and available actions.":::
+
+1. Select the custom connector operation that you want to call from your flow.
+
+1. Provide the necessary operation information to pass to the custom connector.
+
+1. On the Power Apps editor toolbar, select **Save** to save your changes.
+
+1. In the [Azure portal](https://portal.azure.com), find and open the logic app resource that you exported.
+
+1. Confirm that your logic app workflow works as expected with your Power Apps flow.
+
+## Delete logic app custom connector from Power Apps
+
+1. In [Power Apps](https://powerapps.microsoft.com), on the **Power Apps** home page menu, select **Discover**. On the **Discover** page, find the **Data** tile, and select **Custom connectors**.
+
+1. In the list, find your custom connector, select the ellipses (**...**) button, and then select **Delete**.
+
+ :::image type="content" source="./media/call-from-power-apps/delete-custom-connector.png" alt-text="Screenshot shows Custom connectors page with custom connector management options.":::
+
+1. To confirm deletion, select **OK**.
+
+## Next steps
+
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Call From Power Automate Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-automate-power-apps.md
- Title: Call logic apps from Power Automate and Power Apps
-description: Call logic apps from Microsoft Power Automate flows by exporting logic apps as connectors.
--- Previously updated : 08/20/2022--
-# Call logic apps from Power Automate and Power Apps
--
-To call your logic apps from Microsoft Power Automate and Microsoft Power Apps, you can export your logic apps as connectors. When you expose a logic app as a custom connector in a Power Automate or Power Apps environment, you can then call your logic app from flows there.
-
-If you want to migrate your flow from Power Automate or Power to Logic Apps instead, see [Export flows from Power Automate and deploy to Azure Logic Apps](export-from-microsoft-flow-logic-app-template.md).
-
-> [!NOTE]
-> Not all Power Automate connectors are available in Azure Logic Apps. You can migrate only Power Automate flows
-> that have the equivalent connectors in Azure Logic Apps. For example, the Button trigger, the Approval connector,
-> and Notification connector are specific to Power Automate. Currently, OpenAPI-based flows in Power Automate aren't
-> supported for export and deployment as logic app templates.
->
-> * To find which Power Automate connectors don't have Logic Apps equivalents, see
-> [Power Automate connectors](/connectors/connector-reference/connector-reference-powerautomate-connectors).
->
-> * To find which Logic Apps connectors don't have Power Automate equivalents, see
-> [Logic Apps connectors](/connectors/connector-reference/connector-reference-logicapps-connectors).
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* A Power Automate or Power Apps license.
-
-* A Consumption logic app workflow with a request trigger to export.
-
- > [!NOTE]
- >
- > The Export capability is available only for Consumption logic app workflows in multi-tenant Azure Logic Apps.
-
-* A flow in Power Automate or Power Apps from which you want to call your logic app.
-
-## Export your logic app as a custom connector
-
-Before you can call your logic app from Power Automate or Power Apps, you must first export your logic app as a custom connector.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Azure portal search box, enter `Logic Apps`. In the results, under **Services**, select **Logic Apps**.
-
-1. Select the logic app that you want to export.
-
-1. From your logic app's menu, select **Export**.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/export-logic-app.png" alt-text="Screenshot of logic app's page in Azure portal, showing menu with 'Export' button selected.":::
-
-1. On the **Export** pane, for **Name**, enter a name for the custom connector to your logic app. From the **Environment** list, select the Power Automate or Power Apps environment from which you want to call your logic app. When you're done, select **OK**.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/export-logic-app2.png" alt-text="Screenshot of export pane for logic app, showing required fields for custom connector name and environment.":::
-
-1. To confirm that your logic app was successfully exported, check the notifications pane.
-
-### Exporting errors
-
-Here are errors that might happen when you export your logic app as a custom connector and their suggested solutions:
-
-* **Failed to get environments. Make sure your account is configured for Power Automate, then try again.**: Check that your Azure account has a Power Automate plan.
-
-* **The current Logic App cannot be exported. To export, select a Logic App that has a request trigger.**: Check that your logic app begins with a [request trigger](./logic-apps-workflow-actions-triggers.md#request-trigger).
-
-## Connect to your logic app from Power Automate
-
-To connect to the logic app that you exported with your Power Automate flow:
-
-1. Sign in to [Power Automate](https://make.powerautomate.com).
-
-1. From the **Power Automate** home page menu, select **My flows**.
-
-1. On the **Flows** page, select the flow that you want to connect to your logic app.
-
-1. From your flow page's menu, select **Edit**.
-
-1. In the flow editor, select **&#43; New step**.
-
-1. Under **Choose an action**, in the search box, enter the name of your logic app connector. Optionally, to show only the custom connectors in your environment, filter the results by selecting the **Custom** tab.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/power-automate-custom-connector-action.png" alt-text="Screenshot of Power Automate flow editor, showing a new step being added for the custom connector and available actions.":::
-
-1. Select the action that you want to take with your logic app connector.
-
-1. Provide the information that the action passes to the logic app connector.
-
-1. To save your changes, from the Power Automate editor menu, select **Save**.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Logic Apps service, find the logic app that you exported.
-
-1. Confirm that your logic app works the way that you expect in your Power Automate flow.
-
-## Delete logic app connector from Power Automate
-
-1. Sign in to [Power Automate](https://make.powerautomate.com).
-
-1. On the **Power Automate** home page, select **Data** &gt; **Custom connectors** in the menu.
-
-1. In the list, find your custom connector, and select the ellipses (**...**) button &gt; **Delete**.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/delete-custom-connector.png" alt-text="Screenshot of Power Automate 'Custom connectors' page, showing logic app's custom connector management buttons.":::
-
-1. To confirm the deletion, select **OK**.
-
-## Connect to your logic app from Power Apps
-
-To connect to the logic app that you exported with your Power Apps flow:
-
-1. Sign in to [Power Apps](https://powerapps.microsoft.com/).
-
-1. On the **Power Apps** home page, select **Flows** in the menu.
-
-1. On the **Flows** page, select the flow that you want to connect to your logic app.
-
-1. On your flow's page, select **Edit** in the flow's menu.
-
-1. In the flow editor, select the **&#43; New step** button.
-
-1. Under **Choose an action** in the new step, enter the name of your logic app connector in the search box. Optionally, filter the results by the **Custom** tab to see only custom connectors in your environment.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/power-apps-custom-connector-action.png" alt-text="Screenshot of Power Apps flow editor, showing a new step being added for the custom connector and available actions.":::
-
-1. Select the action that you want to take with the connector.
-
-1. Configure what information your action passes to the logic app connector.
-
-1. In the Power Apps editor menu, select **Save** to save your changes.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the Logic Apps service, find the logic app that you exported.
-
-1. Confirm that your logic app is functioning as intended with your Power Apps flow.
-
-## Delete logic app connector from Power Apps
-
-1. Sign in to [Power Apps](https://powerapps.microsoft.com).
-
-1. On the **Power Apps** home page, select **Data** &gt; **Custom Connectors** in the menu.
-
-1. In the list, find your custom connector, and select the ellipses (**...**) button &gt; **Delete**.
-
- :::image type="content" source="./media/call-logic-apps-from-power-automate-power-apps/delete-custom-connector.png" alt-text="Screenshot of Power Apps 'Custom connectors' page, showing logic app's custom connector management buttons.":::
-
-1. To confirm the deletion, select **OK**.
-
-## Next steps
-
-* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 01/03/2023 Last updated : 01/10/2024 # Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
logic-apps Create Workflow With Trigger Or Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-workflow-with-trigger-or-action.md
ms.suite: integration Previously updated : 05/23/2023 Last updated : 01/10/2024 # As an Azure Logic Apps developer, I want to create a workflow using trigger and action operations in Azure Logic Apps.
logic-apps Estimate Storage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/estimate-storage-costs.md
ms.suite: integration Previously updated : 08/20/2022 Last updated : 01/10/2024 # Estimate storage costs for Standard logic app workflows in single-tenant Azure Logic Apps [!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-Azure Logic Apps uses [Azure Storage](../storage/index.yml) for any storage operations. In traditional *multi-tenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps, you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
+Azure Logic Apps uses [Azure Storage](../storage/index.yml) for any storage operations. In traditional *multitenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps, you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
> [!NOTE]
-> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+> This article applies to workflows in the single-tenant Azure Logic Apps environment. These workflows exist in the same logic app and in a single tenant that share the same storage. For more information, see [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
Storage costs change based on your workflows' content. Different triggers, actions, and payloads result in different storage operations and needs. This article describes how to estimate your storage costs when you're using your own Azure Storage account with single-tenant based logic apps. First, you can [estimate the number of storage operations you'll perform](#estimate-storage-needs) using the Logic Apps storage calculator. Then, you can [estimate your possible storage costs](#estimate-storage-costs) using these numbers in the Azure pricing calculator.
logic-apps Export From Microsoft Flow Logic App Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-microsoft-flow-logic-app-template.md
ms.suite: integration
Previously updated : 01/23/2023 Last updated : 01/10/2024 # Export flows from Power Automate and deploy to Azure Logic Apps
logic-apps Handle Throttling Problems 429 Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-throttling-problems-429-errors.md
ms.suite: integration Previously updated : 03/02/2023 Last updated : 01/10/2024 # Handle throttling problems (429 - "Too many requests" errors) in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Previously updated : 02/22/2023 Last updated : 01/10/2024 # Encode and decode flat files in Azure Logic Apps
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-examples-and-scenarios.md
ms.suite: integration Previously updated : 03/07/2023 Last updated : 01/10/2024 # Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
Previously updated : 10/06/2023 Last updated : 01/09/2024 # Create workflows that you can call, trigger, or nest using HTTPS endpoints in Azure Logic Apps
-Some scenarios might require that you create a workflow that you can call through a URL or that can receive and inbound requests from other services or workflows. For this task, you can natively expose a synchronous HTTPS endpoint for your workflow by using any of the following request-based trigger types:
+Some scenarios might require that you create a workflow that you can call using a URL or that can receive inbound requests from other services or workflows. For this task, you can expose a native synchronous HTTPS endpoint on your workflow when you use any of the following request-based trigger types:
* [Request](../connectors/connectors-native-reqres.md)
* [HTTP Webhook](../connectors/connectors-native-webhook.md)
-* Managed connector triggers that have the [ApiConnectionWebhook type](../logic-apps/logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive inbound HTTPS requests
+* Managed connector triggers that have the [ApiConnectionWebhook type](logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive inbound HTTPS requests
-This how-to guide shows how to create a callable endpoint for your workflow by using the Request trigger and call that endpoint from another workflow. All principles identically apply to the other request-based trigger types that can receive inbound requests.
-
-For information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [OAuth with Microsoft Entra ID](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+This guide shows how to create a callable endpoint for your workflow by adding the **Request** trigger, and then how to call that endpoint from another workflow. All principles apply identically to the other request-based trigger types that can receive inbound requests.
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app workflow where you want to use the trigger to create the callable endpoint. You can start with either a blank workflow or an existing logic app workflow where you can replace the current trigger. This example starts with a blank workflow. If you're new to logic apps, see [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+* A logic app workflow where you want to use the request-based trigger to create the callable endpoint. You can start with either a blank workflow or an existing workflow where you can replace the current trigger. This example starts with a blank workflow.
## Create a callable endpoint
-1. In the [Azure portal](https://portal.azure.com), create a logic app resource and blank workflow in the designer.
+Based on whether you have a Standard or Consumption logic app workflow, follow the corresponding steps:
-1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and blank workflow in the designer.
+
+1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
1. Optionally, in the **Request Body JSON Schema** box, you can enter a JSON schema that describes the payload or data that you expect the trigger to receive. The designer uses this schema to generate tokens that represent trigger outputs. You can then easily reference these outputs throughout your logic app's workflow. Learn more about [tokens generated from JSON schemas](#generated-tokens).
- For this example, enter this schema:
+ For this example, enter the following schema:
    ```json
    {
        "type": "object",
        "properties": {
            "address": {
                "type": "object",
                "properties": {
                    "streetNumber": {
                        "type": "string"
                    },
                    "streetName": {
                        "type": "string"
                    },
                    "town": {
                        "type": "string"
                    },
                    "postalCode": {
                        "type": "string"
                    }
                }
            }
        }
    }
    ```
- ![Provide JSON schema for the Request action](./media/logic-apps-http-endpoint/manual-request-trigger-schema.png)
+ ![Screenshot shows Standard workflow with Request trigger and Request Body JSON Schema parameter with example schema.](./media/logic-apps-http-endpoint/trigger-schema-standard.png)
Or, you can generate a JSON schema by providing a sample payload:
For information about security, authorization, and encryption for inbound calls
The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
- ![Generated callback URL for endpoint](./media/logic-apps-http-endpoint/generated-endpoint-url.png)
+ ![Screenshot shows Standard workflow, Request trigger, and generated callback URL for endpoint.](./media/logic-apps-http-endpoint/endpoint-url-standard.png)
+
+1. To copy the callback URL, you have these options:
+
+ * To the right of the **HTTP POST URL** box, select **Copy URL** (copy files icon).
+
+ * Make this call by using the method that the Request trigger expects. This example uses the `POST` method:
+
+ `POST https://management.azure.com/{logic-app-resource-ID}/triggers/{endpoint-trigger-name}/listCallbackURL?api-version=2016-06-01`
+
+ * Copy the callback URL from your workflow's **Overview** page.
+
+ 1. On your workflow menu, select **Overview**.
+
+ 1. On the **Overview** page, under **Workflow URL**, move your pointer over the URL, and select **Copy to clipboard**:
+
+ :::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-standard.png" alt-text="Screenshot shows Standard workflow and Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-standard.png":::
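+
+To try the endpoint after you copy the URL, you can send a hypothetical test request from the command line, for example with `curl`. This sketch assumes you substitute your own callback URL (including its SAS query parameters) and reuses the sample payload shown earlier:
+
+```bash
+curl -X POST "https://<your-callback-URL-including-SAS-parameters>" \
+  -H "Content-Type: application/json" \
+  -d '{"address":{"streetNumber":"00000","streetName":"AnyStreet","town":"AnyTown","postalCode":"11111-1111"}}'
+```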
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app resource and blank workflow in the designer.
+
+1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+1. Optionally, in the **Request Body JSON Schema** box, you can enter a JSON schema that describes the payload or data that you expect the trigger to receive.
+
+ The designer uses this schema to generate tokens that represent trigger outputs. You can then easily reference these outputs throughout your logic app's workflow. Learn more about [tokens generated from JSON schemas](#generated-tokens).
+
+ For this example, enter the following schema:
+
+ ```json
+ {
+ "type": "object",
+ "properties": {
+ "address": {
+ "type": "object",
+ "properties": {
+ "streetNumber": {
+ "type": "string"
+ },
+ "streetName": {
+ "type": "string"
+ },
+ "town": {
+ "type": "string"
+ },
+ "postalCode": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ ```
+
+ ![Screenshot shows Consumption workflow with Request trigger and Request Body JSON Schema parameter with example schema.](./media/logic-apps-http-endpoint/trigger-schema-consumption.png)
+
+ Or, you can generate a JSON schema by providing a sample payload:
+
+ 1. In the **Request** trigger, select **Use sample payload to generate schema**.
+
+ 1. In the **Enter or paste a sample JSON payload** box, enter your sample payload, for example:
+
+ ```json
+ {
+ "address": {
+ "streetNumber": "00000",
+ "streetName": "AnyStreet",
+ "town": "AnyTown",
+ "postalCode": "11111-1111"
+ }
+ }
+ ```
+
+ 1. When you're ready, select **Done**.
+
+ The **Request Body JSON Schema** box now shows the generated schema.
+
+1. Save your workflow.
+
+ The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
+
+ ![Screenshot shows Consumption workflow, Request trigger, and generated callback URL for endpoint.](./media/logic-apps-http-endpoint/endpoint-url-consumption.png)
1. To copy the callback URL, you have these options:
For information about security, authorization, and encryption for inbound calls
`POST https://management.azure.com/{logic-app-resource-ID}/triggers/{endpoint-trigger-name}/listCallbackURL?api-version=2016-06-01`
- * Copy the callback URL from your logic app's **Overview** pane.
+ * Copy the callback URL from your logic app's **Overview** page.
- 1. On your logic app's menu, select **Overview**.
+ 1. On your logic app menu, select **Overview**.
- 1. On the **Overview** pane, select **Trigger history**. Under **Callback url [POST]**, copy the URL:
+ 1. On the **Overview** page, under **Workflow URL**, move your pointer over the URL, and select **Copy to clipboard**:
- ![Screenshot showing logic app 'Overview' pane with 'Trigger history' selected.](./media/logic-apps-http-endpoint/find-manual-trigger-url.png)
+ :::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png" alt-text="Screenshot shows Consumption logic app Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png":::
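+
+If you prefer to retrieve the callback URL programmatically, one option is the management API call shown earlier, for example through `az rest`. This is a sketch; replace the placeholders with your own logic app resource ID and trigger name:
+
+```azurecli-interactive
+az rest --method post \
+  --url "https://management.azure.com/{logic-app-resource-ID}/triggers/{endpoint-trigger-name}/listCallbackURL?api-version=2016-06-01"
+```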
++ <a name="select-method"></a>
For information about security, authorization, and encryption for inbound calls
By default, the Request trigger expects a `POST` request. However, you can specify a different method that the caller must use, but only a single method.
-1. In the Request trigger, open the **Add new parameter** list, and select **Method**, which adds this property to the trigger.
+### [Standard](#tab/standard)
- ![Add "Method" property to trigger](./media/logic-apps-http-endpoint/select-add-new-parameter-for-method.png)
+1. In the Request trigger, open the **Advanced parameters** list, and select **Method**, which adds this property to the trigger.
1. From the **Method** list, select the method that the trigger should expect instead. Or, you can specify a custom method. For example, select the **GET** method so that you can test your endpoint's URL later.
- ![Select request method expected by the trigger](./media/logic-apps-http-endpoint/select-method-request-trigger.png)
+### [Consumption](#tab/consumption)
+
+1. In the Request trigger, open the **Add new parameter** list, and select **Method**, which adds this property to the trigger.
+
+1. From the **Method** list, select the method that the trigger should expect instead. Or, you can specify a custom method.
+
+ For example, select the **GET** method so that you can test your endpoint's URL later.
++ <a name="endpoint-url-parameters"></a>
<a name="get-parameters"></a>
-### Accept values through GET parameters
+## Accept values through GET parameters
+
+### [Standard](#tab/standard)
+
+1. In the Request trigger, open the **Advanced parameters** list, add the **Method** property to the trigger, and select the **GET** method.
+
+ For more information, see [Select expected request method](#select-method).
+
+1. In the designer, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For this example, select the action named **Response**.
+
+1. To build the `triggerOutputs()` expression that retrieves the parameter value, follow these steps:
+
+ 1. In the Response action, select inside the **Body** property so that the options for dynamic content (lightning icon) and expression editor (formula icon) appear. Select the formula icon to open the expression editor.
+
+ 1. In the expression box, enter the following expression, replacing `parameter-name` with your parameter name, and select **OK**.
+
+ `triggerOutputs()['queries']['parameter-name']`
+
+ ![Screenshot shows Standard workflow, Response action, and the triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-standard.png)
+
+ In the **Body** property, the expression resolves to the `triggerOutputs()` token.
+
+ ![Screenshot shows Standard workflow with Response action's resolved triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-token.png)
+
+ If you save the workflow, navigate away from the designer, and return to the designer, the token shows the parameter name that you specified, for example:
+
+ ![Screenshot shows Standard workflow with Response action's resolved expression for parameter name.](./media/logic-apps-http-endpoint/resolved-expression-parameter-token.png)
+
+ In code view, the **Body** property appears in the Response action's definition as follows:
+
+ `"body": "@{triggerOutputs()['queries']['parameter-name']}",`
+
+ For example, suppose that you want to pass a value for a parameter named `postalCode`. The **Body** property specifies the string, `Postal Code: ` with a trailing space, followed by the corresponding expression:
+
+ ![Screenshot shows Standard workflow with Response action and example triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-postal-code-standard.png)
+
+#### Test your callable endpoint
+
+1. From the Request trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value in the following format, and press Enter.
+
+   `...invoke?{parameter-name=parameter-value}&api-version=2022-05-01...`
+
+   For example:
+
+   `https://mystandardlogicapp.azurewebsites.net/api/Stateful-Workflow/triggers/When_a_HTTP_request_is_received/invoke?postalCode=123456&api-version=2022-05-01&sp=%2Ftriggers%2FWhen_a_HTTP_request_is_received%2Frun&sv=1.0&sig={shared-access-signature}`
+
+   The browser returns a response with this text: `Postal Code: 123456`
+
+ ![Screenshot shows browser with Standard workflow response from request to callback URL.](./media/logic-apps-http-endpoint/browser-response-callback-url-standard.png)
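+
+   Instead of a browser, you can send the same test request from Python; a hedged sketch, where the URL is a placeholder for your own workflow URL:
+
+   ```python
+   # Hedged sketch: test the endpoint by passing the parameter in the query string.
+   import requests
+
+   workflow_url = "<workflow-URL-from-the-Request-trigger>"  # placeholder
+
+   response = requests.get(workflow_url, params={"postalCode": "123456"})
+   print(response.text)  # Postal Code: 123456
+   ```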
+
+> [!NOTE]
+>
+> If you want to include the hash or pound symbol (**#**) in the URI,
+> use this encoded version instead: `%25%23`
+
+### [Consumption](#tab/consumption)
1. In the Request trigger, open the **Add new parameter** list, add the **Method** property to the trigger, and select the **GET** method. For more information, see [Select expected request method](#select-method).
-1. In the designer, [follow these general steps to add the action where you want to use the parameter value](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). For this example, select the action named **Response**.
+1. In the designer, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ For this example, select the action named **Response**.
1. To build the `triggerOutputs()` expression that retrieves the parameter value, follow these steps:
- 1. Select inside the Response action's **Body** property so that the dynamic content list appears, and select **Expression**.
+ 1. In the Response action, select inside the **Body** property so that the dynamic content list appears, and select **Expression**.
- 1. In the **Expression** box, enter this expression, replacing `parameter-name` with your parameter name, and select **OK**.
+ 1. In the **Expression** box, enter the following expression, replacing `parameter-name` with your parameter name, and select **OK**.
`triggerOutputs()['queries']['parameter-name']`
- ![Add "triggerOutputs()" expression to trigger](./media/logic-apps-http-endpoint/trigger-outputs-expression.png)
+ ![Screenshot shows Consumption workflow, Response action, and the triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-consumption.png)
In the **Body** property, the expression resolves to the `triggerOutputs()` token.
- ![Resolved "triggerOutputs()" expression](./media/logic-apps-http-endpoint/trigger-outputs-expression-token.png)
+ ![Screenshot shows Consumption workflow with Response action's resolved triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-token.png)
If you save the workflow, navigate away from the designer, and return to the designer, the token shows the parameter name that you specified, for example:
- ![Resolved expression for parameter name](./media/logic-apps-http-endpoint/resolved-expression-parameter-token.png)
+ ![Screenshot shows Consumption workflow with Response action's resolved expression for parameter name.](./media/logic-apps-http-endpoint/resolved-expression-parameter-token.png)
In code view, the **Body** property appears in the Response action's definition as follows:
For example, suppose that you want to pass a value for a parameter named `postalCode`. The **Body** property specifies the string, `Postal Code: ` with a trailing space, followed by the corresponding expression:
- ![Add example "triggerOutputs()" expression to trigger](./media/logic-apps-http-endpoint/trigger-outputs-expression-postal-code.png)
+ ![Screenshot shows Consumption workflow with Response action and example triggerOutputs() expression.](./media/logic-apps-http-endpoint/trigger-outputs-expression-postal-code-consumption.png)
-1. To test your callable endpoint, copy the callback URL from the Request trigger, and paste the URL into another browser window. In the URL, add the parameter name and value following the question mark (`?`) to the URL in the following format, and press Enter.
+#### Test your callable endpoint
- `...?{parameter-name=parameter-value}&api-version=2016-10-01...`
+1. From the Request trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value following the question mark (`?`) to the URL in the following format, and press Enter.
+
+ `...invoke?{parameter-name=parameter-value}&api-version=2016-10-01...`
+
+ For example:
   `https://prod-07.westus.logic.azure.com:433/workflows/{logic-app-resource-ID}/triggers/manual/paths/invoke?{parameter-name=parameter-value}&api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig={shared-access-signature}`

   The browser returns a response with this text: `Postal Code: 123456`
- ![Response from sending request to callback URL](./media/logic-apps-http-endpoint/callback-url-returned-response.png)
+ ![Screenshot shows browser with Consumption workflow response from request to callback URL.](./media/logic-apps-http-endpoint/browser-response-callback-url-consumption.png)
1. To put the parameter name and value in a different position within the URL, make sure to use the ampersand (`&`) as a prefix, for example:
   * 2nd position: `https://prod-07.westus.logic.azure.com:433/workflows/{logic-app-resource-ID}/triggers/manual/paths/invoke?api-version=2016-10-01&postalCode=123456&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig={shared-access-signature}`

> [!NOTE]
+>
> If you want to include the hash or pound symbol (**#**) in the URI,
> use this encoded version instead: `%25%23`

<a name="relative-path"></a>
-### Accept values through a relative path
+## Accept values through a relative path
-1. In the Request trigger, open the **Add new parameter** list, and select **Relative path**, which adds this property to the trigger.
+### [Standard](#tab/standard)
- ![Add "Relative path" property to trigger](./media/logic-apps-http-endpoint/select-add-new-parameter-for-relative-path.png)
+1. In the Request trigger, open the **Advanced parameters** list, and select **Relative path**, which adds this property to the trigger.
+
+ ![Screenshot shows Standard workflow, Request trigger, and added property named Relative path.](./media/logic-apps-http-endpoint/add-relative-path-standard.png)
1. In the **Relative path** property, specify the relative path for the parameter in your JSON schema that you want your URL to accept, for example, `/address/{postalCode}`.
- ![Specify the relative path for the parameter](./media/logic-apps-http-endpoint/relative-path-url-value.png)
+ ![Screenshot shows Standard workflow, Request trigger, and Relative path parameter value.](./media/logic-apps-http-endpoint/relative-path-url-standard.png)
-1. Under the Request trigger, add the action where you want to use the parameter value. For this example, add the **Response** action.
+1. Under the Request trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- 1. Under the Request trigger, select **New step** > **Add an action**.
+ For this example, add the **Response** action.
- 1. Under **Choose an action**, in the search box, enter `response` as your filter. From the actions list, select the **Response** action.
+1. In the Response action's **Body** property, include the token that represents the parameter that you specified in your trigger's relative path.
+
+ For example, suppose that you want the Response action to return `Postal Code: {postalCode}`.
+
+ 1. In the **Body** property, enter `Postal Code: ` with a trailing space. Keep your cursor inside the edit box so that the dynamic content list remains open.
+
+ 1. In the dynamic content list, from the **When a HTTP request is received** section, select the **Path Parameters postalCode** trigger output.
+
+ :::image type="content" source="./media/logic-apps-http-endpoint/response-trigger-output-standard.png" alt-text="Screenshot shows Standard workflow, Response action, and specified trigger output to include in response body." lightbox="./media/logic-apps-http-endpoint/response-trigger-output-standard.png":::
+
+ The **Body** property now includes the selected parameter:
+
+ ![Screenshot shows Standard workflow and example response body with parameter.](./media/logic-apps-http-endpoint/response-parameter-standard.png)
+
+1. Save your workflow.
+
+ In the Request trigger, the callback URL is updated and now includes the relative path, for example:
+
+ `https://mystandardlogicapp.azurewebsites.net/api/Stateful-Workflow/triggers/When_a_HTTP_request_is_received/invoke/address/%7BpostalCode%7D?api-version=2022-05-01&sp=%2Ftriggers%2FWhen_a_HTTP_request_is_received%2Frun&sv=1.0&sig={shared-access-signature}`
+
+1. To test your callable endpoint, copy the updated callback URL from the Request trigger, paste the URL into another browser window, replace `%7BpostalCode%7D` in the URL with `123456`, and press Enter.
+
+ The browser returns a response with this text: `Postal Code: 123456`
+
+ ![Screenshot shows browser with Standard workflow response from request to callback URL.](./media/logic-apps-http-endpoint/browser-response-callback-url-standard.png)
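+
+   You can run the same test from Python; a hedged sketch, where the URL is a placeholder for your own updated callback URL:
+
+   ```python
+   # Hedged sketch: substitute the encoded {postalCode} placeholder, then call the endpoint.
+   import requests
+
+   callback_url = "<updated-callback-URL>"  # placeholder; contains %7BpostalCode%7D
+
+   response = requests.get(callback_url.replace("%7BpostalCode%7D", "123456"))
+   print(response.text)  # Postal Code: 123456
+   ```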
+
+> [!NOTE]
+>
+> If you want to include the hash or pound symbol (**#**) in the URI,
+> use this encoded version instead: `%25%23`
+
+### [Consumption](#tab/consumption)
+
+1. In the Request trigger, open the **Add new parameter** list, and select **Relative path**, which adds this property to the trigger.
+
+ ![Screenshot shows Consumption workflow, Request trigger, and added property named Relative path.](./media/logic-apps-http-endpoint/add-relative-path-consumption.png)
+
+1. In the **Relative path** property, specify the relative path for the parameter in your JSON schema that you want your URL to accept, for example, `/address/{postalCode}`.
+
+ ![Screenshot shows Consumption workflow, Request trigger, and Relative path parameter value.](./media/logic-apps-http-endpoint/relative-path-url-consumption.png)
+
+1. Under the Request trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ For this example, add the **Response** action.
1. In the Response action's **Body** property, include the token that represents the parameter that you specified in your trigger's relative path.
1. In the **Body** property, enter `Postal Code: ` with a trailing space. Keep your cursor inside the edit box so that the dynamic content list remains open.
- 1. In the dynamic content list, from the **When a HTTP request is received** section, select the **postalCode** token.
+ 1. In the dynamic content list, from the **When a HTTP request is received** section, select the **postalCode** trigger output.
- ![Add the specified parameter to response body](./media/logic-apps-http-endpoint/relative-url-with-parameter-token.png)
+ ![Screenshot shows Consumption workflow, Response action, and specified trigger output to include in response body.](./media/logic-apps-http-endpoint/response-trigger-output-consumption.png)
The **Body** property now includes the selected parameter:
- ![Example response body with parameter](./media/logic-apps-http-endpoint/relative-url-with-parameter.png)
+ ![Screenshot shows Consumption workflow and example response body with parameter.](./media/logic-apps-http-endpoint/response-parameter-consumption.png)
1. Save your workflow.
The browser returns a response with this text: `Postal Code: 123456`
- ![Response from sending request to callback URL](./media/logic-apps-http-endpoint/callback-url-returned-response.png)
+ ![Screenshot shows browser with Consumption workflow response from request to callback URL.](./media/logic-apps-http-endpoint/browser-response-callback-url-consumption.png)
> [!NOTE]
+>
> If you want to include the hash or pound symbol (**#**) in the URI,
> use this encoded version instead: `%25%23`

## Call workflow through endpoint URL
-After you create the endpoint, you can trigger the workflow by sending an HTTPS request to the endpoint's full URL. Logic app workflows have built-in support for direct-access endpoints.
+After you create the endpoint, you can trigger the workflow by sending an HTTPS request to the endpoint's full URL. Azure Logic Apps workflows have built-in support for direct-access endpoints.
<a name="generated-tokens"></a>
When you provide a JSON schema in the Request trigger, the workflow designer generates tokens for the properties in that schema. You can then use those tokens for passing data through your workflow.
-For example, if you add more properties, such as `"suite"`, to your JSON schema, tokens for those properties are available for you to use in the later steps for your workflow. Here is the complete JSON schema:
+For example, if you add more properties, such as `"suite"`, to your JSON schema, tokens for those properties are available for you to use in the later steps for your workflow. Here's the complete JSON schema:
```json
- {
+{
"type": "object", "properties": { "address": {
For example, if you add more properties, such as `"suite"`, to your JSON schema,
} ```
-## Create nested workflows
+## Call other workflows
+
+You can call other workflows that can receive requests by nesting them inside the current workflow. To call these workflows, follow these steps:
+
+### [Standard](#tab/standard)
+
+1. In the designer, [follow these general steps to add the **Workflow Operations** action named **Invoke a workflow in this workflow app**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-You can nest a workflow inside the current workflow by adding calls to other workflows that can receive requests. To call these workflows, follow these steps:
+ The **Workflow Name** list shows the eligible workflows for you to select.
-1. In the designer, [follow these general steps to add the action named **Choose a Logic Apps workflow**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+1. From the **Workflow Name** list, select the workflow that you want to call, for example:
- The designer shows the eligible workflows for you to select.
+ ![Screenshot shows Standard workflow, action named Invoke a workflow in this workflow app, opened Workflow Name list, and available workflows to call.](./media/logic-apps-http-endpoint/select-workflow-standard.png)
-1. Select the workflow to call from your current workflow.
+### [Consumption](#tab/consumption)
- ![Screenshot shows workflow to call from current workflow.](./media/logic-apps-http-endpoint/select-logic-app-to-nest.png)
+1. In the designer, [follow these general steps to add the **Azure Logic Apps** action named **Choose a Logic Apps workflow**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ The **Choose an operation** box shows the eligible workflows for you to select.
+
+1. From the **Choose an operation** box, select an available workflow that you want to call, for example:
+
+ ![Screenshot shows Consumption workflow with Choose an operation box and available workflows to call.](./media/logic-apps-http-endpoint/select-workflow-consumption.png)
++
-## Reference content from an incoming request
+## Reference content from an inbound request
If the incoming request's content type is `application/json`, you can reference the properties in the incoming request. Otherwise, this content is treated as a single binary unit that you can pass to other APIs. To reference this content inside your logic app's workflow, you need to first convert that content.
-For example, if you're passing content that has `application/xml` type, you can use the [`@xpath()` expression](../logic-apps/workflow-definition-language-functions-reference.md#xpath) to perform an XPath extraction, or use the [`@json()` expression](../logic-apps/workflow-definition-language-functions-reference.md#json) for converting XML to JSON. Learn more about working with supported [content types](../logic-apps/logic-apps-content-type.md).
+For example, if you're passing content that has `application/xml` type, you can use the [`@xpath()` expression](workflow-definition-language-functions-reference.md#xpath) to perform an XPath extraction, or use the [`@json()` expression](workflow-definition-language-functions-reference.md#json) for converting XML to JSON. Learn more about working with supported [content types](logic-apps-content-type.md).
-To get the output from an incoming request, you can use the [`@triggerOutputs` expression](../logic-apps/workflow-definition-language-functions-reference.md#triggerOutputs). For example, suppose you have output that looks like this example:
+To get the output from an incoming request, you can use the [`@triggerOutputs` expression](workflow-definition-language-functions-reference.md#triggerOutputs). For example, suppose you have output that looks like this example:
```json
{
    ...
}
```
-To access specifically the `body` property, you can use the [`@triggerBody()` expression](../logic-apps/workflow-definition-language-functions-reference.md#triggerBody) as a shortcut.
+To access specifically the `body` property, you can use the [`@triggerBody()` expression](workflow-definition-language-functions-reference.md#triggerBody) as a shortcut.
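+
+As a loose Python analogy (not Logic Apps expression syntax; the dictionary contents are hypothetical), the relationship between these expressions looks like this:
+
+```python
+# Hypothetical trigger outputs, shaped like the example output above.
+trigger_outputs = {
+    "headers": {"Content-Type": "application/json"},
+    "body": {"address": {"town": "AnyTown", "postalCode": "11111-1111"}},
+}
+
+whole_output = trigger_outputs        # what @triggerOutputs() returns
+body_only = trigger_outputs["body"]   # what @triggerBody() returns, that is,
+                                      # shorthand for @triggerOutputs()['body']
+print(body_only["address"]["postalCode"])  # 11111-1111
+```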
## Respond to requests

Sometimes you want to respond to certain requests that trigger your workflow by returning content to the caller. To construct the status code, header, and body for your response, use the Response action. This action can appear anywhere in your workflow, not just at the end of your workflow. If your workflow doesn't include a Response action, the endpoint responds *immediately* with the **202 Accepted** status.
-For the original caller to successfully get the response, all the required steps for the response must finish within the [request timeout limit](./logic-apps-limits-and-config.md) unless the triggered workflow is called as a nested workflow. If no response is returned within this limit, the incoming request times out and receives the **408 Client timeout** response.
+For the original caller to successfully get the response, all the required steps for the response must finish within the [request timeout limit](logic-apps-limits-and-config.md#timeout-duration) unless the triggered workflow is called as a nested workflow. If no response is returned within this limit, the incoming request times out and receives the **408 Client timeout** response.
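+
+From the caller's side, a hedged sketch of handling the possible outcomes described above (the URL is a placeholder):
+
+```python
+# Hedged sketch: interpret the status codes that a callable endpoint can return.
+import requests
+
+response = requests.post("<callback-URL>", json={"address": {"postalCode": "11111-1111"}})
+
+if response.status_code == 202:
+    print("Accepted: the workflow has no Response action.")
+elif response.status_code == 408:
+    print("Client timeout: the response didn't finish within the limit.")
+else:
+    print(response.status_code, response.text)  # content from the Response action
+```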
For nested workflows, the parent workflow continues to wait for a response until all the steps are completed, regardless of how much time is required.

### Construct the response
-In the response body, you can include multiple headers and any type of content. For example, this response's header specifies that the response's content type is `application/json` and that the body contains values for the `town` and `postalCode` properties, based on the JSON schema described earlier in this topic for the Request trigger.
+In the response body, you can include multiple headers and any type of content. For example, the following response's header specifies that the response's content type is `application/json` and that the body contains values for the `town` and `postalCode` properties, based on the JSON schema described earlier in this topic for the Request trigger.
-![Provide response content for HTTPS Response action](./media/logic-apps-http-endpoint/content-for-response-action.png)
+![Screenshot shows Response action and response content type.](./media/logic-apps-http-endpoint/content-for-response-action.png)
Responses have these properties:

| Property (Display) | Property (JSON) | Description |
|--|--|-|
-| **Status Code** | `statusCode` | The HTTPS status code to use in the response for the incoming request. This code can be any valid status code that starts with 2xx, 4xx, or 5xx. However, 3xx status codes are not permitted. |
+| **Status Code** | `statusCode` | The HTTPS status code to use in the response for the incoming request. This code can be any valid status code that starts with 2xx, 4xx, or 5xx. However, 3xx status codes aren't permitted. |
| **Headers** | `headers` | One or more headers to include in the response |
| **Body** | `body` | A body object that can be a string, a JSON object, or even binary content referenced from a previous step |
-To view the JSON definition for the Response action and your workflow's complete JSON definition, on the designer toolbar, select **Code view**.
+To view the JSON definition for the Response action and your workflow's complete JSON definition, change from designer view to code view.
``` json
"Response": {
    ...
}
```
## Q & A
-#### Q: What about URL security?
+#### Q: What about URL security for inbound calls?
-**A**: Azure securely generates logic app callback URLs by using [Shared Access Signature (SAS)](/rest/api/storageservices/delegate-access-with-shared-access-signature). This signature passes through as a query parameter and must be validated before your workflow can run. Azure generates the signature using a unique combination of a secret key per logic app, the trigger name, and the operation that's performed. So unless someone has access to the secret logic app key, they cannot generate a valid signature.
+**A**: Azure securely generates logic app callback URLs by using [Shared Access Signature (SAS)](/rest/api/storageservices/delegate-access-with-shared-access-signature). This signature passes through as a query parameter and must be validated before your workflow can run. Azure generates the signature using a unique combination of a secret key per logic app, the trigger name, and the operation that's performed. So unless someone has access to the secret logic app key, they can't generate a valid signature.
> [!IMPORTANT]
> For production and higher security systems, we strongly advise against calling your workflow directly from the browser for these reasons:
> * The shared access key appears in the URL.
> * You can't manage security content policies due to shared domains across Azure Logic Apps customers.
-For more information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth)](../active-directory/develop/index.yml), exposing your logic app workflow with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+For more information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth)](../active-directory/develop/index.yml), exposing your logic app workflow with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](logic-apps-securing-a-logic-app.md#secure-inbound-requests).
#### Q: Can I configure callable endpoints further?
## Next steps

* [Receive and respond to incoming HTTPS calls by using Azure Logic Apps](../connectors/connectors-native-reqres.md)
-* [Secure access and data in Azure Logic Apps - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests)
+* [Secure access and data in Azure Logic Apps - Access for inbound calls to request-based triggers](logic-apps-securing-a-logic-app.md#secure-inbound-requests)
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
ms.suite: integration Previously updated : 04/18/2023 Last updated : 01/10/2024 # As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
logic-apps Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md
Previously updated : 01/04/2023 Last updated : 01/10/2024 # Note for Azure service writer: Links to Cost Management articles are full URLS with the ?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn campaign suffix. Leave those URLs intact. They're used to measure traffic to Cost Management articles.
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
ms.suite: integration
Previously updated : 02/22/2023 Last updated : 01/10/2024 # As a developer, I want to connect to my Standard logic app workflows with virtual networks using private endpoints and virtual network integration.
logic-apps View Workflow Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/view-workflow-metrics.md
Previously updated : 02/15/2023 Last updated : 01/10/2024 # As a developer, I want to review the health and performance metrics for workflows in Azure Logic Apps.
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
-description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models to an endpoint.
+description: Learn how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models to an endpoint.
Previously updated : 08/15/2022 Last updated : 01/10/2024
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-[MLflow](https://www.mlflow.org) is an open-source framework that's designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
+[MLflow](https://www.mlflow.org) is an open-source framework designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: whether locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
-Azure Machine Learning **workspaces are MLflow-compatible**, which means you can use Azure Machine Learning workspaces in the same way that you'd use an MLflow server. Such compatibility has the following advantages:
+Azure Machine Learning **workspaces are MLflow-compatible**, which means that you can use Azure Machine Learning workspaces in the same way that you'd use an MLflow server. This compatibility has the following advantages:
-* We don't host MLflow server instances under the hood. The workspace can talk the MLflow API language.
+* Azure Machine Learning doesn't host MLflow server instances under the hood; rather, the workspace can speak the MLflow API language.
* You can use Azure Machine Learning workspaces as your tracking server for any MLflow code, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
* You can run any training routine that uses MLflow in Azure Machine Learning without any change.

> [!TIP]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 and we recommend using MLflow for logging. Such strategy allows your training routines to become cloud-agnostic and portable, removing any dependency in your code with Azure Machine Learning.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2. We recommend that you use MLflow for logging, so that your training routines are cloud-agnostic and portable, removing any dependency your code has on Azure Machine Learning.
## Tracking with MLflow
-Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments. When connected to Azure Machine Learning, all tracking performed using MLflow is materialized in the workspace you are working on. To learn more about how to instrument your experiments for tracking experiments and training routines, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). You can also use MLflow to [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+Azure Machine Learning uses MLflow tracking to log metrics and store artifacts for your experiments. When you're connected to Azure Machine Learning, all tracking performed using MLflow is materialized in the workspace you're working on. To learn more about how to set up your experiments to use MLflow for tracking experiments and training routines, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). You can also use MLflow to [query & compare experiments and runs](how-to-track-experiments-mlflow.md).
+MLflow in Azure Machine Learning provides a way to __centralize tracking__. You can connect MLflow to Azure Machine Learning workspaces even when you're working locally or in a different cloud. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
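+
+For example, a minimal sketch, assuming that the `mlflow` and `azureml-mlflow` packages are installed and that you replace the placeholder with your workspace's MLflow tracking URI:
+
+```python
+# Hedged sketch: point MLflow at an Azure Machine Learning workspace and log a run.
+import mlflow
+
+mlflow.set_tracking_uri("<workspace-mlflow-tracking-uri>")  # placeholder
+mlflow.set_experiment("my-experiment")                      # hypothetical name
+
+with mlflow.start_run():
+    mlflow.log_param("learning_rate", 0.01)
+    mlflow.log_metric("accuracy", 0.91)
+```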
-### Centralize tracking
+Using MLflow in Azure Machine Learning includes the capabilities to:
-You can connect MLflow to Azure Machine Learning workspaces even when you are running locally or in a different cloud. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
-
-Capabilities include:
-
-* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
-* [Track Azure Databricks machine learning experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
-* [Track Azure Synapse Analytics machine learning experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
+* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md).
+* [Track Azure Databricks machine learning experiments](how-to-use-mlflow-azure-databricks.md).
+* [Track Azure Synapse Analytics machine learning experiments](how-to-use-mlflow-azure-synapse.md).
### Example notebooks

* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
-* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
-* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
-* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/logging_and_customizing_models.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
+* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from a compute that's running outside Azure Machine Learning. The example shows how to authenticate against Azure Machine Learning services by using a service principal.
+* [Hyper-parameter optimization using HyperOpt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library `Hyperopt`. The example shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
+* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/logging_and_customizing_models.ipynb): Demonstrates how to use the concept of models, instead of artifacts, with MLflow. The example also shows how to construct custom models.
* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
-> [!IMPORTANT]
-> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench) or Jupyter Notebooks with R kernels is not supported. Model management and registration is not supported using the MLflow R SDK. As an alternative, use Azure Machine Learning CLI or [Azure Machine Learning studio](https://ml.azure.com) for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
-> - MLflow in Java support is limited to tracking experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked using the MLflow Java SDK. As an alternative, use the `Outputs` folder in jobs along with the method `mlflow.save_model` to save models (or artifacts) you want to capture. View the following [Java example about using the MLflow tracking client with the Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
+#### Tracking with MLflow in R
+
+MLflow support in R has the following limitations:
+
+- MLflow tracking is limited to tracking experiment metrics, parameters, and models on Azure Machine Learning jobs.
+- Interactive training on RStudio, Posit (formerly RStudio Workbench), or Jupyter notebooks with R kernels is _not supported_.
+- Model management and registration are _not supported_ using the MLflow R SDK. Instead, use the Azure Machine Learning CLI or [Azure Machine Learning studio](https://ml.azure.com) for model registration and management.
+
+To learn about using the MLflow tracking client with Azure Machine Learning, view the examples in [Train R models using the Azure Machine Learning CLI (v2)](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
+
+#### Tracking with MLflow in Java
+
+MLflow support in Java has the following limitations:
+
+- MLflow tracking is limited to tracking experiment metrics and parameters on Azure Machine Learning jobs.
+- Artifacts and models can't be tracked using the MLflow Java SDK. Instead, use the `Outputs` folder in jobs along with the `mlflow.save_model` method to save models (or artifacts) that you want to capture.
+
+To learn about using the MLflow tracking client with Azure Machine Learning, view the [Java example that uses the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
## Model registries with MLflow
-Azure Machine Learning supports MLflow for model management. This support represents a convenient way to support the entire model lifecycle for users who are familiar with the MLflow client.
+Azure Machine Learning supports MLflow for model management. This support provides a convenient way to manage the entire model lifecycle for users who are familiar with the MLflow client.
To learn more about how to manage models by using the MLflow API in Azure Machine Learning, view [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
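
As a minimal sketch, assuming a recent MLflow version and that the tracking URI already points to your workspace (see the tracking section earlier), the standard MLflow client can browse the registry:

```python
# Hedged sketch: list the registered models in the workspace through the MLflow client.
from mlflow.tracking import MlflowClient

client = MlflowClient()
for model in client.search_registered_models():
    print(model.name)
```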
-### Example notebooks
+### Example notebook
* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.

## Model deployment with MLflow
-You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) and take advantage of the improved experience when you use this type of models. Azure Machine Learning supports deploying MLflow models to both real-time and batch endpoints without having to indicate and environment or a scoring script. Deployment is supported using either MLflow SDK, Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or the [Azure Machine Learning studio](https://ml.azure.com) portal.
+You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) and take advantage of the improved experience when you use MLflow models. Azure Machine Learning supports deployment of MLflow models to both real-time and batch endpoints without having to specify an environment or a scoring script. Deployment is supported using the MLflow SDK, Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or the [Azure Machine Learning studio](https://ml.azure.com).
-Learn more at [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
+To learn more about deploying MLflow models to Azure Machine Learning for both real-time and batch inferencing, see [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
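+
+As a hedged sketch (the endpoint, deployment, and model names are placeholders, and the `azureml-mlflow` package is assumed to be installed), deployment through the MLflow deployments client looks like this:
+
+```python
+# Hedged sketch: deploy a registered MLflow model by using the deployments client.
+import mlflow
+from mlflow.deployments import get_deploy_client
+
+deployment_client = get_deploy_client(mlflow.get_tracking_uri())
+deployment_client.create_deployment(
+    name="my-deployment",            # placeholder
+    model_uri="models:/my-model/1",  # placeholder registered model and version
+    endpoint="my-endpoint",          # placeholder
+)
+```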
### Example notebooks
-* [Deploy MLflow to Online Endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK.
-* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progresive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK with progressive rollout of models and the deployment of multiple model's versions in the same endpoint.
-* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) using MLflow SDK.
-* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
+* [Deploy MLflow to online endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using the MLflow SDK.
+* [Deploy MLflow to online endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progresive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints, using the MLflow SDK with progressive rollout of models. The example also shows deployment of multiple versions of a model to the same endpoint.
+* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) using the MLflow SDK.
+* [Deploy models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. The example also covers how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
-## Training MLflow projects (preview)
+## Training with MLflow projects (preview)
> [!IMPORTANT]
> Items marked (preview) in this article are currently in public preview.
You can submit training jobs to Azure Machine Learning by using [MLflow projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud via [Azure Machine Learning compute](./how-to-create-attach-compute-cluster.md).
-Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
+To learn how to submit training jobs with MLflow Projects that use Azure Machine Learning workspaces for tracking, see [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
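+
+As a minimal sketch, assuming that the current folder contains an `MLproject` file and that the tracking URI points to your workspace; the parameter name is hypothetical:
+
+```python
+# Hedged sketch: run an MLflow project locally while tracking in the workspace.
+import mlflow
+
+mlflow.set_tracking_uri("<workspace-mlflow-tracking-uri>")  # placeholder
+submitted_run = mlflow.projects.run(uri=".", parameters={"alpha": 0.3})  # hypothetical parameter
+print(submitted_run.run_id)
+```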
### Example notebooks
-* [Track an MLflow project in Azure Machine Learning workspaces](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb)
+* [Track an MLflow project in Azure Machine Learning workspaces](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb).
* [Train and run an MLflow project on Azure Machine Learning jobs](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-remote/train-projects-remote.ipynb).

## MLflow SDK, Azure Machine Learning v2, and Azure Machine Learning studio capabilities
-The following table shows which operations are supported by each of the tools available in the machine learning lifecycle.
+The following table shows the operations that are possible, using each of the client tools available in the machine learning lifecycle.
| Feature | MLflow SDK | Azure Machine Learning CLI/SDK | Azure Machine Learning studio |
| :- | :-: | :-: | :-: |
> [!NOTE]
> - <sup>1</sup> Only artifacts and models can be downloaded.
-> - <sup>2</sup> Using MLflow projects (preview).
+> - <sup>2</sup> Possible by using MLflow projects (preview).
> - <sup>3</sup> Some operations may not be supported. View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
-> - <sup>4</sup> Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment. As an alternative, see [Deploy and run MLflow models in Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md).
+> - <sup>4</sup> Deployment of MLflow models for batch inference by using the MLflow SDK is not possible at the moment. As an alternative, see [Deploy and run MLflow models in Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md).
-## Next steps
+## Related content
-* [Concept: From artifacts to models in MLflow](concept-mlflow-models.md).
-* [How-to: Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
-* [How-to: Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md)
-* [How-to: Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
-* [How-to: Log MLflow models](how-to-log-mlflow-models.md).
+* [From artifacts to models in MLflow](concept-mlflow-models.md).
+* [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
+* [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md).
+* [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
+* [Log MLflow models](how-to-log-mlflow-models.md).
* [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
machine-learning Concept Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-onnx.md
--++ Last updated 11/04/2022
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
subscription = '<subscription_id>'
resource_group = '<resource_group>'
workspace = '<workspace>'
datastore_name = '<datastore>'
-path_on_datastore '<path>'
+path_on_datastore = '<path>'
# long-form Datastore uri format:
uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
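
After you construct the URI, you can often consume it directly; a hedged sketch, assuming that the `azureml-fsspec` package is installed and that the path points to a CSV file:

```python
# Hedged sketch: read the datastore URI directly with pandas through fsspec.
import pandas as pd

df = pd.read_csv(uri)  # `uri` comes from the snippet above
print(df.head())
```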
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Title: How to access a compute instance terminal in your workspace
description: Use the terminal on a compute instance for Git operations, to install packages, and add kernels. --++ Previously updated : 11/04/2022 Last updated : 01/10/2024 #Customer intent: As a data scientist, I want to use Git, install packages and add kernels to a compute instance in my workspace in Azure Machine Learning studio.
## Install packages
- Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
+ Install packages from a terminal window. Install packages into the kernel that you want to use to run your notebooks. The default kernel is **python310-sdkv2**.
Or you can install packages directly in Jupyter Notebook, RStudio, or Posit Workbench (formerly RStudio Workbench):
* Python: Add install code and execute in a Jupyter Notebook cell.

> [!NOTE]
-> For package management within a notebook, use **%pip** or **%conda** magic functions to automatically install packages into the **currently-running kernel**, rather than **!pip** or **!conda** which refers to all packages (including packages outside the currently-running kernel)
+> For package management within a Python notebook, use **%pip** or **%conda** magic functions to automatically install packages into the **currently-running kernel**, rather than **!pip** or **!conda**, which refer to all packages (including packages outside the currently-running kernel).
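+
+For example, a hypothetical notebook cell (the package name is a placeholder) that installs into the currently-running kernel:
+
+```python
+# Runs in a Jupyter notebook cell on the compute instance; %pip targets the
+# currently-running kernel, unlike !pip.
+%pip install scikit-learn
+```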
## Add new kernels

> [!WARNING]
-> While customizing the compute instance, make sure you do not delete the **azureml_py36** or **azureml_py38** conda environments. Also do not delete **Python 3.6 - AzureML** or **Python 3.8 - AzureML** kernels. These are needed for Jupyter/JupyterLab functionality.
+> While customizing the compute instance, make sure that you don't delete conda environments or Jupyter kernels that you didn't create. Doing so may damage Jupyter/JupyterLab functionality.
To add a new Jupyter kernel to the compute instance:
### Remove added kernels

> [!WARNING]
-> While customizing the compute instance, make sure you do not delete the **azureml_py36** or **azureml_py38** conda environments. Also do not delete **Python 3.6 - AzureML** or **Python 3.8 - AzureML** kernels. These are needed for Jupyter/JupyterLab functionality.
+> While customizing the compute instance, make sure that you don't delete conda environments or Jupyter kernels that you didn't create.
To remove an added Jupyter kernel from the compute instance, you must remove the kernelspec, and (optionally) the conda environment. You can also choose to keep the conda environment. You must remove the kernelspec, or your kernel will still be selectable and cause unexpected behavior.
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
Title: Create and manage files in your workspace
description: Learn how to create and manage files in your workspace in Azure Machine Learning studio. --++
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Title: Register and work with models
description: Learn how to register and work with different model types in Azure Machine Learning (such as custom, MLflow, and Triton). --++
machine-learning How To Search Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-search-assets.md
description: Find your Azure Machine Learning assets with search
--++ Last updated 1/12/2023
machine-learning How To Troubleshoot Validation For Schema Failed Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-validation-for-schema-failed-error.md
--++ Last updated 01/06/2023
machine-learning How To Use Batch Scoring Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-scoring-pipeline.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
+ - update-code
# How to deploy a pipeline to perform batch scoring with preprocessing
machine-learning How To Use Batch Training Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-training-pipeline.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
+ - update-code
# How to operationalize a training pipeline with batch endpoints
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
--++ Last updated 12/01/2022
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
Title: Create and manage runtimes in prompt flow
+ Title: Create and manage prompt flow runtimes
-description: Learn how to create and manage runtimes in prompt flow with Azure Machine Learning studio.
+description: Learn how to create and manage prompt flow runtimes in Azure Machine Learning studio.
Last updated 09/13/2023
-# Create and manage runtimes
+# Create and manage prompt flow runtimes in Azure Machine Learning studio
-Prompt flow's runtime provides the computing resources required for the application to run, including a Docker image that contains all necessary dependency packages. This reliable and scalable runtime environment enables prompt flow to efficiently execute its tasks and functions, ensuring a seamless user experience for users.
+A prompt flow runtime provides computing resources that are required for the application to run, including a Docker image that contains all necessary dependency packages. This reliable and scalable runtime environment enables prompt flow to efficiently execute its tasks and functions for a seamless user experience.
-We support following types of runtimes:
+Azure Machine Learning supports the following types of runtimes:
|Runtime type|Underlying compute type|Life cycle management|Customize environment |
||-|||
-|automatic runtime (preview) |Serverless compute| Automatically | Customized by image + requirements.txt in `flow.dag.yaml`|
-|Compute instance runtime | Compute instance | Manually | Manually via Azure Machine Learning environment|
+|Automatic runtime (preview) |Serverless compute| Automatic | Easily customize packages|
+|Compute instance runtime | Compute instance | Manual | Manually customize via Azure Machine Learning environment|
-For new users, we would recommend using the automatic runtime (preview) that can be used out of box, and you can easily customize the environment by adding packages in `requirements.txt` file in `flow.dag.yaml` in flow folder. For users, who already familiar with Azure Machine Learning environment and compute instance, your can use existing compute instance and environment to build your compute instance runtime.
+If you're a new user, we recommend that you use the automatic runtime (preview). You can easily customize the environment by adding packages in the `requirements.txt` file in `flow.dag.yaml` in the flow folder. If you're already familiar with the Azure Machine Learning environment and compute instances, you can use your existing compute instance and environment to build a compute instance runtime.
-## Permissions/roles for runtime management
+## Permissions and roles for runtime management
-To assign role, you need to have `owner` or have `Microsoft.Authorization/roleAssignments/write` permission on the resource.
+To assign roles, you need to have `owner` or `Microsoft.Authorization/roleAssignments/write` permission on the resource.
-To use the runtime, assigning the `AzureML Data Scientist` role of workspace to user (if using Compute instance as runtime) or endpoint (if using managed online endpoint as runtime). To learn more, see [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md?view=azureml-api-2&tabs=labeler&preserve-view=true)
+For users of the runtime, assign the `AzureML Data Scientist` role in the workspace (if you're using a compute instance as a runtime) or endpoint (if you're using a managed online endpoint as a runtime). To learn more, see [Manage access to an Azure Machine Learning workspace](../how-to-assign-roles.md?view=azureml-api-2&tabs=labeler&preserve-view=true).
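For example, you can make this assignment with the Azure CLI. The following is a minimal sketch, assuming the Azure CLI is installed; the principal ID and scope segments are placeholders rather than values from this article:

```bash
# A minimal sketch: assign AzureML Data Scientist at workspace scope.
# Replace the assignee with the user's (or endpoint identity's) principal ID.
az role assignment create \
  --role "AzureML Data Scientist" \
  --assignee "<user-or-endpoint-principal-id>" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>"
```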
-> [!NOTE]
-> Role assignment may take several minutes to take effect.
+Role assignment might take several minutes to take effect.
-## Permissions/roles for deployments
+## Permissions and roles for deployments
-After deploying a prompt flow, the endpoint must be assigned the `AzureML Data Scientist` role to the workspace for successful inferencing. This operation can be done at any point after the endpoint has been created.
+After you deploy a prompt flow, the endpoint must be assigned the `AzureML Data Scientist` role to the workspace for successful inferencing. You can do this operation at any time after you create the endpoint.
-## Create runtime in UI
+## Create a runtime on the UI
-### Prerequisites
+Before you use Azure Machine Learning studio to create a runtime, make sure that:
-- You need `AzureML Data Scientist` role in the workspace to create a runtime.-- Make sure the default data store (usually it's `workspaceblobstore` ) in your workspace is blob type. -- Make `workspaceworkingdirectory` exist in the workspace. -- If you secure prompt flow with virtual network, follow [Network isolation in prompt flow](how-to-secure-prompt-flow.md) to learn more detail.
+- You have the `AzureML Data Scientist` role in the workspace.
+- The default data store (usually `workspaceblobstore`) in your workspace is the blob type (see the verification sketch after this list).
+- The working directory (`workspaceworkingdirectory`) exists in the workspace.
+- If you use a virtual network for prompt flow, you understand the considerations in [Network isolation in prompt flow](how-to-secure-prompt-flow.md).
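For example, you can check the default data store's type with the Azure CLI. This is a sketch that assumes the `ml` CLI extension and hypothetical `my-rg`/`my-ws` names:

```bash
# Show the type of the default datastore; it should be a blob datastore.
az ml datastore show --name workspaceblobstore \
  --resource-group my-rg --workspace-name my-ws \
  --query type --output tsv
```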
-### Create automatic runtime (preview) in flow page
+### Create an automatic runtime (preview) on a flow page
-Automatic is the default option for runtime, you can start automatic runtime (preview) in runtime dropdown in flow page.
+Automatic is the default option for a runtime. You can start an automatic runtime (preview) by selecting an option from the runtime dropdown list on a flow page.
> [!IMPORTANT]
-> Automatic runtime is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Automatic runtime is currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- **Start** creates automatic runtime (preview) using the environment defined in`flow.dag.yaml` in flow folder on the VM size you have quota in the workspace.
+- Select **Start** to begin creating an automatic runtime (preview) that uses the environment defined in `flow.dag.yaml` in the flow folder, on a virtual machine (VM) size for which you have quota in the workspace.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-create-automatic-init.png" alt-text="Screenshot of prompt flow on the start automatic with default settings on flow page. " lightbox = "./media/how-to-create-manage-runtime/runtime-create-automatic-init.png":::
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-create-automatic-init.png" alt-text="Screenshot of prompt flow with default settings for starting an automatic runtime on a flow page." lightbox = "./media/how-to-create-manage-runtime/runtime-create-automatic-init.png":::
-- **Start with advanced settings**, you can customize the VM size used by the runtime. You can also customize the idle time, which will delete runtime automatically if it isn't in use to save code. Meanwhile, you can set the user assigned manage identity used by automatic runtime, it's used to pull base image (please make sure user assigned manage identity have ACR pull permission) and install packages. If you don't set it, we use user identity as default. Learn more about [how to create update user assigned identities to workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+- Select **Start with advanced settings**. In the advanced settings, you can:
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow on the start automatic with advanced setting on flow page. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
+ - Customize the VM size that the runtime uses.
+ - Customize the idle time, which saves costs by deleting the runtime automatically when it isn't in use.
+ - Set the user-assigned managed identity. The automatic runtime uses this identity to pull a base image and install packages. Make sure that the user-assigned managed identity has Azure Container Registry pull permission (a CLI sketch follows these settings).
-### Create compute instance runtime in runtime page
+ If you don't set this identity, you use the user identity by default. [Learn more about how to create and update user-assigned identities for a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
-If you don't have a compute instance, create a new one: [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow with advanced settings for starting an automatic runtime on a flow page." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
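As a rough sketch of the identity setup described above (the identity, resource group, and registry names here are hypothetical), you might create the identity and grant it pull access with the Azure CLI:

```bash
# Create a user-assigned managed identity for the automatic runtime.
az identity create --name pf-runtime-identity --resource-group my-rg

# Grant the identity AcrPull on the container registry so it can pull the base image.
PRINCIPAL_ID=$(az identity show --name pf-runtime-identity --resource-group my-rg --query principalId --output tsv)
ACR_ID=$(az acr show --name myregistry --query id --output tsv)
az role assignment create --role "AcrPull" \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --scope "$ACR_ID"
```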
-1. Select add runtime in runtime list page.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png" alt-text="Screenshot of prompt flow on the runtime add with compute instance runtime selected. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png":::
-1. Select compute instance you want to use as runtime.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-ci.png" alt-text="Screenshot of add compute instance runtime with select compute instance highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-ci.png":::
- Because compute instances is isolated by user, you can only see your own compute instances or the ones assigned to you. To learn more, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
-1. Authenticate on the compute instance. You only need to do auth one time per region in six months.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-authentication.png" alt-text="Screenshot of doing the authentication on compute instance. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-authentication.png":::
-1. Select create new custom application or existing custom application as runtime.
- 1. Select create new custom application as runtime.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png" alt-text="Screenshot of add compute instance runtime with custom application highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png":::
+### Create a compute instance runtime on a runtime page
- This is recommended for most users of prompt flow. The prompt flow system creates a new custom application on a compute instance as a runtime.
+Before you create a compute instance runtime, make sure that a compute instance is available and running. If you don't have a compute instance, [create one in an Azure Machine Learning workspace](../how-to-create-compute-instance.md).
- - To choose the default environment, select this option. This is the recommended choice for new users of prompt flow.
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png" alt-text="Screenshot of add compute instance runtime with environment highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png":::
+1. On the page that lists runtimes, select **Create**.
+
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png" alt-text="Screenshot of the page that lists runtimes and the button for creating a runtime." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add.png":::
- - If you want to install other packages in your project, you should create a custom environment. To learn how to build your own custom environment, see [Customize environment with docker context for runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
+1. Select the compute instance that you want to use as a runtime.
+
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-ci.png" alt-text="Screenshot of the box for selecting a compute instance." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-ci.png":::
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png" alt-text="Screenshot of add compute instance runtime with customized environment and choose an environment highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png":::
+ Because compute instances are isolated by user, only your own compute instances (or the ones assigned to you) are available. To learn more, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
- > [!NOTE]
- > - We are going to perform an automatic restart of your compute instance. Please ensure that you do not have any tasks or jobs running on it, as they may be affected by the restart.
+1. Select the **Authenticate** button to authenticate on the compute instance. You need to authenticate only once every six months for each region.
- 1. To use an existing custom application as a runtime, choose the option "existing".
- This option is available if you have previously created a custom application on a compute instance. For more information on how to create and use a custom application as a runtime, learn more about [how to create custom application as runtime](how-to-customize-environment-runtime.md#create-a-custom-application-on-compute-instance-that-can-be-used-as-prompt-flow-compute-instance-runtime).
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-authentication.png" alt-text="Screenshot of the button for authenticating on a compute instance." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-authentication.png":::
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png" alt-text="Screenshot of add compute instance runtime with custom application dropdown highlighted. " lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png":::
+1. Decide whether to create a custom application or select an existing one as a runtime:
+ - To create a custom application, under **Custom application**, select **New**.
-## Using runtime in prompt flow authoring
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png" alt-text="Screenshot of the option for creating a new custom application." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-runtime-select-custom-application.png":::
-When you're authoring your prompt flow, you can select and change the runtime from left top corner of the flow page.
+ We recommend this option for most users of prompt flow. The prompt flow system creates a new custom application on a compute instance as a runtime.
+ Under **Environment**, if you want to use the default environment, select **Use default environment**. We recommend this choice for new users of prompt flow.
-When performing evaluation, you can use the original runtime in the flow or change to a more powerful runtime.
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png" alt-text="Screenshot of the option for using a default environment." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-default-env.png":::
+ If you want to install other packages in your project, you should use a custom environment. Select **Use customized environment**, and then choose an environment from the list that appears. To learn how to build your own custom environment, see [Customize an environment with a Docker context for a runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
-## Update runtime from UI
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png" alt-text="Screenshot of the option for using a customized environment, along with a list of environments." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-runtime-list-add-custom-env.png":::
-### Update automatic runtime (preview) in flow page
+ > [!NOTE]
+ > Your compute instance restarts automatically. Ensure that no tasks or jobs are running on it, because the restart might affect them.
-You can operate automatic runtime (preview) in flow page. Here are options you can use:
-- **Install packages**, this triggers the `pip install -r requirements.txt` in flow folder. It takes minutes depends on the packages you install.-- **Reset**, will delete current runtime and create a new one with the same environment. If you encounter package conflict issue, you can try this option.-- **Edit**, will open runtime config page, you can define the VM side and idle time for the runtime.-- **Stop**, will delete current runtime. If there's no active runtime on underlining compute, compute resource will also be deleted.
+ - To use an existing custom application as a runtime, under **Custom application**, select **Existing**. Then select an application in the **Custom application** box.
+ This option is available if you previously created a custom application on a compute instance. [Learn more about how to create and use a custom application as a runtime](how-to-customize-environment-runtime.md#create-a-custom-application-on-compute-instance-that-can-be-used-as-prompt-flow-compute-instance-runtime).
-You can also customize environment used to run this flow.
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png" alt-text="Screenshot of the option to use an existing custom application and the box for selecting an application." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png":::
-- You can easily customize the environment by adding packages in `requirements.txt` file in flow folder. After you add more packages in this file, you can choose either save and install or save only. Save and install will trigger the `pip install -r requirements.txt` in flow folder. It takes minutes depends on the packages you install. Save only will only save the `requirements.txt` file, you can install the packages later by yourself.
+## Use a runtime in prompt flow authoring
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-create-automatic-save-install.png" alt-text="Screenshot of save and install packages for automatic runtime (preview) on flow page. " lightbox = "./media/how-to-create-manage-runtime/runtime-create-automatic-save-install.png":::
+When you're authoring a flow, you can select and change the runtime from the **Runtime** dropdown list on the upper right of the flow page.
-> [!NOTE]
-> You can change the location and even file name of `requirements.txt` by change it in `flow.dag.yaml` file in flow folder as well.
-> Please don't pin version of promptflow and promptflow-tools in `requirements.txt`, as we already include them in runtime base image.
+
+When you're performing evaluation, you can use the original runtime in the flow or change to a more powerful runtime.
++
+## Update a runtime on the UI
-#### Add packages in private feed in Azure DevOps
+### Update an automatic runtime (preview) on a flow page
-If you want to use a private feed in Azure DevOps, you need follow these steps:
+On a flow page, you can use the following options to manage an automatic runtime (preview):
-1. Create user assigned managed identity and add this user assigned managed identity in the Azure DevOps organization. To learn more, see [Use service principals & managed identities](/azure/devops/integrate/get-started/authentication/service-principal-managed-identity).
+- **Install packages** triggers `pip install -r requirements.txt` in the flow folder. This process can take a few minutes, depending on the packages that you install.
+- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter a package conflict issue, you can try this option.
+- **Edit** opens the runtime configuration page, where you can define the VM size and the idle time for the runtime.
+- **Stop** deletes the current runtime. If there's no active runtime on the underlying compute, the compute resource is also deleted.
- > [!NOTE]
- > If the 'Add Users' button isn't visible, it's likely you don't have the necessary permissions to perform this action.
-
-1. [Add or update user assigned identities to workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+You can also customize the environment that you use to run this flow by adding packages in the `requirements.txt` file in the flow folder. After you add more packages in this file, you can choose either of these options:
+
+- **Save and install** triggers `pip install -r requirements.txt` in the flow folder. The process can take a few minutes, depending on the packages that you install.
+- **Save only** just saves the `requirements.txt` file. You can install the packages later yourself.
++
+> [!NOTE]
+> You can change the location and even the file name of `requirements.txt`, but be sure to also change it in the `flow.dag.yaml` file in the flow folder.
+>
+> Don't pin the version of `promptflow` and `promptflow-tools` in `requirements.txt`, because we already include them in the runtime base image.
-1. You need to add `{private}` to your private feed URL. For example, if you want to install `test_package` from `test_feed` in Azure devops, add `-i https://{private}@{test_feed_url_in_azure_devops}` in `requirements.txt`.
+#### Add packages in a private feed in Azure DevOps
- ```txt
- -i https://{private}@{test_feed_url_in_azure_devops}
- test_package
- ```
+If you want to use a private feed in Azure DevOps, follow these steps:
-1. Specify the user assigned managed identity if `start with advanced setting` or **reset** automatic runtime in `edit`.
+1. Create a user-assigned managed identity and add this identity in the Azure DevOps organization. To learn more, see [Use service principals and managed identities](/azure/devops/integrate/get-started/authentication/service-principal-managed-identity).
- :::image type="content" source="./media/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot of specify user assigned managed identity. " lightbox = "./media/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
+ > [!NOTE]
+ > If the **Add Users** button isn't visible, you probably don't have the necessary permissions to perform this action.
-#### Change the base image used by automatic runtime (preview)
+1. [Add or update user-assigned identities to a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
-By default, we use latest prompt flow image as base image. If you want to use a different base image, you can build custom base image learn more, see [Customize environment with docker context for runtime](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime), then you can use put it under `environment` in `flow.dag.yaml` file in flow folder. You need `reset` runtime to use the new base image, this takes several minutes as it pulls the new base image and install packages again.
+1. Add `{private}` to your private feed URL. For example, if you want to install `test_package` from `test_feed` in Azure DevOps, add `-i https://{private}@{test_feed_url_in_azure_devops}` in `requirements.txt`:
+ ```txt
+ -i https://{private}@{test_feed_url_in_azure_devops}
+ test_package
+ ```
+1. Specify the user-assigned managed identity in **Start with advanced settings** if the automatic runtime isn't running, or use the **Edit** button if it's running.
+
+ :::image type="content" source="./media/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot that shows the toggle for using a workspace user-assigned managed identity." lightbox = "./media/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
+
+#### Change the base image for automatic runtime (preview)
+
+By default, we use the latest prompt flow image as the base image. If you want to use a different base image, you can [build a custom one](how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime). Then, put the new base image under `environment` in the `flow.dag.yaml` file in the flow folder. To use the new base image, you need to reset the runtime by selecting **Reset**. This process takes several minutes as it pulls the new base image and reinstalls packages.
 ```yaml
 environment:
   image: <your-custom-image>
   python_requirements_txt: requirements.txt
 ```
-### Update compute instance runtime in runtime page
+### Update a compute instance runtime on a runtime page
We regularly update our base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable`) to include the latest features and bug fixes. We recommend that you update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list) if possible.
-Every time you open the runtime details page, we check whether there are new versions of the runtime. If there are new versions available, you see a notification at the top of the page. You can also manually check the latest version by selecting the **check version** button.
-
+Every time you open the page for runtime details, we check whether there are new versions of the runtime. If new versions are available, a notification appears at the top of the page. You can also manually check the latest version by selecting the **Check version** button.
-Try to keep your runtime up to date to get the best experience and performance.
-Go to the runtime details page and select the "Update" button at the top. Here you can update the environment to use in your runtime. If you select **use default environment**, system attempts to update your runtime to the latest version.
+To get the best experience and performance, try to keep your runtime up to date. On the page for runtime details, select the **Update** button. On the **Edit compute instance runtime** pane, you can update the environment for your runtime. If you select **Use default environment**, the system tries to update your runtime to the latest version.
-
-> [!NOTE]
-> If you used a custom environment, you need to rebuild it using the latest prompt flow image first, and then update your runtime with the new custom environment.
+If you select **Use customized environment**, you first need to rebuild the environment by using the latest prompt flow image. Then update your runtime with the new custom environment.
## Next steps
machine-learning How To Customize Environment Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-customize-environment-runtime.md
az ml environment create -f environment.yaml --subscription <sub-id> -g <resourc
> [!NOTE]
> Building the image may take several minutes.
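The `az ml environment create` command above is truncated in this digest; its assumed full form looks like the following sketch (all placeholder values are illustrative):

```bash
az ml environment create --file environment.yaml \
  --subscription <sub-id> \
  --resource-group <resource-group> \
  --workspace-name <workspace-name>
```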
-Go to your workspace UI page, then go to the **environment** page, and locate the custom environment you created. You can now use it to create a compute instance runtime in your prompt flow. To learn more, see [Create compute instance runtime in UI](how-to-create-manage-runtime.md#create-compute-instance-runtime-in-runtime-page).
+Go to your workspace UI page, then go to the **environment** page, and locate the custom environment you created. You can now use it to create a compute instance runtime in your prompt flow. To learn more, see [Create compute instance runtime in UI](how-to-create-manage-runtime.md#create-a-compute-instance-runtime-on-a-runtime-page).
You can also find the image on the environment detail page and use it as the base image for an automatic runtime (preview) in the `flow.dag.yaml` file in the prompt flow folder. This image is also used to build the environment for flow deployment from the UI.
In the `flow.dag.yaml` file in the prompt flow folder, you can use the `environment` section
:::image type="content" source="./media/how-to-customize-environment-runtime/runtime-creation-automatic-image-flow-dag.png" alt-text="Screenshot of customize environment for automatic runtime on flow page. " lightbox = "./media/how-to-customize-environment-runtime/runtime-creation-automatic-image-flow-dag.png":::
-If you want to use private feeds in Azure devops, see [Add packages in private feed in Azure devops](./how-to-create-manage-runtime.md#add-packages-in-private-feed-in-azure-devops).
+If you want to use private feeds in Azure DevOps, see [Add packages in a private feed in Azure DevOps](./how-to-create-manage-runtime.md#add-packages-in-a-private-feed-in-azure-devops).
## Create a custom application on compute instance that can be used as prompt flow compute instance runtime
machine-learning How To Deploy For Real Time Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md
Then you also need to specify the image in the `environment` section of the `flow.dag.yaml`
:::image type="content" source="./media/how-to-deploy-for-real-time-inference/runtime-creation-automatic-image-flow-dag.png" alt-text="Screenshot of customize environment for automatic runtime on flow page." lightbox = "./media/how-to-deploy-for-real-time-inference/runtime-creation-automatic-image-flow-dag.png":::

> [!NOTE]
-> If you are using private feeds in Azure devops, you need [build the image with private feeds](./how-to-create-manage-runtime.md#add-packages-in-private-feed-in-azure-devops) first and select custom environment to deploy in UI.
+> If you are using private feeds in Azure devops, you need [build the image with private feeds](./how-to-create-manage-runtime.md#add-packages-in-a-private-feed-in-azure-devops) first and select custom environment to deploy in UI.
## Create an online deployment
After you deploy the endpoint and want to test it in the **Test tab** in the endpoint detail page
:::image type="content" source="./media/how-to-deploy-for-real-time-inference/unable-to-fetch-deployment-schema.png" alt-text="Screenshot of the error unable to fetch deployment schema in Test tab in endpoint detail page." lightbox = "./media/how-to-deploy-for-real-time-inference/unable-to-fetch-deployment-schema.png":::

- Make sure you have granted the correct permission to the endpoint identity. Learn more about [how to grant permission to the endpoint identity](#grant-permissions-to-the-endpoint).
-- It might be because you ran your flow in an old version runtime and then deployed the flow, the deployment used the environment of the runtime which was in old version as well. Update the runtime following [this guidance](./how-to-create-manage-runtime.md#update-runtime-from-ui) and rerun the flow in the latest runtime and then deploy the flow again.
+- This error might occur because you ran your flow on an old runtime version and then deployed the flow, so the deployment used the environment of that old runtime version. Update the runtime by following [this guidance](./how-to-create-manage-runtime.md#update-a-runtime-on-the-ui), rerun the flow on the latest runtime, and then deploy the flow again.
### Access denied to list workspace secret
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
Use `docker images` to check if the image was pulled successfully. If your image
### Run failed because of "No module named XXX"
-This type of error related to runtime lacks required packages. If you're using a default environment, make sure the image of your runtime is using the latest version. For more information, see [Runtime update](../how-to-create-manage-runtime.md#update-runtime-from-ui). If you're using a custom image and you're using a conda environment, make sure you installed all the required packages in your conda environment. For more information, see [Customize a prompt flow environment](../how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
+This type of error occurs when the runtime lacks required packages. If you're using a default environment, make sure the image of your runtime is using the latest version. For more information, see [Runtime update](../how-to-create-manage-runtime.md#update-a-runtime-on-the-ui). If you're using a custom image and a conda environment, make sure you installed all the required packages in your conda environment. For more information, see [Customize a prompt flow environment](../how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
### Request timeout issue
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
--++ Last updated 05/04/2022
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
--++ Last updated 03/31/2022
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
--++ Last updated 03/31/2022
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
--++ Last updated 11/11/2022
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 09/15/2023 Last updated : 01/09/2024 #Customer intent: As a server admin I want to discover my on-premises server inventory.
Before you start this tutorial, ensure you have these prerequisites in place.
**Requirement** | **Details** |
-**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2022 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2022.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
+**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2022 or 2019 installed.<br/> _(Deployment of the appliance is supported on Windows Server 2022 (recommended) or Windows Server 2019.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance.
**Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). **Linux servers** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). **SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account [requires these permissions](migrate-support-matrix-physical.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. You can use the [account provisioning utility](least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 12/28/2023 Last updated : 01/09/2024 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
In the configuration manager, select **Set up prerequisites**, and then complete
After the appliance is successfully registered, to see the registration details, select **View details**.
-1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, the default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit* as indicated in the *Installation instructions*.
+1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. Download VDDK 6.7 or 7 (depending on the compatibility of the VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance. The default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit*, as indicated in the *Installation instructions*.
The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure.
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
Previously updated : 05/24/2022 Last updated : 01/10/2024 # Azure Database for MySQL - Flexible Server deployment model
Azure Database for MySQL powered by the MySQL community edition is available in
- Azure Database for MySQL flexible server
- Azure Database for MySQL single server
-In this article, we'll provide an overview and introduction to core concepts of the flexible server deployment model. For information on how to decide what deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](./../select-right-deployment-type.md).
+This article provides an overview and introduction to core concepts of the flexible server deployment model. For information on how to decide what deployment option is appropriate for your workload, see [choosing the right MySQL server option in Azure](./../select-right-deployment-type.md).
## Overview
For latest updates on Azure Database for MySQL flexible server, refer to [What's
## Free 12-month offer

With an [Azure free account](https://azure.microsoft.com/free/), you can use Azure Database for MySQL flexible server for free for 12 months with monthly limits of up to:
-* **750 hours of Burstable B1MS instance**, enough hours to run a database instance continuously each month.
-* **32 GB** storage and **32 GB** backup storage.
-You can take advantage of this offer to develop and deploy applications that use Azure Database for MySQL flexible server. To learn how to create and use Azure Database for MySQL flexible server for free using Azure free account, refer [this tutorial](how-to-deploy-on-azure-free-account.md).
+- **750 hours of Burstable B1MS instance**, enough hours to run a database instance continuously each month.
+- **32 GB** storage and **32 GB** backup storage.
+
+You can take advantage of this offer to develop and deploy applications that use Azure Database for MySQL flexible server. To learn how to create and use Azure Database for MySQL flexible server for free by using an Azure free account, see [this tutorial](how-to-deploy-on-azure-free-account.md).
## High availability within and across availability zones
-Azure Database for MySQL flexible server allows configuring high availability with automatic failover. The high availability solution is designed to ensure that committed data is never lost due to failures, and improve overall uptime for your application. When high availability is configured, flexible server automatically provisions and manages a standby replica. You're billed for the provisioned compute and storage for both the primary and secondary replica. There are two high availability-architectural models:
+Azure Database for MySQL flexible server allows configuring high availability with automatic failover. The high availability solution is designed to ensure that committed data is never lost due to failures, and improve overall uptime for your application. When high availability is configured, flexible server automatically provisions and manages a standby replica. You're billed for the provisioned compute and storage for both the primary and secondary replica. There are two high availability-architectural models:
-- **Zone Redundant High Availability (HA):** This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides highest level of availability, but it requires you to configure application redundancy across zones. Zone redundant HA is preferred when you want to achieve highest level of availability against any infrastructure failure in the availability zone and where latency across the availability zone is acceptable. Zone redundant HA is available in [subset of Azure regions](overview.md#azure-regions) where the region supports multiple Availability Zones and Zone redundant Premium file shares are available.
+- **Zone Redundant High Availability (HA):** This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and where latency across the availability zone is acceptable. Zone redundant HA is available in a [subset of Azure regions](overview.md#azure-regions) where the region supports multiple availability zones and zone-redundant Premium file shares are available.
:::image type="content" source="./media/concepts-high-availability/1-flexible-server-overview-zone-redundant-ha.png" alt-text="Zone redundant HA.":::

-- **Same-Zone High Availability (HA):** This option is preferred for infrastructure redundancy with lower network latency as both primary and standby server will be in the same availability zone. It provides high availability without configuring application redundancy across zones. Same-Zone HA is preferred when you want to achieve highest level of availability within a single Availability zone with the lowest network latency. Same-Zone HA is available in [all Azure regions](overview.md#azure-regions) where you can create Azure Database for MySQL flexible server instances.
+- **Same-Zone High Availability (HA):** This option is preferred for infrastructure redundancy with lower network latency, as both the primary and standby server are in the same availability zone. It provides high availability without configuring application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-zone HA is available in [all Azure regions](overview.md#azure-regions) where you can create Azure Database for MySQL flexible server instances.
:::image type="content" source="./media/concepts-high-availability/flexible-server-overview-same-zone-ha.png" alt-text="Zone redundant high availability.":::
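For example, a minimal Azure CLI sketch that enables zone-redundant HA at server creation (the server and resource group names are hypothetical):

```bash
# Create a flexible server with zone-redundant high availability.
az mysql flexible-server create \
  --resource-group my-rg \
  --name my-mysql-server \
  --high-availability ZoneRedundant
```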
For more information, see [high availability concepts](concepts-high-availabilit
## Automated patching with managed maintenance window
-The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For MySQL engine, minor version upgrades are also included as part of the planned maintenance release. Users can configure the patching schedule to be system managed or define their custom schedule. During the maintenance schedule, the patch is applied and server may require a restart as part of the patching process to complete the update. With the custom schedule, users can make their patching cycle predictable and choose a maintenance window with minimum impact to the business. In general, the service follows monthly release schedule as part of the continuous integration and release.
+The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For the MySQL engine, minor version upgrades are also included as part of the planned maintenance release. Users can configure the patching schedule to be system managed or define their custom schedule. During the maintenance schedule, the patch is applied and the server might require a restart as part of the patching process to complete the update. With the custom schedule, users can make their patching cycle predictable and choose a maintenance window with minimum impact to the business. In general, the service follows a monthly release schedule as part of the continuous integration and release.
-See [Scheduled Maintenance](concepts-maintenance.md) for more details.
+For more information, see [Scheduled Maintenance](concepts-maintenance.md).
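As a sketch of defining a custom schedule with the Azure CLI (the server and resource group names are hypothetical, and the window format is day:hour:minute):

```bash
# Set a custom maintenance window of Monday at 01:30 UTC.
az mysql flexible-server update \
  --resource-group my-rg \
  --name my-mysql-server \
  --maintenance-window "Mon:1:30"
```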
## Automatic backups
See [Backup concepts](concepts-backup-restore.md) to learn more.
## Network Isolation
-You have two networking options to connect to Azure Database for MySQL flexible server. The options are **private access (VNet integration)** and **public access (allowed IP addresses)**.
+You have two networking options to connect to Azure Database for MySQL flexible server. The options are **private access (VNet integration)** and **public access (allowed IP addresses)**.
- **Private access (VNet Integration)** – You can deploy your Azure Database for MySQL flexible server instance into your [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
See [Networking concepts](concepts-networking.md) to learn more.
## Adjust performance and scale within seconds
-Azure Database for MySQL flexible server is available in three SKU tiers: Burstable, General Purpose, and Business Critical. The Burstable tier is best suited for low-cost development and low concurrency workloads that don't need full-compute capacity continuously. General Purpose and Business Critical are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Azure Database for MySQL flexible server enables you to provision additional IOPS up to 80K IOPs above the complimentary IOPS limit independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you consume.
+Azure Database for MySQL flexible server is available in three service tiers: Burstable, General Purpose, and Business Critical. The Burstable tier is best suited for low-cost development and low-concurrency workloads that don't need full compute capacity continuously. General Purpose and Business Critical are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first app on a small database for a few dollars a month, and then seamlessly adjust the scale to meet the needs of your solution. The storage scaling is online and supports storage autogrowth. Azure Database for MySQL flexible server enables you to provision additional IOPS, up to 80 K IOPS above the complimentary IOPS limit, independent of storage. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You pay only for the resources you consume.
See [Compute and Storage concepts](concepts-compute-storage.md) to learn more.
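For example, a sketch of adjusting provisioned IOPS on an existing server (names are hypothetical; the value must stay within the limits for your tier and storage):

```bash
# Increase or decrease provisioned IOPS at any time.
az mysql flexible-server update \
  --resource-group my-rg \
  --name my-mysql-server \
  --iops 5000
```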
MySQL is one of the most popular database engines for running internet-scale web and mobile applications. Many of our customers use it for their online education services, video streaming services, digital payment solutions, e-commerce platforms, gaming services, news portals, government, and healthcare websites. These services are required to serve and scale as the traffic on the web or mobile application increases.
-On the applications side, the application is typically developed in Java or PHP and migrated to run on [Azure virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) or [Azure App Services](../../app-service/overview.md) or are containerized to run on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). With virtual machine scale set, App Service or AKS as underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests but often, database ends up being a bottleneck as centralized stateful component.
+On the applications side, the application is typically developed in Java or PHP and migrated to run on [Azure virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) or [Azure App Services](../../app-service/overview.md), or is containerized to run on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). With a virtual machine scale set, App Service, or AKS as the underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests. But often, the database ends up being a bottleneck as the centralized stateful component.
-The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server instance to a read-only server. You can replicate from the source server to **up to 10 replicas**. Replicas are updated asynchronously using the MySQL engine's native [binary log (binlog) file position-based replication technology](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). You can use a load balancer proxy solution like [ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to seamlessly scale out your application workload to read replicas without any application refactoring cost.
+The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server instance to a read-only server. You can replicate from the source server to **up to 10 replicas**. Replicas are updated asynchronously using the MySQL engine's native [binary log (binlog) file position-based replication technology](https://dev.mysql.com/doc/refman/5.7/en/replication-features.html). You can use a load balancer proxy solution like [ProxySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042) to seamlessly scale out your application workload to read replicas without any application refactoring cost.
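For example, a minimal sketch of creating one replica with the Azure CLI (server names are hypothetical):

```bash
# Create a read replica of an existing flexible server.
az mysql flexible-server replica create \
  --resource-group my-rg \
  --source-server my-mysql-server \
  --replica-name my-mysql-replica
```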
For more information, see [Read Replica concepts](concepts-read-replicas.md).

## Set up hybrid or multicloud data synchronization with data-in replication

Data-in replication allows you to synchronize data from an external MySQL server into Azure Database for MySQL flexible server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication is based on binary log (binlog) file position-based replication. The main scenarios for using data-in replication are:
-* Hybrid Data Synchronization
-* Multicloud Synchronization
-* [Minimal downtime migration to Azure Database for MySQL flexible server](../../mysql/howto-migrate-single-flexible-minimum-downtime.md)
-For more information, see [Data-in replication concepts](concepts-data-in-replication.md).
+- Hybrid Data Synchronization
+- Multicloud Synchronization
+- [Minimal downtime migration to Azure Database for MySQL flexible server](../../mysql/howto-migrate-single-flexible-minimum-downtime.md)
+For more information, see [Data-in replication concepts](concepts-data-in-replication.md).
## Stop/Start server to optimize cost
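You can stop a server while it's idle and start it again when needed, paying only for storage while it's stopped. A minimal sketch with hypothetical names:

```bash
# Stop compute billing while the server is idle; start it again on demand.
az mysql flexible-server stop --resource-group my-rg --name my-mysql-server
az mysql flexible-server start --resource-group my-rg --name my-mysql-server
```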
For more information, see [Server concepts](concept-servers.md).
Azure Database for MySQL flexible server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default).
-Azure Database for MySQL flexible server encrypts data in-motion with transport layer security enforced by default. Azure Database for MySQL flexible server by default supports encrypted connections using Transport Layer Security (TLS 1.2) and all incoming connections with TLS 1.0 and TLS 1.1 will be denied. SSL enforcement can be disabled by setting the require_secure_transport server parameter and you can set the minimum tls_version for your server.
+Azure Database for MySQL flexible server encrypts data in-motion with transport layer security enforced by default. Azure Database for MySQL flexible server by default supports encrypted connections using Transport Layer Security (TLS 1.2), and all incoming connections with TLS 1.0 and TLS 1.1 are denied. You can disable TLS/SSL enforcement by setting the require_secure_transport server parameter, and you can set the minimum tls_version for your server.
For more information, see [how to use encrypted connections to Azure Database for MySQL flexible server instances](how-to-connect-tls-ssl.md).
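As a sketch of adjusting these server parameters with the Azure CLI (server names are hypothetical; loosen `require_secure_transport` only if your workload truly requires it):

```bash
# Allow unencrypted connections (not recommended for production).
az mysql flexible-server parameter set \
  --resource-group my-rg --server-name my-mysql-server \
  --name require_secure_transport --value OFF

# Or keep enforcement and require TLS 1.2 as the minimum version.
az mysql flexible-server parameter set \
  --resource-group my-rg --server-name my-mysql-server \
  --name tls_version --value TLSv1.2
```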
-Azure Database for MySQL flexible server allows full-private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (VNet) integration. Servers in Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied and servers can't be reached using public endpoints.
+Azure Database for MySQL flexible server allows full-private access to the servers using [Azure virtual network](../../virtual-network/virtual-networks-overview.md) (virtual network) integration. Servers in Azure virtual network can only be reached and connected through private IP addresses. With virtual network integration, public access is denied and servers can't be reached using public endpoints.
For more information, see [Networking concepts](concepts-networking.md).

## Monitoring and alerting
-Azure Database for MySQL flexible server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. Azure Database for MySQL flexible serverexposes host server metrics to monitor resources utilization, allows configuring slow query logs. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. Azure Database for MySQL flexible server allows you to visualize the slow query and audit logs data using Azure Monitor workbooks. With workbooks, you get a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Azure Database for MySQL flexible server provides three available workbook templates out of the box viz Server Overview, [Auditing](tutorial-configure-audit.md) and [Query Performance Insights](tutorial-query-performance-insights.md). [Query Performance Insights](tutorial-query-performance-insights.md) workbook is designed to help you spend less time troubleshooting database performance by providing such information as:
+Azure Database for MySQL flexible server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. Azure Database for MySQL flexible server exposes host server metrics to monitor resource utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads and configure your server for best performance. Azure Database for MySQL flexible server allows you to visualize the slow query and audit logs data using Azure Monitor workbooks. With workbooks, you get a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Azure Database for MySQL flexible server provides three workbook templates out of the box: Server Overview, [Auditing](tutorial-configure-audit.md), and [Query Performance Insights](tutorial-query-performance-insights.md). The [Query Performance Insights](tutorial-query-performance-insights.md) workbook is designed to help you spend less time troubleshooting database performance by providing such information as:
-* Top N long-running queries and their trends.
-* The query details: view the query text as well as the history of execution with minimum, maximum, average, and standard deviation query time.
-* The resource utilizations (CPU, memory, and storage).
+- Top N long-running queries and their trends.
+- The query details: view the query text and the history of execution with minimum, maximum, average, and standard deviation query time.
+- The resource utilizations (CPU, memory, and storage).
In addition, you can use and integrate with community monitoring tools like [Percona Monitoring and Management with Azure Database for MySQL flexible server](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/monitor-azure-database-for-mysql-using-percona-monitoring-and/ba-p/2568545).
For more information, see [Monitoring concepts](concepts-monitoring.md).
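+
+As a sketch of the alerting capability mentioned at the start of this section (server, resource group, and action group names are assumptions), you could create a metric alert with the Azure CLI:
+
+```azurecli
+# Alert when average CPU utilization on the flexible server exceeds 80%
+serverId=$(az mysql flexible-server show --resource-group <resource-group-name> \
+    --name <server-name> --query id --output tsv)
+az monitor metrics alert create --name cpu-high --resource-group <resource-group-name> \
+    --scopes $serverId --condition "avg cpu_percent > 80" \
+    --window-size 5m --evaluation-frequency 1m --action <action-group-name>
+```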
Azure Database for MySQL flexible server runs the community version of MySQL. This allows full application compatibility and requires minimal refactoring cost to migrate existing applications developed on the MySQL engine to Azure Database for MySQL flexible server. Migration to Azure Database for MySQL flexible server can be performed using the following options: ### Offline Migrations
-* Using Azure Data Migration Service when network bandwidth between source and Azure is good (for example: High-speed ExpressRoute). Learn more with step-by-step instructions - [Migrate MySQL to Azure Database for MySQL flexible server offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)
-* Use mydumper/myloader to take advantage of compression settings to efficiently move data over low speed networks (such as public internet). Learn more with step-by-step instructions [Migrate large databases to Azure Database for MySQL flexible server using mydumper/myloader](../../mysql/concepts-migrate-mydumper-myloader.md)
+
+- Using Azure Database Migration Service when network bandwidth between source and Azure is good (for example: High-speed ExpressRoute). Learn more with step-by-step instructions - [Migrate MySQL to Azure Database for MySQL flexible server offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)
+- Use mydumper/myloader to take advantage of compression settings to efficiently move data over low speed networks (such as public internet). Learn more with step-by-step instructions [Migrate large databases to Azure Database for MySQL flexible server using mydumper/myloader](../../mysql/concepts-migrate-mydumper-myloader.md)
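+
+For illustration, a compressed dump-and-load with mydumper/myloader might look like the following sketch (host names, credentials, and thread counts are assumptions to tune for your environment):
+
+```bash
+# Dump the source server with compression, splitting large tables into chunks
+mydumper --host=<source-server> --user=<admin-user> --password='<password>' \
+    --outputdir=/backup --rows=500000 --compress --threads=8
+
+# Load the dump into the Azure Database for MySQL flexible server instance
+myloader --host=<server-name>.mysql.database.azure.com --user=<admin-user> --password='<password>' \
+    --directory=/backup --threads=8 --overwrite-tables
+```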
### Online or Minimal downtime migrations+ Use data-in replication with mydumper/myloader consistent backup/restore for initial seeding. Learn more with step-by-step instructions: [Tutorial: Minimal Downtime Migration of Azure Database for MySQL single server to Azure Database for MySQL flexible server](../../mysql/howto-migrate-single-flexible-minimum-downtime.md). To migrate from Azure Database for MySQL single server to Azure Database for MySQL flexible server in five easy steps, refer to [this blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057).
One advantage of running your workload in Azure is its global reach. Azure Datab
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
---
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
## Contacts
In addition, consider the following points of contact as appropriate:
## Next steps
-Now that you've read an introduction to Azure Database for MySQL flexible server deployment mode, you're ready to:
+With this introduction to the Azure Database for MySQL flexible server deployment mode, you're ready to:
- Create your first server. - [Create an Azure Database for MySQL flexible server instance using Azure portal](quickstart-create-server-portal.md)
networking Lumenisity Patent List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/lumenisity-patent-list.md
Title: Lumenisity University of Southampton Patents description: List of Lumenisity UoS Patents as of April 19, 2023.+ Last updated 05/31/2023
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
The MCC EDR agent is a software package that is installed onto a Linux Virtual M
| OS | Red Hat Enterprise Linux 8.6 or later, or Oracle Linux 8.8 or later | | vCPUs | 4 | | Memory | 32 GB |
-| Disk | 30 GB |
+| Disk | 64 GB |
| Network | Connectivity from MCCs and to Azure | | Software | systemd, logrotate and zip installed | | Other | SSH or alternative access to run shell commands |
This process assumes that you're connecting to Azure over ExpressRoute and are u
<Storage private IP>   <ingestion URL> <Key Vault private IP>  <Key Vault URL> ````
+1. In addition, the public IP address of the URL *login.microsoftonline.com* must be added to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
+
+ ```
+ <Public IP>   login.microsoftonline.com
+ ```
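+
+ As a sketch, you could pick one of the resolved public addresses and append the mapping like this (the resolved address varies, so treat the commands as illustrative):
+
+ ```bash
+ # Show the public IPs currently resolved for the endpoint
+ dig +short login.microsoftonline.com
+
+ # Append one resolved address to /etc/hosts (replace <Public IP>)
+ echo "<Public IP>   login.microsoftonline.com" | sudo tee -a /etc/hosts
+ ```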
## Install agent software
operator-insights How To Install Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md
Steps:
<Storage private IP>   <ingestion URL> <Key Vault private IP>  <Key Vault URL> ````-
+5. In addition, the public IP address of the URL *login.microsoftonline.com* must be added to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
+ ```
+ <Public IP>   login.microsoftonline.com
+ ```
## Install and configure agent software Repeat these steps for each VM onto which you want to install the agent:
operator-service-manager Azure Operator Service Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/azure-operator-service-manager-overview.md
# What is Azure Operator Service Manager?
-Azure Operator Service Manager is an Azure service designed to assist telecom operators in managing their network services. It provides management capabilities for multi-vendor applications across hybrid cloud sites, encompassing Azure regions, edge platforms, and Arc-connected sites. Azure Operator Service Manager caters to the needs of telecom operators who are in the process of migrating their workloads to Azure and Arc-connected cloud environments.
+Azure Operator Service Manager is an Azure service specifically designed to assist telecom operators in managing their network services. It provides management capabilities for multi-vendor applications across hybrid cloud sites, encompassing Azure regions, edge platforms, and Arc-connected sites. Azure Operator Service Manager caters to the needs of telecom operators who are in the process of migrating their workloads to Azure and Arc-connected cloud environments.
## Orchestrate operator services across Azure platforms
SLA (Service Level Agreement) information can be found on the [Service Level Agr
## Get access to Azure Operator Service Manager (AOSM) for your Azure subscription
-Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
After you restore the database, you can perform the following tasks to get your
## Long-term retention (preview)
-Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-term backup solution for Azure Database for PostgreSQL Flexible servers that retain backups for up to 10 years. You can use long-term retention independently or in addition to the automated backup solution offered by Azure PostgreSQL, which offers retention of up to 35 days. Automated backups are physical backups suited for operational recoveries, especially when you want to restore from the latest backups. Long-term backups help you with your compliance needs, are more granular, and are taken as logical backups using native pg_dump. In addition to long-term retention, the solution offers the following capabilities:
+Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-term backup solution for Azure Database for PostgreSQL Flexible servers that retains backups for up to 10 years. You can use long-term retention independently or in addition to the automated backup solution offered by Azure PostgreSQL, which offers retention of up to 35 days. Automated backups are physical backups suited for operational recoveries, especially when you want to restore from the latest backups. Long-term backups help you with your compliance needs, are more granular, and are taken as logical backups using native pg_dump. In addition to long-term retention, the solution offers the following capabilities:
- Customer-controlled scheduled and on-demand backups at the individual database level.
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
#### Limitations and Considerations -- During the early preview, Long Term Retention is available only in East US1, West Europe, and Central India regions. Support for other regions is coming soon. - In preview, LTR restore is currently available as RestoreasFiles to storage accounts. RestoreasServer capability will be added in the future.
+- In preview, you can perform LTR backups for all databases; single database backup support will be added in the future.
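+
+Because long-term backups are taken as logical backups with native pg_dump, a dump restored as files to a storage account can be loaded into a target server manually. A minimal sketch (server, user, database, and file names are assumptions):
+
+```bash
+# Load a downloaded LTR dump file into a database on a target flexible server
+pg_restore --host=<target-server>.postgres.database.azure.com --port=5432 \
+    --username=<admin-user> --dbname=<database-name> --no-owner <dump-file>
+```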
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
You can have a primary server in any [Azure Database for PostgreSQL region](http
- China North 3 - China East 3
+> [!NOTE]
+> The preview features - virtual endpoints and promote to primary server - are not currently supported in the special regions listed above.
### Use paired regions for disaster recovery purposes
Furthermore, to ease the connection process, the Azure portal provides ready-to-
"Promote" refers to the process where a replica is commanded to end its replica mode and transition into full read-write operations.
+> [!IMPORTANT]
+> Promote operation is not automatic. In the event of a primary server failure, the system won't switch to the read replica independently. A user action is always required for the promote operation.
+ Promotion of replicas can be done in two distinct ways: **Promote to primary server (preview)**
For both promotion methods, there are more options to consider:
- **Forced**: This option is designed for rapid recovery in scenarios such as regional outages. Instead of waiting to synchronize all the data from the primary, the server becomes operational once it processes WAL files needed to achieve the nearest consistent state. If you promote the replica using this option, the lag at the time you delink the replica from the primary will indicate how much data is lost.
-> [!IMPORTANT]
-> Promote operation is not automatic. In the event of a primary server failure, the system won't switch to the read replica independently. An user action is always required for the promote operation.
+> [!IMPORTANT]
+> The **Forced** option skips all the checks (for instance, the server symmetry requirement) and proceeds with promotion, because it's designed for unexpected scenarios. If you use the **Forced** option without fulfilling the requirements for read replicas specified in this documentation, you might experience issues such as broken replication. It's crucial to understand that this option prioritizes immediate availability over data consistency and should be used with caution.
Learn how to [promote replica to primary](how-to-read-replicas-portal.md#promote-replicas) and [promote to independent server and remove from replication](how-to-read-replicas-portal.md#promote-replica-to-independent-server).
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
When updating your Flexible server in scaling scenarios, we create a new copy of
### Precise Downtime Expectations * **Downtime Duration**: In most cases, the downtime ranges from 10 to 30 seconds.
-* **Additional Considerations**: After a scaling event, there's an inherent DNS `Time-To-Live` (TTL) period of approximately 30 seconds. This period isn't directly controlled by the scaling process but is a standard part of DNS behavior. So, from a application perspective, the total downtime experienced during scaling could be in the range of **40 to 60 seconds**.
+* **Additional Considerations**: After a scaling event, there's an inherent DNS `Time-To-Live` (TTL) period of approximately 30 seconds. This period isn't directly controlled by the scaling process but is a standard part of DNS behavior. So, from an application perspective, the total downtime experienced during scaling could be in the range of **40 to 60 seconds**.
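+
+To observe the DNS behavior yourself, a quick sketch (server name assumed) is to query the record for your server's FQDN before and after scaling:
+
+```bash
+# The second column of the answer line is the remaining TTL in seconds
+dig +noall +answer <server-name>.postgres.database.azure.com
+```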
-#### Limitations
+#### Considerations and limitations
- In order for near-zero downtime scaling to work, you should enable all [inbound/outbound connections between the IPs in the delegated subnet when using VNET integrated networking](../flexible-server/concepts-networking-private.md#virtual-network-concepts). If these aren't enabled, the near-zero downtime scaling process won't work and scaling will occur through the standard scaling workflow. - Near-zero downtime scaling won't work if there are regional capacity constraints or quota limits on customer subscriptions.
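+
+As a sketch of the first requirement (the NSG name, rule priority, and delegated subnet prefix are assumptions), you could allow traffic between the IPs in the delegated subnet like this:
+
+```azurecli
+# Allow all inbound traffic between addresses in the delegated subnet
+az network nsg rule create --resource-group <resource-group-name> --nsg-name <nsg-name> \
+    --name AllowDelegatedSubnetInBound --priority 100 --direction Inbound \
+    --access Allow --protocol '*' --destination-port-ranges '*' \
+    --source-address-prefixes 10.0.1.0/24 --destination-address-prefixes 10.0.1.0/24
+```
+
+A matching outbound rule covers the other direction.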
When updating your Flexible server in scaling scenarios, we create a new copy of
## Related content -- [create a PostgreSQL server in the portal](how-to-manage-server-portal.md).
+- [Create a PostgreSQL server in the portal](how-to-manage-server-portal.md).
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
Microsoft Entra ID is a multitenant application. It requires outbound connectivi
- **Private access (virtual network integration)**: - You need an outbound network security group (NSG) rule to allow virtual network traffic to only reach the `AzureActiveDirectory` service tag.
- - If you're using a route table, you need to create a rule with destination service tag `AzureActiveDirectory` and next hop `Internet`.
+ - If you're using a route table, you need to create a rule with the destination service tag `AzureActiveDirectory` and next hop `Internet`.
- Optionally, if you're using a proxy, you can add a new firewall rule to allow HTTP/S traffic to reach only the `AzureActiveDirectory` service tag.
+- **Custom DNS**:
+ There are additional considerations if you're using custom DNS in your virtual network (VNet). In such cases, it's crucial to ensure that the following **endpoints** resolve to their corresponding IP addresses:
+- **login.microsoftonline.com**: This endpoint is used for authentication purposes. Verify that your custom DNS setup enables resolving login.microsoftonline.com to its correct IP addresses.
+- **graph.microsoft.com**: This endpoint is used to access the Microsoft Graph API. Ensure your custom DNS setup allows the resolution of graph.microsoft.com to the correct IP addresses.
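+
+A quick way to verify both resolutions from a VM inside the virtual network is a sketch like the following:
+
+```bash
+# Both commands should return valid public IP addresses, not NXDOMAIN
+nslookup login.microsoftonline.com
+nslookup graph.microsoft.com
+```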
To set the Microsoft Entra admin during server provisioning, follow these steps:
postgresql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md
+
+ Title: List and download server logs with Azure CLI
+description: This article describes how to list and download Azure Database for PostgreSQL - Flexible Server logs by using the Azure CLI.
+++++ Last updated : 1/10/2024++
+# List and download Azure Database for PostgreSQL - Flexible Server logs by using the Azure CLI
++
+This article shows you how to list and download Azure Database for PostgreSQL flexible server logs using Azure CLI.
+
+## Prerequisites
+
+This article requires that you're running the Azure CLI version 2.39.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+You need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
+
+```azurecli-interactive
+az login
+```
+
+Select the specific subscription under your account using the [az account set](/cli/azure/account) command. Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
+
+```azurecli
+az account set --subscription <subscription id>
+```
+
+## List server logs using Azure CLI
+
+Once you've configured the prerequisites and connected to your required subscription, you can list the server logs from your Azure Database for PostgreSQL flexible server instance by using the following command.
++
+```azurecli
+az postgres flexible-server server-logs list --resource-group <myresourcegroup> --server-name <serverlogdemo> --out table
+```
+
+Here are the details for the preceding command:
+
+|**LastModifiedTime** |**Name** |**ResourceGroup**|**SizeInKb**|**TypePropertiesType**|**Url** |
+|---|---|---|---|---|---|
+|2024-01-10T13:20:15+00:00|serverlogs/postgresql_2024_01_10_12_00_00.log|myresourcegroup|242 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_12_00_00.log?`|
+|2024-01-10T14:20:37+00:00|serverlogs/postgresql_2024_01_10_13_00_00.log|myresourcegroup|237 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_13_00_00.log?`|
+|2024-01-10T15:20:58+00:00|serverlogs/postgresql_2024_01_10_14_00_00.log|myresourcegroup|237 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_14_00_00.log?`|
+|2024-01-10T16:21:17+00:00|serverlogs/postgresql_2024_01_10_15_00_00.log|myresourcegroup|240 |LOG |`https://00000000000.blob.core.windows.net/serverlogs/postgresql_2024_01_10_15_00_00.log?`|
++
+The output table here lists `LastModifiedTime`, `Name`, `ResourceGroup`, `SizeInKb`, and the download `Url` of the server logs.
+
+By default, `LastModifiedTime` is set to 72 hours. To list files older than 72 hours, use the `--file-last-written <Time:HH>` flag.
+
+```azurecli
+az postgres flexible-server server-logs list --resource-group <myresourcegroup> --server-name <serverlogdemo> --out table --file-last-written 144
+```
+
+## Download server logs using Azure CLI
+
+The following command downloads the preceding server logs to your current directory.
+
+```azurecli
+az postgres flexible-server server-logs download --resource-group <myresourcegroup> --server-name <serverlogdemo> --name <serverlogs/postgresql_2024_01_10_12_00_00.log>
+```
+
+## Next steps
+- To enable and disable server logs from the portal, see [this article](./how-to-server-logs-portal.md).
+- Learn more about [logging](./concepts-logging.md).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| | | | | | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: (v3 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | China East 3 | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| China North 3 | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
private-5g-core Azure Stack Edge Packet Core Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md
The following table provides information on which versions of the ASE device are
| Packet core version | ASE Pro GPU compatible versions | ASE Pro 2 compatible versions | |--|--|--|
-! 2310 | 2309 | 2309 |
+! 2310 | 2309, 2312 | 2309, 2312 |
| 2308 | 2303, 2309 | 2303, 2309 | | 2307 | 2303 | 2303 | | 2306 | 2303 | 2303 |
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure Storage Mover|[Reliability in Azure Storage Mover](./reliability-azure-storage-mover.md)|[Reliability in Azure Storage Mover](./reliability-azure-storage-mover.md)| |Azure VMware Solution|| [Azure VMware disaster recovery for virtual machines](../azure-vmware/disaster-recovery-for-virtual-machines.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Microsoft Defender for Cloud DevOps security|[Reliability in Microsoft Defender for Cloud DevOps security](./reliability-defender-devops.md)|[Reliability in Microsoft Defender for Cloud DevOps security](./reliability-defender-devops.md)|
-|Microsoft Fabric|[Microsoft Fabric](reliability-fabric.md) |[Microsoft Fabric](reliability-fabric.md)
-
+|Microsoft Fabric|[Microsoft Fabric](reliability-fabric.md) |[Microsoft Fabric](reliability-fabric.md)|
+|Microsoft Purview|[Reliability for Microsoft Purview](reliability-microsoft-purview.md) |[Disaster recovery for Microsoft Purview](/purview/concept-best-practices-migration#implementation-steps)|
## Azure Service Manager Retirement
reliability Reliability Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-microsoft-purview.md
+
+ Title: Reliability in Microsoft Purview for governance experiences
+description: Find out about reliability in Microsoft Purview for governance experiences
+++++ Last updated : 01/08/2024++
+# Reliability in Microsoft Purview
+
+This article describes reliability support in Microsoft Purview for governance experiences, and covers both regional resiliency with [availability zones](#availability-zone-support) and [disaster recovery and business continuity](#disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/well-architected/reliability/).
+
+## Availability zone support
++
+Microsoft Purview makes commercially reasonable efforts to provide zone-redundant availability, where resources automatically replicate across availability zones without any need for you to set up or configure anything.
+
+### Prerequisites
+
+- Microsoft Purview governance experience currently provides partial availability-zone support in [a limited number of regions](#supported-regions). This partial availability-zone support covers experiences (and/or certain functionalities within an experience).
+- Availability zone support might or might not be available for Microsoft Purview governance experiences or features/functionalities that are in preview.
+
+### Supported regions
+
+Microsoft Purview makes commercially reasonable efforts to provide availability zone support in various regions as follows:
+
+| Region | Data Map | Scan | Policy | Insights |
+| | | | | |
+|Norway East|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|East US 2 EUAP||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Central US||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|West Central US|||||
+|Southeast Asia||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|East US||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Australia East|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|West US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Canada Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|Central India||:::image type="icon" source="media/yes-icon.svg":::|||
+|East US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|France Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Germany West Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Japan East||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Korea Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|West US 3||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|North Europe||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|South Africa North||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Sweden Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|Switzerland North||:::image type="icon" source="media/yes-icon.svg":::|||
+|UAE North|||||
+|USGov Virginia|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|South Central US||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|Brazil South||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|UK South|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|Canada East|||||
+|Qatar Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+|China North 3|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
+|West Europe||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
+
+## Disaster recovery and business continuity
++
+There's some key information to consider upfront:
+
+- It isn't advisable to back up "scanned" assets' details. You should only back up the curated data such as mapping of classifications and glossaries on assets. The only case when you need to back up assets' details is when you have custom assets via custom `typeDef`.
+
+- The backed-up asset count should be fewer than 100,000 assets. The main driver is that you have to use the search query API to get the assets, which has a limitation of 100,000 assets returned. However, if you're able to segment the search query to get a smaller number of assets per API call, it's possible to back up more than 100,000 assets (see the sketch after this list).
+
+- The goal is to perform a one-time migration. If you wish to continuously "sync" assets between two accounts, there are other steps that won't be covered in detail by this article. You have to use [Microsoft Purview's Event Hubs to subscribe and create entities to another account](/purview/manage-kafka-dotnet). However, Event Hubs only has Atlas information. Microsoft Purview has added other capabilities such as **glossaries** and **contacts** that aren't available via Event Hubs.
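+
+As referenced in the list above, the following is a hypothetical sketch of segmenting the search query into limit/offset pages. The endpoint shape, API version, and paging parameters are assumptions to verify against the current Microsoft Purview search query API before use:
+
+```bash
+# Page through catalog search results and append each page to a backup file
+account="<purview-account-name>"   # assumed account name
+token=$(az account get-access-token --resource https://purview.azure.net \
+    --query accessToken --output tsv)
+for offset in $(seq 0 1000 99000); do
+  curl -s -X POST "https://${account}.purview.azure.com/catalog/api/search/query?api-version=2023-09-01" \
+    -H "Authorization: Bearer ${token}" -H "Content-Type: application/json" \
+    -d "{\"keywords\": \"*\", \"limit\": 1000, \"offset\": ${offset}}" >> assets-backup.jsonl
+done
+```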
+
+### Identify key requirements
+
+Most enterprise organizations have a critical requirement for Microsoft Purview capabilities such as backup, business continuity, and disaster recovery (BCDR). To get into more details of this requirement, you need to differentiate between backup, high availability (HA), and disaster recovery (DR).
+
+While they're similar, HA keeps the service operational if there's a hardware fault, for example, but it wouldn't protect you if someone accidentally or deliberately deleted all the records in your database. For that, you might need to restore the service from a backup.
+
+### Backup
+
+You might need to create regular backups from a Microsoft Purview account and use a backup in case a piece of data or configuration is accidentally or deliberately deleted from the Microsoft Purview account by the users.
+
+The backup should allow saving a point in time copy of the following configurations from the Microsoft Purview account:
+
+- Account information (for example, Friendly name)
+- Collection structure and role assignments
+- Custom Scan rule sets, classifications, and classification rules
+- Registered data sources
+- Scan information
+- Create and maintain key vault connections
+- Key vault connections, credentials, and relations with current scans
+- Registered SHIRs
+- Glossary terms templates
+- Glossary terms
+- Manual asset updates (including classification and glossary assignments)
+- ADF and Synapse connections and lineage
+
+Backup strategy is determined by restore strategy, or, more specifically, by how long it takes to restore things when a disaster occurs. To answer that, you might need to engage with the affected stakeholders (the business owners) and understand the required recovery objectives.
+
+There are three main requirements to take into consideration:
+
+- **Recovery Time Objective (RTO)** - Defines the maximum allowable downtime following a disaster within which, ideally, the system should be back operational.
+- **Recovery Point Objective (RPO)** - Defines the acceptable amount of data loss following a disaster. Normally, RPO is expressed as a timeframe in hours or minutes. For example, an RPO of one hour means that, at worst, you lose the most recent hour of changes.
+- **Recovery Level Objective (RLO)** - Defines the granularity of the data being restored. It could be a SQL server, a set of databases, tables, records, and so on.
+
+To implement disaster recovery for Microsoft Purview, see the [Microsoft Purview disaster recovery documentation](/purview/concept-best-practices-migration#implementation-steps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/well-architected/reliability/)
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
Last updated 02/01/2023
# Connect data from Microsoft Defender XDR to Microsoft Sentinel
-Microsoft Sentinel's [Microsoft Defender XDR](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft Defender XDR incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft Defender XDR incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft Defender XDR's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, **Microsoft Defender for Cloud Apps**, and **Microsoft Defender for Cloud**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention** and **Microsoft Entra ID Protection**.
+Microsoft Sentinel's [Microsoft Defender XDR](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft Defender XDR incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft Defender XDR incidents include all their alerts, entities, and other relevant information. They also include alerts from Microsoft Defender XDR's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention** and **Microsoft Entra ID Protection**. The Microsoft Defender XDR connector also brings in incidents from **Microsoft Defender for Cloud**. However, to synchronize alerts and entities from these incidents, you must enable the Microsoft Defender for Cloud connector; otherwise, your Microsoft Defender for Cloud incidents appear empty. Learn more about the available connectors for [Microsoft Defender for Cloud](ingest-defender-for-cloud-incidents.md).
The connector also lets you stream **advanced hunting** events from *all* of the above Defender components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
sentinel Ingest Defender For Cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ingest-defender-for-cloud-incidents.md
Last updated 11/28/2023
Microsoft Defender for Cloud is now [integrated with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview), formerly known as Microsoft 365 Defender. This integration, currently **in Preview**, allows Defender XDR to collect alerts from Defender for Cloud and create Defender XDR incidents from them.
-Thanks to this integration, Microsoft Sentinel customers who have enabled [Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md) will now be able to ingest and synchronize Defender for Cloud incidents, with all their alerts, through Microsoft Defender XDR.
+Thanks to this integration, Microsoft Sentinel customers who enable [Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md) can now ingest and synchronize Defender for Cloud incidents through Microsoft Defender XDR.
-To support this integration, Microsoft Sentinel has added a new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector will allow Microsoft Sentinel customers to receive Defender for Cloud alerts and incidents across their entire tenants, without having to monitor and maintain the connector's enrollment to all their Defender for Cloud subscriptions.
+To support this integration, you must set up one of the following Microsoft Defender for Cloud data connectors, otherwise your incidents for Microsoft Defender for Cloud coming through the Microsoft Defender XDR connector won't display their associated alerts and entities:
-This connector can be used to ingest Defender for Cloud alerts, regardless of whether you have Defender XDR incident integration enabled.
+- Microsoft Sentinel has a new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector allows Microsoft Sentinel customers to receive Defender for Cloud alerts across their entire tenants, without having to monitor and maintain the connector's enrollment to all their Defender for Cloud subscriptions. We recommend using this new connector, as the Microsoft Defender XDR integration with Microsoft Defender for Cloud is also implemented at the tenant level.
+
+- Alternatively, you can use the [**Subscription-based Microsoft Defender for Cloud (Legacy)**](connect-defender-for-cloud.md) connector. This connector isn't recommended, because if any of your Defender for Cloud subscriptions aren't connected to Microsoft Sentinel through the connector, incidents from those subscriptions won't display their associated alerts and entities.
+
+Both connectors mentioned above can be used to ingest Defender for Cloud alerts, regardless of whether you have Defender XDR incident integration enabled.
> [!IMPORTANT] > The Defender for Cloud integration with Defender XDR, and the Tenant-based Microsoft Defender for Cloud connector, are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
site-recovery Move Azure Vms Avset Azone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-azure-VMs-AVset-Azone.md
The following steps will guide you when using Azure Site Recovery to enable repl
> These steps are for a single VM. You can extend the same to multiple VMs. Go to the Recovery Services vault, select **+ Replicate**, and select the relevant VMs together. 1. In the Azure portal, select **Virtual machines**, and select the VM you want to move into Availability Zones.
-2. In **Operations**, select **Disaster recovery**.
+2. In **Backup + disaster recovery**, select **Disaster recovery**.
3. In **Configure disaster recovery** > **Target region**, select the target region to which you'll replicate. Ensure this region [supports](../availability-zones/az-region.md) Availability Zones. 4. Select **Next: Advanced settings**. 5. Choose the appropriate values for the target subscription, target VM resource group, and virtual network.
Go to the VM. Select **Disable Replication**. This action stops the process of c
In this tutorial, you increased the availability of an Azure VM by moving into an availability set or Availability Zone. Now you can set disaster recovery for the moved VM. > [!div class="nextstepaction"]
-> [Set up disaster recovery after migration](azure-to-azure-quickstart.md)
+> [Set up disaster recovery after migration](azure-to-azure-quickstart.md)
spring-apps How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-appdynamics-java-agent-monitor.md
To activate an application through the Azure CLI, use the following steps.
--resource-group "<your-resource-group-name>" \ --service "<your-Azure-Spring-Apps-instance-name>" \ --name "<your-app-name>" \
- --jar-path app.jar \
+ --artifact-path app.jar \
--jvm-options="-javaagent:/opt/agents/appdynamics/java/javaagent.jar" \ --env APPDYNAMICS_AGENT_APPLICATION_NAME=<your-app-name> \ APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<your-agent-access-key> \
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
az spring app deploy \
--resource-group <your-resource-group-name> \ --service <your-Azure-Spring-Apps-name> \ --name <your-application-name> \
- --jar-path app.jar \
+ --artifact-path app.jar \
--env \ DT_TENANT=<your-environment-ID> \ DT_TENANTTOKEN=<your-tenant-token> \
spring-apps How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md
Use the following procedure to access the agent:
--resource-group <resource-group-name> \ --service <Azure-Spring-Apps-instance-name> \ --name <app-name> \
- --jar-path app.jar \
+ --artifact-path app.jar \
--jvm-options="-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar" \ --env NEW_RELIC_APP_NAME=appName \ NEW_RELIC_LICENSE_KEY=newRelicLicenseKey
spring-apps How To Use Grpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-grpc.md
Use the following command to deploy the newly built JAR file to your Azure Sprin
```azurecli az spring app deploy \ --name ${CUSTOMERS_SERVICE} \
- --jar-path ${CUSTOMERS_SERVICE_JAR} \
+ --artifact-path ${CUSTOMERS_SERVICE_JAR} \
--jvm-options='-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql' \ --env MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \ MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \
spring-apps How To Write Log To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-write-log-to-custom-persistent-storage.md
In the preceding example, there are two placeholders named `{LOGS}` in the path
--resource-group <resource-group-name> \ --name <app-name> \ --service <spring-instance-name> \
- --jar-path <path-to-jar-file>
+ --artifact-path <path-to-jar-file>
``` 1. Use the following command to check your application's console log:
spring-apps Quickstart Deploy Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-microservice-apps.md
description: Learn how to deploy microservice applications to Azure Spring Apps.
Previously updated : 06/21/2023 Last updated : 01/10/2024 zone_pivot_groups: spring-apps-tier-selection
zone_pivot_groups: spring-apps-tier-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-This article explains how to deploy microservice applications to Azure Spring Apps using the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices). The Pet Clinic sample demonstrates the microservice architecture pattern. The following diagram shows the architecture of the PetClinic application on Azure Spring Apps.
+This article explains how to deploy microservice applications to Azure Spring Apps using the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices).
+
+The Pet Clinic sample demonstrates the microservice architecture pattern. The following diagram shows the architecture of the PetClinic application on the Azure Spring Apps Enterprise plan.
++
+The diagram shows the following architectural flows and relationships of the Pet Clinic sample:
+
+- Uses Azure Spring Apps to manage the frontend and backend apps. The backend apps are built with Spring Boot, and each app uses HSQLDB as the persistent store. The reworked frontend app builds on the Pet Clinic API Gateway app, with Node.js serving as a standalone frontend web application.
+- Uses the managed components on Azure Spring Apps, including Service Registry, Application Configuration Service, Spring Cloud Gateway, and Application Live View. The Application Configuration Service reads the Git repository configuration.
+- Exposes the URL of Spring Cloud Gateway to route requests to backend service apps, and exposes the URL of the Application Live View to monitor the backend apps.
+- Analyzes logs using the Log Analytics workspace.
+- Monitors performance with Application Insights.
+
+> [!NOTE]
+> This article uses a simplified version of PetClinic, using an in-memory database that isn't production-ready to quickly deploy to Azure Spring Apps.
+>
+> The Tanzu Developer Tools exposes public access for Application Live View, which is a risk point. The production environment needs to secure the access. For more information, see the [Configure Dev Tools Portal](./how-to-use-dev-tool-portal.md#configure-dev-tools-portal) section of [Configure Tanzu Dev Tools in the Azure Spring Apps Enterprise plan](how-to-use-dev-tool-portal.md).
+++
+The Pet Clinic sample demonstrates the microservice architecture pattern. The following diagram shows the architecture of the PetClinic application on the Azure Spring Apps Standard plan.
+ The diagram shows the following architectural flows and relationships of the Pet Clinic sample: - Uses Azure Spring Apps to manage the Spring Boot apps. Each app uses HSQLDB as the persistent store.-- Uses the managed components Spring Cloud Config Server and Eureka Service Discovery on Azure Spring Apps. The Config Server reads Git repository configuration.
+- Uses the managed components Spring Cloud Config Server and Eureka Service Registry on Azure Spring Apps. The Config Server reads the Git repository configuration.
- Exposes the URL of API Gateway to load balance requests to service apps, and exposes the URL of the Admin Server to manage the applications. - Analyzes logs using the Log Analytics workspace. - Monitors performance with Application Insights. > [!NOTE]
-> This article uses a simplified version of PetClinic, using an in-memory database that is not production-ready to quickly deploy to Azure Spring Apps.
->
+> This article uses a simplified version of PetClinic, using an in-memory database that isn't production-ready to quickly deploy to Azure Spring Apps.
+>
> The deployed app `admin-server` exposes public access, which is a risk point. The production environment needs to secure the Spring Boot Admin application. This article provides the following options for deploying to Azure Spring Apps: -- The **Azure portal and Maven plugin** option provides a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.+
+- The **Azure portal** option is the easiest and the fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
+- The **Azure portal + Maven plugin** option is a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
+++
+- The **Azure portal + Maven plugin** option is a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
- The **Azure Developer CLI** option is a more efficient way to automatically create resources and deploy applications through simple commands. The Azure Developer CLI uses a template to provision the Azure resources needed and to deploy the application code. This option is suitable for Spring developers who are familiar with Azure cloud services. ::: zone-end
This article provides the following options for deploying to Azure Spring Apps:
::: zone pivot="sc-enterprise"
+### [Azure portal](#tab/Azure-portal-ent)
+ - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
- (Optional) [Git](https://git-scm.com/downloads). - (Optional) [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Git](https://git-scm.com/downloads).
+- (Optional) [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- (Optional) [Node.js](https://nodejs.org/en/download), version 16.20 or higher.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher.
+++ ::: zone-end ::: zone pivot="sc-standard"
This article provides the following options for deploying to Azure Spring Apps:
The following sections describe how to validate the deployment. +
+### [Azure portal](#tab/Azure-portal-ent)
++
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+ ### 5.1. Access the applications
-Using the URL information in the deployment log output, open the URL exposed by the app named `api-gateway` - for example, `https://<your-Azure-Spring-Apps-instance-name>-api-gateway.azuremicroservices.io`. The application should look similar to the following screenshot:
+Open the endpoint assigned by Spring Cloud Gateway - for example, `https://<your-Azure-Spring-Apps-instance-name>-gateway-xxxxx.svc.azuremicroservices.io`. The application should look similar to the following screenshot:
### 5.2. Query the application logs After you browse each function of the Pet Clinic, the Log Analytics workspace collects logs of each application. You can check the logs by using custom queries, as shown in the following screenshot: ### 5.3. Monitor the applications Application Insights monitors the application dependencies, as shown by the following application tracing map: +
+Open the Application Live View URL exposed by the Developer Tools to monitor application runtimes, as shown in the following screenshot:
-Open the URL exposed by the app `admin-server` to manage the applications through the Spring Boot Admin Server, as shown in the following screenshot:
+++++ ## 6. Clean up resources
Be sure to delete the resources you created in this article when you no longer n
Use the following steps to delete the entire resource group, including the newly created service instance:
-1. Locate your resource group in the Azure portal. On the navigation menu, select **Resource groups**, and then select the name of your resource group.
+1. Locate your resource group in the Azure portal. On the navigation menu, select **Resource groups** and then select the name of your resource group.
1. On the **Resource group** page, select **Delete**. Enter the name of your resource group in the text box to confirm deletion, then select **Delete**.
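+
+If you prefer the CLI, a minimal sketch (resource group name assumed) performs the same cleanup:
+
+```azurecli
+# Delete the resource group and all resources in it, without waiting for completion
+az group delete --name <resource-group-name> --yes --no-wait
+```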
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
This sample invokes the HTTP triggered function by first requesting an access to
--resource-group <resource-group-name> \ --service <Azure-Spring-Apps-instance-name> \ --name "msiapp" \
- --jar-path target/asc-managed-identity-function-sample-0.1.0.jar
+ --artifact-path target/asc-managed-identity-function-sample-0.1.0.jar
``` 1. Use the following command to access the public endpoint or test endpoint to test your app:
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
and other volume names that you might require
Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/blob/main/PSH%20(Windows)%20Multi-Session%20Connect%20Scripts/ElasticSanDocScripts0523/connect.ps1) and save it as a .ps1 file, for example, connect.ps1. Then execute it with the required parameters. The following is an example of how to run the script: ```bash
-./connnect.ps1 $rgname $esanname $vgname $vol1,$vol2,$vol3 32
+./connect.ps1 $rgname $esanname $vgname $vol1,$vol2,$vol3 32
``` Verify the number of sessions your volume has with either `iscsicli SessionList` or `mpclaim -s -d`
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
description: This article describes how to use Azure Stream Analytics and Azure
Previously updated : 10/05/2022 Last updated : 01/09/2024 # Anomaly detection in Azure Stream Analytics Available in both the cloud and Azure IoT Edge, Azure Stream Analytics offers built-in machine learning based anomaly detection capabilities that can be used to monitor the two most commonly occurring anomalies: temporary and persistent. With the **AnomalyDetection_SpikeAndDip** and **AnomalyDetection_ChangePoint** functions, you can perform anomaly detection directly in your Stream Analytics job.
-The machine learning models assume a uniformly sampled time series. If the time series isn't uniform, you may insert an aggregation step with a tumbling window prior to calling anomaly detection.
+The machine learning models assume a uniformly sampled time series. If the time series isn't uniform, you can insert an aggregation step with a tumbling window before calling anomaly detection.
The machine learning operations don't support seasonality trends or multi-variate correlations at this time.
Generally, the model's accuracy improves with more data in the sliding window. T
The functions operate by establishing a certain normal based on what they've seen so far. Outliers are identified by comparing against the established normal, within the confidence level. The window size should be based on the minimum events required to train the model for normal behavior so that when an anomaly occurs, it would be able to recognize it.
-The model's response time increases with history size because it needs to compare against a higher number of past events. It's recommended to only include the necessary number of events for better performance.
+The model's response time increases with history size because it needs to compare against a higher number of past events. We recommend that you only include the necessary number of events for better performance.
Gaps in the time series can be a result of the model not receiving events at certain points in time. This situation is handled by Stream Analytics using imputation logic. The history size, as well as a time duration, for the same sliding window is used to calculate the average rate at which events are expected to arrive.
-An anomaly generator available [here](https://aka.ms/asaanomalygenerator) can be used to feed an Iot Hub with data with different anomaly patterns. An ASA job can be set up with these anomaly detection functions to read from this Iot Hub and detect anomalies.
+An anomaly generator available [here](https://aka.ms/asaanomalygenerator) can be used to feed an IoT hub with data that has different anomaly patterns. An Azure Stream Analytics job can be set up with these anomaly detection functions to read from this IoT hub and detect anomalies.
## Spike and dip
FROM AnomalyDetectionStep
Persistent anomalies in a time series event stream are changes in the distribution of values in the event stream, like level changes and trends. In Stream Analytics, such anomalies are detected using the Machine Learning based [AnomalyDetection_ChangePoint](/stream-analytics-query/anomalydetection-changepoint-azure-stream-analytics) operator.
-Persistent changes last much longer than spikes and dips and could indicate catastrophic event(s). Persistent changes aren't usually visible to the naked eye, but can be detected with the **AnomalyDetection_ChangePoint** operator.
+Persistent changes last much longer than spikes and dips and could indicate catastrophic events. Persistent changes aren't usually visible to the naked eye, but can be detected with the **AnomalyDetection_ChangePoint** operator.
The following image is an example of a level change:
The following image is an example of a trend change:
![Example of trend change anomaly](./media/stream-analytics-machine-learning-anomaly-detection/anomaly-detection-trend-change.png)
-The following example query assumes a uniform input rate of one event per second in a 20-minute sliding window with a history size of 1200 events. The final SELECT statement extracts and outputs the score and anomaly status with a confidence level of 80%.
+The following example query assumes a uniform input rate of one event per second in a 20-minute sliding window with a history size of 1,200 events. The final SELECT statement extracts and outputs the score and anomaly status with a confidence level of 80%.
```SQL
WITH AnomalyDetectionStep AS
FROM AnomalyDetectionStep
## Performance characteristics
-The performance of these models depends on the history size, window duration, event load, and whether function level partitioning is used. This section discusses these configurations and provides samples for how to sustain ingestion rates of 1K, 5K and 10K events per second.
+The performance of these models depends on the history size, window duration, event load, and whether function level partitioning is used. This section discusses these configurations and provides samples for how to sustain ingestion rates of 1,000, 5,000, and 10,000 events per second.
-* **History size** - These models perform linearly with **history size**. The longer the history size, the longer the models take to score a new event. This is because the models compare the new event with each of the past events in the history buffer.
+* **History size** - These models perform linearly with **history size**. The longer the history size, the longer the models take to score a new event, because they compare the new event with each of the past events in the history buffer.
* **Window duration** - The **Window duration** should reflect how long it takes to receive as many events as specified by the history size. Without that many events in the window, Azure Stream Analytics would impute missing values. Hence, CPU consumption is a function of the history size.
* **Event load** - The greater the **event load**, the more work that is performed by the models, which impacts CPU consumption. The job can be scaled out by making it embarrassingly parallel, assuming it makes sense for business logic to use more input partitions.
* **Function level partitioning** - **Function level partitioning** is done by using ```PARTITION BY``` within the anomaly detection function call (see the sketch after this list). This type of partitioning adds an overhead, as state needs to be maintained for multiple models at the same time. Function level partitioning is used in scenarios like device level partitioning.
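+
+As a sketch of what function level partitioning looks like in a query (the stream, column, and partition key names are hypothetical), each device gets its own model state:
+
+```SQL
+SELECT
+    deviceId,
+    System.Timestamp() AS time,
+    -- One change point model is maintained per deviceId
+    AnomalyDetection_ChangePoint(reading, 80, 1200)
+        OVER (PARTITION BY deviceId LIMIT DURATION(minute, 20)) AS changePointScores
+FROM input TIMESTAMP BY eventTime
+```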
windowDuration (in ms) = 1000 * historySize / (total input events per second / I
When partitioning the function by deviceId, add "PARTITION BY deviceId" to the anomaly detection function call.

### Observations
-The following table includes the throughput observations for a single node (6 SU) for the non-partitioned case:
+The following table includes the throughput observations for a single node (6 SU) for the nonpartitioned case:
| History size (events) | Window duration (ms) | Total input events per second |
| -- | -- | -- |
The following table includes the throughput observations for a single node (6 SU
| 600 | 728 | 1,650 |
| 6,000 | 10,910 | 1,100 |
-The following table includes the throughput observations for a single node (6 SU) for the partitioned case:
+The following table includes the throughput observations for a single node (6 SU) for the partitioned case:
| History size (events) | Window duration (ms) | Total input events per second | Device count |
| -- | -- | -- | -- |
The following table includes the throughput observations for a single node (6 SU
| 600 | 218,182 | 550 | 100 |
| 6,000 | 2,181,819 | <550 | 100 |
-Sample code to run the non-partitioned configurations above is located in the [Streaming At Scale repo](https://github.com/Azure-Samples/streaming-at-scale/blob/f3e66fa9d8c344df77a222812f89a99b7c27ef22/eventhubs-streamanalytics-eventhubs/anomalydetection/create-solution.sh) of Azure Samples. The code creates a stream analytics job with no function level partitioning, which uses Event Hubs as input and output. The input load is generated using test clients. Each input event is a 1KB json document. Events simulate an IoT device sending JSON data (for up to 1K devices). The history size, window duration, and total event load are varied over 2 input partitions.
+Sample code to run the nonpartitioned configurations above is located in the [Streaming At Scale repo](https://github.com/Azure-Samples/streaming-at-scale/blob/f3e66fa9d8c344df77a222812f89a99b7c27ef22/eventhubs-streamanalytics-eventhubs/anomalydetection/create-solution.sh) of Azure Samples. The code creates a Stream Analytics job with no function level partitioning, which uses Event Hubs as input and output. The input load is generated using test clients. Each input event is a 1-KB JSON document. Events simulate an IoT device sending JSON data (for up to 1,000 devices). The history size, window duration, and total event load are varied over two input partitions.
> [!Note]
> For a more accurate estimate, customize the samples to fit your scenario.

### Identifying bottlenecks
-Use the Metrics pane in your Azure Stream Analytics job to identify bottlenecks in your pipeline. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hub metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Azure Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
+To identify bottlenecks in your pipeline, use the Metrics pane in your Azure Stream Analytics job. Review **Input/Output Events** for throughput and ["Watermark Delay"](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/) or **Backlogged Events** to see if the job is keeping up with the input rate. For Event Hubs metrics, look for **Throttled Requests** and adjust the Threshold Units accordingly. For Azure Cosmos DB metrics, review **Max consumed RU/s per partition key range** under Throughput to ensure your partition key ranges are uniformly consumed. For Azure SQL DB, monitor **Log IO** and **CPU**.
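+
+As a hedged sketch of pulling the watermark delay metric with Azure PowerShell (the resource ID is a placeholder, and this assumes the Az.Monitor module and the `OutputWatermarkDelaySeconds` metric name):
+
+```azurepowershell
+# Placeholder resource ID for your Stream Analytics job
+$jobId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>'
+
+# Query watermark delay in 5-minute grains
+Get-AzMetric -ResourceId $jobId -MetricName 'OutputWatermarkDelaySeconds' -TimeGrain 00:05:00
+```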
+
+## Demo video
+
+> [!VIDEO https://www.youtube.com/embed/Ra8HhBLdzHE?si=erKzcoSQb-rEGLXG]
## Next steps
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
description: Maintenance scheduling enables customers to plan around the necessa
Previously updated : 11/28/2022 Last updated : 01/10/2024
To view the maintenance schedule that has been applied to your Synapse SQL pool,
## Skip or change maintenance schedule
-To ensure compliance with latest security requirements, we are unable to accommodate requests to skip or delay these updates. However, you may have some options to adjust your maintenance window within the current cycle depending on your situation:
+To ensure compliance with the latest security requirements, we're unable to accommodate requests to skip or delay these updates. However, if you're using DW500c or higher data warehouse tiers, you might have some options to adjust your maintenance window within the current cycle, depending on your situation:
- If you receive a pending notification for maintenance, and you need more time to finish your jobs or notify your team, you can change the window start time as long as you do so before the beginning of your defined maintenance window. This will shift your window forward in time within the cycle.
- You can manually trigger the maintenance by pausing and resuming (or scaling) your dedicated SQL pool after the start of a cycle for which a "Pending" notification has been received, as sketched below. The weekend maintenance cycle starts on Saturday at 00:00 UTC; the midweek maintenance cycle starts Tuesday at 12:00 UTC.
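+
+As a sketch of the pause-and-resume option, assuming the Az.Synapse PowerShell module and hypothetical resource names (standalone dedicated SQL pools, formerly SQL DW, use the equivalent `Suspend-AzSqlDatabase` and `Resume-AzSqlDatabase` cmdlets instead):
+
+```azurepowershell
+# Pause the dedicated SQL pool; pending maintenance is applied during this cycle
+Suspend-AzSynapseSqlPool -WorkspaceName "myworkspace" -Name "mysqlpool"
+
+# Resume the pool when you're ready to bring it back online
+Resume-AzSynapseSqlPool -WorkspaceName "myworkspace" -Name "mysqlpool"
+```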
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
This article shows you how to deploy Azure Virtual Desktop on Azure or Azure Sta
You can do all these tasks in a single process when using the Azure portal, but you can also do them separately.
-The process covered in this article is an in-depth and adaptable approach to deploying Azure Virtual Desktop. If you want a more simple approach to deploy a sample Windows 11 desktop in Azure Virtual Desktop, see [Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop](tutorial-try-deploy-windows-11-desktop.md) or use the [getting started feature](getting-started-feature.md).
- For more information on the terminology used in this article, see [Azure Virtual Desktop terminology](environment-setup.md), and to learn about the service architecture and resilience of the Azure Virtual Desktop service, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
+> [!TIP]
+> The process covered in this article is an in-depth and adaptable approach to deploying Azure Virtual Desktop. If you want to try Azure Virtual Desktop with a simpler approach that deploys a sample Windows 11 desktop, see [Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop](tutorial-try-deploy-windows-11-desktop.md) or use the [getting started feature](getting-started-feature.md).
+
## Prerequisites

Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required and supported, such as operating systems (OS), virtual networks, and identity providers. It also includes a list of the [supported Azure regions](prerequisites.md#azure-regions) in which you can deploy host pools, workspaces, and application groups. This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises with [Azure Stack HCI (preview)](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
virtual-desktop Enroll Per User Access Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enroll-per-user-access-pricing.md
+
+ Title: Enroll in per-user access pricing for Azure Virtual Desktop
+description: Learn how to enroll your Azure subscription for per-user access pricing for Azure Virtual Desktop.
+++ Last updated : 01/08/2024++
+# Enroll in per-user access pricing for Azure Virtual Desktop
+
+Per-user access pricing lets you pay for Azure Virtual Desktop access rights on behalf of external users. External users are people who aren't members of your organization, such as customers of a business. To learn more about licensing options, see [Licensing Azure Virtual Desktop](licensing.md).
+
+Before external users can connect to your deployment, you need to enroll your Azure subscriptions that you use for Azure Virtual Desktop in per-user access pricing. Your enrolled subscription is charged each month based on the number of distinct users that connect to Azure Virtual Desktop resources.
+
+> [!IMPORTANT]
+> Per-user access pricing with Azure Virtual Desktop doesn't currently support Citrix DaaS and VMware Horizon Cloud.
+
+## How to enroll an Azure subscription
+
+To enroll your Azure subscription into per-user access pricing:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type **Azure Virtual Desktop** and select the matching service entry.
+
+1. In the **Azure Virtual Desktop** overview page, select **Per-user access pricing**.
+
+1. In the list of subscriptions, check the box for the subscription where you deploy Azure Virtual Desktop resources for external users.
+
+1. Select **Enroll**.
+
+1. Review the Product Terms, then select **Enroll** to begin enrollment. It might take up to an hour for the enrollment process to finish. The **Per-user access pricing** column of the subscriptions list shows **Enrolling** while the enrollment process is running.
+
+1. After enrollment completes, check that the value in the **Per-user access pricing** column of the subscriptions list has changed to **Enrolled**.
+
+## How to unenroll an Azure subscription
+
+To unenroll your Azure subscription from per-user access pricing:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type **Azure Virtual Desktop** and select the matching service entry.
+
+1. In the **Azure Virtual Desktop** overview page, select **Per-user access pricing**.
+
+1. In the list of subscriptions, check the box for the subscription you want to unenroll from per-user access pricing.
+
+1. Select **Unenroll**.
+
+1. Review the unenrollment message, then select **Unenroll** to begin unenrollment. It might take up to an hour for the unenrollment process to finish. The **Per-user access pricing** column of the subscriptions list shows **Unenrolling** while the unenrollment process is running.
+
+1. After unenrollment completes, check that the value in the **Per-user access pricing** column of the subscriptions list has changed to **Not enrolled**.
+
+## Next steps
+
+- To learn more about per-user access pricing, see [Licensing Azure Virtual Desktop](licensing.md).
+- For estimating total deployment costs, see [Understand and estimate costs for Azure Virtual Desktop](understand-estimate-costs.md).
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/licensing.md
+
+ Title: Licensing Azure Virtual Desktop
+description: An overview of licensing Azure Virtual Desktop for internal and external commercial purposes, including per-user access pricing.
+++ Last updated : 01/08/2024++
+# Licensing Azure Virtual Desktop
+
+This article explains the licensing requirements for using Azure Virtual Desktop, whether you're providing desktops or applications to users in your organization, or to external users. This article shows you how licensing Azure Virtual Desktop for external commercial purposes is different from licensing for internal purposes, how per-user access pricing works in detail, and how you can license other products you plan to use with Azure Virtual Desktop.
+
+## Internal and external commercial purposes
+
+In the context of providing virtualized infrastructure with Azure Virtual Desktop, *internal users* (for internal commercial purposes) refers to people who are members of your own organization, such as employees of a business or students of a school, including external vendors or contractors. *External users* (for external commercial purposes) aren't members of your organization, but rather your customers, to whom you might provide a Software-as-a-Service (SaaS) application using Azure Virtual Desktop.
+
+> [!NOTE]
+> Take care not to confuse external *users* with external *identities*. Azure Virtual Desktop doesn't support external identities, including guest accounts or business-to-business (B2B) identities. Whether you're serving internal commercial purposes or external users with Azure Virtual Desktop, you'll need to create and manage identities for those users yourself. For more information, see [Recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes](organization-internal-external-commercial-purposes-recommendations.md).
+
+Licensing Azure Virtual Desktop works differently for internal and external commercial purposes. Consider the following examples:
+
+- A manufacturing company called *Fabrikam, Inc*. might use Azure Virtual Desktop to provide Fabrikam's employees (internal users) with access to virtual workstations and line-of-business apps. Because Fabrikam is serving internal users, Fabrikam must purchase one of the eligible licenses listed in [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) for each of their employees that access Azure Virtual Desktop.
+
+- A retail company called *Wingtip Toys* might use Azure Virtual Desktop to provide an external contractor company with access to line-of-business apps. Even though these contractors are external to the organization, they serve Wingtip Toys' internal purposes, so Wingtip Toys must purchase one of the eligible licenses listed in [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) for each of the contractors that access Azure Virtual Desktop. Per-user access pricing isn't applicable in this scenario.
+
+- A software vendor called *Contoso* might use Azure Virtual Desktop to sell remote access of Contoso's productivity app to Contoso's customers (external users). Because Contoso is serving external users for external commercial purposes, Contoso must enroll in Azure Virtual Desktop's per-user access pricing. This enables Contoso to pay for Azure Virtual Desktop access rights on behalf of those external users who connect to Contoso's deployment. The users don't need a separate license like Microsoft 365 to access Azure Virtual Desktop. Contoso still needs to create and manage identities for those external users.
+
+> [!IMPORTANT]
+> Per-user access pricing can only be used for external commercial purposes, not internal purposes. Per-user access pricing isn't a way to enable guest user accounts with Azure Virtual Desktop. Check if your Azure Virtual Desktop solution is applicable for per-user access pricing by reviewing [our licensing documentation](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#Documents).
+
+## Eligible licenses for internal commercial purposes to use Azure Virtual Desktop
+
+If you're providing Azure Virtual Desktop access for internal commercial purposes, you must purchase one of the following eligible licenses for each user that accesses Azure Virtual Desktop. The license you need also depends on whether you're using a Windows client operating system or a Windows Server operating system for your session hosts.
++
+## Per-user access pricing for external commercial purposes to use Azure Virtual Desktop
+
+Per-user access pricing lets you pay for Azure Virtual Desktop access rights for external commercial purposes. You must enroll in per-user access pricing to build a compliant deployment for external users.
+
+You pay for per-user access pricing through your enrolled Azure subscription or subscriptions on top of your charges for virtual machines, storage, and other Azure services. Each billing cycle, you only pay for users who actually used the service. Only users that connect at least once in that month to Azure Virtual Desktop incur an access charge.
+
+There are two price tiers for Azure Virtual Desktop per-user access pricing. Charges are determined automatically each billing cycle based on the type of [application groups](terminology.md#application-groups) a user connected to. Each price tier has flat per-user access charges. For example, a user incurs the same charge to your subscription no matter when or how many hours they used the service during that billing cycle. If a user doesn't access a RemoteApp or desktop, then there's no charge.
+
+| Price tier | Description |
+|--|--|
+| *Apps* | A flat price is charged for each user who accesses at least one published RemoteApp, but doesn't access a published full desktop. |
+| *Desktops + apps* | A flat price is charged for each user who accesses at least one published full desktop. The user can also access published applications. |
+
+For more information about prices, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+
+> [!IMPORTANT]
+> Azure Virtual Desktop also issues access charges for users who have separate assigned licenses that otherwise entitle them to Azure Virtual Desktop access. If you have internal users you're purchasing eligible licenses for, we recommend you give them access to Azure Virtual Desktop through a separate subscription that isn't enrolled in per-user access pricing, to avoid effectively paying twice for those users.
+
+Azure Virtual Desktop issues at most one access charge for a given user in a given billing period. For example, if you grant the user Alice access to Azure Virtual Desktop resources across two different Azure subscriptions in the same tenant, only the first subscription accessed by Alice incurs a usage charge.
+
+To learn how to enroll an Azure subscription for per-user access pricing, see [Enroll in per-user access pricing](enroll-per-user-access-pricing.md).
+
+### Licensing other products and services for use with Azure Virtual Desktop
+
+The Azure Virtual Desktop per-user access license isn't a full replacement for a Windows or Microsoft 365 license. Per-user licenses only grant access rights to Azure Virtual Desktop and don't include Microsoft Office, Microsoft Defender XDR, or Universal Print. This means that if you choose a per-user license, you need to separately license other products and services to grant your users access to them in your Azure Virtual Desktop environment.
+
+There are a few ways to enable your external users to access Office:
+
+- Users can sign in to Office with their own Office account.
+- You can resell Office through your Cloud Service Provider (CSP).
+- You can distribute Office by using a Service Provider Licensing Agreement (SPLA).
+
+## Comparing licensing options
+
+Here's a summary of the two types of licenses for Azure Virtual Desktop you can choose from:
+
+| Component | Eligible Windows, Microsoft 365, or RDS license | Per-user access pricing |
+|--|--|--|
+| Access rights | Internal purposes only. It doesn't grant permission for external commercial purposes, not even for identities you create in your own Microsoft Entra tenant. | External commercial purposes only. It doesn't grant access to members of your own organization or contractors for internal business purposes. |
+| Billing | Licensing channels. | Pay-as-you-go through an Azure meter, billed to an Azure subscription. |
+| User behavior | Fixed cost per user each month regardless of user behavior. | Cost per user each month depends on user behavior. |
+| Other products | Dependent on the license. | Only includes access rights to Azure Virtual Desktop and [FSLogix](/fslogix/overview-what-is-fslogix).<br /><br />Per-user access pricing only supports Windows Enterprise and Windows Enterprise multi-session client operating systems for session hosts. Windows Server isn't supported with per-user access pricing. |
+
+## Next steps
+
+Now that you're familiar with your licensing and pricing options, you can start planning your Azure Virtual Desktop environment. Here are some articles that might help you:
+
+- [Enroll in per-user access pricing](enroll-per-user-access-pricing.md)
+- [Understand and estimate costs for Azure Virtual Desktop](understand-estimate-costs.md)
virtual-desktop Organization Internal External Commercial Purposes Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/organization-internal-external-commercial-purposes-recommendations.md
+
+ Title: Recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes
+description: Learn about recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes, such as for your organization's workers, or for delivering software-as-a-service applications.
+++ Last updated : 07/14/2021++
+# Recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes
+
+You can tailor an Azure Virtual Desktop deployment to your requirements, which depend on many factors, such as your end users and the existing infrastructure of the organization deploying the service. How do you make sure you meet your organization's needs?
+
+This article provides guidance for your Azure Virtual Desktop deployment structure. The examples listed in this article aren't the only possible ways you can deploy Azure Virtual Desktop. However, we do cover two of the most basic types of deployments for internal or external commercial purposes.
+
+## Deploying Azure Virtual Desktop for internal purposes
+
+If you're making an Azure Virtual Desktop deployment for users inside your organization, you can host all your users and resources in the same Azure tenant. You can also use Azure Virtual Desktop's currently supported identity management methods to keep your users secure.
+
+These components are the most basic requirements for an Azure Virtual Desktop deployment that can serve desktops and applications to users within your organization:
+
+- One host pool to host user sessions
+- One Azure subscription to host the host pool
+- One Azure tenant to be the owning tenant for the subscription and identity management
+
+However, you can also deploy Azure Virtual Desktop with multiple host pools that offer different applications to different groups of users.
+
+Some customers choose to create separate Azure subscriptions to store each Azure Virtual Desktop deployment in. This practice lets you distinguish the costs of each deployment based on the sub-organizations they provide resources to. Others choose to use Azure billing scopes to distinguish costs at a more granular level. To learn more, see [Understand and work with scopes](../cost-management-billing/costs/understand-work-scopes.md).
+
+Licensing Azure Virtual Desktop works differently for internal and external commercial purposes. If you're providing Azure Virtual Desktop access for internal commercial purposes, you must purchase an eligible license for each user that accesses Azure Virtual Desktop. You can't use per-user access pricing for internal commercial purposes. To learn more about the different licensing options, see [License Azure Virtual Desktop](licensing.md).
+
+## Deploying Azure Virtual Desktop for external purposes
+
+If your Azure Virtual Desktop deployment serves end-users outside your organization, especially users that don't typically use Windows or don't have access to your organization's internal resources, you need to consider extra security recommendations.
+
+Azure Virtual Desktop doesn't currently support external identities, including business-to-business (B2B) or business-to-consumer (B2C) users. You need to create and manage these identities manually and provide the credentials to your users yourself. Users then use these identities to access resources in Azure Virtual Desktop.
+
+To provide a secure solution to your customers, Microsoft strongly recommends creating a Microsoft Entra tenant and subscription for each customer with their own dedicated Active Directory. This separation means you have to create a separate Azure Virtual Desktop deployment for each organization that's isolated from the other deployments and their resources. The virtual machines that each organization uses shouldn't be able to access the resources of other companies to keep information secure. You can set up these separate deployments by using either a combination of Active Directory Domain Services (AD DS) and Microsoft Entra Connect or by using Microsoft Entra Domain Services.
+
+If you're providing Azure Virtual Desktop access for external commercial purposes, per-user access pricing lets you pay for Azure Virtual Desktop access rights on behalf of external users. You must enroll in per-user access pricing to build a compliant deployment for external users. You pay for per-user access pricing through an Azure subscription. To learn more about the different licensing options, see [License Azure Virtual Desktop](licensing.md).
+
+## Next steps
+
+- To learn more about licensing Azure Virtual Desktop, see [License Azure Virtual Desktop](licensing.md).
+- Learn how to [Enroll in per-user access pricing](enroll-per-user-access-pricing.md).
+- [Understand and estimate costs for Azure Virtual Desktop](understand-estimate-costs.md).
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
Title: What is Azure Virtual Desktop? - Azure
-description: An overview of Azure Virtual Desktop.
+description: Azure Virtual Desktop is a desktop and app virtualization service that runs on Azure. Deliver a full Windows experience with Windows 11 or Windows 10. Offer full desktops or use RemoteApp to deliver individual apps to users.
Previously updated : 08/04/2023 Last updated : 01/04/2024 # What is Azure Virtual Desktop?
-Azure Virtual Desktop is a desktop and app virtualization service that runs on the cloud.
+Azure Virtual Desktop is a desktop and app virtualization service that runs on Azure. Here are some of the key highlights:
-Here's what you can do when you run Azure Virtual Desktop on Azure:
+- Deliver a full Windows experience with Windows 11, Windows 10, or Windows Server. Use single-session to assign devices to a single user, or use multi-session for scalability.
-- Set up a multi-session Windows 11 or Windows 10 deployment that delivers a full Windows experience with scalability
-- Present Microsoft 365 Apps for enterprise and optimize it to run in multi-user virtual scenarios
-- Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
-- Virtualize both desktops and apps
-- Manage desktops and apps from different Windows and Windows Server operating systems with a unified management experience
+- Offer full desktops or use RemoteApp to deliver individual apps.
+
+- Present Microsoft 365 Apps for enterprise and optimize it to run in multi-user virtual scenarios.
+
+- Install your line-of-business or custom apps and run them from anywhere, including apps in Win32, MSIX, and Appx formats.
+
+- Deliver Software-as-a-Service (SaaS) applications for external use.
+
+- Replace existing Remote Desktop Services (RDS) deployments.
+
+- Manage desktops and apps from different Windows and Windows Server operating systems with a unified management experience.
+
+- Host desktops and apps on-premises in a hybrid configuration with Azure Stack HCI.
## Introductory video
You can find more videos about Azure Virtual Desktop from [Microsoft Mechanics](
With Azure Virtual Desktop, you can set up a scalable and flexible environment: - Create a full desktop virtualization environment in your Azure subscription without running any gateway servers.-- Publish host pools as you need to accommodate your diverse workloads.+
+- Flexible configurations to accommodate your diverse workloads.
+ - Bring your own image for production workloads or test from the Azure Gallery.+ - Reduce costs with pooled, multi-session resources. With the new Windows 11 and Windows 10 Enterprise multi-session capability, exclusive to Azure Virtual Desktop, or Windows Server, you can greatly reduce the number of virtual machines and operating system overhead while still providing the same resources to your users.+ - Provide individual ownership through personal (persistent) desktops.-- Use autoscale to automatically increase or decrease capacity based on time of day, specific days of the week, or as demand changes, helping to manage cost.+
+- Automatically increase or decrease capacity with autoscale, based on time of day, specific days of the week, or as demand changes, helping to manage cost.
You can deploy and manage virtual desktops and applications: - Use the Azure portal, Azure CLI, PowerShell and REST API to configure the host pools, create application groups, assign users, and publish resources.+ - Publish a full desktop or individual applications from a single host pool, create individual application groups for different sets of users, or even assign users to multiple application groups to reduce the number of images.+ - As you manage your environment, use built-in delegated access to assign roles and collect diagnostics to understand various configuration or user errors.+ - Use the new diagnostics service to troubleshoot errors.+ - Only manage the image and virtual machines, not the infrastructure. You don't need to personally manage the Remote Desktop roles like you do with Remote Desktop Services, just the virtual machines in your Azure subscription. Connect users: - Once assigned, users can launch any Azure Virtual Desktop client to connect to their published Windows desktops and applications. Connect from any device through either a native application on your device or the Azure Virtual Desktop HTML5 web client.+ - Securely establish users through reverse connections to the service, so you don't need to open any inbound ports. ## Next steps
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Your users need accounts that are in Microsoft Entra ID. If you're also using AD
- If you're using Microsoft Entra ID with AD DS, you need to configure [Microsoft Entra Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to synchronize user identity data between AD DS and Microsoft Entra ID. - If you're using Microsoft Entra ID with Microsoft Entra Domain Services, user accounts are synchronized one way from Microsoft Entra ID to Microsoft Entra Domain Services. This synchronization process is automatic.
+> [!IMPORTANT]
+> The user account must exist in the Microsoft Entra tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts.
+>
+> When using hybrid identities, either the UserPrincipalName (UPN) or the Security Identifier (SID) must match across Active Directory Domain Services and Microsoft Entra ID. For more information, see [Supported identities and authentication methods](authentication.md#hybrid-identity).
+
### Supported identity scenarios

The following table summarizes identity scenarios that Azure Virtual Desktop currently supports:
The following table summarizes identity scenarios that Azure Virtual Desktop cur
| Microsoft Entra ID + Microsoft Entra Domain Services | Joined to Microsoft Entra ID | In Microsoft Entra ID and Microsoft Entra Domain Services, synchronized| | Microsoft Entra-only | Joined to Microsoft Entra ID | In Microsoft Entra ID |
+For more detailed information about supported identity scenarios, including single sign-on and multifactor authentication, see [Supported identities and authentication methods](authentication.md).
+
+### FSLogix Profile Container
+
To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial) when joining your session hosts to Microsoft Entra ID, you need to [store profiles on Azure Files](create-profile-container-azure-ad.md) or [Azure NetApp Files](create-fslogix-profile-container.md) and your user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md). You must create these accounts in AD DS and synchronize them to Microsoft Entra ID. To learn more about deploying FSLogix Profile Container with different identity scenarios, see the following articles:

- [Set up FSLogix Profile Container with Azure Files and Active Directory Domain Services or Microsoft Entra Domain Services](fslogix-profile-container-configure-azure-files-active-directory.md)
- [Set up FSLogix Profile Container with Azure Files and Microsoft Entra ID](create-profile-container-azure-ad.md)
- [Set up FSLogix Profile Container with Azure NetApp Files](create-fslogix-profile-container.md)
-> [!IMPORTANT]
-> The user account must exist in the Microsoft Entra tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts.
->
-> When using hybrid identities, either the UserPrincipalName (UPN) or the Security Identifier (SID) must match across Active Directory Domain Services and Microsoft Entra ID. For more information, see [Supported identities and authentication methods](authentication.md#hybrid-identity).
- ### Deployment parameters You need to enter the following identity parameters when deploying session hosts:
You need to enter the following identity parameters when deploying session hosts
You have a choice of operating systems (OS) that you can use for session hosts to provide desktops and applications. You can use different operating systems with different host pools to provide flexibility to your users. We support the following 64-bit versions of these operating systems, where supported versions and dates are inline with the [Microsoft Lifecycle Policy](/lifecycle/).
-|Operating system |User access rights|
-|||
-|<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) by enrolling an Azure subscription instead of license entitlement.</li></ul>|
-|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing isn't available for Windows Server operating systems.|
+
+To learn more about licenses you can use, including per-user access pricing, see [Licensing Azure Virtual Desktop](licensing.md).
> [!IMPORTANT] > - The following items are not supported:
You have a choice of operating systems (OS) that you can use for session hosts t
> - [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md). > > - Support for Windows 7 ended on January 10, 2023.
-> - Support for Windows Server 2012 R2 ended on October 10, 2023. For more information, view [SQL Server 2012 and Windows Server 2012/2012 R2 end of support](/lifecycle/announcements/sql-server-2012-windows-server-2012-2012-r2-end-of-support).
+> - Support for Windows Server 2012 R2 ended on October 10, 2023.
For Azure, you can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or create your own custom images stored in an Azure Compute Gallery or as a managed image. Using custom image templates for Azure Virtual Desktop enables you to easily create a custom image that you can use when deploying session host virtual machines (VMs). To learn more about how to create custom images, see:
virtual-desktop Publish Applications Stream Remoteapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/publish-applications-stream-remoteapp.md
+
+ Title: Publish applications with RemoteApp in Azure Virtual Desktop - Azure
+description: How to publish applications with RemoteApp in Azure Virtual Desktop using the Azure portal and Azure PowerShell.
+++ Last updated : 12/08/2023+++
+# Publish applications with RemoteApp in Azure Virtual Desktop
+
+There are two ways to make applications available to users in Azure Virtual Desktop: as part of a full desktop or as individual applications with RemoteApp. You publish applications by adding them to an application group, which is associated with a host pool and workspace, and assigned to users. For more information about application groups, see [Terminology](terminology.md#application-groups).
+
+You publish applications in the following scenarios:
+
+- For *RemoteApp* application groups, you publish applications that are installed locally on session hosts or delivered dynamically using *app attach* and *MSIX app attach*. These applications are streamed remotely and presented to users as individual applications in one of the [supported Remote Desktop clients](users/remote-desktop-clients-overview.md).
+
+- For *desktop* application groups, you can only publish a full desktop. Applications in MSIX packages delivered using *MSIX app attach* also appear in the user's Start menu in a desktop session. If you use *app attach*, applications aren't added to a desktop application group.
+
+This article shows you how to publish applications that are installed locally with RemoteApp using the Azure portal and Azure PowerShell. You can't publish applications using Azure CLI.
+
+## Prerequisites
+
+# [Portal](#tab/portal)
+
+In order to publish an application to a RemoteApp application group, you need the following things:
+
+- An Azure account with an active subscription.
+
+- An existing [host pool](create-host-pool.md) with [session hosts](add-session-hosts-host-pool.md), a [RemoteApp application group, and a workspace](create-application-group-workspace.md).
+
+- At least one session host is powered on in the host pool the application group is assigned to.
+
+- The applications you want to publish are installed on the session hosts in the host pool the application group is assigned to. If you're using app attach, you must add and assign an MSIX package to your host pool before you start. For more information, see [Add and manage app attach applications](app-attach-setup.md).
+
+- As a minimum, the Azure account you use must have the [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) built-in role-based access control (RBAC) roles on the resource group, or on the subscription to create the resources.
+
+# [Azure PowerShell](#tab/powershell)
+
+In order to publish an application to a RemoteApp application group, you need the following things:
+
+- An Azure account with an active subscription.
+
+- An existing [host pool](create-host-pool.md) with [session hosts](add-session-hosts-host-pool.md), a [RemoteApp application group, and a workspace](create-application-group-workspace.md).
+
+- At least one session host is powered on in the host pool the application group is assigned to.
+
+- The applications you want to publish are installed on the session hosts in the host pool the application group is assigned to. If you're using app attach, you must add and assign an MSIX package to your host pool. For more information, see [Add and manage app attach applications](app-attach-setup.md).
+
+- As a minimum, the Azure account you use must have the [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) built-in role-based access control (RBAC) roles on the resource group, or on the subscription to create the resources.
+
+- If you want to publish an app attach application, you need to use version 4.2.0 or later of the *Az.DesktopVirtualization* PowerShell module, which contains the cmdlets that support app attach. You can download and install the Az.DesktopVirtualization PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/).
+
+- If you want to publish an application from the Microsoft Store, you also need the [Appx](/powershell/module/appx) module, which is part of Windows.
+++
+## Add applications to a RemoteApp application group
+
+To add applications to a RemoteApp application group, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to add applications to a RemoteApp application group using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Application groups**, then select the RemoteApp application group you want to add an application to.
+
+1. Select **Applications**, then select **+ Add**. Make sure you have at least one session host powered on in the host pool the application group is assigned to.
+
+1. On the **Basics** tab, from the **Application source** drop-down list, select **App Attach**, **Start menu**, or **File path**. The remaining fields change depending on the application source you select.
+
+ - For **App Attach**, complete the following information. Your MSIX package must already be [added and assigned to your host pool](app-attach-setup.md).
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Package | Select a package available for the host pool from the drop-down list. Regional packages are from *app attach* and host pool packages are from *MSIX app attach*. |
+ | Application | Select an application from the drop-down list. |
+ | Application identifier | Enter a unique identifier for the application. |
+ | Display name | Enter a friendly name for the application that is displayed to users. |
+ | Description | Enter a description for the application. |
+
+ - For **Start menu**, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Application | Select an application from the drop-down list. |
+ | Display name | Enter a friendly name for the application that is displayed to users. |
+ | Description | Enter a description for the application. |
+ | Application path | Review the file path to the `.exe` file for the application and change it if necessary. |
+ | Require command line | Select if you need to add a specific command to run when the application launches. If you select **Yes**, enter the command in the **Command line** field. |
+
+ - For **File path**, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Application path | Enter the file path to the `.exe` file for the application. |
+ | Application identifier | Enter a unique identifier for the application. |
+ | Display name | Enter a friendly name for the application that is displayed to users. |
+ | Description | Enter a description for the application. |
+ | Require command line | Select if you need to add a specific command to run when the application launches. If you select **Yes**, enter the command in the **Command line** field. |
+
+ Once you've completed this tab, select **Next**.
+
+1. On the **Icon** tab, the options you see depend on the application source you selected on the **Basics** tab. With **app attach** you can use a UNC path, but for **Start Menu** and **File path** you can only use a local path.
+
+ - If you selected **App Attach**, select **Default** to use the default icon for the application, or select **File path** to use a custom icon.
+
+ For **File path**, select one of the following options:
+
+ - **Browse Azure Files** to use an icon from an Azure file share. Select **Select a storage account** and select the storage account containing your icon file, then select **Select icon file**. Browse to the file share and directory your icon is in, check the box next to the icon you want to add, for example `MyApp.ico`, then select **Select**. You can also use a `.png` file. For **Icon index**, specify the index number for the icon you want to use. This is usually **0**.
+
+ - **UNC file path** to use an icon from a file share. For **Icon path**, enter the UNC path to your icon file, for example `\\MyFileShare\MyApp.ico`. You can also use a `.png` file. For **Icon index**, specify the index number for the icon you want to use. This is usually **0**.
+
+ - If you selected **Start menu** or **File path**, for **Icon path**, enter a local path to the `.exe` file or your icon file, for example `C:\Program Files\MyApp\MyApp.exe`. For **Icon index**, specify the index number for the icon you want to use. This is usually **0**.
+
+ Once you've completed this tab, select **Review + add**.
+
+1. On the **Review + add** tab, ensure validation passes and review the information that is used to add the application, then select **Add** to add the application to the RemoteApp application group.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to add applications to a RemoteApp application group using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to replace the `<placeholder>` values with your own.
++
+2. Add an application to a RemoteApp application group by running the commands in one of the following examples.
+
+ - To add an application from the **Windows Start menu** of your session hosts, run the following commands. This example publishes WordPad with its default icon and has no command line parameters.
+
+ ```azurepowershell
+ # List the available applications in the start menu
+ $parameters = @{
+ ApplicationGroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ Get-AzWvdStartMenuItem @parameters | Select-Object Name, AppAlias
+ ```
+
+ Use the `AppAlias` value from the previous command for the application you want to publish:
+
+ ```azurepowershell
+ # Use the value from AppAlias in the previous command
+ $parameters = @{
+ Name = 'WordPad'
+ AppAlias = 'wordpad'
+ GroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ CommandLineSetting = 'DoNotAllow'
+ }
+
+ New-AzWvdApplication @parameters
+ ```
+
+ - To add an application by specifying a **file path** on your session hosts, run the following commands. This example specifies Microsoft Excel with a different icon index, and adds a command line parameter.
+
+ ```azurepowershell
+ $parameters = @{
+ Name = 'Microsoft Excel'
+ FilePath = 'C:\Program Files\Microsoft Office\root\Office16\EXCEL.EXE'
+ GroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ IconPath = 'C:\Program Files\Microsoft Office\root\Office16\EXCEL.EXE'
+ IconIndex = '1'
+ CommandLineSetting = 'Require'
+ CommandLineArgument = '/t http://MySite/finance-template.xltx'
+ ShowInPortal = $true
+ }
+
+ New-AzWvdApplication @parameters
+ ```
+
+ - To add an MSIX or Appx application from *MSIX app attach* or *app attach (preview)*, your MSIX package must already be [added and assigned to your host pool](app-attach-setup.md). Run the commands from one of the following examples:
+
+ - For **MSIX app attach**, get the application details and store them in a variable:
+
+ ```azurepowershell
+ $parameters = @{
+ HostPoolName = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $package = Get-AzWvdMsixPackage @parameters | ? DisplayName -like '*<DisplayName>*'
+
+ Write-Host "These are the application IDs available in the package. Many packages only contain one application." -ForegroundColor Yellow
+ $package.PackageApplication.AppId
+ ```
+
+ Make a note of the application ID you want to publish (for example `App`), then run the following commands to add the application to the RemoteApp application group:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<ApplicationName>'
+ ApplicationType = 'MsixApplication'
+ MsixPackageFamilyName = $package.PackageFamilyName
+ MsixPackageApplicationId = '<ApplicationID>'
+ GroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ CommandLineSetting = 'DoNotAllow'
+ }
+
+ New-AzWvdApplication @parameters
+ ```
+
+ - For **app attach**, get the package and application details and store them in a variable by running the following commands:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<Name>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $package = Get-AzWvdAppAttachPackage @parameters
+
+ Write-Host "These are the application IDs available in the package. Many packages only contain one application." -ForegroundColor Yellow
+ $package.ImagePackageApplication.AppId
+ ```
+
+ Make a note of the application ID you want to publish (for example `App`), then run the following commands to add the application to the RemoteApp application group:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<ApplicationName>'
+ ApplicationType = 'MsixApplication'
+ MsixPackageFamilyName = $package.ImagePackageFamilyName
+ MsixPackageApplicationId = '<ApplicationID>'
+ GroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ CommandLineSetting = 'DoNotAllow'
+ }
+
+ New-AzWvdApplication @parameters
+ ```
+
+1. Verify the list of applications in the application group by running the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ GroupName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ Get-AzWvdApplication @parameters
+ ```
+++
+## Assign applications to users
+
+Applications aren't assigned individually to users unless you're using app attach. Instead, users are assigned to application groups. When a user is assigned to an application group, they can access all the applications in that group. To learn how to assign users to application groups, see [Assign users to an application group](create-application-group-workspace.md#assign-users-to-an-application-group) or [Add and manage app attach applications](app-attach-setup.md?pivots=app-attach).
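+
+As a sketch with Azure PowerShell (assuming the Az.Resources module; the names are placeholders), assigning a user to an application group is a role assignment of the *Desktop Virtualization User* role scoped to the application group:
+
+```azurepowershell
+$parameters = @{
+    SignInName         = '<UserPrincipalName>'
+    RoleDefinitionName = 'Desktop Virtualization User'
+    ResourceName       = '<ApplicationGroupName>'
+    ResourceGroupName  = '<ResourceGroupName>'
+    ResourceType       = 'Microsoft.DesktopVirtualization/applicationGroups'
+}
+
+# Grant the user access to all applications in the application group
+New-AzRoleAssignment @parameters
+```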
+
+## Publish Microsoft Store applications
+
+Applications in the Microsoft Store are updated frequently and often install automatically. The directory path for an application installed from the Microsoft Store includes the version number, which changes each time an application is updated. If an update happens automatically, the path changes and the application is no longer available to users. You can publish applications using the Windows `shell:appsFolder` location in the format `shell:AppsFolder\<PackageFamilyName>!<AppId>`, which doesn't use the `.exe` file or the directory path with the version number. This method ensures that the application location is always correct.
+
+Using `shell:appsFolder` means the application icon isn't picked up automatically from the application. You should provide an icon file on a local drive on each session host in a path that doesn't change, unlike the application installation directory.
+
+Select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to publish a Microsoft Store application using the Windows user interface and the Azure portal:
+
+1. On your session host, open **File Explorer** and go to the path `shell:appsFolder`.
+
+1. Find the application in the list, right-click it, then select **Create a shortcut**.
+
+1. For the shortcut prompt that appears, select **Yes** to place the shortcut on the desktop.
+
+1. View the properties of the shortcut and make a note of the **Target** value. This value is the package family name and application ID you need to publish the application.
+
+1. Follow the steps in the section [Add applications to a RemoteApp application group](#add-applications-to-a-remoteapp-application-group) for publishing an application based on **File path**. For the parameter **Application path**, use the value from the **Target** field of the shortcut you created, then specify the icon path as your local icon file.
+
+# [Azure PowerShell](#tab/powershell)
+
+> [!IMPORTANT]
+> In the following examples, you'll need to replace the `<placeholder>` values with your own values.
++
+1. Connect to a session host and open PowerShell as an administrator, then get a list of installed applications from the Microsoft Store by running the following command:
+
+ ```azurepowershell
+ Get-AppxPackage -AllUsers | Sort-Object Name | Select-Object Name, PackageFamilyName
+ ```
+
+2. Make a note of the value for `PackageFamilyName`, then run the following commands to get the `AppId` value:
+
+ ```azurepowershell
+ $packageFamilyName = '<PackageFamilyName>'
+
+    (Get-AppxPackage -AllUsers | Where-Object PackageFamilyName -eq $packageFamilyName | Get-AppxPackageManifest).Package.Applications.Application.Id
+ ```
+
+3. Combine the values of `PackageFamilyName` and `AppId` with an exclamation mark (`!`) between them, then use the result with the `FilePath` parameter to add the application to a RemoteApp application group by running the following commands. In this example, *Microsoft Paint* from the Microsoft Store is added:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = 'Microsoft Paint'
+ ResourceGroupName = '<ResourceGroupName>'
+ ApplicationGroupName = '<ApplicationGroupName>'
+ FilePath = 'shell:appsFolder\Microsoft.Paint_8wekyb3d8bbwe!App'
+ CommandLineSetting = 'DoNotAllow'
+ IconPath = 'C:\Icons\Microsoft Paint.png'
+ IconIndex = '0'
+ ShowInPortal = $true
+ }
+
+ New-AzWvdApplication @parameters
+ ```
+
+ The output should be similar to the following output:
+
+ ```output
+ Name
+    ----
+ myappgroup/Microsoft Paint
+ ```
+++
+## Publish Windows Sandbox
+
+[Windows Sandbox](/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview) provides a lightweight desktop environment to safely run applications in isolation. You can use Windows Sandbox with Azure Virtual Desktop in a desktop or RemoteApp session.
+
+Your session hosts need to use a virtual machine (VM) size that supports [nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization). To check if a VM series supports nested virtualization, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md), go to the relevant article for the series of the VM, and check the list of supported features.
+
+1. To install Windows Sandbox on your session hosts, follow the steps in [Windows Sandbox overview](/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview). We recommend you install Windows Sandbox in a custom image you can use when creating your session hosts.
+
+1. Once you've installed Windows Sandbox on your session hosts, it's available in a desktop session. If you also want to publish it as a RemoteApp, follow the steps to [Add applications to a RemoteApp application group](#add-applications-to-a-remoteapp-application-group) and use the file path `C:\Windows\System32\WindowsSandbox.exe`.
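+
+    If you prefer Azure PowerShell, here's a minimal sketch that follows the same pattern as the earlier examples; the `<placeholder>` values are assumptions you need to replace:
+
+    ```azurepowershell
+    # Publish Windows Sandbox as a RemoteApp using its fixed file path.
+    $parameters = @{
+        Name                 = 'Windows Sandbox'
+        ResourceGroupName    = '<ResourceGroupName>'
+        ApplicationGroupName = '<ApplicationGroupName>'
+        FilePath             = 'C:\Windows\System32\WindowsSandbox.exe'
+        CommandLineSetting   = 'DoNotAllow'
+        ShowInPortal         = $true
+    }
+
+    New-AzWvdApplication @parameters
+    ```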
+
+## Next steps
+
+- Learn how to [Add and manage app attach applications](app-attach-setup.md).
+
+- Learn about how to [customize the feed](customize-feed-for-virtual-desktop-users.md) so resources appear in a recognizable way for your users.
+
+- If you encounter issues with your applications running in Azure Virtual Desktop, App Assure is a service from Microsoft designed to help you resolve them at no extra cost. For more information, see [App Assure](/microsoft-365/fasttrack/windows-and-other-services#app-assure).
virtual-desktop Architecture Recs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/architecture-recs.md
- Title: Azure Virtual Desktop architecture recommendations - Azure
-description: Architecture recommendations for Azure Virtual Desktop for app developers.
----- Previously updated : 07/14/2021----
-# Architecture recommendations
-
-Azure Virtual Desktop deployments come in many different shapes and sizes depending on factors like end-user needs and the existing infrastructure of the organization deploying the service. How do you make sure you're using the right architecture for your organization's needs?
-
-This article will provide guidance for your Azure Virtual Desktop deployment structure. The examples listed in this article aren't the only possible ways you can deploy Azure Virtual Desktop. However, we do cover two of the most basic types of deployments for users inside and outside of your organization.
-
-## Deploying Azure Virtual Desktop for users within your organization
-
-If you're making an Azure Virtual Desktop deployment for users inside your organization, you can host all your users and resources in the same Azure tenant. You can also use Azure Virtual Desktop's currently supported identity management methods to keep your users secure.
-
-These are the most basic requirements for an Azure Virtual Desktop deployment that can serve desktops and applications to users within your organization:
-
-- One host pool to host user sessions
-- One Azure subscription to host the host pool
-- One Azure tenant to be the owning tenant for the subscription and identity management
-
-However, you can also build a deployment with multiple host pools that offer different apps to different groups of users.
-
-Some customers choose to create separate Azure subscriptions to store each Azure Virtual Desktop deployment in. This practice lets you distinguish the cost of each deployment from each other based on the sub-organizations they provide resources to. Others choose to use Azure billing scopes to distinguish costs at a more granular level. To learn more, see [Understand and work with scopes](../../cost-management-billing/costs/understand-work-scopes.md).
-
-## Deploying Azure Virtual Desktop for users outside your organization
-
-If your Azure Virtual Desktop deployment will serve end-users outside your organization, especially users that don't typically use Windows or don't have access to your organization's internal resources, you'll need to consider additional security recommendations.
-
-Azure Virtual Desktop doesn't currently support external identities, including business-to-business (B2B) or business-to-client (B2C) users. You'll need to create and manage these identities manually and provide the credentials to your users yourself. Users will then use these identities to access resources in Azure Virtual Desktop.
-
-To provide a secure solution to your customers, Microsoft strongly recommends creating a Microsoft Entra tenant and subscription for each customer with their own dedicated Active Directory. This separation means you'll have to create a separate Azure Virtual Desktop deployment for each organization that's totally isolated from the other deployments and their resources. The virtual machines that each organization uses shouldn't be able to access the resources of other companies to keep information secure. You can set up these separate deployments by using either a combination of Active Directory Domain Services (AD DS) and Microsoft Entra Connect or by using Microsoft Entra Domain Services.
virtual-desktop Custom Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/custom-apps.md
- Title: Azure Virtual Desktop host custom apps - Azure
-description: How to serve custom apps with Azure Virtual Desktop.
----- Previously updated : 07/14/2021---
-# How to host custom apps with Azure Virtual Desktop
-
-Azure Virtual Desktop can host multiple types of Windows applications. We recommend you prepare your apps according to the type of app packages you plan to deploy your apps with. In this article, we'll explain what you need to do for each type of app package.
-
->[!NOTE]
->We recommend you host your apps on a multi-session host. We also recommend that you test your apps to make sure they behave as expected while running on your multi-session host. For example, run a test to see if two or more users on the same session host can successfully run the app at the same time.
-
-## MSIX
-
-MSIX is the recommended package type for custom apps in Azure Virtual Desktop because MSIX packages can take advantage of the service's built-in [MSIX app attach feature](../what-is-app-attach.md). To learn how to repackage existing Win32 applications in the MSIX format, visit [Repackage your existing Win32 applications to the MSIX format](/windows/application-management/msix-app-packaging-tool).
-
-Once you've packaged your app in the MSIX format, you can use Azure Virtual Desktop's MSIX app attach feature to deliver your apps to your customers. Learn how to use MSIX app attach for your apps at [Deploy apps with MSIX app attach](msix-app-attach.md).
-
-## Other options for Win32 applications
-
-You can also offer Win32 applications to your users without repackaging them in MSIX format by using the following options.
-
-### Include the application manually on session hosts
-
-Follow the instructions at [Prepare and customize a master VHD image](../set-up-customize-master-image.md) to include an app as part of the Windows image you use for your virtual machines. More specifically, follow the directions in the [Other applications and registry configuration](../set-up-customize-master-image.md#other-applications-and-registry-configuration) section to install the application for all users.
-
-### Use Microsoft Intune to deploy the application at scale
-
-If you use Microsoft Intune to manage your session hosts, you can deploy applications by following the instructions in [Install apps on Windows 10 devices](/mem/intune/apps/apps-windows-10-app-deploy#install-apps-on-windows-10-devices). Deploy your app in "device context" mode to all session hosts so that all users in your deployment can access the application.
-
-### Manual installation
-
-We don't recommend installing apps manually because it requires repeating the process for each session host. This method is more often used by IT professionals for testing purposes.
-
-If you must install your apps manually, you'll need to remote into your session host with an administrator account after you've set up your Azure Virtual Desktop host pool. After that, install the application like you would on a physical PC. You'll need to repeat this process to install the application on each session host in your host pool.
-
->[!NOTE]
->If the setup process gives you the option to install the application for all users, select that option.
-
-## Microsoft Store applications
-
-We don't recommend using Microsoft Store apps for RemoteApp streaming in Azure Virtual Desktop at this time.
-
-## Next steps
-
-To learn how to package and deploy apps using MSIX app attach, see [Deploy apps with MSIX app attach](msix-app-attach.md).
virtual-desktop Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/identities.md
- Title: Create user accounts for RemoteApp streaming - Azure Virtual Desktop
-description: How to create user accounts for RemoteApp streaming for your customers in Azure Virtual Desktop with Microsoft Entra ID, Microsoft Entra Domain Services, or AD DS.
-- Previously updated : 08/06/2021----
-# Create user accounts for RemoteApp streaming
-
-Because Azure Virtual Desktop doesn't currently support external profiles, or "identities," your users won't be able to access the apps you host with their own corporate credentials. Instead, you'll need to create identities for them in the Active Directory Domain that you'll use for RemoteApp streaming and sync user objects to the associated Microsoft Entra tenant.
-
-In this article, we'll explain how you can manage user identities to provide a secure environment for your customers. We'll also talk about the different parts that make up an identity.
-
-## Requirements
-
-The identities you create need to follow these guidelines:
-
-- Identities must be [hybrid identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which means they exist in both the [Active Directory (AD)](/previous-versions/windows/it-pro/windows-server-2003/cc781408(v=ws.10)) and [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md). You can use either [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services) or [Microsoft Entra Domain Services](https://azure.microsoft.com/services/active-directory-ds) to create these identities. To learn more about each method, see [Compare identity solutions](../../active-directory-domain-services/compare-identity-solutions.md).
-- You should keep users from different organizations in separate Microsoft Entra tenants to prevent security breaches. We recommend creating one Active Directory Domain and Microsoft Entra tenant per customer organization. That tenant should have its own associated Microsoft Entra Domain Services or AD DS subscription dedicated to that customer.
-
-> [!NOTE]
-> If you want to enable [single sign-on (SSO)](../configure-single-sign-on.md) and [Intune management](../management.md), you can do this for Microsoft Entra joined and Microsoft Entra hybrid joined VMs. Azure Virtual Desktop doesn't support SSO and Intune with VMs joined to Microsoft Entra Domain Services.
-
-The following two sections will tell you how to create identities with AD DS and Microsoft Entra Domain Services. To follow [the security guidelines for cross-organizational apps](security.md), you'll need to repeat the process for each customer.
-
-## Create users with Active Directory Domain Services
-
-In this method, you set up Active Directory Domain Controllers to manage user identities and sync them to Microsoft Entra ID, creating hybrid identities that can then access hosted applications in Azure Virtual Desktop. In this configuration, users are synced from Active Directory to Microsoft Entra ID, and the session host VMs are joined to the AD DS domain.
-
-To set up an identity in AD DS:
-
-1. [Create a Microsoft Entra tenant](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md) and a subscription for your customer.
-
-2. [Install Active Directory Domain Services](/windows-server/identity/ad-ds/deploy/install-active-directory-domain-services--level-100-) on the Windows Server virtual machine (VM) you're using for the customer.
-
-3. Install and configure [Microsoft Entra Connect](../../active-directory/hybrid/how-to-connect-install-roadmap.md) on a separate domain-joined VM to sync the user accounts from Active Directory to Microsoft Entra ID.
-
-4. If you plan to manage the VMs using Intune, enable [Microsoft Entra hybrid joined devices](../../active-directory/devices/hybrid-join-plan.md) with Microsoft Entra Connect.
-
-5. Once you've configured the environment, [create new users](/previous-versions/windows/it-pro/windows-server-2003/cc755607(v=ws.10)) in the Active Directory. These users should automatically be synced with Microsoft Entra ID.
-
-6. When deploying session hosts in your host pool, use the Active Directory domain name to join the VMs and ensure the session hosts have line-of-sight to the domain controller.
-
-This configuration will give you more control over your environment, but its complexity can make it harder to manage. However, this option lets you provide your users with Microsoft Entra ID-based apps. It also lets you manage your users' VMs with Intune.
-
-<a name='create-users-with-azure-active-directory-domain-services'></a>
-
-## Create users with Microsoft Entra Domain Services
-
-Microsoft Entra Domain Services identities are stored in a Microsoft-managed Active Directory platform as a service (PaaS), where Microsoft manages two AD domain controllers that let you use AD DS within your Azure subscription. In this configuration, users are synced from Microsoft Entra ID to Microsoft Entra Domain Services, and the session hosts are joined to the Microsoft Entra Domain Services domain. Microsoft Entra Domain Services identities are easier to manage, but don't offer as much control as regular AD DS identities. You can only join the Azure Virtual Desktop VMs to the Microsoft Entra Domain Services domain, and you can't manage them with Intune.
-
-To create an identity with Microsoft Entra Domain Services:
-
-1. [Create a Microsoft Entra tenant](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md) and subscription for your customer.
-
-2. [Deploy Microsoft Entra Domain Services](../../active-directory-domain-services/tutorial-create-instance.md) in the customer's subscription.
-
-3. Once you've finished configuring the environment, [create new users](../../active-directory/fundamentals/add-users-azure-active-directory.md) in Microsoft Entra ID. These user objects will automatically sync with Microsoft Entra Domain Services.
-
-4. When deploying session hosts in a host pool, use the Microsoft Entra Domain Services domain name to join the VMs.
-
-## Next steps
-
-If you'd like to learn more about security considerations for setting up identities and tenants, see the [Security guidelines for cross-organizational apps](security.md).
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
- Title: Understanding Azure Virtual Desktop per-user access pricing for RemoteApp streaming - Azure
-description: An overview of Azure Virtual Desktop licensing considerations for RemoteApp streaming.
---- Previously updated : 12/01/2022---
-# Understanding licensing and per-user access pricing
-
-This article explains the licensing requirements for using Azure Virtual Desktop to stream applications remotely to external users. In this article, you'll learn how licensing Azure Virtual Desktop for external commercial purposes is different than for internal purposes, how per-user access pricing works in detail, and how to license other products you plan to use with Azure Virtual Desktop.
-
-## Internal and external purposes
-
-In the context of providing virtualized infrastructure with Azure Virtual Desktop, *internal users* refers to people who are members of your own organization, such as employees of a business or students of a school. *External users* aren't members of your organization, such as customers of a business.
-
->[!NOTE]
->Take care not to confuse external *users* with external *identities*. Azure Virtual Desktop doesn't currently support external identities, including guest accounts or business-to-business (B2B) identities. Whether you're serving internal users or external users with Azure Virtual Desktop, you'll need to create and manage identities for those users yourself. Per-user access pricing is not a way to enable guest user accounts with Azure Virtual Desktop. For more information, see [Architecture recommendations](architecture-recs.md).
-
-Licensing Azure Virtual Desktop works differently for internal and external commercial purposes. Consider the following examples:
-
-- A manufacturing company called Fabrikam, Inc. might use Azure Virtual Desktop to provide Fabrikam's employees (internal users) with access to virtual workstations and line-of-business apps. Because Fabrikam is serving internal users, Fabrikam must purchase one of the eligible licenses listed in [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) for each of their employees that will access Azure Virtual Desktop.
-
-- A retail company called Wingtip Toys might use Azure Virtual Desktop to provide an external contractor company (external users) with access to line-of-business apps. Because these external users are serving internal purposes, Wingtip Toys must purchase one of the eligible licenses listed in [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) for each of their contractors that will access Azure Virtual Desktop. Per-user access pricing is not applicable in this scenario.
-
-- A software vendor called Contoso might use Azure Virtual Desktop to sell remote streams of Contoso's productivity app to Contoso's customers (external users). Because Contoso is serving external users for external commercial purposes, Contoso must enroll in Azure Virtual Desktop's per-user access pricing. This enables Contoso to pay for Azure Virtual Desktop access rights on behalf of those external users who connect to Contoso's deployment. The users don't need a separate license like Microsoft 365 to access Azure Virtual Desktop. Contoso still needs to create and manage identities for those external users.
-
-> [!IMPORTANT]
-> Per-user access pricing can only be used for external commercial purposes, not internal purposes. Check if your Azure Virtual Desktop solution is compatible with per-user access pricing by reviewing [our licensing documentation](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/EAEAS#Documents).
-
-## Per-user access pricing for Azure Virtual Desktop
-
-Per-user access pricing lets you pay for Azure Virtual Desktop access rights on behalf of external users. You must enroll in per-user access pricing to build a compliant deployment for external users. To learn more, see [Enroll in per-user access pricing](per-user-access-pricing.md).
-
-You pay for per-user access pricing through your enrolled Azure subscription or subscriptions on top of your charges for virtual machines, storage, and other Azure services. Each billing cycle, you only pay for users who actually used the service. Only users that connect at least once to Azure Virtual Desktop that month incur an access charge.
-
-There are two price tiers for Azure Virtual Desktop per-user access pricing. Charges are determined automatically each billing cycle based on the type of [application groups](../environment-setup.md#app-groups) a user connected to.
-
-- The first price tier is called "Apps." This flat price is charged for each user who accesses at least one RemoteApp application group and zero Desktop application groups.
-- The second tier is "Apps + Desktops." This flat price is charged for each user who accesses at least one Desktop application group.
-- If a user doesn't access any application groups, then there's no charge.
-
-For more information about prices, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
-
-Each price tier has flat per-user access charges. For example, a user incurs the same charge to your subscription no matter when or how many hours they used the service during that billing cycle.
-
-> [!IMPORTANT]
-> Per-user access pricing also issues charges for users who have separate assigned licenses that otherwise entitle them to Azure Virtual Desktop access. If you have internal users you're purchasing eligible licenses for, we recommend you give them access to Azure Virtual Desktop through a separate subscription that isn't enrolled in per-user access pricing to avoid effectively paying twice for those users.
-
-Azure Virtual Desktop will issue at most one access charge for a given user in a given billing period. For example, if your deployment grants user Alice access to Azure Virtual Desktop resources across two different Azure subscriptions in the same tenant, only the first subscription accessed by Alice will incur a usage charge.
-
-## Comparing licensing options
-
-Here's a summary of the two types of licenses for Azure Virtual Desktop you can choose from:
-
-- An eligible Windows or Microsoft 365 license:
-  - Grants Azure Virtual Desktop access rights for *internal purposes* only. It doesn't grant permission for external commercial purposes, not even for identities you create in your own Microsoft Entra tenant.
-  - Paid in advance through a subscription
-  - Same cost per user each month regardless of user behavior
-  - Includes entitlements to some other Microsoft products and services
-
-- Per-user access pricing:
-  - Grants Azure Virtual Desktop access rights for *external commercial purposes* only. It doesn't grant access to members of your own organization or contractors for internal business purposes.
-  - Pay-as-you-go through an Azure meter
-  - Cost per user each month depends on user behavior
-  - Only includes access rights to Azure Virtual Desktop
-  - Includes use rights for [FSLogix](/fslogix/overview-what-is-fslogix)
-
-> [!IMPORTANT]
-> Per-user access pricing only supports Windows Enterprise and Windows Enterprise multi-session client operating systems for session hosts. Windows Server session hosts are not supported with per-user access pricing.
-
-## Licensing other products and services for use with Azure Virtual Desktop
-
-The Azure Virtual Desktop per-user access license isn't a full replacement for a Windows or Microsoft 365 license. Per-user licenses only grant access rights to Azure Virtual Desktop and don't include Microsoft Office, Microsoft Defender XDR, or Universal Print. This means that if you choose a per-user license, you'll need to separately license other products and services to grant your users access to them in your Azure Virtual Desktop environment.
-
-There are a few ways to enable your external users to access Office:
-
-- Users can sign in to Office with their own Office account.
-- You can re-sell Office through your Cloud Service Provider (CSP).
-- You can distribute Office by using a Service Provider Licensing Agreement (SPLA).
-
-## Next steps
-
-Now that you're familiar with your licensing pricing options, you can start planning your Azure Virtual Desktop environment. Here are some articles that might help you:
-
-- [Enroll your subscription in per-user access pricing](per-user-access-pricing.md)
-- [Estimate per-user app streaming costs for Azure Virtual Desktop](streaming-costs.md)
-- If you feel ready to start setting up your first deployment, get started with our [Tutorials](../create-host-pools-azure-marketplace.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json).
virtual-desktop Msix App Attach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/msix-app-attach.md
- Title: Azure Virtual Desktop deploy application MSIX app attach - Azure
-description: How to deploy apps with MSIX app attach for Azure Virtual Desktop.
-- Previously updated : 07/14/2021----
-# Deploy apps with MSIX app attach
-
-This article is a basic outline of how to publish an application in Azure Virtual Desktop with the MSIX app attach feature. In this article, we'll also give you links to resources that can give you more in-depth explanations and instructions.
-
-## What is MSIX app attach?
-
-MSIX app attach is an application layering solution that lets you deliver applications to active user sessions in Azure Virtual Desktop. The MSIX package system separates apps from the operating system, making it easier to build images for virtual machines. MSIX packages also give you greater control over which apps your users can access in their virtual machines. You can even separate apps from the master image and give them to users later.
-
-To learn more, see [What is MSIX app attach?](../what-is-app-attach.md).
-
-## Requirements
-
-You'll need the following things to use MSIX app attach in Azure Virtual Desktop:
-
-- An MSIX-packaged application
-- An MSIX image made from the expanded MSIX application
-- An MSIX share, which is the network location where you store MSIX images
-- At least one healthy and active session host in the host pool you'll use
-- If your MSIX packaged application has a private certificate, that certificate must be available on all session hosts in the host pool
-- Azure Virtual Desktop configuration for MSIX app attach (user assignment, association of MSIX application with application group, adding MSIX image to host pool)
-
-## Create an MSIX package from an existing installer
-
-To start using MSIX app attach, you'll need to put your application inside of an MSIX package. Some apps already come in the MSIX format, but if you're using a legacy installer like MSI, ClickOnce, and so on, you'll need to convert the app into the MSIX package format. Learn how to convert your existing apps into MSIX format at our [MSIX overview article](/windows/msix/packaging-tool/create-an-msix-overview).
-
-## Test the application fidelity of your packaged app
-
-After you've repackaged your application as an MSIX package, you need to make sure your application fidelity is high. App fidelity is the application's behavior and performance before and after repackaging. An app package with high app fidelity has similar performance before and after.
-
-If you find that your app fidelity decreases after repackaging, your organization must test the app to make sure its performance meets user standards. If not, you may have to update your app to prevent the issue or try repackaging again.
-
-## Create an MSIX image
-
-Next, you'll need to create an MSIX image from your packaged app. An MSIX image is the result of expanding an MSIX app package and storing the resulting app in VHD(X) or CIM storage. To learn how to create an MSIX image, see [Create an MSIX image](../app-attach-msixmgr.md#create-an-msix-image).
-
-## Configure an MSIX file share
-
-Next, you'll need to set up an MSIX network share to store MSIX images. Once configured, your session hosts will use the MSIX share to attach MSIX packages to active user sessions, delivering apps to your users. Learn how to set up an MSIX share at [Set up a file share for MSIX app attach](../app-attach-file-share.md).
-
-## Configure MSIX app attach for Azure Virtual Desktop host pool
-
-After you've uploaded an MSIX image to the MSIX share, you'll need to open up the Azure portal and configure the host pool you're going to use to accept MSIX app attach. Learn how to configure your host pool at [Set up MSIX app attach with the Azure portal](../app-attach-azure-portal.md).
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/overview.md
- Title: What is Azure Virtual Desktop RemoteApp streaming? - Azure
-description: An overview of Azure Virtual Desktop RemoteApp streaming.
-- Previously updated : 11/12/2021----
-# What is Azure Virtual Desktop RemoteApp streaming?
-
-Azure Virtual Desktop is a desktop and app virtualization service that runs on the cloud and lets you access your remote desktop anytime, anywhere. However, did you know you can also use Azure Virtual Desktop as a Platform as a Service (PaaS) to provide your organization's apps as Software as a Service (SaaS) to your customers? With Azure Virtual Desktop RemoteApp streaming, you can now use Azure Virtual Desktop to deliver apps to your customers over a secure network through virtual machines.
-
-If you're unfamiliar with Azure Virtual Desktop (or are new to app virtualization in general), we've gathered some resources here that can help you get your deployment up and running.
-
-## Requirements
-
-Before you get started, we recommend you take a look at the [overview for Azure Virtual Desktop](../overview.md) for a more in-depth list of system requirements for running Azure Virtual Desktop. While you're there, you can browse the rest of the Azure Virtual Desktop documentation if you want a more IT-focused look into the service, as most of the articles also apply to RemoteApp streaming for Azure Virtual Desktop. Once you understand the basics, you can use the RemoteApp streaming documentation more effectively.
-
-In order to set up an Azure Virtual Desktop deployment for your custom apps that's available to customers outside your organization, you'll need the following things:
-
-- Your custom app. See [How to host custom apps with Azure Virtual Desktop](custom-apps.md) to learn about the types of apps Azure Virtual Desktop supports and how you can serve them to your customers.
-
-- Your domain join credentials. If you don't already have an identity management system compatible with Azure Virtual Desktop, you'll need to set up identity management for your host pool. To learn more, see [Set up managed identities](identities.md).
-
-- An Azure subscription. If you don't already have a subscription, make sure to [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-## Get started
-
-Now that you're ready, let's take a look at how you can set up your Azure Virtual Desktop deployment. You have two options to set yourself up for success. You can either set up your deployment manually or automatically. The next two sections will describe the differences between these two methods.
-
-### Set up Azure Virtual Desktop manually
-
-You can set up your deployment manually by following these tutorials:
-
-1. [Create a host pool with the Azure portal](../create-host-pools-azure-marketplace.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-2. [Manage application groups](../manage-app-groups.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-3. [Create a host pool to validate service updates](../create-validation-host-pool.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-4. [Set up service alerts](../set-up-service-alerts.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-### Set up Azure Virtual Desktop automatically
-
-If you'd prefer an automatic process, you can use the getting started feature to set up your deployment for you. For more information, check out these articles:
-
-- [Deploy Azure Virtual Desktop with the getting started feature](../getting-started-feature.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (When following these instructions, make sure to follow the instructions for existing Microsoft Entra Domain Services or AD DS. This method gives you better identity management and app compatibility while also giving you the power to fine-tune identity-related infrastructure costs. The method for subscriptions that don't already have Microsoft Entra Domain Services or AD DS doesn't give you these benefits.)
-- [Troubleshoot the getting started feature](../troubleshoot-getting-started.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-## Customize and manage Azure Virtual Desktop
-
-Once you've set up Azure Virtual Desktop, you have lots of options to customize your deployment to meet your organization or customers' needs. These articles can help you get started:
-
-- [How to host custom apps with Azure Virtual Desktop](custom-apps.md)
-- [Enroll your subscription in per-user access pricing](per-user-access-pricing.md)
-- [How to use Microsoft Entra ID](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md)
-- [Using Windows 10 virtual machines with Intune](/mem/intune/fundamentals/windows-10-virtual-machines)
-- [How to deploy an app using MSIX app attach](msix-app-attach.md)
-- [Use Azure Virtual Desktop Insights to monitor your deployment](../insights.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Set up a business continuity and disaster recovery plan](../disaster-recovery.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Scale session hosts using Azure Automation](../set-up-scaling-script.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Set up Universal Print](/universal-print/fundamentals/universal-print-getting-started)
-- [Set up the Start VM on Connect feature](../start-virtual-machine-connect.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Tag Azure Virtual Desktop resources to manage costs](../tag-virtual-desktop-resources.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-## Get to know your Azure Virtual Desktop deployment
-
-Read the following articles to understand concepts essential to creating and managing Azure Virtual Desktop deployments:
-
-- [Understanding licensing and per-user access pricing](licensing.md)
-- [Security guidelines for cross-organizational apps](security.md)
-- [Azure Virtual Desktop security best practices](../security-guide.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Azure Virtual Desktop Insights glossary](../azure-monitor-glossary.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-- [Azure Virtual Desktop for the enterprise](/azure/architecture/example-scenario/wvd/windows-virtual-desktop)
-- [Estimate total deployment costs](total-costs.md)
-- [Estimate per-user app streaming costs](streaming-costs.md)
-- [Architecture recommendations](architecture-recs.md)
-- [Start VM on Connect FAQ](../start-virtual-machine-connect-faq.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
-
-## Next steps
-
-If you're ready to start setting up your deployment manually, head to the following tutorial.
-
-> [!div class="nextstepaction"]
-> [Create a host pool with the Azure portal](../create-host-pools-azure-marketplace.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
virtual-desktop Per User Access Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/per-user-access-pricing.md
- Title: Azure Virtual Desktop enroll per-user access pricing - Azure
-description: How to enroll in per-user access pricing for Azure Virtual Desktop.
----- Previously updated : 11/17/2022----
-# Enroll your subscription in per-user access pricing
-
-Before external users can connect to your deployment, you need to enroll your subscription in per-user access pricing. Per-user access pricing entitles users outside of your organization to access apps and desktops in your subscription using identities that you provide and manage. Your enrolled subscription will be charged each month based on the number of distinct users that connect to Azure Virtual Desktop resources.
-
-> [!IMPORTANT]
-> Per-user access pricing with Azure Virtual Desktop doesn't currently support Citrix DaaS and VMware Horizon Cloud.
-
->[!NOTE]
->Take care not to confuse external *users* with external *identities*. Azure Virtual Desktop doesn't currently support external identities, including guest accounts or business-to-business (B2B) identities. Whether you're serving internal users or external users with Azure Virtual Desktop, you'll need to create and manage identities for those users yourself. Per-user access pricing is not a way to enable guest user accounts with Azure Virtual Desktop. For more information, see [Understanding licensing and per-user access pricing](licensing.md).
-
-## How to enroll
-
-To enroll your Azure subscription into per-user access pricing:
-
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
-
-2. Enter **Azure Virtual Desktop** into the search bar, then find and select **Azure Virtual Desktop** under Services.
-
-3. In the **Azure Virtual Desktop** overview page, select **Per-user access pricing**.
-
-4. In the list of subscriptions, select the subscription where you'll deploy Azure Virtual Desktop resources.
-
-5. Select **Enroll**.
-
-6. Review the Product Terms, then select **Enroll** to begin enrollment. It may take up to an hour for the enrollment process to finish.
-
-7. After enrollment is done, check the value in the **Per-user access pricing** column of the subscriptions list to make sure it's changed from "Enrolling" to "Enrolled."
-
-## Next steps
-
-To learn more about per-user access pricing, see [Understanding licensing and per-user access pricing](licensing.md). If you want to learn how to estimate per-user app streaming costs for your deployment, see [Estimate per-user app streaming costs for Azure Virtual Desktop](streaming-costs.md). For estimating total deployment costs, see [Understanding total Azure Virtual Desktop deployment costs](total-costs.md).
virtual-desktop Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/security.md
- Title: Security guide for cross-organizational apps Azure Virtual Desktop - Azure
-description: A guide for how to keep the apps you host in Azure Virtual Desktop secure across multiple organizations.
-- Previously updated : 07/14/2021-----
-# Security guidelines for cross-organizational apps
-
-When you host or stream apps on Azure Virtual Desktop, you reach a wide variety of users inside and outside of your organization. As a result, it's extremely important to know how to keep your deployment secure so that you can keep your organization and your customers safe. This guide will give you a basic understanding of possible safety concerns and how to address them.
-
-## Shared responsibility
-
-Before Azure Virtual Desktop, on-premises virtualization solutions like Remote Desktop Services required you to deploy and manage roles like Gateway, Broker, Web Access, and so on. These roles had to be fully redundant and able to handle peak capacity. Admins would install these roles as part of the Windows Server OS, and they had to be domain-joined with specific ports accessible to public connections. To keep deployments secure, admins had to constantly make sure everything in the infrastructure was maintained and up-to-date.
-
-Meanwhile, Azure Virtual Desktop manages portions of the services on the customer's behalf. Specifically, Microsoft hosts and manages the infrastructure parts as part of the service. Partners and customers no longer have to manually manage the required infrastructure to let users access session host virtual machines (VMs). The service also has built-in advanced security capabilities like reverse connect, which reduces the risk involved with allowing users to access their remote desktops from anywhere.
-
-To keep the service flexible, the session hosts are hosted in the partner or customers' Azure subscription. This lets customers integrate the service with other Azure services and lets them connect on-premises network infrastructure with ExpressRoute or a virtual private network (VPN).
-
-Like many cloud services, there's a [shared set of security responsibilities](../../security/fundamentals/shared-responsibility.md) between you and Microsoft. When you use Azure Virtual Desktop, it's important to understand that while some parts of the service are secured for you, there are others you'll need to configure yourself according to your organization's security needs.
-
-You'll need to configure security in the following areas:
-
-- End-user devices
-- Application security
-- Session host operating systems
-- Deployment configuration
-- Network controls
-
-For more information about how to configure each of these areas, check out our [security best practices](./../security-guide.md).
-
-## Combined Microsoft security platform
-
-You can protect workloads by using security features and controls from Microsoft 365, Azure, and Azure Virtual Desktop.
-
-When the user connects to the service over the internet, Microsoft Entra authenticates the user's credentials, enabling protective features like [multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) to help greatly reduce the risk of user identities being compromised.
-
-Azure Virtual Desktop has features like [Reverse Connect](../network-connectivity.md#reverse-connect-transport) that allow users to access session hosts without having to open inbound ports. Reverse Connect is designed with scalability and service availability in mind, so it shouldn't limit your ability to expand session hosts. You can also apply additional security by using existing GPOs on Active Directory-joined VMs or, for Windows 10 session hosts in Microsoft Entra join scenarios, by using [Microsoft Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
-
-## Defense in depth
-
-Today's threat landscape requires designs with security approaches in mind. Ideally, you'll want to build a series of security mechanisms and controls layered throughout your computer network to protect your data and network from being compromised or attacked. This type of security design is what the United States Cybersecurity and Infrastructure Security Agency (CISA) calls "defense in depth".
-
-## Security boundaries
-
-Security boundaries separate the code and data of security domains with different levels of trust. For example, there's usually a security boundary between kernel mode and user mode. Most Microsoft software and services depend on multiple security boundaries to isolate devices on networks, VMs, and applications on devices. The following table lists each security boundary for Windows and what they do for overall security.
-
-| Security boundary | What the boundary does |
-|--|--|
-| Network boundary | An unauthorized network endpoint can't access or tamper with code and data on a customer's device. |
-| Kernel boundary | A non-administrative user mode process can't access or tamper with kernel code and data. Administrator-to-kernel is not a security boundary. |
-| Process boundary | An unauthorized user mode process can't access or tamper with the code and data of another process. |
-| AppContainer sandbox boundary | An AppContainer-based sandbox process can't access or tamper with code and data outside of the sandbox based on the container capabilities. |
-| User boundary | A user can't access or tamper with the code and data of another user without being authorized. |
-| Session boundary | A user session can't access or tamper with another user session without being authorized. |
-| Web browser boundary | An unauthorized website can't violate the same-origin policy, nor can it access or tamper with the native code and data of the Microsoft Edge web browser sandbox. |
-| Virtual machine boundary | An unauthorized Hyper-V guest virtual machine can't access or tamper with the code and data of another guest virtual machine; this includes Hyper-V isolated containers. |
-| Virtual Secure Mode (VSM) boundary | Code running outside of the VSM trusted process or enclave can't access or tamper with data and code within the trusted process. |
-
-## Security features
-
-Security features build upon security boundaries to strengthen protection against specific threats. However, in some cases, there can be by-design limitations that prevent the security feature from fully protecting the deployment.
-
-Azure Virtual Desktop supports most security features available in Windows. Security features that rely on hardware features, such as (v)TPM or nested virtualization, might require a separate support statement from the Azure Virtual Desktop team.
-
-To learn more about security feature support and servicing, see our [Microsoft Security Servicing Criteria for Windows](https://www.microsoft.com/msrc/windows-security-servicing-criteria).
-
-## Recommended security boundaries for Azure Virtual Desktop scenarios
-
-You'll also need to make certain choices about security boundaries on a case-by-case basis. For example, if a user in your organization needs local administrative privileges to install apps, you'll need to give them a personal desktop instead of a shared Remote Desktop Session Host (RDSH). We don't recommend giving users local admin privileges in multi-session pooled scenarios because these users can cross security boundaries for sessions or NTFS data permissions, shut down multi-session VMs, or do other things that could interrupt service or cause data losses.
-
-Users from the same organization, like knowledge workers with apps that don't require admin privileges, are great candidates for multi-session Remote Desktop session hosts like Windows 10 Enterprise multi-session. These session hosts reduce costs for your organization because multiple users can share a single VM, with only the overhead costs of a single OS. With profile technology like FSLogix, users can be assigned any VM in a host pool without noticing any service interruptions. This feature also lets you optimize costs by doing things like shutting down VMs during off-peak hours.
-
-If your situation requires users from different organizations to connect to your deployment, we recommend you have a separate tenant for identity services like Active Directory and Microsoft Entra ID. We also recommend you have a separate subscription for hosting Azure resources like Azure Virtual Desktop and VMs.
-
-The following table lists our recommendations for each scenario.
-
-| Trust level scenario | Recommended solution |
-||-|
-| Users from one organization with standard privileges | Windows 10 Enterprise multi-session |
-| Users require administrative privileges | Personal Desktops (VDI) |
-| Users from different organizations connecting | Separate Azure subscription |
-
-Let's take a look at our recommendations for some example scenarios.
-
-### Should I share Identity resources to reduce costs?
-
-We don't currently recommend using a shared identity system in Azure Virtual Desktop. We recommend that you have separate Identity resources that you deploy in a separate Azure subscription. These resources include Active Directories, Microsoft Entra ID, and VM workloads. Each organization you serve will require its own additional infrastructure and associated maintenance costs, but this is currently the most feasible solution for security purposes.
-
-### Should I share a multi-session Remote Desktop (RD) session host VM to reduce costs?
-
-Multi-session RD session hosts save costs by sharing hardware resources like CPU and memory among users. This resource sharing lets you design for peak capacity, even if it's unlikely all users will need maximum resources at the same time. In a shared multi-session environment, hardware resources are shared and allocated so that they reduce the gap between usage and costs.
-
-In many cases, using multi-session is an acceptable way to reduce costs, but whether we recommend it depends on the trust level between users with simultaneous access to a shared multi-session instance. Typically, users that belong to the same organization have a sufficient and agreed-upon trust relationship. For example, a department or workgroup where people collaborate and can access each other's personal information is an organization with a high trust level.
-
-Windows uses security boundaries and controls to ensure user processes and data are isolated between sessions. However, Windows still provides access to the instance the user is working on.
-
-This deployment would benefit from a defense in depth strategy that adds more security boundaries to prevent users within and outside of the organization from getting unauthorized access to other users' personal information. Unauthorized data access can happen because of a configuration error by the system admin, an undisclosed security vulnerability, or a known vulnerability that hasn't been patched yet.
-
-On the other hand, Microsoft doesn't support granting users that work for different or competing companies access to the same multi-session environment. These scenarios have several security boundaries that can be attacked or abused, like network, kernel, process, user, or sessions. A single security vulnerability could cause unauthorized data and credential theft, personal information leaks, identity theft, and other issues. Virtualized environment providers are responsible for offering well-designed systems with multiple strong security boundaries and extra safety features enabled wherever possible.
-
-Reducing these potential threats requires a fault-proof configuration, patch management design process, and regular patch deployment schedules. It's better to follow the principles of defense in depth and keep environments separate.
-
-## Next steps
-
-Find our recommended guidelines for configuring security for your Azure Virtual Desktop deployment at our [security best practices](./../security-guide.md).
virtual-desktop Streaming Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/streaming-costs.md
- Title: Estimate per-user app streaming costs for Azure Virtual Desktop - Azure
-description: How to estimate per-user billing costs for Azure Virtual Desktop.
----- Previously updated : 07/14/2021----
-# Estimate per-user app streaming costs for Azure Virtual Desktop
-
-Per-user access pricing for Azure Virtual Desktop lets you grant access to apps and desktops hosted with Azure Virtual Desktop to users outside of your organization. To learn more about per-user access pricing for Azure Virtual Desktop, see [Understanding licensing and per-user access pricing](licensing.md). This article will show you how to estimate user access costs for your deployment, assuming full pricing.
-
->[!NOTE]
->The prices in this article are based on full price per-user subscriptions without promotional offers or discounts. Also, keep in mind that there are additional costs associated with an Azure Virtual Desktop deployment, such as infrastructure consumption costs. To learn more about these other costs, see [Understanding total Azure Virtual Desktop deployment costs](total-costs.md).
-
-## Requirements
-
-Before you can estimate per-user access costs for an existing deployment, you'll need the following things:
-
-- An Azure Virtual Desktop deployment that's had active users within the last month.
-- [Azure Virtual Desktop Insights for your Azure Virtual Desktop deployment](../insights.md)
-
-## Measure monthly user activity in a host pool
-
-In order to estimate total costs for running a host pool, you'll first need to know the number of active users over the past month. You can use Azure Virtual Desktop Insights to find this number.
-
-To check monthly active users on Azure Virtual Desktop Insights:
-
-1. Open the Azure portal, then search for and select **Azure Virtual Desktop**. After that, select **Insights** to open Azure Virtual Desktop Insights.
-
-2. Select the name of the subscription or host pool that you want to measure.
-
-3. On the **Overview** tab, find the **Monthly users (MAU)** chart in the **Utilization** section.
-
-4. Check the monthly active users (MAU) value for the most recent date. The MAU shows how many users connected to this host pool within the last 28 days before that date.
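-
-If you prefer to script this check, the following sketch queries the same 28-day distinct-user count from the Log Analytics workspace that backs Azure Virtual Desktop Insights. The workspace ID is a placeholder, and the query assumes the `WVDConnections` table is being collected:
-
-```azurepowershell
-# Sketch only: count distinct users who connected in the last 28 days.
-$query = 'WVDConnections | where TimeGenerated > ago(28d) | summarize MAU = dcount(UserName)'
-Invoke-AzOperationalInsightsQuery -WorkspaceId '<WorkspaceId>' -Query $query
-```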
-
-## Estimate per-user access costs for an Azure Virtual Desktop host pool
-
-Next, we'll check the amount billed per billing cycle. This number is determined by how many users connected to at least one session host in your enrolled subscription.
-
-Additionally, there are two price tiers for users:
-
-- Users that only connect to RemoteApp application groups.
-- Users that connect to at least one desktop application group.
-
-You can estimate the total cost by checking how many users in each pricing tier connected to session hosts in your subscription.
-
-To check the number of users in each tier:
-
-1. Go to the [Azure Virtual Desktop pricing page](https://azure.microsoft.com/pricing/details/virtual-desktop/#pricing) and look for the "Apps" and "Desktops + apps" prices for your region.
-2. Use the connection volume number you found in step 4 of [Measure monthly user activity in a host pool](#measure-monthly-user-activity-in-a-host-pool) to calculate the total user access cost.
-
- If your host pool uses a RemoteApp application group, you'll need to multiply the connection volume by the price value you see in "Apps." In other words, you'll need to use this equation:
-
- Connection volume x "Apps" price per user = total cost
-
- If your host pool uses a desktop application group, multiply the connection volume by the "Desktops + apps" price per user instead, like this:
-
- Connection volume x "Desktops + apps" price per user = total cost
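-
-As a quick sanity check, you can run the same arithmetic in a shell. This is a minimal sketch with hypothetical numbers; take the real per-user prices for your region from the pricing page:
-
-```bash
-# Hypothetical example: 150 monthly active users on a host pool that only
-# publishes RemoteApp, at an assumed "Apps" price of $5.50 per user.
-MAU=150
-APPS_PRICE=5.50
-# bc handles the decimal multiplication that plain shell arithmetic can't.
-echo "Estimated monthly user access cost: \$$(echo "$MAU * $APPS_PRICE" | bc)"
-```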
-
->[!IMPORTANT]
>Depending on your environment, the actual price may differ significantly from the estimate these instructions give you. For example, the estimate might be higher than the real cost if your users access resources from multiple host pools, because you're only charged once per user each billing cycle. The estimate might instead be too low if user activity during the 28-day window you're basing your data on doesn't match your typical monthly activity. For example, a month with a week-long holiday or a major service outage will have lower-than-average user activity and won't give you an accurate estimate.
-
-## Next steps
-
-If you're interested in learning about estimating total costs for an entire Azure Virtual Desktop deployment, see [Understanding total Azure Virtual Desktop deployment costs](total-costs.md). To learn about licensing requirements and costs, see [Understanding licensing and per-user access pricing](licensing.md).
virtual-desktop Total Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/total-costs.md
- Title: Understanding total Azure Virtual Desktop deployment costs - Azure
-description: How to estimate the total cost of your Azure Virtual Desktop deployment.
----- Previously updated : 02/04/2022---
-# Understanding total Azure Virtual Desktop deployment costs
-
-Azure Virtual Desktop costs come from two sources: underlying Azure resource consumption and licensing. Azure Virtual Desktop costs are charged to the organization that owns the Azure Virtual Desktop deployment, not the end-users accessing the deployment resources. Some licensing charges must be paid in advance. Other licenses and the underlying resource consumption charges are based on meters that track your usage.
-
-In this article, we'll explain consumption and licensing costs, and how to estimate service costs before deploying Azure Virtual Desktop using the Azure Pricing Calculator. This article also includes instructions for how to use Azure Cost Management to view costs after deploying Azure Virtual Desktop.
-
->[!NOTE]
->The customer who pays for the Azure Virtual Desktop deployment is responsible for handling their deployment's lifetime resource management and costs. If the owner no longer needs resources connected to their Azure Virtual Desktop deployment, they should ensure those resources are properly removed. For more information, see [How to manage Azure resources by using the Azure portal](../../azure-resource-manager/management/manage-resources-portal.md).
-
-## Consumption costs
-
-Consumption costs are the sum of all Azure resource usage charges for users accessing an Azure Virtual Desktop host pool. These charges come from the session host virtual machines (VMs) inside the host pools, including resources shared by other products across Azure and any identity management systems that require running additional infrastructure to keep the service available, such as a domain controller for Active Directory Domain Services (AD DS).
-
-### Session host VM costs
-
-In Azure Virtual Desktop, session host VMs use the following three Azure services:
-
-- Virtual machines (compute)
-- Storage for managed disks (including OS storage per VM and any data disks for personal desktops)
-- Bandwidth (networking)
-
-These charges can be viewed at the Azure resource group level, where the host pool-specific resources, including session host VMs, are assigned. If one or more host pools are also configured to use the paid Log Analytics service to send VM data to the optional Azure Virtual Desktop Insights feature, the bill will also include Log Analytics charges for the corresponding Azure resource groups. For more information, see [Estimate Azure Virtual Desktop monitoring costs](../insights-costs.md).
-
-Of the three primary VM session host usage costs that are listed at the beginning of this section, compute usually costs the most. To mitigate compute costs and optimize resource demand with availability, many customers choose to [scale session hosts automatically](../set-up-scaling-script.md).
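-
-A minimal sketch of an off-peak shutdown, assuming a hypothetical resource group that contains only session host VMs (the scaling script and autoscale are the more complete options):
-
-```bash
-# Deallocate every session host in the (hypothetical) host pool resource
-# group so compute charges stop accruing during off-peak hours.
-az vm deallocate --ids $(az vm list \
-  --resource-group rg-avd-hostpool \
-  --query "[].id" --output tsv)
-```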
-
-### Domain controller costs for Active Directories
-
-Domain controller VMs use the following four Azure services at a minimum:
-
-- Virtual machines (compute)
-- Storage for managed disks (including OS storage per VM and any data disks for personal desktops)
-- Bandwidth (networking)
-- Virtual networks
-
-If your Azure Virtual Desktop deployment relies on a domain controller to run its Active Directory, then you should include it in the total Azure Virtual Desktop deployment cost. Domain controllers hosted in Azure also incur the three Azure service charges described in [Session host VM costs](#session-host-vm-costs), because a standard Azure VM must keep the Active Directory's identities available.
-
-However, domain controllers have a few key differences from session host VMs:
-
-- Domain controllers will generate an additional virtual network charge because they have to communicate with other services outside the deployment.
-- Scaling domain controller availability can cause problems because your deployments need your domain controllers to always be available.
-
-### Shared service costs
-
-Depending on which features your Azure Virtual Desktop deployment uses, you may also have to pay for Azure storage for any combination of the following optional features:
-
-- [MSIX app attach](../what-is-app-attach.md)
-- [Custom OS images](../set-up-customize-master-image.md)
-- [FSLogix profiles](../fslogix-containers-azure-files.md)
-
-These features use Azure storage options like [Azure Files](../../storage/files/storage-files-introduction.md) and [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md), which means they can share their storage with other Azure services beyond Azure Virtual Desktop. We recommend creating a separate storage account for the storage you buy for your Azure Virtual Desktop features so you can tell the difference between its costs and the costs for other Azure services you're using.
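-
-For example, a minimal sketch using the Azure CLI; the account and resource group names here are hypothetical placeholders:
-
-```bash
-# Create a storage account used only by Azure Virtual Desktop features, so
-# its charges are easy to separate from other Azure services in billing.
-az storage account create \
-  --name stavdfeatures001 \
-  --resource-group rg-avd-storage \
-  --sku Premium_LRS \
-  --kind FileStorage
-```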
-
-### User access costs
-
-You pay user access costs each month for each user who connects to apps or desktops in your Azure Virtual Desktop deployment. To learn more about how Azure Virtual Desktop's per-user access pricing works, see [Understanding licensing and per-user access pricing](licensing.md).
-
-## Predicting costs before deploying Azure Virtual Desktop
-
-Now that you understand the basics, let's start estimating. To do this, we'll need to estimate both the consumption and user access costs.
-
-### Predicting consumption costs
-
-You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to estimate Azure Virtual Desktop consumption costs before creating a deployment. Here's how to predict consumption costs:
-
-1. In the pricing calculator, select the **Compute** tab to show the Azure Pricing Calculator compute options.
-
-2. Select **Azure Virtual Desktop**. The Azure Virtual Desktop calculator module should appear.
-
-3. Enter the values for your deployment into the fields to estimate your monthly Azure bill based on your expected compute, storage, and networking usage.
-
->[!NOTE]
->Currently, the Azure Pricing Calculator Azure Virtual Desktop module can only estimate consumption costs for session host VMs and the aggregate additional storage of any optional Azure Virtual Desktop features requiring storage that you choose to deploy. Your total cost may also include egress network traffic to Microsoft 365 services, such as OneDrive for Business or Exchange Online. However, you can add estimates for other Azure Virtual Desktop features in separate modules within the same Azure Pricing calculator page to get a more complete or modular cost estimate.
->
->You can add extra Azure Pricing Calculator modules to estimate the cost impact of other components of your deployment, including but not limited to:
->
->- Domain controllers
->- Other storage-dependent features, such as custom OS images, MSIX app attach, and Azure Virtual Desktop Insights
-
-### Predicting user access costs
-
-User access costs depend on the number of users that connect to your deployment each month. To learn how to estimate the total user access costs you can expect, see [Estimate per-user app streaming costs for Azure Virtual Desktop](streaming-costs.md).
-
-## Viewing costs after deploying Azure Virtual Desktop
-
-Once you deploy Azure Virtual Desktop, you can use [Azure Cost Management](../../cost-management-billing/cost-management-billing-overview.md) to view your billing invoices. This section will tell you how to look up prices for your current services.
-
-### Viewing bills for consumption costs
-
-With the proper [Azure RBAC](../../role-based-access-control/rbac-and-directory-admin-roles.md) permissions, users in your organization, such as billing admins, can use [cost analysis tools](../../cost-management-billing/costs/cost-analysis-common-uses.md) and find Azure billing invoices through [Azure Cost Management](../../cost-management-billing/cost-management-billing-overview.md) to track monthly Azure Virtual Desktop consumption costs under your Azure subscription or subscriptions.
-
-### Viewing bills for user access costs
-
-User access costs will appear each billing cycle on the Azure billing invoice for any enrolled subscription, alongside consumption costs and other Azure charges.
-
-## Next steps
-
-If you'd like to get a clearer idea of how much specific parts of your deployment will cost, take a look at these articles:
-
-- [Understanding licensing and per-user access pricing](licensing.md)
-- [Estimate per-user app streaming costs for Azure Virtual Desktop](streaming-costs.md)
-- [Tag Azure Virtual Desktop resources to manage costs](../tag-virtual-desktop-resources.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
virtual-desktop Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-recommendations.md
+
+ Title: Security recommendations for Azure Virtual Desktop
+description: Learn about recommendations for helping keep your Azure Virtual Desktop environment secure.
+++ Last updated : 01/09/2024++
+# Security recommendations for Azure Virtual Desktop
+
+Azure Virtual Desktop is a managed virtual desktop service that includes many security capabilities for keeping your organization safe. The architecture of Azure Virtual Desktop comprises many components that make up the service connecting users to their desktops and apps.
+
+Azure Virtual Desktop has many built-in advanced security features, such as Reverse Connect, which requires no inbound network ports to be open and so reduces the risk involved with having remote desktops accessible from anywhere. The service also benefits from many other security features of Azure, such as multifactor authentication and conditional access. This article describes steps you can take as an administrator to keep your Azure Virtual Desktop deployments secure, whether you provide desktops and apps to users in your organization or to external users.
+
+## Shared security responsibilities
+
+Before Azure Virtual Desktop, on-premises virtualization solutions like Remote Desktop Services required granting users access to roles like Gateway, Broker, Web Access, and so on. These roles had to be fully redundant and able to handle peak capacity. Administrators would install these roles as part of the Windows Server operating system, and they had to be domain-joined with specific ports accessible to public connections. To keep deployments secure, administrators had to constantly make sure everything in the infrastructure was maintained and up-to-date.
+
+In most cloud services, however, there's a [shared set of security responsibilities](../security/fundamentals/shared-responsibility.md) between Microsoft and the customer or partner. For Azure Virtual Desktop, most components are Microsoft-managed, but session hosts and some supporting services and components are customer-managed or partner-managed. To learn more about the Microsoft-managed components of Azure Virtual Desktop, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
+
+While some components come already secured for your environment, you'll need to configure other areas yourself to fit your organization's or customer's security needs. The following table shows the components you're responsible for securing in your Azure Virtual Desktop deployment:
+
+| Component | Responsibility |
+|--|:-:|
+| Identity | Customer or partner |
+| User devices (mobile and PC) | Customer or partner |
+| App security | Customer or partner |
+| Session host operating system | Customer or partner |
+| Deployment configuration | Customer or partner |
+| Network controls | Customer or partner |
+| Virtualization control plane | Microsoft |
+| Physical hosts | Microsoft |
+| Physical network | Microsoft |
+| Physical datacenter | Microsoft |
+
+## Security boundaries
+
+Security boundaries separate the code and data of security domains with different levels of trust. For example, there's usually a security boundary between kernel mode and user mode. Most Microsoft software and services depend on multiple security boundaries to isolate devices on networks, virtual machines (VMs), and applications on devices. The following table lists each security boundary for Windows and what they do for overall security.
+
+| Security boundary | Description |
+|--|--|
+| Network boundary | An unauthorized network endpoint can't access or tamper with code and data on a customer's device. |
+| Kernel boundary | A non-administrative user mode process can't access or tamper with kernel code and data. Administrator-to-kernel is not a security boundary. |
+| Process boundary | An unauthorized user mode process can't access or tamper with the code and data of another process. |
+| AppContainer sandbox boundary | An AppContainer-based sandbox process can't access or tamper with code and data outside of the sandbox based on the container capabilities. |
+| User boundary | A user can't access or tamper with the code and data of another user without being authorized. |
+| Session boundary | A user session can't access or tamper with another user session without being authorized. |
+| Web browser boundary | An unauthorized website can't violate the same-origin policy, nor can it access or tamper with the native code and data of the Microsoft Edge web browser sandbox. |
+| Virtual machine boundary | An unauthorized Hyper-V guest virtual machine can't access or tamper with the code and data of another guest virtual machine; this includes Hyper-V isolated containers. |
+| Virtual Secure Mode (VSM) boundary | Code running outside of the VSM trusted process or enclave can't access or tamper with data and code within the trusted process. |
+
+### Recommended security boundaries for Azure Virtual Desktop scenarios
+
+You'll also need to make certain choices about security boundaries on a case-by-case basis. For example, if a user in your organization needs local administrator privileges to install apps, you'll need to give them a personal desktop instead of a shared session host. We don't recommend giving users local administrator privileges in multi-session pooled scenarios because these users can cross security boundaries for sessions or NTFS data permissions, shut down multi-session VMs, or do other things that could interrupt service or cause data losses.
+
+Users from the same organization, like knowledge workers with apps that don't require administrator privileges, are great candidates for multi-session session hosts like Windows 11 Enterprise multi-session. These session hosts reduce costs for your organization because multiple users can share a single VM, with only the overhead costs of a VM per user. With user profile management products like FSLogix, users can be assigned any VM in a host pool without noticing any service interruptions. This feature also lets you optimize costs by doing things like shutting down VMs during off-peak hours.
+
+If your situation requires users from different organizations to connect to your deployment, we recommend you have a separate tenant for identity services like Active Directory and Microsoft Entra ID. We also recommend you have a separate subscription for those users for hosting Azure resources like Azure Virtual Desktop and VMs.
+
+In many cases, using multi-session is an acceptable way to reduce costs, but whether we recommend it depends on the trust level between users with simultaneous access to a shared multi-session instance. Typically, users that belong to the same organization have a sufficient and agreed-upon trust relationship. For example, a department or workgroup where people collaborate and can access each other's personal information is an organization with a high trust level.
+
+Windows uses security boundaries and controls to ensure user processes and data are isolated between sessions. However, Windows still provides access to the instance the user is working on.
+
+Multi-session deployments would benefit from a defense in depth strategy that adds more security boundaries to prevent users within and outside of the organization from getting unauthorized access to other users' personal information. Unauthorized data access can happen because of a configuration error by the system admin, an undisclosed security vulnerability, or a known vulnerability that hasn't been patched yet.
+
+We don't recommend granting users who work for different or competing companies access to the same multi-session environment. These scenarios have several security boundaries that can be attacked or abused, like the network, kernel, process, user, or session boundaries. A single security vulnerability could cause unauthorized data and credential theft, personal information leaks, identity theft, and other issues. Virtualized environment providers are responsible for offering well-designed systems with multiple strong security boundaries and extra safety features enabled wherever possible.
+
+Reducing these potential threats requires a fault-proof configuration, patch management design process, and regular patch deployment schedules. It's better to follow the principles of defense in depth and keep environments separate.
+
+The following table summarizes our recommendations for each scenario.
+
+| Trust level scenario | Recommended solution |
+||-|
+| Users from one organization with standard privileges | Use a Windows Enterprise multi-session OS. |
+| Users require administrative privileges | Use a personal host pool and assign each user their own session host. |
+| Users from different organizations connecting | Use a separate Azure tenant and Azure subscription. |
+
+## Azure security best practices
+
+Azure Virtual Desktop is a service under Azure. To maximize the safety of your Azure Virtual Desktop deployment, you should make sure to secure the surrounding Azure infrastructure and management plane as well. To secure your infrastructure, consider how Azure Virtual Desktop fits into your larger Azure ecosystem. To learn more about the Azure ecosystem, see [Azure security best practices and patterns](../security/fundamentals/best-practices-and-patterns.md).
+
+Today's threat landscape requires designs with security approaches in mind. Ideally, you'll want to build a series of security mechanisms and controls layered throughout your computer network to protect your data and network from being compromised or attacked. This type of security design is what the United States Cybersecurity and Infrastructure Security Agency (CISA) calls *defense in depth*.
+
+The following sections contain recommendations for securing an Azure Virtual Desktop deployment.
+
+### Enable Microsoft Defender for Cloud
+
+We recommend enabling Microsoft Defender for Cloud's enhanced security features to:
+
+- Manage vulnerabilities.
+- Assess compliance with common frameworks like those from the PCI Security Standards Council.
+- Strengthen the overall security of your environment.
+
+To learn more, see [Enable enhanced security features](../defender-for-cloud/enable-enhanced-security.md).
+
+### Improve your Secure Score
+
+Secure Score provides recommendations and best practice advice for improving your overall security. These recommendations are prioritized to help you pick which ones are most important, and the Quick Fix options help you address potential vulnerabilities quickly. These recommendations also update over time, keeping you up to date on the best ways to maintain your environment's security. To learn more, see [Improve your Secure Score in Microsoft Defender for Cloud](../defender-for-cloud/secure-score-security-controls.md).
+
+### Require multifactor authentication
+
+Requiring multifactor authentication for all users and admins in Azure Virtual Desktop improves the security of your entire deployment. To learn more, see [Enable Microsoft Entra multifactor authentication for Azure Virtual Desktop](set-up-mfa.md).
+
+### Enable Conditional Access
+
+Enabling [Conditional Access](../active-directory/conditional-access/overview.md) lets you manage risks before you grant users access to your Azure Virtual Desktop environment. When deciding which users to grant access to, we recommend you also consider who the user is, how they sign in, and which device they're using.
+
+### Collect audit logs
+
+Enabling audit log collection lets you view user and admin activity related to Azure Virtual Desktop. Some examples of key audit logs are:
+
+- [Azure Activity Log](../azure-monitor/essentials/activity-log.md)
+- [Microsoft Entra Activity Log](../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md)
+- [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md)
+- [Session hosts](../azure-monitor/agents/agent-windows.md)
+- [Key Vault logs](../key-vault/general/logging.md)
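+
+As an illustration, you can route host pool diagnostics to a Log Analytics workspace with the Azure CLI. This is a hedged sketch: the resource IDs are placeholders, and the log categories you enable should match what your host pool actually exposes:
+
+```bash
+# Send host pool management and connection logs to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+  --name avd-audit-logs \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/rg-avd/providers/Microsoft.DesktopVirtualization/hostPools/hp01" \
+  --workspace "<log-analytics-workspace-resource-id>" \
+  --logs '[{"category":"Management","enabled":true},{"category":"Connection","enabled":true}]'
+```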
+
+### Use RemoteApp
+
+When choosing a deployment model, you can provide remote users access to either entire desktops or only selected applications published as a RemoteApp. RemoteApp provides a seamless experience as the user works with apps from their virtual desktop. RemoteApp reduces risk by only letting the user work with a subset of the remote machine exposed by the application.
+
+### Monitor usage with Azure Monitor
+
+Monitor your Azure Virtual Desktop service's usage and availability with [Azure Monitor](https://azure.microsoft.com/services/monitor/). Consider creating [service health alerts](../service-health/alerts-activity-log-service-notifications-portal.md) for the Azure Virtual Desktop service to receive notifications whenever there's a service impacting event.
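+
+One way to create such an alert is with the Azure CLI; this sketch assumes a hypothetical resource group and an action group that already exists:
+
+```bash
+# Alert on service health events in the subscription and notify an
+# existing action group (all names are placeholders).
+az monitor activity-log alert create \
+  --name avd-service-health \
+  --resource-group rg-avd \
+  --condition category=ServiceHealth \
+  --action-group "/subscriptions/<subscription-id>/resourceGroups/rg-avd/providers/microsoft.insights/actionGroups/ops-team"
+```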
+
+### Encrypt your session hosts
+
+Encrypt your session hosts with [managed disk encryption options](../virtual-machines/disk-encryption-overview.md) to protect stored data from unauthorized access.
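+
+For example, a minimal Azure Disk Encryption sketch with hypothetical VM and key vault names; the key vault must already be enabled for disk encryption:
+
+```bash
+# Enable Azure Disk Encryption on one session host's disks.
+az vm encryption enable \
+  --resource-group rg-avd-hostpool \
+  --name avd-sh-0 \
+  --disk-encryption-keyvault kv-avd-disks
+```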
+
+## Session host security best practices
+
+Session hosts are virtual machines that run inside an Azure subscription and virtual network. Your Azure Virtual Desktop deployment's overall security depends on the security controls you put on your session hosts. This section describes best practices for keeping your session hosts secure.
+
+### Enable endpoint protection
+
+To protect your deployment from known malicious software, we recommend enabling endpoint protection on all session hosts. You can use either Windows Defender Antivirus or a third-party program. To learn more, see [Deployment guide for Windows Defender Antivirus in a VDI environment](/windows/security/threat-protection/windows-defender-antivirus/deployment-vdi-windows-defender-antivirus).
+
+For profile solutions like FSLogix or other solutions that mount virtual hard disk files, we recommend excluding those virtual hard disk file extensions, such as .vhd and .vhdx, from antivirus scanning.
+
+### Install an endpoint detection and response product
+
+We recommend you install an endpoint detection and response (EDR) product to provide advanced detection and response capabilities. For server operating systems with [Microsoft Defender for Cloud](../defender-for-cloud/integration-defender-for-endpoint.md) enabled, installing an EDR product will deploy Microsoft Defender for Endpoint. For client operating systems, you can deploy [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/onboarding) or a third-party product to those endpoints.
+
+### Enable threat and vulnerability management assessments
+
+Identifying software vulnerabilities that exist in operating systems and applications is critical to keeping your environment secure. Microsoft Defender for Cloud can help you identify problem spots through [Microsoft Defender for Endpoint's threat and vulnerability management solution](../defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md). You can also use third-party products if you're so inclined, although we recommend using Microsoft Defender for Cloud and Microsoft Defender for Endpoint.
+
+### Patch software vulnerabilities in your environment
+
+Once you identify a vulnerability, you must patch it. This applies to virtual environments as well, which includes the running operating systems, the applications that are deployed inside of them, and the images you create new machines from. Follow your vendor patch notification communications and apply patches in a timely manner. We recommend patching your base images monthly to ensure that newly deployed machines are as secure as possible.
+
+### Establish maximum inactive time and disconnection policies
+
+Signing users out when they're inactive preserves resources and prevents access by unauthorized users. We recommend that timeouts balance user productivity as well as resource usage. For users that interact with stateless applications, consider more aggressive policies that turn off machines and preserve resources. Disconnecting long running applications that continue to run if a user is idle, such as a simulation or CAD rendering, can interrupt the user's work and may even require restarting the computer.
+
+### Set up screen locks for idle sessions
+
+You can prevent unwanted system access by configuring Azure Virtual Desktop to lock a machine's screen during idle time and requiring authentication to unlock it.
+
+### Establish tiered admin access
+
+We recommend you don't grant your users admin access to virtual desktops. If you need software packages, we recommend you make them available through configuration management utilities like Microsoft Intune. In a multi-session environment, we recommend you don't let users install software directly.
+
+### Consider which users should access which resources
+
+Consider session hosts as an extension of your existing desktop deployment. We recommend you control access to network resources the same way you would for other desktops in your environment, such as using network segmentation and filtering. By default, session hosts can connect to any resource on the internet. There are several ways you can limit traffic, including using Azure Firewall, Network Virtual Appliances, or proxies. If you need to limit traffic, make sure you add the proper rules so that Azure Virtual Desktop can work properly.
+
+### Manage Microsoft 365 app security
+
+In addition to securing your session hosts, it's important to also secure the applications running inside of them. Microsoft 365 apps are some of the most common applications deployed in session hosts. To improve the Microsoft 365 deployment security, we recommend you use the [Security Policy Advisor](/DeployOffice/overview-of-security-policy-advisor) for Microsoft 365 Apps for enterprise. This tool identifies policies that you can apply to your deployment for more security. Security Policy Advisor also recommends policies based on their impact to your security and productivity.
+
+### User profile security
+
+User profiles can contain sensitive information. You should restrict who has access to user profiles and the methods of accessing them, especially if you're using [FSLogix Profile Container](/fslogix/tutorial-configure-profile-containers) to store user profiles in a virtual hard disk file on an SMB share. You should follow the security recommendations for the provider of your SMB share. For example, if you're using Azure Files to store these virtual hard disk files, you can use [private endpoints](../storage/files/storage-files-networking-overview.md#private-endpoints) to make them only accessible within an Azure virtual network.
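+
+A hedged sketch of that private endpoint setup, assuming an existing storage account, virtual network, and subnet with hypothetical names:
+
+```bash
+# Look up the storage account that holds the FSLogix profile share.
+STORAGE_ID=$(az storage account show \
+  --name stavdprofiles001 \
+  --resource-group rg-avd-storage \
+  --query id --output tsv)
+
+# Expose the file service only through a private endpoint in the VNet.
+az network private-endpoint create \
+  --name pe-fslogix \
+  --resource-group rg-avd-storage \
+  --vnet-name vnet-avd \
+  --subnet snet-storage \
+  --private-connection-resource-id "$STORAGE_ID" \
+  --group-id file \
+  --connection-name fslogix-files
+```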
+
+### Other security tips for session hosts
+
+By restricting operating system capabilities, you can strengthen the security of your session hosts. Here are a few things you can do:
+
+- Control device redirection by redirecting drives, printers, and USB devices to a user's local device in a remote desktop session. We recommend that you evaluate your security requirements and determine whether these features should be disabled.
+
+- Restrict Windows Explorer access by hiding local and remote drive mappings. This prevents users from discovering unwanted information about system configuration and users.
+
+- Avoid direct RDP access to session hosts in your environment. If you need direct RDP access for administration or troubleshooting, enable [just-in-time](../defender-for-cloud/just-in-time-access-usage.md) access to limit the potential attack surface on a session host, and block direct RDP from the internet at the network layer (see the sketch after this list).
+
+- Grant users limited permissions when they access local and remote file systems. You can restrict permissions by making sure your local and remote file systems use access control lists with least privilege. This way, users can only access what they need and can't change or delete critical resources.
+
+- Prevent unwanted software from running on session hosts. You can enable AppLocker for additional security on session hosts, ensuring that only the apps you allow can run on the host.
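+
+As referenced above, one way to block direct RDP at the network layer is a deny rule on the session hosts' network security group. This is a sketch with hypothetical names; Azure Virtual Desktop itself doesn't need inbound port 3389 because connections use Reverse Connect:
+
+```bash
+# Deny inbound RDP from the internet on the session host subnet's NSG.
+az network nsg rule create \
+  --resource-group rg-avd-hostpool \
+  --nsg-name nsg-avd-sessionhosts \
+  --name deny-inbound-rdp \
+  --priority 100 \
+  --direction Inbound \
+  --access Deny \
+  --protocol Tcp \
+  --source-address-prefixes Internet \
+  --destination-port-ranges 3389
+```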
+
+## Trusted launch
+
+Trusted launch is a feature of Gen2 Azure VMs with enhanced security capabilities, designed to protect against bottom-of-the-stack threats delivered through attack vectors such as rootkits, boot kits, and kernel-level malware. The following sections describe the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+
+### Enable trusted launch as default
+
+Trusted launch protects against advanced and persistent attack techniques. This feature also allows for secure deployment of VMs with verified boot loaders, OS kernels, and drivers. Trusted launch also protects keys, certificates, and secrets in the VMs. Learn more about trusted launch at [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+
+When you add session hosts using the Azure portal, the security type automatically changes to **Trusted virtual machines**. This ensures that your VM meets the mandatory requirements for Windows 11. For more information about these requirements, see [Virtual machine support](/windows/whats-new/windows-11-requirements#virtual-machine-support).
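+
+If you create session hosts outside the portal, you can request the same security type explicitly. A minimal sketch with the Azure CLI; the names are placeholders and the image should be whichever your host pool actually uses:
+
+```bash
+# Create a session host VM with trusted launch, Secure Boot, and vTPM
+# enabled. The CLI prompts for the admin password.
+az vm create \
+  --resource-group rg-avd-hostpool \
+  --name avd-sh-1 \
+  --image MicrosoftWindowsDesktop:windows-11:win11-22h2-avd:latest \
+  --security-type TrustedLaunch \
+  --enable-secure-boot true \
+  --enable-vtpm true \
+  --admin-username avdadmin
+```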
+
+## Azure Confidential computing virtual machines
+
+Azure Virtual Desktop support for Azure Confidential computing virtual machines ensures a user's virtual desktop is encrypted in memory, protected in use, and backed by hardware root of trust. Azure Confidential computing VMs for Azure Virtual Desktop are compatible with [supported operating systems](prerequisites.md#operating-systems-and-licenses). Deploying confidential VMs with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. These virtual desktops are powered by the latest Third-generation (Gen 3) Advanced Micro Devices (AMD) EPYC™ processor with Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) technology. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the AMD CPU that can't be read from software. For more information, see the [Azure Confidential computing overview](../confidential-computing/overview.md).
+
+The following operating systems are supported for use as session hosts with confidential VMs on Azure Virtual Desktop:
+
+- Windows 11 Enterprise, version 22H2
+- Windows 11 Enterprise multi-session, version 22H2
+- Windows Server 2022
+- Windows Server 2019
+
+You can create session hosts using confidential VMs when you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+### OS disk encryption
+
+Encrypting the operating system disk is an extra layer of encryption that binds disk encryption keys to the Confidential computing VM's Trusted Platform Module (TPM). This encryption makes the disk content accessible only to the VM. Integrity monitoring allows cryptographic attestation and verification of VM boot integrity, and raises monitoring alerts if the VM didn't boot because attestation failed against the defined baseline. For more information about integrity monitoring, see [Microsoft Defender for Cloud Integration](../virtual-machines/trusted-launch.md#microsoft-defender-for-cloud-integration). You can enable confidential compute encryption when you create session hosts using confidential VMs as you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+### Secure Boot
+
+Secure Boot is a mode that platform firmware supports that protects your firmware from malware-based rootkits and boot kits. This mode only allows signed operating systems and drivers to boot.
+
+### Monitor boot integrity using Remote Attestation
+
+Remote attestation is a great way to check the health of your VMs. Remote attestation verifies that Measured Boot records are present, genuine, and originate from the Virtual Trusted Platform Module (vTPM). As a health check, it provides cryptographic certainty that a platform started up correctly.
+
+### vTPM
+
+A vTPM is a virtualized version of a hardware Trusted Platform Module (TPM), with a virtual instance of a TPM per VM. vTPM enables remote attestation by performing integrity measurement of the entire boot chain of the VM (UEFI, OS, system, and drivers).
+
+We recommend enabling vTPM to use remote attestation on your VMs. With vTPM enabled, you can also enable BitLocker functionality with Azure Disk Encryption, which provides full-volume encryption to protect data at rest. Any features using vTPM result in secrets bound to the specific VM. When users connect to the Azure Virtual Desktop service in a pooled scenario, they can be redirected to any VM in the host pool, so depending on how a feature is designed, binding secrets to a single VM can affect those users.
+
+> [!NOTE]
+> BitLocker shouldn't be used to encrypt the specific disk where you're storing your FSLogix profile data.
+
+### Virtualization-based Security
+
+Virtualization-based Security (VBS) uses the hypervisor to create and isolate a secure region of memory that's inaccessible to the OS. Hypervisor-Protected Code Integrity (HVCI) and Windows Defender Credential Guard both use VBS to provide increased protection from vulnerabilities.
+
+#### Hypervisor-Protected Code Integrity
+
+HVCI is a powerful system mitigation that uses VBS to protect Windows kernel-mode processes against injection and execution of malicious or unverified code.
+
+#### Windows Defender Credential Guard
+
+Enable Windows Defender Credential Guard. Windows Defender Credential Guard uses VBS to isolate and protect secrets so that only privileged system software can access them. This prevents unauthorized access to these secrets and credential theft attacks, such as Pass-the-Hash attacks. For more information, see [Credential Guard overview](/windows/security/identity-protection/credential-guard/).
+
+### Windows Defender Application Control
+
+Enable Windows Defender Application Control. Windows Defender Application Control is designed to protect devices against malware and other untrusted software. It prevents malicious code from running by ensuring that only approved code that you know and trust can run. For more information, see [Application Control for Windows](/windows/security/application-security/application-control/windows-defender-application-control/wdac).
+
+> [!NOTE]
+> When using Windows Defender Application Control, we recommend only targeting policies at the device level. Although it's possible to target policies to individual users, once the policy is applied, it affects all users on the device equally.
+
+## Windows Update
+
+Keep your session hosts up to date with updates from Windows Update. Windows Update provides a secure way to keep your devices up-to-date. Its end-to-end protection prevents manipulation of protocol exchanges and ensures updates only include approved content. You may need to update firewall and proxy rules for some of your protected environments in order to get proper access to Windows Updates. For more information, see [Windows Update security](/windows/deployment/update/windows-update-security).
+
+## Remote Desktop client and updates on other OS platforms
+
+Software updates for the Remote Desktop clients you can use to access Azure Virtual Desktop services on other OS platforms are secured according to the security policies of their respective platforms. All client updates are delivered directly by their platforms. For more information, see the respective store pages for each app:
+
+- [macOS](https://apps.apple.com/app/microsoft-remote-desktop/id1295203466?mt=12)
+- [iOS](https://apps.apple.com/us/app/remote-desktop-mobile/id714464092)
+- [Android](https://play.google.com/store/apps/details?id=com.microsoft.rdc.androidx)
+
+## Next steps
+
+- Learn how to [Set up multifactor authentication](set-up-mfa.md).
+- [Apply Zero Trust principles for an Azure Virtual Desktop deployment](/security/zero-trust/azure-infrastructure-avd).
virtual-desktop Service Architecture Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/service-architecture-resilience.md
Last updated 10/19/2023
# Azure Virtual Desktop service architecture and resilience
-Azure Virtual Desktop is designed to provide a resilient, reliable, and secure service for organizations and users. The architecture of Azure Virtual Desktop comprises many components that make up the service connecting users to their desktops and apps. Most components are Microsoft-managed, but some are customer-managed.
+Azure Virtual Desktop is designed to provide a resilient, reliable, and secure service for organizations and users. The architecture of Azure Virtual Desktop comprises many components that make up the service connecting users to their desktops and apps. Most components are Microsoft-managed, but some are customer-managed or partner-managed.
Microsoft provides the virtual desktop infrastructure (VDI) components for core functionality as a service. These components include:
virtual-desktop Understand Estimate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/understand-estimate-costs.md
+
+ Title: Understand and estimate costs for Azure Virtual Desktop
+description: Learn about which components are charged for in Azure Virtual Desktop and how to estimate the total cost.
+++ Last updated : 01/09/2024++
+# Understand and estimate costs for Azure Virtual Desktop
+
+Azure Virtual Desktop costs come from two sources: underlying Azure resource consumption and licensing. Azure Virtual Desktop costs are charged to the organization that owns the Azure Virtual Desktop deployment, not the end-users accessing the deployment resources. Some licensing charges must be paid in advance. Azure meters track other licenses and the underlying resource consumption charges based on your usage.
+
+The organization that pays for Azure Virtual Desktop is responsible for handling the resource management and costs. If the owner no longer needs resources connected to their Azure Virtual Desktop deployment, they should ensure those resources are properly removed. For more information, see [How to manage Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md).
+
+This article explains consumption and licensing costs, and how to estimate service costs before deploying Azure Virtual Desktop.
+
+## Azure resource consumption costs
+
+Azure resource consumption costs are the sum of all Azure resource usage charges that provide users desktops or apps from Azure Virtual Desktop. These charges come from the session host virtual machines (VMs), plus resources shared by other products across Azure that require running more infrastructure to keep the service available, such as storage accounts, network data egress, and identity management systems.
+
+### Session host costs
+
+Session hosts are based on virtual machines (VMs), so the same Azure Compute charges and billing mechanisms as VMs apply. These charges include the following components:
+
+- Virtual machine instance.
+- Storage for managed disks for the operating system and any extra data disks.
+- Network bandwidth.
+
+Of the charges for these components, virtual machine instances usually cost the most. To mitigate compute costs and optimize resource demand with availability, you can use [autoscale](autoscale-scenarios.md) to automatically scale session hosts based on demand and time. You can also use [Azure savings plans](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) or [Azure reserved VM instances](../virtual-machines/prepay-reserved-vm-instances.md) to reduce compute costs.
+
+### Identity provider costs
+
+You can choose which identity provider to use for Azure Virtual Desktop: Microsoft Entra ID only, or Microsoft Entra ID in conjunction with Active Directory Domain Services (AD DS) or Microsoft Entra Domain Services. The following table shows the components that are charged for each identity provider:
+
+| Identity provider | Components charged |
+|--|--|
+| Microsoft Entra ID only | [Free tier available, licensed tiers for some features](https://www.microsoft.com/security/business/microsoft-entra-pricing), such as conditional access. |
+| Microsoft Entra ID + AD DS | Microsoft Entra ID and domain controller VM costs, including compute, storage, and networking. |
+| Microsoft Entra ID + Microsoft Entra Domain Services | Microsoft Entra ID and [Microsoft Entra Domain Services](https://azure.microsoft.com/pricing/details/microsoft-entra-ds/) costs. |
+
+### Accompanying service costs
+
+Depending on which features you use for Azure Virtual Desktop, you have to pay for the associated costs of those features. Some examples might include:
+
+| Feature | Associated costs |
+|--|--|
+| [Azure Virtual Desktop Insights](insights.md) | Log data in [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/). For more information, see [Estimate Azure Virtual Desktop Insights costs](insights-costs.md). |
+| [App attach](app-attach-overview.md) | Application storage, such as [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/) or [Azure NetApp Files](https://azure.microsoft.com/pricing/details/netapp/). |
+| [FSLogix profile container](/fslogix/overview-what-is-fslogix) | User profile storage, such as [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/) or [Azure NetApp Files](https://azure.microsoft.com/pricing/details/netapp/). |
+| [Custom image templates](custom-image-templates.md) | Storage and network costs for [managed disks](https://azure.microsoft.com/pricing/details/managed-disks/) and [bandwidth](https://azure.microsoft.com/pricing/details/bandwidth/). |
+
+### Licensing costs
+
+In the context of providing virtualized infrastructure with Azure Virtual Desktop, *internal users* (for internal commercial purposes) refers to people who are members of your own organization, such as employees of a business or students of a school, including external vendors or contractors. *External users* (for external commercial purposes) aren't members of your organization, but customers to whom you might provide a Software-as-a-Service (SaaS) application using Azure Virtual Desktop.
+
+Licensing Azure Virtual Desktop works differently for internal and external commercial purposes:
+
+- If you're providing Azure Virtual Desktop access for internal commercial purposes, you must purchase an eligible license for each user that accesses Azure Virtual Desktop.
+
+- If you're providing Azure Virtual Desktop access for external commercial purposes, per-user access pricing lets you pay for Azure Virtual Desktop access rights on behalf of external users. You must enroll in per-user access pricing to build a compliant deployment for external users. You pay for per-user access pricing through an Azure subscription.
+
+To learn more about the different options, see [License Azure Virtual Desktop](licensing.md).
+
+## Estimate costs before deploying Azure Virtual Desktop
+
+You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to estimate consumption and per-user access licensing costs before deploying Azure Virtual Desktop. Here's how to estimate costs:
+
+1. In a web browser, open the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+1. Select the **Compute** tab to show the Azure Pricing Calculator compute options.
+
+1. Select **Azure Virtual Desktop**. The Azure Virtual Desktop calculator module should appear.
+
+1. Enter the values for your deployment into the fields to estimate your monthly Azure bill based on:
+
+ - Your expected compute, storage, and networking usage.
+ - Number of users, total hours, and concurrency.
+ - Whether you're using per-user access pricing for external commercial purposes. If you're licensing for internal commercial purposes, you have to factor this license into your total cost estimate separately.
+ - Whether you're using a savings plan or reserved instances.
+ - Level of support.
+ - Other components of your deployment, such as those features listed in [Accompanying service costs](#accompanying-service-costs).
+
+> [!NOTE]
+> The Azure Pricing Calculator Azure Virtual Desktop module can only estimate consumption costs for session host VMs and the aggregate additional storage of any optional Azure Virtual Desktop features requiring storage that you choose to deploy. Your total cost may also include egress network traffic to Microsoft 365 services, such as OneDrive for Business or Exchange Online. However, you can add estimates for other Azure Virtual Desktop features in separate modules within the same Azure Pricing calculator page to get a more complete or modular cost estimate.
+
+## View costs after deploying Azure Virtual Desktop
+
+Once you deploy Azure Virtual Desktop, you can use [Microsoft Cost Management](../cost-management-billing/cost-management-billing-overview.md) to view your billing invoices. Users in your organization like billing admins can use [cost analysis tools](../cost-management-billing/costs/cost-analysis-common-uses.md) and find Azure billing invoices through Microsoft Cost Management to track monthly Azure Virtual Desktop consumption costs under your Azure subscription or subscriptions. You can also [Tag Azure Virtual Desktop resources to manage costs](tag-virtual-desktop-resources.md).
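+
+For example, a small sketch that tags the (hypothetical) resource group holding host pool resources so cost analysis can group the charges:
+
+```bash
+# Tag the resource group; cost analysis can then filter on these tags.
+az group update \
+  --name rg-avd-hostpool \
+  --set tags.workload=avd tags.costCenter=cc-1234
+```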
+
+If you're using per-user access pricing, costs appear each billing cycle on the Azure billing invoice for any enrolled subscription, alongside consumption costs and other Azure charges.
+
+If you [Use Azure Virtual Desktop Insights](insights.md), you can gain a detailed understanding of how Azure Virtual Desktop is being used in your organization. You can use this information to help you optimize your Azure Virtual Desktop deployment and reduce costs.
+
+## Next steps
+
+- Learn how to [license Azure Virtual Desktop](licensing.md).
+- [Tag Azure Virtual Desktop resources to manage costs](tag-virtual-desktop-resources.md).
+- [Use Azure Virtual Desktop Insights](insights.md).
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Before you can access your resources, you'll need to meet the prerequisites:
> [!IMPORTANT]
> - Support for Windows 7 ended on January 10, 2023.
- > - Support for Windows Server 2012 R2 ended on October 10, 2023. For more information, view [SQL Server 2012 and Windows Server 2012/2012 R2 end of support](/lifecycle/announcements/sql-server-2012-windows-server-2012-2012-r2-end-of-support).
+ > - Support for Windows Server 2012 R2 ended on October 10, 2023.
- Download the Remote Desktop client installer, choosing the correct version for your device:
   - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 01/04/2024 Last updated : 01/10/2024 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insider releases:
| Release | Latest version | Download |
|--|--|--|
-| Public | 1.2.4763 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.5102 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Public | 1.2.5105 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Insider | 1.2.5105 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.5102 (Insider)
+## Updates for version 1.2.5105
-*Published: December 19, 2023*
+*Published: January 9, 2024*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
In this release, we've made the following changes:
+- Fixed the [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307) security vulnerability.
- Improved accessibility by making the **Change the size of text and apps** drop-down menu more visible in the High Contrast theme.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
-## Updates for version 1.2.5018 (Insider)
+>[!NOTE]
+>This release was originally 1.2.5102 in Insiders, but we changed the Public version number to 1.2.5105 after adding the security improvements addressing [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307).
+
+## Updates for version 1.2.5018
*Published: November 20, 2023*

> [!NOTE]
-> We replaced this Insiders version with [version 1.2.5102](#updates-for-version-125102-insider). As a result, version 1.2.5018 is no longer available for download.
+> We replaced this Insiders version with [version 1.2.5105](#updates-for-version-125105). As a result, version 1.2.5018 is no longer available for download.
In this release, we've made the following change:
*Published: November 7, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1dqzi), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1dlc8), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1dlc7)
In this release, we've made the following changes:
*Published: October 17, 2023*
-Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1d1KN), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1d1KO), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1cRm0)
In this release, we've made the following changes:
- Added new parameters for multiple monitor configuration when connecting to a remote resource using the [Uniform Resource Identifier (URI) scheme](uri-scheme.md).
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 11/15/2023 Last updated : 01/10/2024
The following table provides a comparison of disk sizes and performance caps to help you decide which to use.
|512 |153,600 |4,000 |
|1,024-65,536 (sizes in this range increasing in increments of 1 TiB) |160,000 |4,000 |
-Ultra disks are designed to provide submillisecond latencies and target IOPS and throughput described in the preceding table 99.99% of the time.
- ### Ultra disk performance
-Ultra disks feature a flexible performance configuration model that allows you to independently configure IOPS and throughput both before and after you provision the disk. Ultra disks come in several fixed sizes, ranging from 4 GiB up to 64 TiB.
+ Ultra disks are designed to provide low sub-millisecond latencies and provisioned IOPS and throughput 99.99% of the time. Ultra disks also feature a flexible performance configuration model that allows you to independently configure IOPS and throughput, before and after you provision the disk. Ultra disks come in several fixed sizes, ranging from 4 GiB up to 64 TiB.
### Ultra disk IOPS
Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a
### Premium SSD v2 performance
-With Premium SSD v2 disks, you can individually set the capacity, throughput, and IOPS of a disk based on your workload needs, providing you with more flexibility and reduced costs. Each of these values determines the cost of your disk.
+Premium SSD v2 disks are designed to provide sub-millisecond latencies and provisioned IOPS and throughput 99.9% of the time. With Premium SSD v2 disks, you can individually set the capacity, throughput, and IOPS of a disk based on your workload needs, providing you with more flexibility and reduced costs. Each of these values determines the cost of your disk.
#### Premium SSD v2 capacities
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
For Windows, download and install the [Mellanox OFED for Windows drivers](https:
## Enable IP over InfiniBand (IB)

If you plan to run MPI jobs, you typically don't need IPoIB. The MPI library uses the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of the MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL/CentOS) to enable IP over InfiniBand.
+> [!IMPORTANT]
+> To avoid issues, ensure you aren't running older versions of Microsoft Azure Linux Agent (waagent). We recommend using at least [version 2.4.0.2](https://github.com/Azure/WALinuxAgent/releases/tag/v2.4.0.2) before enabling IP over IB.
+ ```bash
+ # Enable RDMA (IP over IB) in the Azure Linux Agent config, then restart the agent.
+ sudo sed -i -e 's/# OS.EnableRDMA=n/OS.EnableRDMA=y/g' /etc/waagent.conf
+ sudo systemctl restart waagent
+ ```
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
If there's a data element not found or a malformed request, the Instance Metadat
- **Why am I not seeing the SKU information for my VM in `instance/compute` details?** - For custom images created from Azure Marketplace, the Azure platform doesn't retain the SKU information for the custom image or the details for any VMs created from the custom image. This is by design, so the information isn't surfaced in the VM `instance/compute` details.
-- **Why is my request timed out for my call to the service?**
+- **Why does my request time out (or fail to connect) when I call the service?**
- Metadata calls must be made from the primary IP address assigned to the primary network card of the VM. Additionally, if you've changed your routes, there must be a route for the 169.254.169.254/32 address in your VM's local routing table.

### [Windows](#tab/windows/)
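A minimal sketch of such a call from inside the VM (PowerShell; the `Metadata` header and an `api-version` query parameter are required):

```azurepowershell
# Query IMDS from within the VM. The call must originate from the VM's
# primary NIC and primary IP address, and must not go through a proxy.
$uri = 'http://169.254.169.254/metadata/instance?api-version=2021-02-01'
Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true' } -Method GET
```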
virtual-network Virtual Networks Static Private Ip Classic Pportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-pportal.md
- Title: Configure private IP addresses for VMs (Classic) - Azure portal
-description: Learn how to configure private IP addresses for virtual machines (Classic) using the Azure portal.
- Previously updated : 08/24/2023
-# Configure private IP addresses for a virtual machine (Classic) using the Azure portal
----
-This article covers the classic deployment model. You can also [manage a static private IP address in the Resource Manager deployment model](virtual-networks-static-private-ip-arm-pportal.md).
--
-The sample steps that follow expect an environment already created. If you want to run the steps as they're displayed in this document, first build the test environment described in [create a vnet](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-pportal).
-
-## How to specify a static private IP address when creating a VM
-
-To create a VM named *DNS01* in the *FrontEnd* subnet of a VNet named *TestVNet* with a static private IP of *192.168.1.101*, complete the following steps:
-
-1. From a browser, navigate to the [Azure portal](https://portal.azure.com) and, if necessary, sign in with your Azure account.
-
-1. Select **NEW** > **Compute** > **Windows Server 2012 R2 Datacenter**, notice that the **Select a deployment model** list already shows **Classic**, and then select **Create**.
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure01.png" alt-text="Screenshot that shows the Azure portal with the New > Compute > Windows Server 2012 R2 Datacenter tile highlighted.":::
-
-1. In **Create VM**, enter the name of the VM to be created (*DNS01* in the scenario), the local administrator account, and password.
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure02.png" alt-text="Screenshot that shows how to create a VM by entering the name of the VM, local administrator user name, and password.":::
-
-1. Select **Optional Configuration** > **Network** > **Virtual Network**, and then select **TestVNet**. If **TestVNet** isn't available, make sure you're using the *Central US* location and have created the test environment described at the beginning of this article.
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure03.png" alt-text="Screenshot that shows the Optional Configuration > Network > Virtual Network > TestVNet option highlighted.":::
-
-1. In **Network**, make sure the subnet currently selected is *FrontEnd*, then select **IP addresses**, under **IP address assignment** select **Static**, and then enter *192.168.1.101* for **IP Address** as seen in the following screenshot.
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure04.png" alt-text="Screenshot that highlights the IP Addresses field where you type the static IP address.":::
-
-1. Select **OK** under **IP addresses**, select **OK** under **Network**, and then select **OK** under **Optional config**.
-
-1. Under **Create VM**, select **Create**. Notice the following tile displayed in your dashboard:
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure05.png" alt-text="Screenshot that shows the Creating Windows Server 2012 R2 Datacenter tile.":::
-
-## How to retrieve static private IP address information for a VM
-
-To view the static private IP address information for the VM created with the previous steps, execute the following steps.
-
-1. From the Azure portal, select **BROWSE ALL** > **Virtual machines (classic)** > **DNS01** > **All settings** > **IP addresses** and notice the following IP address assignment and IP address information:
-
- :::image type="content" source="./media/virtual-networks-static-ip-classic-pportal/figure06.png" alt-text=" Screenshot of create VM in Azure portal.":::
-
-## How to remove a static private IP address from a VM
-
-Under **IP addresses**, select **Dynamic** to the right of **IP address assignment**, select **Save**, and then select **Yes**, as shown in the following picture:
--
-## How to add a static private IP address to an existing VM
-
-1. Under **IP addresses**, shown previously, select **Static** to the right of **IP address assignment**.
-
-1. Type *192.168.1.101* for **IP address**, select **Save**, and then select **Yes**.
-
-## Set IP addresses within the operating system
-
-It's recommended that you don't statically assign the private IP assigned to the Azure virtual machine within the operating system of a VM, unless necessary. If you do manually set the private IP address within the operating system, ensure that it's the same address as the private IP address assigned to the Azure VM. Failure to match the IP address could result in loss of connectivity to the virtual machine.
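If you do set it manually, one quick check is to compare the OS-level address with the address Azure assigned (a sketch, assuming a Windows VM with PowerShell):

```azurepowershell
# Inside the VM: list the IPv4 addresses the OS has configured, then
# compare them with the private IP address assigned to the VM in Azure.
Get-NetIPAddress -AddressFamily IPv4 |
    Select-Object InterfaceAlias, IPAddress, PrefixOrigin
```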
-
-## Next steps
-
-* Learn about [reserved public IP](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip) addresses.
-
-* Learn about [instance-level public IP (ILPIP)](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) addresses.
-
-* Consult the [Reserved IP REST APIs](/previous-versions/azure/reference/dn722420(v=azure.100)).
virtual-network Virtual Networks Static Private Ip Classic Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-ps.md
- Title: Configure private IP addresses for VMs (Classic) - Azure PowerShell
-description: Learn how to configure private IP addresses for virtual machines (Classic) using PowerShell.
- Previously updated : 08/24/2023
-# Configure private IP addresses for a virtual machine (Classic) using PowerShell
----
-This article covers the classic deployment model. You can also [manage a static private IP address in the Resource Manager deployment model](virtual-networks-static-private-ip-arm-ps.md).
--
-The following sample PowerShell commands expect a simple environment already created. If you want to run the commands as they're displayed in this document, first build the test environment described in [Create a VNet](/previous-versions/azure/virtual-network/virtual-networks-create-vnet-classic-netcfg-ps).
-
-## How to verify if a specific IP address is available
-
-To verify if the IP address *192.168.1.101* is available in a VNet named *TestVNet*, run the following PowerShell command and verify the value for *IsAvailable*:
-
-```azurepowershell
-$tstip = @{
- VNetName = "TestVNet"
- IPAddress = "192.168.1.101"
-}
-Test-AzureStaticVNetIP @tstip
-
-```
-
-Expected output:
-
-```output
-IsAvailable : True
-AvailableAddresses : {}
-OperationDescription : Test-AzureStaticVNetIP
-OperationId : fd3097e1-5f4b-9cac-8afa-bba1e3492609
-OperationStatus : Succeeded
-```
-
-## How to specify a static private IP address when creating a VM
-
-The following PowerShell script creates a new cloud service named *TestService*. The script then retrieves an image from Azure and creates a VM named *DNS01* in the new cloud service. Finally, the script places the VM in a subnet named *FrontEnd* and sets *192.168.1.7* as a static private IP address for the VM:
--
-```azurepowershell
-$azsrv = @{
- ServiceName = "TestService"
- Location = "Central US"
-}
-New-AzureService @azsrv
-
-$image = Get-AzureVMImage | where {$_.ImageName -like "*RightImage-Windows-2012R2-x64*"}
-
-$azcfg = @{
- Name = "DNS01"
- InstanceSize = "Small"
- ImageName = $image.ImageName
-}
-
-$azprv = @{
- AdminUsername = "adminuser"
-}
-
-$azsub = @{
- SubnetNames = "FrontEnd"
-}
-
-$azip = @{
- IPAddress = "192.168.1.7"
-}
-
-$azvm = @{
- ServiceName = "TestService"
-    VNetName = "TestVNet"
-}
-New-AzureVMConfig @azcfg | Add-AzureProvisioningConfig @azprv -Windows | Set-AzureSubnet @azsub | Set-AzureStaticVNetIP @azip | New-AzureVM @azvm
-```
-
-Expected output:
-
-```output
-WARNING: No deployment found in service: 'TestService'.
-OperationDescription OperationId OperationStatus
--
-New-AzureService fcf705f1-d902-011c-95c7-b690735e7412 Succeeded
-New-AzureVM 3b99a86d-84f8-04e5-888e-b6fc3c73c4b9 Succeeded
-```
-
-## How to retrieve static private IP address information for a VM
-
-To view the static private IP address information for the VM created with the previous script, run the following PowerShell command and observe the values for *IpAddress*:
-
-```azurepowershell
-$vm = @{
- Name = "DNS01"
- ServiceName = "TestService"
-}
-Get-AzureVM @vm
-```
-
-Expected output:
-
-```output
-DeploymentName : TestService
-Name : DNS01
-Label :
-VM : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
-InstanceStatus : Provisioning
-IpAddress : 192.168.1.7
-InstanceStateDetails : Windows is preparing your computer for first use...
-PowerState : Started
-InstanceErrorCode :
-InstanceFaultDomain : 0
-InstanceName : DNS01
-InstanceUpgradeDomain : 0
-InstanceSize : Small
-HostName : rsR2-797
-AvailabilitySetName :
-DNSName : http://testservice000.cloudapp.net/
-Status : Provisioning
-GuestAgentStatus : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
-ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
-PublicIPAddress :
-PublicIPName :
-NetworkInterfaces : {}
-ServiceName : TestService
-OperationDescription : Get-AzureVM
-OperationId : 34c1560a62f0901ab75cde4fed8e8bd1
-OperationStatus : OK
-```
-
-## How to remove a static private IP address from a VM
-
-To remove the static private IP address added to the VM in the previous script, run the following PowerShell command:
-
-```azurepowershell
-$vm = @{
- Name = "DNS01"
- ServiceName = "TestService"
-}
-Get-AzureVM @vm | Remove-AzureStaticVNetIP | Update-AzureVM
-```
-
-Expected output:
-
-```output
-OperationDescription OperationId OperationStatus
--
-Update-AzureVM 052fa6f6-1483-0ede-a7bf-14f91f805483 Succeeded
-```
-
-## How to add a static private IP address to an existing VM
-
-To add a static private IP address to the VM created using the previous script, run the following command:
-
-```azurepowershell
-$vm = @{
-    Name = "DNS01"
-    ServiceName = "TestService"
-}
-
-$ip = @{
-    IPAddress = "192.168.1.7"
-}
-Get-AzureVM @vm | Set-AzureStaticVNetIP @ip | Update-AzureVM
-```
-
-Expected output:
-
-```output
-OperationDescription OperationId OperationStatus
--
-Update-AzureVM 77d8cae2-87e6-0ead-9738-7c7dae9810cb Succeeded
-```
-
-## Set IP addresses within the operating system
-
-It's recommended that you don't statically assign the private IP assigned to the Azure virtual machine within the operating system of a VM, unless necessary. If you do manually set the private IP address within the operating system, ensure that it's the same address as the private IP address assigned to the Azure VM. Failure to match the IP address could result in loss of connectivity to the virtual machine.
-
-## Next steps
-
-* Learn about [reserved public IP](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip) addresses.
-
-* Learn about [instance-level public IP (ILPIP)](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) addresses.
-
-* Consult the [Reserved IP REST APIs](/previous-versions/azure/reference/dn722420(v=azure.100)).
virtual-network Move Across Regions Nsg Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-portal.md
- Title: Move Azure network security group (NSG) to another Azure region - Azure portal
-description: Use an Azure Resource Manager template to move an Azure network security group from one Azure region to another using the Azure portal.
- Previously updated : 08/31/2019
-# Move Azure network security group (NSG) to another region using the Azure portal
-
-There are various scenarios in which you'd want to move your existing NSGs from one region to another. For example, you may want to create an NSG with the same configuration and security rules for testing. You may also want to move an NSG to another region as part of disaster recovery planning.
-
-Azure network security groups can't be moved from one region to another. However, you can use an Azure Resource Manager template to export the existing configuration and security rules of an NSG. You can then stage the resource in another region by exporting the NSG to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
--
-## Prerequisites
-- Make sure that the Azure network security group is in the Azure region from which you want to move.
-- Azure network security groups can't be moved between regions. You'll have to associate the new NSG to resources in the target region.
-- To export an NSG configuration and deploy a template to create an NSG in another region, you'll need the Network Contributor role or higher.
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, public IPs, and virtual networks.
-- Verify that your Azure subscription allows you to create NSGs in the target region that's used. Contact support to enable the required quota.
-- Make sure that your subscription has enough resources to support the addition of NSGs for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-## Prepare and move
-The following steps show how to prepare the network security group for the configuration and security rule move using a Resource Manager template, and move the NSG configuration and security rules to the target region using the portal.
--
-### Export the template and deploy from the portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com) > **Resource Groups**.
-2. Locate the Resource Group that contains the source NSG and click on it.
-3. Select > **Settings** > **Export template**.
-4. Choose **Deploy** in the **Export template** blade.
-5. Click **TEMPLATE** > **Edit parameters** to open the **parameters.json** file in the online editor.
-6. To edit the parameter of the NSG name, change the **value** property under **parameters**:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "networkSecurityGroups_myVM1_nsg_name": {
- "value": "<target-nsg-name>"
- }
- }
- }
- ```
-
-7. Change the source NSG value in the editor to a name of your choice for the target NSG. Ensure you enclose the name in quotes.
-
-8. Click **Save** in the editor.
-
-9. Click **TEMPLATE** > **Edit template** to open the **template.json** file in the online editor.
-
-10. To edit the target region where the NSG configuration and security rules will be moved, change the **location** property under **resources** in the online editor:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2019-06-01",
- "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
- "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
-                "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78"
- }
- }
- ]
-
- ```
-
-11. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces (for example, **Central US** = **centralus**).
-
-12. You can also change other parameters in the template if you choose; they're optional depending on your requirements:
-
- * **Security rules** - You can edit which rules are deployed into the target NSG by adding or removing rules to the **securityRules** section in the **template.json** file:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2019-06-01",
- "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
- "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
- "securityRules": [
- {
- "name": "RDP",
- "etag": "W/\"c630c458-6b52-4202-8fd7-172b7ab49cf5\"",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "TCP",
- "sourcePortRange": "*",
- "destinationPortRange": "3389",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 300,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
-            }
- ]
- }
- ```
-
- To complete the addition or the removal of the rules in the target NSG, you must also edit the custom rule types at the end of the **template.json** file in the format of the example below:
-
- ```json
- {
- "type": "Microsoft.Network/networkSecurityGroups/securityRules",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('networkSecurityGroups_myVM1_nsg_name'), '/Port_80')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroups_myVM1_nsg_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "*",
- "sourcePortRange": "*",
- "destinationPortRange": "80",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 310,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
- ```
-
-13. Click **Save** in the online editor.
-
-14. Click **BASICS** > **Subscription** to choose the subscription where the target NSG will be deployed.
-
-15. Click **BASICS** > **Resource group** to choose the resource group where the target NSG will be deployed. You can click **Create new** to create a new resource group for the target NSG. Ensure the name isn't the same as the source resource group of the existing NSG.
-
-16. Verify **BASICS** > **Location** is set to the target location where you wish for the NSG to be deployed.
-
-17. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor above.
-
-18. Check the box under **TERMS AND CONDITIONS**.
-
-19. Click the **Purchase** button to deploy the target network security group.
-
-## Discard
-
-If you wish to discard the target NSG, delete the resource group that contains the target NSG. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page.
-
-## Clean up
-
-To commit the changes and complete the move of the NSG, delete the source NSG or resource group. To do so, select the network security group or resource group from your dashboard in the portal and select **Delete** at the top of each page.
-
-## Next steps
-
-In this tutorial, you moved an Azure network security group from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Nsg Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-powershell.md
- Title: Move Azure network security group (NSG) to another Azure region - Azure PowerShell
-description: Use an Azure Resource Manager template to move an Azure network security group from one Azure region to another using Azure PowerShell.
- Previously updated : 08/31/2019
-# Move Azure network security group (NSG) to another region using Azure PowerShell
-
-There are various scenarios in which you'd want to move your existing NSGs from one region to another. For example, you may want to create an NSG with the same configuration and security rules for testing. You may also want to move an NSG to another region as part of disaster recovery planning.
-
-Azure network security groups can't be moved from one region to another. However, you can use an Azure Resource Manager template to export the existing configuration and security rules of an NSG. You can then stage the resource in another region by exporting the NSG to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information on Resource Manager and templates, see [Export resource groups to templates](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates).
--
-## Prerequisites
-- Make sure that the Azure network security group is in the Azure region from which you want to move.
-- Azure network security groups can't be moved between regions. You'll have to associate the new NSG to resources in the target region.
-- To export an NSG configuration and deploy a template to create an NSG in another region, you'll need the Network Contributor role or higher.
-
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, public IPs, and virtual networks.
-- Verify that your Azure subscription allows you to create NSGs in the target region that's used. Contact support to enable the required quota.
-- Make sure that your subscription has enough resources to support the addition of NSGs for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-## Prepare and move
-The following steps show how to prepare the network security group for the configuration and security rule move using a Resource Manager template, and move the NSG configuration and security rules to the target region using Azure PowerShell.
---
-### Export the template and deploy from a script
-
-1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the on-screen directions:
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
-2. Obtain the resource ID of the NSG you want to move to the target region and place it in a variable using [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup):
-
- ```azurepowershell-interactive
- $sourceNSGID = (Get-AzNetworkSecurityGroup -Name <source-nsg-name> -ResourceGroupName <source-resource-group-name>).Id
-
- ```
-3. Export the source NSG to a .json file in the directory where you run the command, by using [Export-AzResourceGroup](/powershell/module/az.resources/export-azresourcegroup):
-
- ```azurepowershell-interactive
- Export-AzResourceGroup -ResourceGroupName <source-resource-group-name> -Resource $sourceNSGID -IncludeParameterDefaultValue
- ```
-
-4. The downloaded file is named after the resource group the resource was exported from. Locate the exported file, named **\<resource-group-name>.json**, and open it in an editor of your choice:
-
- ```azurepowershell
- notepad <source-resource-group-name>.json
- ```
-
-5. To edit the parameter of the NSG name, change the **defaultValue** property of the source NSG name to the name of your target NSG. Ensure the name is in quotes:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "networkSecurityGroups_myVM1_nsg_name": {
- "defaultValue": "<target-nsg-name>",
- "type": "String"
- }
-    }
-}
-
- ```
--
-6. To edit the target region where the NSG configuration and security rules will be moved, change the **location** property under **resources**:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2019-06-01",
- "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
- "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
-            "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78"
- }
- }
- ```
-
-7. To obtain region location codes, you can use the Azure PowerShell cmdlet [Get-AzLocation](/powershell/module/az.resources/get-azlocation) by running the following command:
-
- ```azurepowershell-interactive
-
- Get-AzLocation | Format-Table
-
- ```
-8. You can also change other parameters in the **\<resource-group-name>.json** file if you choose; they're optional depending on your requirements:
-
- * **Security rules** - You can edit which rules are deployed into the target NSG by adding or removing rules to the **securityRules** section in the **\<resource-group-name>.json** file:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2019-06-01",
- "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
-            "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
- "securityRules": [
- {
- "name": "RDP",
- "etag": "W/\"c630c458-6b52-4202-8fd7-172b7ab49cf5\"",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "TCP",
- "sourcePortRange": "*",
- "destinationPortRange": "3389",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 300,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
-                }
-            ]
- }
-
- ```
-
- To complete the addition or the removal of the rules in the target NSG, you must also edit the custom rule types at the end of the **\<resource-group-name>.json** file in the format of the example below:
-
- ```json
- {
- "type": "Microsoft.Network/networkSecurityGroups/securityRules",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('networkSecurityGroups_myVM1_nsg_name'), '/Port_80')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroups_myVM1_nsg_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "*",
- "sourcePortRange": "*",
- "destinationPortRange": "80",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 310,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
- ```
-
-9. Save the **\<resource-group-name>.json** file.
-
-10. Create a resource group in the target region for the target NSG to be deployed using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name <target-resource-group-name> -location <target-region>
- ```
-
-11. Deploy the edited **\<resource-group-name>.json** file to the resource group created in the previous step using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
-
- ```azurepowershell-interactive
-
- New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
-
- ```
-
-12. To verify the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup):
-
- ```azurepowershell-interactive
-
- Get-AzResourceGroup -Name <target-resource-group-name>
-
- ```
-
- ```azurepowershell-interactive
-
- Get-AzNetworkSecurityGroup -Name <target-nsg-name> -ResourceGroupName <target-resource-group-name>
-
- ```
-
-## Discard
-
-After the deployment, if you want to start over or discard the NSG in the target region, delete the resource group that was created in the target region; the moved NSG is deleted along with it. To remove the resource group, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
-
-```azurepowershell-interactive
-
-Remove-AzResourceGroup -Name <target-resource-group-name>
-
-```
-
-## Clean up
-
-To commit the changes and complete the move of the NSG, delete the source NSG or resource group by using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) or [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup):
-
-```azurepowershell-interactive
-
-Remove-AzResourceGroup -Name <source-resource-group-name>
-
-```
-
-```azurepowershell-interactive
-
-Remove-AzNetworkSecurityGroup -Name <source-nsg-name> -ResourceGroupName <source-resource-group-name>
-
-```
-
-## Next steps
-
-In this tutorial, you moved an Azure network security group from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Publicip Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-publicip-portal.md
- Title: Move Azure Public IP configuration to another Azure region - Azure portal
-description: Use a template to move an Azure Public IP configuration from one Azure region to another using the Azure portal.
- Previously updated : 08/29/2019
-# Move Azure Public IP configuration to another region using the Azure portal
-
-There are various scenarios in which you'd want to move your existing Azure Public IP configurations from one region to another. For example, you may want to create a public IP with the same configuration and SKU for testing. You may also want to move a public IP configuration to another region as part of disaster recovery planning.
-
-**Azure Public IPs are region specific and can't be moved from one region to another.** However, you can use an Azure Resource Manager template to export the existing configuration of a public IP. You can then stage the resource in another region by exporting the public IP to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
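Before you export, it can help to record the source settings you'll want to reproduce, or deliberately change, in the target region (a sketch, assuming the Az PowerShell module; names are placeholders):

```azurepowershell
# Capture the source public IP's location, SKU, allocation method, and
# idle timeout so the redeployed resource can match them.
$pip = Get-AzPublicIpAddress -Name 'myPubIP' -ResourceGroupName 'sourceRG'
$pip | Select-Object Name, Location, PublicIpAllocationMethod, IdleTimeoutInMinutes,
    @{ Name = 'Sku'; Expression = { $_.Sku.Name } }
```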
--
-## Prerequisites
-- Make sure that the Azure Public IP is in the Azure region from which you want to move.
-- Azure Public IPs can't be moved between regions. You'll have to associate the new public IP to resources in the target region.
-- To export a public IP configuration and deploy a template to create a public IP in another region, you'll need the Network Contributor role or higher.
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, network security groups (NSGs), and virtual networks.
-- Verify that your Azure subscription allows you to create public IPs in the target region that's used. Contact support to enable the required quota.
-- Make sure that your subscription has enough resources to support the addition of public IPs for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-## Prepare and move
-The following steps show how to prepare the public IP for the configuration move using a Resource Manager template, and move the public IP configuration to the target region using the Azure portal.
-
-### Export the template and deploy from a script
-
-1. Sign in to the [Azure portal](https://portal.azure.com) > **Resource Groups**.
-2. Locate the Resource Group that contains the source public IP and click on it.
-3. Select > **Settings** > **Export template**.
-4. Choose **Deploy** in the **Export template** blade.
-5. Click **TEMPLATE** > **Edit parameters** to open the **parameters.json** file in the online editor.
-6. To edit the parameter of the public IP name, change the **value** property under **parameters** from the source public IP name to the name of your target public IP. Ensure the name is in quotes:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "publicIPAddresses_myVM1pubIP_name": {
- "value": "<target-publicip-name>"
- }
- }
- }
-
- ```
-7. Click **Save** in the editor.
-
-8. Click **TEMPLATE** > **Edit template** to open the **template.json** file in the online editor.
-
-9. To edit the target region where the public IP will be moved, change the **location** property under **resources**:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be",
- "ipAddress": "52.177.6.204",
- "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
- "idleTimeoutInMinutes": 4,
- "ipTags": []
- }
- }
- ]
- ```
-
-10. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces (for example, **Central US** = **centralus**).
-
-11. You can also change other parameters in the template if you choose; they're optional depending on your requirements:
-
- * **Sku** - You can change the SKU of the public IP in the configuration from standard to basic, or basic to standard, by altering the **sku** > **name** property in the **template.json** file:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- ```
-
- For more information on the differences between basic and standard SKU public IPs, see [Create, change, or delete a public IP address](./ip-services/virtual-network-public-ip-address.md).
-
- * **Public IP allocation method** and **Idle timeout** - You can change both of these options in the template by altering the **publicIPAllocationMethod** property from **Dynamic** to **Static** or **Static** to **Dynamic**. The idle timeout can be changed by altering the **idleTimeoutInMinutes** property to your desired amount. The default is **4**:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be",
- "ipAddress": "52.177.6.204",
- "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
- "idleTimeoutInMinutes": 4,
- "ipTags": []
-
- ```
-
- For more information on the allocation methods and the idle timeout values, see [Create, change, or delete a public IP address](./ip-services/virtual-network-public-ip-address.md).
--
-12. Click **Save** in the online editor.
-
-13. Click **BASICS** > **Subscription** to choose the subscription where the target public IP will be deployed.
-
-14. Click **BASICS** > **Resource group** to choose the resource group where the target public IP will be deployed. You can click **Create new** to create a new resource group for the target public IP. Ensure the name isn't the same as the source resource group of the existing source public IP.
-
-15. Verify **BASICS** > **Location** is set to the target location where you wish for the public IP to be deployed.
-
-16. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor above.
-
-17. Check the box under **TERMS AND CONDITIONS**.
-
-18. Click the **Purchase** button to deploy the target public IP.
-
-## Discard
-
-If you wish to discard the target public IP, delete the resource group that contains the target public IP. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page.
-
-## Clean up
-
-To commit the changes and complete the move of the public IP, delete the source public IP or resource group. To do so, select the public IP or resource group from your dashboard in the portal and select **Delete** at the top of each page.
-
-## Next steps
-
-In this tutorial, you moved an Azure Public IP from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Publicip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-publicip-powershell.md
- Title: Move Azure Public IP configuration to another Azure region - Azure PowerShell
-description: Use an Azure Resource Manager template to move an Azure Public IP configuration from one Azure region to another using Azure PowerShell.
- Previously updated : 12/08/2021
-# Move Azure Public IP configuration to another region using Azure PowerShell
-
-There are various scenarios in which you'd want to move your existing Azure Public IP configurations from one region to another. For example, you may want to create a public IP with the same configuration and SKU for testing. You may also want to move a public IP configuration to another region as part of disaster recovery planning.
-
-**Azure Public IPs are region specific and can't be moved from one region to another.** However, you can use an Azure Resource Manager template to export the existing configuration of a public IP. You can then stage the resource in another region by exporting the public IP to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information on Resource Manager and templates, see [Export resource groups to templates](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates).
--
-## Prerequisites
-- Make sure that the Azure Public IP is in the Azure region from which you want to move.
-- Azure Public IPs can't be moved between regions. You'll have to associate the new public IP to resources in the target region.
-- To export a public IP configuration and deploy a template to create a public IP in another region, you'll need the Network Contributor role or higher.
-
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, network security groups (NSGs), and virtual networks.
-- Verify that your Azure subscription allows you to create public IPs in the target region that's used. Contact support to enable the required quota.
-- Make sure that your subscription has enough resources to support the addition of public IPs for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-## Prepare and move
-The following steps show how to prepare the public IP for the configuration move using a Resource Manager template, and move the public IP configuration to the target region using Azure PowerShell.
---
-### Export the template and deploy from a script
-
-1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the on-screen directions:
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
-2. Obtain the resource ID of the public IP you want to move to the target region and place it in a variable using [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress):
-
- ```azurepowershell-interactive
- $sourcePubIPID = (Get-AzPublicIpAddress -Name <source-public-ip-name> -ResourceGroupName <source-resource-group-name>).Id
- ```
-3. Export the source public IP to a .json file in the directory where you run the command, by using [Export-AzResourceGroup](/powershell/module/az.resources/export-azresourcegroup):
-
- ```azurepowershell-interactive
- Export-AzResourceGroup -ResourceGroupName <source-resource-group-name> -Resource $sourcePubIPID -IncludeParameterDefaultValue
- ```
-
-4. The downloaded file is named after the resource group the resource was exported from. Locate the exported file, named **\<resource-group-name>.json**, and open it in an editor of your choice:
-
- ```azurepowershell
- notepad <source-resource-group-name>.json
- ```
-
-5. To edit the parameter of the public IP name, change the **defaultValue** property of the source public IP name to the name of your target public IP. Ensure the name is in quotes:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "publicIPAddresses_myVM1pubIP_name": {
- "defaultValue": "<target-publicip-name>",
- "type": "String"
- }
-    }
-}
- ```
-
-6. To edit the target region where the public IP will be moved, change the **location** property under resources:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be",
- "ipAddress": "52.177.6.204",
- "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
- "idleTimeoutInMinutes": 4,
- "ipTags": []
- }
- }
- ]
- ```
-
-7. To obtain region location codes, you can use the Azure PowerShell cmdlet [Get-AzLocation](/powershell/module/az.resources/get-azlocation) by running the following command:
-
- ```azurepowershell-interactive
- Get-AzLocation | Format-Table
- ```
-8. You can also change other parameters in the template if you choose; they're optional depending on your requirements:
-
- * **Sku** - You can change the SKU of the public IP in the configuration from standard to basic, or basic to standard, by altering the **sku** > **name** property in the **\<resource-group-name>.json** file:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- ```
-
- For more information on the differences between basic and standard SKU public IPs, see [Create, change, or delete a public IP address](./ip-services/virtual-network-public-ip-address.md).
-
- * **Public IP allocation method** and **Idle timeout** - You can change both of these options in the template by altering the **publicIPAllocationMethod** property from **Dynamic** to **Static** or **Static** to **Dynamic**. The idle timeout can be changed by altering the **idleTimeoutInMinutes** property to your desired amount. The default is **4**:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2019-06-01",
- "name": "[parameters('publicIPAddresses_myPubIP_name')]",
- "location": "<target-region>",
- "sku": {
- "name": "Basic",
- "tier": "Regional"
- },
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "7549a8f1-80c2-481a-a073-018f5b0b69be",
- "ipAddress": "52.177.6.204",
- "publicIPAddressVersion": "IPv4",
- "publicIPAllocationMethod": "Dynamic",
- "idleTimeoutInMinutes": 4,
- "ipTags": []
- }
- }
- ```
-
- For more information on the allocation methods and the idle timeout values, see [Create, change, or delete a public IP address](./ip-services/virtual-network-public-ip-address.md).
--
-9. Save the **\<resource-group-name>.json** file.
-
-10. Create a resource group in the target region for the target public IP to be deployed using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name <target-resource-group-name> -location <target-region>
- ```
-11. Deploy the edited **\<resource-group-name>.json** file to the resource group created in the previous step using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
-
- ```azurepowershell-interactive
- New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
- ```
-
-12. To verify the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzPublicIPAddress](/powershell/module/az.network/get-azpublicipaddress):
-
- ```azurepowershell-interactive
- Get-AzResourceGroup -Name <target-resource-group-name>
- ```
-
- ```azurepowershell-interactive
- Get-AzPublicIPAddress -Name <target-publicip-name> -ResourceGroupName <target-resource-group-name>
- ```
-## Discard
-
-After the deployment, if you want to start over or discard the public IP in the target region, delete the resource group that was created in the target region; the moved public IP is deleted along with it. To remove the resource group, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name <target-resource-group-name>
-```
-
-## Clean up
-
-To commit the changes and complete the move of the public IP, delete the source public IP or resource group by using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) or [Remove-AzPublicIPAddress](/powershell/module/az.network/remove-azpublicipaddress):
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name <source-resource-group-name>
-```
-
-```azurepowershell-interactive
-Remove-AzPublicIpAddress -Name <source-publicip-name> -ResourceGroupName <source-resource-group-name>
-```
-
-## Next steps
-
-In this tutorial, you moved an Azure Public IP from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Vnet Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-vnet-portal.md
- Title: Move an Azure virtual network to another Azure region - Azure portal.
-description: Move an Azure virtual network from one Azure region to another by using a Resource Manager template and the Azure portal.
- Previously updated : 08/26/2019
-# Move an Azure virtual network to another region by using the Azure portal
-
-There are various scenarios for moving an existing Azure virtual network from one region to another. For example, you might want to create a virtual network with the same configuration for testing and availability as your existing virtual network. Or you might want to move a production virtual network to another region as part of your disaster recovery planning.
-
-You can use an Azure Resource Manager template to complete the move of the virtual network to another region. You do this by exporting the virtual network to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information about Resource Manager templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
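A condensed sketch of that flow in Azure PowerShell (placeholder names; the portal steps that follow do the same thing interactively):

```azurepowershell
# Export the source virtual network to a template file, edit the location
# (and any other parameters) in the .json, then redeploy the template to a
# resource group in the target region.
$vnetId = (Get-AzVirtualNetwork -Name 'myVNET1' -ResourceGroupName 'sourceRG').Id
Export-AzResourceGroup -ResourceGroupName 'sourceRG' -Resource $vnetId -IncludeParameterDefaultValue

New-AzResourceGroup -Name 'targetRG' -Location 'centralus'
New-AzResourceGroupDeployment -ResourceGroupName 'targetRG' -TemplateFile '.\sourceRG.json'
```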
--
-## Prerequisites
-- Make sure that your virtual network is in the Azure region that you want to move from.
-- To export a virtual network and deploy a template to create a virtual network in another region, you need to have the Network Contributor role or higher.
-- Virtual network peerings won't be re-created, and they'll fail if they're still present in the template. Before you export the template, you have to remove any virtual network peers. You can then reestablish them after the virtual network move.
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, network security groups (NSGs), and public IPs.
-- Verify that your Azure subscription allows you to create virtual networks in the target region. To enable the required quota, contact support.
-- Make sure that your subscription has enough resources to support the addition of virtual networks for this process. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-## Prepare for the move
-In this section, you prepare the virtual network for the move by using a Resource Manager template. You then move the virtual network to the target region by using the Azure portal.
-
-To export the virtual network and deploy the target virtual network by using the Azure portal, do the following:
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource Groups**.
-1. Locate the resource group that contains the source virtual network, and then select it.
-1. Select **Automation** > **Export template**.
-1. In the **Export template** pane, select **Deploy**.
-1. To open the *parameters.json* file in your online editor, select **Template** > **Edit parameters**.
-1. To edit the parameter of the virtual network name, change the **value** property under **parameters**:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "virtualNetworks_myVNET1_name": {
- "value": "<target-virtual-network-name>"
- }
- }
- }
- ```
-
-1. In the editor, change the source virtual network name value to a name that you want for the target virtual network. Be sure to enclose the name in quotation marks.
-
-1. Select **Save** in the editor.
-
-1. To open the *template.json* file in the online editor, select **Template** > **Edit template**.
-
-1. In the online editor, to edit the target region where the virtual network will be moved, change the **location** property under **resources**:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2019-06-01",
- "name": "[parameters('virtualNetworks_myVNET1_name')]",
- "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
- "addressSpace": {
- "addressPrefixes": [
- "10.0.0.0/16"
- ]
- },
-
- ```
-
-1. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name, without spaces (for example, **Central US** = **centralus**).
-
-1. (Optional) You can also change other parameters in the template, depending on your requirements:
-
- * **Address Space**: Before you save the file, you can alter the address space of the virtual network by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2019-06-01",
- "name": "[parameters('virtualNetworks_myVNET1_name')]",
-        "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
- "addressSpace": {
- "addressPrefixes": [
- "10.0.0.0/16"
- ]
- },
-
- ```
-
- * **Subnet**: You can change or add to the subnet name and the subnet address space by changing the template's **subnets** section. You can change the name of the subnet by changing the **name** property. And you can change the subnet address space by changing the **addressPrefix** property:
-
- ```json
- "subnets": [
- {
- "name": "subnet-1",
- "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.0.0/24",
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- },
- {
- "name": "GatewaySubnet",
- "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.1.0/29",
- "serviceEndpoints": [],
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- }
-
- ]
- ```
-
- To change the address prefix in the *template.json* file, edit it in two places: in the code in the preceding section and in the **type** section of the following code. Change the **addressPrefix** property in the following code to match the **addressPrefix** property in the code in the preceding section.
-
- ```json
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/GatewaySubnet')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.1.0/29",
- "serviceEndpoints": [],
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- },
- {
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/subnet-1')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.0.0/24",
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- }
- ]
- ```
-
-1. In the online editor, select **Save**.
-
-1. To choose the subscription where the target virtual network will be deployed, select **Basics** > **Subscription**.
-
-1. To choose the resource group where the target virtual network will be deployed, select **Basics** > **Resource group**.
-
-    If you need to create a new resource group for the target virtual network, select **Create new**. Make sure that the name isn't the same as the name of the source resource group that contains the existing virtual network.
-
-1. Verify that **Basics** > **Location** is set to the target location where you want the virtual network to be deployed.
-
-1. Under **Settings**, verify that the name matches the name that you entered previously in the parameters editor.
-
-1. Select the **Terms and Conditions** check box.
-
-1. To deploy the target virtual network, select **Purchase**.
-
-## Delete the target virtual network
-
-To discard the target virtual network, delete the resource group that contains it. To do so:
-1. On the Azure portal dashboard, select the resource group.
-1. At the top of the **Overview** pane, select **Delete**.
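-
-If you prefer scripting, a hedged equivalent using the Azure PowerShell cmdlet [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) follows; the resource group name is a placeholder for your own value:
-
-```azurepowershell-interactive
-# Deleting the resource group also deletes the target virtual network inside it
-Remove-AzResourceGroup -Name <target-resource-group-name>
-```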
-
-## Clean up
-
-To commit the changes and complete the virtual network move, you delete the source virtual network or resource group. To do so:
-1. On the Azure portal dashboard, select the virtual network or resource group.
-1. At the top of each pane, select **Delete**.
-
-## Next steps
-
-In this tutorial, you moved an Azure virtual network from one region to another by using the Azure portal and then cleaned up the unneeded source resources. To learn more about moving resources between regions and disaster recovery in Azure, see:
-
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure virtual machines to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-vnet-powershell.md
- Title: Move an Azure virtual network to another Azure region - Azure PowerShell
-description: Move an Azure virtual network from one Azure region to another by using a Resource Manager template and Azure PowerShell.
-Previously updated : 08/26/2019
-# Move an Azure virtual network to another region by using Azure PowerShell
-
-There are various scenarios for moving an existing Azure virtual network from one region to another. For example, you might want to create a virtual network with the same configuration for testing and availability as your existing virtual network. Or you might want to move a production virtual network to another region as part of your disaster recovery planning.
-
-You can use an Azure Resource Manager template to complete the move of the virtual network to another region. You do this by exporting the virtual network to a template, modifying the parameters to match the destination region, and then deploying the template to the new region. For more information about Resource Manager templates, see [Export resource groups to templates](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates).
--
-## Prerequisites
-- Make sure that your virtual network is in the Azure region that you want to move from.
-
-- To export a virtual network and deploy a template to create a virtual network in another region, you need to have the Network Contributor role or higher.
-
-- Virtual network peerings won't be re-created, and they'll fail if they're still present in the template. Before you export the template, you have to remove any virtual network peerings; a sketch for doing so follows this list. You can then reestablish them after the virtual network move.
-
-- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, network security groups (NSGs), and public IPs.
-
-- Verify that your Azure subscription allows you to create virtual networks in the target region. To enable the required quota, contact support.
-
-- Make sure that your subscription has enough resources to support the addition of virtual networks for this process. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
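-
-If you need to remove a peering before the export, a minimal sketch using the Azure PowerShell cmdlet [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering) follows; the names are placeholders for your own values:
-
-```azurepowershell-interactive
-# Remove a peering from the source virtual network before exporting the template
-Remove-AzVirtualNetworkPeering -Name <peering-name> -VirtualNetworkName <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>
-```
-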
-## Prepare for the move
-In this section, you prepare the virtual network for the move by using a Resource Manager template. You then move the virtual network to the target region by using Azure PowerShell commands.
--
-To export the virtual network and deploy the target virtual network by using PowerShell, do the following:
-
-1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command, and then follow the on-screen directions:
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
-1. Obtain the resource ID of the virtual network that you want to move to the target region, and then place it in a variable by using [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork):
-
- ```azurepowershell-interactive
- $sourceVNETID = (Get-AzVirtualNetwork -Name <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>).Id
- ```
-
-1. Use [Export-AzResourceGroup](/powershell/module/az.resources/export-azresourcegroup) to export the source virtual network to a .json file in the directory where you run the command:
-
- ```azurepowershell-interactive
- Export-AzResourceGroup -ResourceGroupName <source-resource-group-name> -Resource $sourceVNETID -IncludeParameterDefaultValue
- ```
-
-1. The downloaded file has the same name as the resource group that the resource was exported from. Locate the *\<resource-group-name>.json* file, which you exported with the command, and then open it in your editor:
-
- ```azurepowershell
- notepad <source-resource-group-name>.json
- ```
-
-1. To edit the parameter of the virtual network name, change the **defaultValue** property of the source virtual network name to the name of your target virtual network. Be sure to enclose the name in quotation marks.
-
- ```json
-    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "virtualNetworks_myVNET1_name": {
- "defaultValue": "<target-virtual-network-name>",
- "type": "String"
- }
- ```
-
-1. To edit the target region where the virtual network will be moved, change the **location** property under resources:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2019-06-01",
- "name": "[parameters('virtualNetworks_myVNET1_name')]",
- "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
- "addressSpace": {
- "addressPrefixes": [
- "10.0.0.0/16"
- ]
- },
-
- ```
-
-1. To obtain region location codes, you can use the Azure PowerShell cmdlet [Get-AzLocation](/powershell/module/az.resources/get-azlocation) by running the following command:
-
- ```azurepowershell-interactive
-
-    Get-AzLocation | Format-Table
- ```
-
-1. (Optional) You can also change other parameters in the *\<resource-group-name>.json* file, depending on your requirements:
-
- * **Address Space**: Before you save the file, you can alter the address space of the virtual network by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property:
-
- ```json
- "resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2019-06-01",
- "name": "[parameters('virtualNetworks_myVNET1_name')]",
-        "location": "<target-region>",
- "properties": {
- "provisioningState": "Succeeded",
- "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
- "addressSpace": {
- "addressPrefixes": [
- "10.0.0.0/16"
- ]
- },
- ```
-
- * **Subnet**: You can change or add to the subnet name and the subnet address space by changing the file's **subnets** section. You can change the name of the subnet by changing the **name** property. And you can change the subnet address space by changing the **addressPrefix** property:
-
- ```json
- "subnets": [
- {
- "name": "subnet-1",
- "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.0.0/24",
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- },
- {
- "name": "GatewaySubnet",
- "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.1.0/29",
- "serviceEndpoints": [],
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- }
-
- ]
- ```
-
-    To change the address prefix, edit the file in two places: in the code in the preceding section and in the **type** section of the following code. Change the **addressPrefix** property in the following code to match the **addressPrefix** property in the code in the preceding section. (A scripted way to keep the two in sync is sketched after the following code block.)
-
- ```json
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/GatewaySubnet')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.1.0/29",
- "serviceEndpoints": [],
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- },
- {
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2019-06-01",
- "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/subnet-1')]",
- "dependsOn": [
- "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
- ],
- "properties": {
- "provisioningState": "Succeeded",
- "addressPrefix": "10.0.0.0/24",
- "delegations": [],
- "privateEndpointNetworkPolicies": "Enabled",
- "privateLinkServiceNetworkPolicies": "Enabled"
- }
- }
- ]
- ```
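-
-    If you'd rather script the edit, here's a hedged sketch; it assumes the prefixes shown in this example and simply rewrites every occurrence in the file so both sections stay in sync:
-
-    ```azurepowershell-interactive
-    # Replace the subnet address prefix everywhere it appears in the exported template
-    $templateFile = '<source-resource-group-name>.json'
-    (Get-Content $templateFile -Raw) -replace '10\.0\.1\.0/29', '10.0.1.0/28' |
-        Set-Content $templateFile
-    ```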
-
-1. Save the *\<resource-group-name>.json* file.
-
-1. Use [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) to create a resource group in the target region in which to deploy the target virtual network:
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name <target-resource-group-name> -location <target-region>
- ```
-
-1. Deploy the edited *\<resource-group-name>.json* file to the resource group that you created in the previous step by using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
-
- ```azurepowershell-interactive
-
- New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
- ```
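-
-    Optionally, you can validate the edited template first with [Test-AzResourceGroupDeployment](/powershell/module/az.resources/test-azresourcegroupdeployment), which reports template errors without deploying anything:
-
-    ```azurepowershell-interactive
-    # Validate the template against the target resource group before deploying
-    Test-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
-    ```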
-
-1. To verify that the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork):
-
- ```azurepowershell-interactive
-
- Get-AzResourceGroup -Name <target-resource-group-name>
- ```
-
- ```azurepowershell-interactive
-
- Get-AzVirtualNetwork -Name <target-virtual-network-name> -ResourceGroupName <target-resource-group-name>
- ```
-
-## Delete the virtual network or resource group
-
-After you've deployed the virtual network, if you want to start over or discard the virtual network in the target region, delete the resource group that you created in the target region. Deleting the resource group also deletes the moved virtual network.
-
-To remove the resource group, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
-
-```azurepowershell-interactive
-
-Remove-AzResourceGroup -Name <target-resource-group-name>
-```
-
-## Clean up
-
-To commit your changes and complete the virtual network move, do either of the following:
-
-* Delete the resource group by using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
-
- ```azurepowershell-interactive
-
- Remove-AzResourceGroup -Name <source-resource-group-name>
- ```
-
-* Delete the source virtual network by using [Remove-AzVirtualNetwork](/powershell/module/az.network/remove-azvirtualnetwork):
-    ```azurepowershell-interactive
-
- Remove-AzVirtualNetwork -Name <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>
- ```
-
-## Next steps
-
-In this tutorial, you moved a virtual network from one region to another by using PowerShell and then cleaned up the unneeded source resources. To learn more about moving resources between regions and disaster recovery in Azure, see:
-
-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
-- [Move Azure virtual machines to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Virtual Network Powershell Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-multi-tier-application.md
- Title: Create a VNet for multi-tier applications - Azure PowerShell script sample
-description: Create a virtual network for multi-tier applications - Azure PowerShell script sample.
-Previously updated : 03/28/2023
-# Create a network for multi-tier applications script sample
-
-This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you'll have two virtual machines, one in each subnet, to which you can deploy web server and MySQL software.
-
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
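-
-For example, here's a hedged sketch of the kind of front-end rule the script creates with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig); the rule name and priority are illustrative, and the full sample below is authoritative:
-
-```azurepowershell-interactive
-# Allow inbound HTTP from the internet to the front-end subnet
-$ruleHttp = New-AzNetworkSecurityRuleConfig -Name 'Allow-HTTP-All' -Access Allow `
-    -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix Internet `
-    -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80
-```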
-
-## Sample script
--
-A subnet ID is assigned only after you create a virtual network with the New-AzVirtualNetwork cmdlet and its -Subnet option. If you configure the subnet by using the New-AzVirtualNetworkSubnetConfig cmdlet before the call to New-AzVirtualNetwork, you won't see the subnet ID until after you call New-AzVirtualNetwork, as the sketch below illustrates.
-
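-A minimal sketch of that behavior (the names here are hypothetical; the sample script below uses its own):
-
-```azurepowershell-interactive
-# The subnet configuration object has no ID yet
-$frontEnd = New-AzVirtualNetworkSubnetConfig -Name 'MySubnet-FrontEnd' -AddressPrefix '10.0.1.0/24'
-
-# The ID is assigned when the virtual network is created
-$vnet = New-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet' `
-    -Location 'eastus' -AddressPrefix '10.0.0.0/16' -Subnet $frontEnd
-$vnet.Subnets[0].Id
-```
-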
-[!code-azurepowershell-interactive [main](../../../powershell_scripts/virtual-network/virtual-network-multi-tier-application/virtual-network-multi-tier-application.ps1 "Virtual network for multi-tier application")]
-
-## Clean up deployment
-
-Run the following command to remove the resource group, VM, and all related resources:
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup -Force
-```
-
-## Script explanation
-
-This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
-
-| Command | Notes |
-|||
-| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. |
-| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates an Azure virtual network and front-end subnet. |
-| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates a back-end subnet. |
-| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates a public IP address to access the VM from the internet. |
-| [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) | Creates virtual network interfaces and attaches them to the virtual network's front-end and back-end subnets. |
-| [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) | Creates network security groups (NSGs) that are associated with the front-end and back-end subnets. |
-| [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) |Creates NSG rules that allow or block specific ports to specific subnets. |
-| [New-AzVM](/powershell/module/az.compute/new-azvm) | Creates virtual machines and attaches a NIC to each VM. This command also specifies the virtual machine image to use and administrative credentials. |
-| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group and all resources it contains. |
-
-## Next steps
-
-For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure/).
-
-More virtual network PowerShell script samples can be found in [Virtual network PowerShell samples](../powershell-samples.md).
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Labels provide a mechanism to logically group route tables. This is especially helpful during propagation of routes from connections to multiple route tables.
Configuring static routes provides a mechanism to steer traffic from the hub through a next hop IP, which could be of a Network Virtual Appliance (NVA) provisioned in a Spoke VNet attached to a virtual hub. The static route is composed of a route name, list of destination prefixes, and a next hop IP.
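
A hedged Azure PowerShell sketch of defining a static route and adding it to a hub route table follows. It assumes the Az.Network cmdlets New-AzVHubRoute and Update-AzVHubRouteTable with the parameters shown, and uses placeholder names and IDs; note that in a hub route table the next hop is expressed as a connection resource ID rather than a bare IP address:

```azurepowershell-interactive
# Define a static route: a name, destination prefixes, and a next hop
$route = New-AzVHubRoute -Name 'nva-route' -Destination @('10.2.0.0/16') `
    -DestinationType 'CIDR' -NextHop '<hub-vnet-connection-resource-id>' -NextHopType 'ResourceId'

# Add the route to an existing custom route table on the hub
Update-AzVHubRouteTable -ResourceGroupName '<resource-group>' -VirtualHubName '<hub-name>' `
    -Name '<route-table-name>' -Route @($route)
```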
+### <a name="delete-route"></a>Deleting static routes
+
+To delete a static route, the route must be deleted from the route table that it's placed in. See [Delete a route](how-to-virtual-hub-routing.md#delete-a-route) for steps.
+
## <a name="route"></a>Route tables for pre-existing routes

Route tables now have features for association and propagation. A pre-existing route table is a route table that doesn't have these features. If you have pre-existing routes in hub routing and would like to use the new capabilities, consider the following:
virtual-wan How To Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing.md
Previously updated : 06/30/2023 Last updated : 01/10/2024

# How to configure virtual hub routing - Azure portal
-A virtual hub can contain multiple gateways such as a site-to-site VPN gateway, ExpressRoute gateway, point-to-site gateway, and Azure Firewall. The routing capabilities in the virtual hub are provided by a router that manages all routing, including transit routing, between the gateways using Border Gateway Protocol (BGP). The virtual hub router also provides transit connectivity between virtual networks that connect to a virtual hub and can support up to an aggregate throughput of 50 Gbps. These routing capabilities apply to customers using **Standard** Virtual WANs. For more information, see [About virtual hub routing](about-virtual-hub-routing.md).
- This article helps you configure virtual hub routing using Azure portal. You can also configure virtual hub routing using the [Azure PowerShell steps](how-to-virtual-hub-routing-powershell.md).
+A virtual hub can contain multiple gateways such as a site-to-site VPN gateway, ExpressRoute gateway, point-to-site gateway, and Azure Firewall. The routing capabilities in the virtual hub are provided by a router that manages all routing, including transit routing, between the gateways using Border Gateway Protocol (BGP). The virtual hub router also provides transit connectivity between virtual networks that connect to a virtual hub and can support up to an aggregate throughput of 50 Gbps. These routing capabilities apply to customers using **Standard** Virtual WANs. For more information, see [About virtual hub routing](about-virtual-hub-routing.md).
+
## Create a route table
-1. In the Azure portal, navigate to the **virtual hub**.
-1. On the **Virtual HUB** page, in the left pane, select **Route Tables**. The **Route Tables** page will populate the current route tables for this hub.
+The following steps help you create a route table and a route.
+
+1. In the Azure portal, go to the **virtual hub**.
+1. On the **Virtual HUB** page, in the left pane, select **Route Tables** to open the Route Tables page. Notice the route tables that are propagated to this virtual hub.
1. Select **+ Create route table** to open the **Create Route Table** page.
-1. On the **Basics** page, complete the following fields, then click **Labels** to move to the Labels page.
+1. On the **Basics** tab, complete the following fields, then click **Labels** to move to the Labels page.
:::image type="content" source="./media/how-to-virtual-hub-routing/basics.png" alt-text="Screenshot showing the Create Route Table page Basics tab." lightbox="./media/how-to-virtual-hub-routing/basics.png":::
This article helps you configure virtual hub routing using Azure portal. You can
## Edit a route table
-In the Azure portal, go to your **Virtual HUB -> Route Tables** page. To open the **Edit route table page**, click the name of the route table you want to edit. Edit the values you want to change, then click **Review + create** or **Create** (depending on the page that you are on) to save your settings.
+1. Go to the virtual hub and, in the left pane, click **Route Tables**. On the **Route Tables** page, click the name of the route table you want to edit.
+1. On the **Edit route table** page, on each tab, edit the values that you want to change.
+1. On the **Propagations** page, click **Create** to update the route table with new route information.
+
+## Edit a route
+
+1. Go to the virtual hub and, in the left pane, click **Route Tables**. On the **Route Tables** page, click the name of the route table that contains the route you want to edit.
+1. On the **Edit route table** page, locate the route from the list and make the applicable changes. Then, click **Review + create**.
+1. On the **Propagations** page, make any additional changes (if necessary), then click **Create** to update the route table with new route information.
+1. As long as no errors occur, the route is updated.
+
+## Delete a route
+
+1. Go to the virtual hub and, in the left pane, click **Route Tables**. On the **Route Tables** page, click the name of the route table that contains the route you want to edit.
+1. On the **Edit route table** page, locate the route from the list. Use the scroll bar to navigate to the right. You'll see an ellipsis (three dots) at the end of the line for the route. Click the ellipsis to reveal the **Remove** button. Click **Remove**.
+1. At the bottom of the page, click **Review + create**, and then **Create**.
+1. As long as no errors occur, the route is removed.
## Delete a route table
-In the Azure portal, go to your **Virtual HUB -> Route Tables** page. Select the checkbox for route table that you want to delete. Click **"…"**, and then select **Delete**. You can't delete a Default or None route table. However, you can delete all custom route tables.
+You can't delete a **Default** or **None** route table. However, you can delete all custom route tables.
+
+1. Go to the virtual hub and, in the left pane, click **Route Tables**. On the **Route Tables** page, select the checkbox for the route table that you want to delete (don't click the name).
+1. On the right side of the line that the route table is on, you'll see an ellipsis (three dots). Click the ellipsis, then select **Delete** from the dropdown list.
+1. On the **Delete** page, confirm that you want to delete the route table, then click **Delete**.
+1. As long as no errors occur, the route table is deleted.
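+
+If you manage the hub with scripts instead, a hedged sketch using the Az.Network cmdlet Remove-AzVHubRouteTable (assuming this cmdlet and placeholder names) achieves the same result:
+
+```azurepowershell-interactive
+# Delete a custom route table from a virtual hub
+Remove-AzVHubRouteTable -ResourceGroupName '<resource-group>' -VirtualHubName '<hub-name>' -Name '<route-table-name>'
+```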
## View effective routes