Updates from: 08/13/2024 01:05:23
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Provisioned Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-migration.md
+
+ Title: 'Azure OpenAI Provisioned August 2024 Update'
+
+description: Learn about the improvements to Provisioned Throughput
+Last updated : 08/07/2024
+recommendations: false
+
+# Azure OpenAI provisioned August 2024 update
+
+In mid-August 2024, Microsoft launched improvements to its Provisioned Throughput offering that address customer feedback on usability and operational agility, and that open new payment options and deployment scenarios.
+
+This article is intended for existing users of the provisioned throughput offering. New customers should refer to the [Azure OpenAI provisioned onboarding guide](../how-to/provisioned-throughput-onboarding.md).
+
+## What's changing?
+
+The capabilities below are rolling out for the Provisioned Managed offering.
+
+> [!IMPORTANT]
+> The changes in this article do not apply to the older *"Provisioned Classic (PTU-C)"* offering. They only affect the Provisioned (also known as the Provisioned Managed) offering.
+
+### Usability improvements
+
+|Feature | Benefit|
+|---|---|
+|Model-independent quota | A single quota limit covering all models/versions reduces quota administration and accelerates experimentation with new models. |
+|Self-service quota requests | Request quota increases without engaging the sales team; many can be autoapproved. |
+|Default provisioned-managed quota in many regions | Get started quickly without having to first request quota. |
+|Transparent information on real-time capacity availability + New deployment flow | Reduced negotiation around availability accelerates time-to-market. |
+
+### New hourly/reservation commercial model
+
+|Feature | Benefit|
+|---|---|
+|Hourly, uncommitted usage | Hourly payment option without a required commitment enables short-term deployment scenarios. |
+|Term discounts via Azure Reservations | Azure reservations provide substantial discounts over the hourly rate for one-month and one-year terms, and provide flexible scopes that minimize the administration associated with today's resource-bound commitments.|
+| Default provisioned-managed quota in many regions | Get started quickly in new regions without having to first request quota. |
+| Flexible choice of payment model for existing provisioned customers | Customers with commitments can stay on the commitment model at least through the end of 2024, and can choose to migrate existing commitments to hourly/reservations via a self-service or managed process. |
+| Supports latest model generations | The hourly/reservation model is required to deploy models released after August 1, 2024. |
+
+## Usability improvement details
+
+Provisioned quota granularity is changing from model-specific to model-independent. Rather than each model and version within a subscription and region having its own quota limit, a single quota item per subscription and region limits the total number of PTUs that can be deployed across all supported models and versions.
+
+## Model-independent quota
+
+Starting August 12, 2024, existing customers' model-specific quota is converted to model-independent quota. This happens automatically. No quota is lost in the transition. Existing quota limits are summed and assigned to a new model-independent quota item.
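+
+For example, a subscription that previously had separate limits of 300 PTUs for GPT-4 and 200 PTUs for GPT-35-Turbo in a region would now have a single, model-independent limit of 500 PTUs for that region (illustrative numbers).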
+
+The new model-independent quota shows up as a quota item named **Provisioned Managed Throughput Unit**, with model and version no longer included in the name. In the Studio Quota pane, expanding the quota item still shows all of the deployments that contribute to the quota item.
+
+### Default quota
+
+New and existing subscriptions are assigned a small amount of provisioned quota in many regions. This allows customers to start using those regions without having to first request quota.
+
+For existing customers, if the region already contains a quota assignment, the quota limit isn't changed for the region. Specifically, it isn't automatically increased by the new default amount.
+
+### Self-service quota requests
+
+Customers no longer obtain quota by contacting their sales teams. Instead, they use the self-service quota request form and specify the PTU-Managed quota type. The form is accessible from a link to the right of the quota item. The target is to respond to all quota requests within two business days.
+
+The quota screenshot below shows model-independent quota being used by deployments of different types, as well as the link for requesting additional quota.
+
+## Quota as a limit
+
+Prior to the August update, Azure OpenAI Provisioned was only available to a few customers, and quota was allocated to maximize the ability for them to deploy and use it. With these changes, the process of acquiring quota is simplified for all users, and there is a greater likelihood of running into service capacity limitations when deployments are attempted. A new API and Studio experience are available to help users find regions where the subscription has quota and the service has capacity to support deployments of a desired model.
+
+We also recommend that customers using commitments now create their deployments prior to creating or expanding commitments to cover them. This guarantees that capacity is available prior to creating a commitment and prevents over-purchase of the commitment. To support this, the restriction that prevented deployments from being created larger than their commitments has been removed. This new approach to quota, capacity availability and commitments matches what is provided under the hourly/reservation model, and the guidance to deploy before purchasing a commitment (or reservation, for the hourly model) is the same for both.
+
+See the following links for more information. The guidance for reservations and commitments is the same:
+
+* [Capacity transparency](./provisioned-throughput.md#capacity-transparency)
+* [Sizing reservations](../how-to/provisioned-throughput-onboarding.md#important-sizing-azure-openai-provisioned-reservations)
+
+## New hourly reservation payment model
+
+> [!NOTE]
+> The following discussion of payment models does not apply to the older "Provisioned Classic (PTU-C)" offering. It only affects the Provisioned (also known as Provisioned Managed) offering. Provisioned Classic continues to be governed by the monthly commitment payment model, unchanged from today.
+
+Microsoft has introduced a new "Hourly/reservation" payment model for provisioned deployments. This is in addition to the current **Commitment** payment model, which will continue to be supported at least through the end of 2024.
+
+### Commitment payment model
+
+- Regional, monthly commitment is required to use provisioned (longer terms available contractually).
+
+- Commitments are bound to Azure OpenAI resources, making moving deployments across resources difficult.
+
+- Commitments can't be canceled or altered during their term, except to add new PTUs.
+
+- Supports models released prior to August 1, 2024.
+
+### Hourly reservation payment model
+
+- The payment model is aligned with Azure standards for other products.
+
+- Hourly usage is supported, without commitment.
+
+- One month and one year term discounts can be purchased as regional Azure Reservations.
+
+- Reservations can be flexibly scoped to cover multiple subscriptions, and the scope can be changed mid-term.
+
+- Supports all models, both old and new.
+
+> [!IMPORTANT]
+> **Models released after August 1, 2024 require the use of the Hourly/Reservation payment model.** They are not deployable on Azure OpenAI resources that have active commitments. To deploy models released after August 1, existing customers must either:
+> - Create deployments on Azure OpenAI resources without commitments.
+> - Migrate an existing resource off its commitments.
+
+## Hourly reservation model details
+
+Details on the hourly/reservation model can be found in the [Azure OpenAI Provisioned Onboarding Guide](../how-to/provisioned-throughput-onboarding.md).
+
+### Commitment and hourly reservation coexistence
+
+Customers that have commitments aren't required to use the hourly/reservation model. They can continue to use existing commitments, purchase new commitments, and manage commitments as they do currently.
+
+A customer can also decide to use both payment models in the same subscription/region. In this case, **the payment model for a deployment depends on the resource to which it is attached.**
+
+**Deployments on resources with active commitments follow the commitment payment model.**
+
+- The monthly commitment purchase covers the deployed PTUs.
+
+- Hourly overage charges are generated if the deployed PTUs ever become greater than the committed PTUs.
+
+- All existing discounts attached to the monthly commitment SKU continue to apply.
+
+- **Azure Reservations DO NOT apply additional discounts on top of the monthly commitment SKU**; however, they do apply discounts to any overages (this behavior is new).
+
+- The **Manage Commitments** page in Studio is used to purchase and manage commitments.
+
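+For example, a resource with a 300 PTU commitment and 350 PTUs deployed is charged the monthly commitment rate for the 300 committed PTUs, plus hourly overage charges for the extra 50 PTUs (illustrative numbers).
+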
+Deployments on resources without commitments (or only expired commitments) follow the Hourly/Reservation payment model.
+- Deployments generate hourly charges under the new Hourly/Reservation SKU and meter.
+- Azure Reservations can be purchased to discount the PTUs for deployments.
+- Reservations are purchased and managed from the Reservation blade of the Azure portal (not within Studio).
+
+If a deployment is on a resource that has a commitment, and that commitment expires, the deployment automatically shifts to the Hourly/Reservation billing model.
+
+### Changes to the existing payment model
+
+Customers that have commitments today can continue to use them at least through the end of 2024. This includes purchasing new PTUs on new or existing commitments and managing commitment renewal behaviors. However, the August update has changed certain aspects of commitment operation.
+
+- Only models released as provisioned prior to August 1, 2024 can be deployed on a resource with a commitment.
+
+- If the deployed PTUs under a commitment exceed the committed PTUs, the hourly overage charges will be emitted against the same hourly meter as used for the new hourly/reservation payment model. This allows the overage charges to be discounted via an Azure Reservation.
+- It is possible to deploy more PTUs than are committed on the resource. This supports the ability to guarantee capacity availability prior to increasing the commitment size to cover it.
+
+## Migrating existing resources off commitments
+
+Existing customers can choose to migrate their existing resources from the Commitment to the Hourly/Reservation payment model to benefit from the ability to deploy the latest models, or to consolidate discounting for diverse deployments under a single reservation.
+
+Two approaches are available for customers to migrate resources using the Commitment model to the Hourly/Reservation model.
+
+### Self-service migration
+
+The self-service migration approach allows a customer to organically migrate resources off their commitments by allowing them to expire. The process to migrate a resource is as follows:
+
+- Set existing commitment to not autorenew and note the expiration date.
+
+- Before the expiration date, a customer should purchase an Azure Reservation covering the total number of committed PTUs per subscription (see the example following these steps). If an existing reservation already has the subscription in its scope, it can be increased in size to cover the new PTUs.
+
+- When the commitment expires, the deployments under the resource will automatically switch to the Hourly/Reservation mode with the usage discounted by the reservation.
+
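+For example, a subscription with commitments of 300 PTUs and 200 PTUs on two different resources could purchase a single 500 PTU reservation with that subscription in scope (illustrative numbers).
+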
+This self-service migration approach results in an overlap where the reservation and commitment are both active. This is a characteristic of this migration mode, and the reservation or commitment time for this overlap won't be credited back to the customer.
+
+An alternative approach to self-service migration is to switch the reservation purchase to occur after the expiration of the commitment. In this approach, the deployments will generate hourly usage for the period between the commitment expiration and the purchase of the reservation. As with the previous model, this is a characteristic of this approach, and this hourly usage won't be credited.
+
+**Self-service migration advantages:**
+
+* Individual resources can be migrated at different times.
+* Customers manage the migration without any dependencies on Microsoft.
+
+**Self-service migration disadvantages:**
+
+* There will be a short period of double-billing or hourly charges during the switchover from committed to hourly/reservation billing.
+
+> [!IMPORTANT]
+> Both self-service approaches generate some additional charges as the payment mode is switched from Committed to Hourly/Reservation. These are characteristics of the migration approaches and customers aren't credited for these charges. Customers may choose to use the managed migration approach described below to avoid them.
+
+### Managed migration
+
+The managed migration approach involves the customer partnering with Microsoft to bulk-migrate all the PTU commitments in a subscription/region at the same time. It works like this:
+
+1. The customer will engage their account team and request a managed migration. A migration owner from the Microsoft team will be assigned to assist the customer with migration.
+2. A date will be selected when all resources within each of the customers' subscriptions and regions containing current PTU commitments will be migrated from committed to hourly/reservation billing model. Multiple subscriptions and regions can be migrated on the same date.
+3. On the agreed-upon date:
+ * The customer will purchase regional reservations to cover the committed PTUs that will be converted and pass the reservation information to their Microsoft migration contact.
+ * Within 2-3 business days, all commitments will be proactively canceled and deployments previously under commitments will begin using the hourly/reservation payment model.
+ * In the billing period after the one with the reservation purchase, the customer will receive a credit for the reservation purchase covering the portions of the commitments that were canceled, starting from the time of the reservation purchase.
+
+Customers must reach out to their account teams to schedule a managed migration.
+
+**Managed migration advantages:**
+
- Bulk migration of all commitments in a subscription/region is beneficial for customers with many commitments.
+- Seamless cost migration: No possibility of double-billing or extra hourly charges.
+
+**Managed migration disadvantages:**
+
+- All commitments in a subscription/region must be migrated at the same time.
- A time for migration must be coordinated with the Microsoft team.
+
+
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
Title: Azure OpenAI Service provisioned throughput
description: Learn about provisioned throughput and Azure OpenAI. Previously updated : 05/02/2024 Last updated : 08/07/2024
recommendations: false
# What is provisioned throughput?
-The provisioned throughput capability allows you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU.
+> [!NOTE]
+> The Azure OpenAI Provisioned offering received significant updates on August 12, 2024, including aligning the purchase model with Azure standards and moving to model-independent quota. It's highly recommended that customers onboarded before this date read the [Azure OpenAI provisioned August update](./provisioned-migration.md) to learn more about these changes.
+
+The provisioned throughput capability allows you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU), which is a normalized way of representing the throughput for your deployment. Each model-version pair requires a different amount of PTUs to deploy and provides a different amount of throughput per PTU.
## What does the provisioned deployment type provide?
The provisioned throughput capability allows you to specify the amount of throug
An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model. A deployment provides customer access to a model for inference and integrates more features like Content Moderation ([See content moderation documentation](content-filter.md)).
-> [!NOTE]
-> Provisioned throughput unit (PTU) quota is different from standard quota in Azure OpenAI and is not available by default. To learn more about this offering contact your Microsoft Account Team.
- ## What do you get? | Topic | Provisioned|
An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
| Utilization | Provisioned-managed Utilization measure provided in Azure Monitor. | | Estimating size | Provided calculator in the studio & benchmarking script. |
-## How do I get access to Provisioned?
-
-You need to speak with your Microsoft sales/account team to acquire provisioned throughput. If you don't have a sales/account team, unfortunately at this time, you cannot purchase provisioned throughput.
- ## What models and regions are available for provisioned throughput? [!INCLUDE [Provisioned](../includes/model-matrix/provisioned-models.md)]
You need to speak with your Microsoft sales/account team to acquire provisioned
## Key concepts
-### Provisioned throughput units
-
-Provisioned throughput units (PTU) are units of model processing capacity that you can reserve and deploy for processing prompts and generating completions. The minimum PTU deployment, increments, and processing capacity associated with each unit varies by model type & version.
- ### Deployment types
-When deploying a model in Azure OpenAI, you need to set the `sku-name` to be Provisioned-Managed. The `sku-capacity` specifies the number of PTUs assigned to the deployment.
+When creating a provisioned deployment in Azure OpenAI Studio, the deployment type on the Create Deployment dialog is Provisioned-Managed.
+
+When creating a provisioned deployment in Azure OpenAI via CLI or API, you need to set the `sku-name` to be Provisioned-Managed. The `sku-capacity` specifies the number of PTUs assigned to the deployment.
```azurecli
# Angle-bracket values are placeholders for your environment.
az cognitiveservices account deployment create \
  --name <resource-name> --resource-group <resource-group> --deployment-name <deployment-name> \
  --model-name <model-name> --model-version <model-version> --model-format OpenAI \
  --sku-capacity 100 --sku-name ProvisionedManaged
```
### Quota
-Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level. All Azure OpenAI resources within the subscription share this quota.
+#### Provisioned throughput units
+
+Provisioned throughput units (PTU) are generic units of model processing capacity that you can use to size provisioned deployments to achieve the required throughput for processing prompts and generating completions. Provisioned throughput units are granted to a subscription as quota on a regional basis, which defines the maximum number of PTUs that can be assigned to deployments in that subscription and region.
+
+#### Model independent quota
+
+Unlike the Tokens Per Minute (TPM) quota used by other Azure OpenAI offerings, PTUs are model-independent. The PTUs can be used to deploy any supported model/version in the region.
+
+The new quota shows up in Azure OpenAI Studio as a quota item named **Provisioned Managed Throughput Unit**. In the Studio Quota pane, expanding the quota item will show the deployments contributing to usage of the quota.
+
+#### Obtaining PTU Quota
+
+PTU quota is available by default in many regions. If additional quota is required, customers can request additional quota via the Request Quota link to the right of the Provisioned Managed Throughput Unit quota item in Azure OpenAI Studio. The form allows the customer to request an increase in PTU quota for a specified region. The customer will receive an email at the included address once the request is approved, typically within two business days.
+
+#### Per-Model PTU Minimums
+
+The minimum PTU deployment, increments, and processing capacity associated with each unit varies by model type & version.
+
+## Capacity transparency
+
+Azure OpenAI is a highly sought-after service where customer demand might exceed service GPU capacity. Microsoft strives to provide capacity for all in-demand regions and models, but selling out a region is always a possibility. This can limit some customers' ability to create a deployment of their desired model, version, or number of PTUs in a desired region, even if they have quota available in that region. Generally speaking:
+
+- Quota places a limit on the maximum number of PTUs that can be deployed in a subscription and region, and is not a guarantee of capacity availability.
+- Capacity is allocated at deployment time and is held for as long as the deployment exists. If service capacity isn't available, the deployment fails.
+- Customers use real-time information on quota/capacity availability to choose an appropriate region for their scenario with the necessary model capacity.
+- Scaling down or deleting a deployment releases capacity back to the region. There is no guarantee that the capacity will be available should the deployment be scaled up or re-created later.
+
+#### Regional capacity guidance
+
+To help users find the capacity needed for their deployments, customers will use a new API and Studio experience that provide real-time information on quota and capacity availability.
+
+In Azure OpenAI Studio, the deployment experience will identify when a region lacks the capacity to support the desired model, version, and number of PTUs, and will direct the user to select an alternative region when needed.
+
+Details on the new deployment experience can be found in the Azure OpenAI [Provisioned get started guide](../how-to/provisioned-get-started.md).
+
+The new [model capacities API](/rest/api/aiservices/accountmanagement/model-capacities/list?view=rest-aiservices-accountmanagement-2024-04-01-preview&tabs=HTTP&preserve-view=true) can also be used to programmatically identify the maximum sized deployment of a specified model that can be created in each region based on the availability of both quota in the subscription and service capacity in the region.
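+
+As a rough sketch of how this lookup might be scripted (assuming the `azure-identity` and `requests` Python packages; the subscription ID and model values are placeholders, and the query parameters follow the linked API reference):

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an ARM token and query the model capacities endpoint.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/providers/Microsoft.CognitiveServices/modelCapacities"
)
params = {
    "api-version": "2024-04-01-preview",
    "modelFormat": "OpenAI",
    "modelName": "gpt-4o",         # placeholder model
    "modelVersion": "2024-05-13",  # placeholder version
}
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, params=params)
resp.raise_for_status()
# Each returned entry describes deployable capacity for a region.
for item in resp.json().get("value", []):
    print(item.get("location"), item.get("properties"))
```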
-Quota is specified in Provisioned throughput units and is specific to a (deployment type, model, region) triplet. Quota isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-3.5-Turbo.
+If an acceptable region isn't available to support the desired model, version, and/or PTUs, customers can also try the following steps:
-While we make every attempt to ensure that quota is deployable, quota doesn't represent a guarantee that the underlying capacity is available. The service assigns capacity during the deployment operation and if capacity is unavailable the deployment fails with an out of capacity error.
+- Attempt the deployment with a smaller number of PTUs.
+- Attempt the deployment at a different time. Capacity availability changes dynamically based on customer demand and more capacity might become available later.
+- Ensure that quota is available in all acceptable regions. The [model capacities API](/rest/api/aiservices/accountmanagement/model-capacities/list?view=rest-aiservices-accountmanagement-2024-04-01-preview&tabs=HTTP&preserve-view=true) and Studio experience consider quota availability in returning alternative regions for creating a deployment.
### Determining the number of PTUs needed for a workload
-PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from call shape characteristics (prompt size, generation size and call rate) to PTUs is complex and non-linear. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
+PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from call shape characteristics (prompt size, generation size and call rate) to PTUs is complex and nonlinear. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
A few high-level considerations: - Generations require more capacity than prompts-- Larger calls are progressively more expensive to compute. For example, 100 calls of with a 1000 token prompt size will require less capacity than 1 call with 100,000 tokens in the prompt. This also means that the distribution of these call shapes is important in overall throughput. Traffic patterns with a wide distribution that includes some very large calls may experience lower throughput per PTU than a narrower distribution with the same average prompt & completion token sizes.
+- Larger calls are progressively more expensive to compute. For example, 100 calls with a 1,000 token prompt size require less capacity than one call with 100,000 tokens in the prompt. This also means that the distribution of these call shapes is important in overall throughput. Traffic patterns with a wide distribution that includes some very large calls might experience lower throughput per PTU than a narrower distribution with the same average prompt & completion token sizes.
### How utilization performance works
The [Provisioned-Managed Utilization V2 metric](../how-to/monitoring.md#azure-op
The 429 response isn't an error, but instead part of the design for telling users that a given deployment is fully utilized at a point in time. By providing a fast-fail response, you have control over how to handle these situations in a way that best fits your application requirements. The `retry-after-ms` and `retry-after` headers in the response tell you the time to wait before the next call will be accepted. How you choose to handle this response depends on your application requirements. Here are some considerations:-- You can consider redirecting the traffic to other models, deployments or experiences. This option is the lowest-latency solution because the action can be taken as soon as you receive the 429 signal. For ideas on how to effectively implement this pattern see this [community post](https://github.com/Azure/aoai-apim).
+- You can consider redirecting the traffic to other models, deployments, or experiences. This option is the lowest-latency solution because the action can be taken as soon as you receive the 429 signal. For ideas on how to effectively implement this pattern see this [community post](https://github.com/Azure/aoai-apim).
- If you're okay with longer per-call latencies, implement client-side retry logic. This option gives you the highest amount of throughput per PTU. The Azure OpenAI client libraries include built-in capabilities for handling retries. #### How does the service decide when to send a 429?
-In the Provisioned-Managed offering, each request is evaluated individually according to its prompt size, expected generation size, and model to determine its expected utilization. This is in contrast to pay-as-you-go deployments which have a [custom rate limiting behavior](../how-to/quota.md) based on the estimated traffic load. For pay-as-you-go deployments this can lead to HTTP 429s being generated prior to defined quota values being exceeded if traffic is not evenly distributed.
+In the Provisioned-Managed offering, each request is evaluated individually according to its prompt size, expected generation size, and model to determine its expected utilization. This is in contrast to pay-as-you-go deployments, which have a [custom rate limiting behavior](../how-to/quota.md) based on the estimated traffic load. For pay-as-you-go deployments this can lead to HTTP 429 errors being generated prior to defined quota values being exceeded if traffic is not evenly distributed.
For Provisioned-Managed, we use a variation of the leaky bucket algorithm to maintain utilization below 100% while allowing some burstiness in the traffic. The high-level logic is as follows: 1. Each customer has a set amount of capacity they can utilize on a deployment
For Provisioned-Managed, we use a variation of the leaky bucket algorithm to mai
4. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.

> [!NOTE]
-> Calls are accepted until utilization reaches 100%. Bursts just over 100% maybe permitted in short periods, but over time, your traffic is capped at 100% utilization.
+> Calls are accepted until utilization reaches 100%. Bursts just over 100% may be permitted in short periods, but over time, your traffic is capped at 100% utilization.
:::image type="content" source="../media/provisioned/utilization.jpg" alt-text="Diagram showing how subsequent calls are added to the utilization." lightbox="../media/provisioned/utilization.jpg":::
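As a rough illustration of the accept/drain logic described above (a sketch only; the token-to-utilization scale and drain rate below are invented placeholders, not the service's actual internals):

```python
import time

class LeakyBucketSketch:
    """Illustrative accept/drain model; not the service's real internals."""

    def __init__(self, deployed_ptus: int, tokens_per_ptu_per_sec: float = 40.0):
        # Invented placeholder: assume each PTU drains a fixed token rate.
        self.drain_rate = deployed_ptus * tokens_per_ptu_per_sec  # tokens/sec
        self.level = 0.0                     # outstanding estimated work (tokens)
        self.last = time.monotonic()

    def _drain(self) -> None:
        # Utilization decrements continuously, proportional to deployed PTUs.
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.drain_rate)
        self.last = now

    def utilization_pct(self) -> float:
        self._drain()
        # Arbitrary scale: 100% = one second's worth of queued work.
        return 100.0 * self.level / self.drain_rate

    def try_accept(self, prompt_tokens: int, max_tokens: int) -> bool:
        """Reject (as a 429 would) once utilization has reached 100%."""
        if self.utilization_pct() >= 100.0:
            return False
        self.level += prompt_tokens + max_tokens  # add the call's estimated work
        return True
```

A real deployment's utilization comes from the Provisioned-Managed Utilization metric in Azure Monitor; this sketch only mirrors the accept-then-drain behavior.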
ai-services Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/batch.md
Previously updated : 08/04/2024 Last updated : 08/12/2024 recommendations: false
Global batch is currently supported in the following regions:
The following models support global batch:

| Model | Version | Supported |
-|---|---|
+|---|---|---|
|`gpt-4o` | 2024-05-13 |Yes (text + vision) |
+|`gpt-4o-mini` | 2024-07-18 | Yes (text + vision) |
|`gpt-4` | turbo-2024-04-09 | Yes (text only) |
|`gpt-4` | 0613 | Yes |
| `gpt-35-turbo` | 0125 | Yes |
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
You can use the following policy to disable access to Azure OpenAI global standa
To learn about creating resources and deploying models refer to the [resource creation guide](./create-resource.md).
-## Retrieve batch job output file
-- ## See also
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
Previously updated : 12/15/2023 Last updated : 08/07/2024 recommendations: false # Get started using Provisioned Deployments on the Azure OpenAI Service
-The following guide walks you through setting up a provisioned deployment with your Azure OpenAI Service resource.
+The following guide walks you through key steps in creating a provisioned deployment with your Azure OpenAI Service resource. For more details on the concepts discussed here, see:
+* [Azure OpenAI Provisioned Onboarding Guide](./provisioned-throughput-onboarding.md)
+* [Azure OpenAI Provisioned Concepts](../concepts/provisioned-throughput.md)
## Prerequisites - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)-- Obtained Quota for a provisioned deployment and purchased a commitment.
+- Azure Contributor or Cognitive Services Contributor role
+- Access to Azure OpenAI Studio
-> [!NOTE]
-> Provisioned Throughput Units (PTU) are different from standard quota in Azure OpenAI and are not available by default. To learn more about this offering contact your Microsoft Account Team.
+## Obtain/verify PTU quota availability
+
+Provisioned throughput deployments are sized in units called Provisioned Throughput Units (PTUs). PTU quota is granted to a subscription regionally and limits the total number of PTUs that can be deployed in that region across all models and versions.
+
+Creating a new deployment requires available (unused) quota to cover the desired size of the deployment. For example, if a subscription has the following in South Central US:
+
+* Total PTU Quota = 500 PTUs
+* Deployments:
+ * 100 PTUs: GPT-4o, 2024-05-13
+ * 100 PTUs: GPT-4, 0613
+
+Then 200 PTUs of quota are considered used, and there are 300 PTUs available for use to create new deployments.
+A default amount of PTU quota is assigned to all subscriptions in several regions. You can view the quota available to you in a region by visiting the Quotas blade in Azure OpenAI Studio and selecting the desired subscription and region. For example, the screenshot below shows a quota limit of 500 PTUs in West US for the selected subscription. Note that you might see lower values of available default quotas.
+
+
+Additional quota can be requested by clicking the Request Quota link to the right of the "Usage/Limit" column. (This is off-screen in the screenshot above.)
+
+## Create an Azure OpenAI resource
+
+Provisioned Throughput deployments are created via Azure OpenAI resource objects within Azure. You must have an Azure OpenAI resource in each region where you intend to create a deployment. Use the Azure portal to [create a resource](./create-resource.md) in a region with available quota, if required.
+
+> [!NOTE]
+> Azure OpenAI resources can support multiple types of Azure OpenAI deployments at the same time. It is not necessary to dedicate new resources for your provisioned deployments.
-## Create your provisioned deployment
+## Create your provisioned deployment - capacity is available
After you purchase a commitment on your quota, you can create a deployment. To create a provisioned deployment, you can follow these steps; the choices described reflect the entries shown in the screenshot. 1. Sign into the [Azure OpenAI Studio](https://oai.azure.com) 2. Choose the subscription that was enabled for provisioned deployments & select the desired resource in a region where you have the quota. 3. Under **Management** in the left-nav select **Deployments**.
-4. Select Create new deployment and configure the following fields. Expand the 'advanced options' drop-down.
+4. Select Create new deployment and configure the following fields. Expand the **advanced options** drop-down menu.
5. Fill out the values in each field. Here's an example: | Field | Description | Example |
After you purchase a commitment on your quota, you can create a deployment. To c
| Deployment Type |This impacts the throughput and performance. Choose Provisioned-Managed for your provisioned deployment | Provisioned-Managed | | Provisioned Throughput Units | Choose the amount of throughput you wish to include in the deployment. | 100 |
+Important things to note:
+* The deployment dialog contains a reminder that you can purchase an Azure Reservation for Azure OpenAI Provisioned to obtain a significant discount for a term commitment.
+* There is a message that tells you the list (hourly) price of the deployment that you would be charged if the deployment isn't covered by a reservation. This is a list price that doesn't include any negotiated discounts for your company.
If you wish to create your deployment programmatically, you can do so with the following Azure CLI command. Update the `sku-capacity` with the desired number of provisioned throughput units.
```azurecli
az cognitiveservices account deployment create \
  --name <resource-name> --resource-group <resource-group> --deployment-name <deployment-name> \
  --model-name <model-name> --model-version <model-version> --model-format OpenAI \
  --sku-capacity 100 \
  --sku-name ProvisionedManaged
```
-REST, ARM template, Bicep and Terraform can also be used to create deployments. See the section on automating deployments in the [Managing Quota](quota.md?tabs=rest#automate-deployment) how-to guide and replace the `sku.name` with "ProvisionedManaged" rather than "Standard."
+REST, ARM template, Bicep, and Terraform can also be used to create deployments. See the section on automating deployments in the [Managing Quota](quota.md?tabs=rest#automate-deployment) how-to guide and replace the `sku.name` with "ProvisionedManaged" rather than "Standard."
+
+## Create your provisioned deployment - capacity is not available
+
+Due to the dynamic nature of capacity availability, it is possible that the region of your selected resource might not have the service capacity to create the deployment of the specified model, version, and number of PTUs.
+
+In this event, Azure OpenAI Studio will direct you to other regions with available quota and capacity to create a deployment of the desired model. If this happens, the deployment dialog will look like this:
+
+Things to notice:
+
+* A message displays showing how many PTUs you have in available quota, and how many can currently be deployed at this time.
+* If you select a number of PTUs greater than service capacity, a message will appear that provides options for you to obtain more capacity, and a button that allows you to select an alternate region. Clicking the "See other regions" button will display a dialog that shows a list of Azure OpenAI resources where you can create a deployment, along with the maximum-sized deployment that can be created based on available quota and service capacity in each region.
+
+Selecting a resource and clicking **Switch resource** will cause the deployment dialog to redisplay using the selected resource. You can then proceed to create your deployment in the new region.
+
+Learn more about the purchase model and how to purchase a reservation:
+
+* [Azure OpenAI provisioned onboarding guide](./provisioned-throughput-onboarding.md)
+* [Guide for Azure OpenAI provisioned reservations](../concepts/provisioned-throughput.md)
-## Make your first calls
-The inferencing code for provisioned deployments is the same a standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. For your first time using these models programmatically, we recommend starting with our [quickstart guide](../quickstart.md). Our recommendation is to use the OpenAI library with version 1.0 or greater since this includes retry logic within the library.
+## Make your first inferencing calls
+The inferencing code for provisioned deployments is the same as for the standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. If it's your first time using these models programmatically, we recommend starting with our [quickstart guide](../quickstart.md). Our recommendation is to use the OpenAI library with version 1.0 or greater since this includes retry logic within the library.
```python
The inferencing code for provisioned deployments is the same a standard deployme
## Understanding expected throughput
-The amount of throughput that you can achieve on the endpoint is a factor of the number of PTUs deployed, input size, output size and call rate. The number of concurrent calls and total tokens processed can vary based on these values. Our recommended way for determining the throughput for your deployment is as follows:
+The amount of throughput that you can achieve on the endpoint is a function of the number of PTUs deployed, input size, output size, and call rate. The number of concurrent calls and total tokens processed can vary based on these values. Our recommended way for determining the throughput for your deployment is as follows:
1. Use the Capacity calculator for a sizing estimate. You can find the capacity calculator in the Azure OpenAI Studio under the quotas page and Provisioned tab. 2. Benchmark the load using real traffic workload. For more information about benchmarking, see the [benchmarking](#run-a-benchmark) section.
When you deploy a specified number of provisioned throughput units (PTUs), a set
PTU deployment utilization = (PTUs consumed in the time period) / (PTUs deployed in the time period)
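For example, a deployment of 300 PTUs that consumes an average of 150 PTUs over the time period reports a utilization of 150 / 300 = 50% for that period.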
-You can find the utilization measure in the Azure-Monitor section for your resource. To access the monitoring dashboards sign-in to [https://portal.azure.com](https://portal.azure.com), go to your Azure OpenAI resource and select the Metrics page from the left nav. On the metrics page, select the 'Provisioned-managed utilization' measure. If you have more than one deployment in the resource, you should also split the values by each deployment by clicking the 'Apply Splitting' button.
+You can find the utilization measure in the Azure Monitor section for your resource. To access the monitoring dashboards, sign in to [https://portal.azure.com](https://portal.azure.com), go to your Azure OpenAI resource, and select the Metrics page from the left nav. On the metrics page, select the 'Provisioned-managed utilization' measure. If you have more than one deployment in the resource, you should also split the values by each deployment by clicking the 'Apply Splitting' button.
:::image type="content" source="../media/provisioned/azure-monitor-utilization.jpg" alt-text="Screenshot of the provisioned managed utilization on the resource's metrics blade in the Azure portal." lightbox="../media/provisioned/azure-monitor-utilization.jpg":::
For more information about monitoring your deployments, see the [Monitoring Azur
## Handling high utilization
-Provisioned deployments provide you with an allocated amount of compute capacity to run a given model. The ΓÇÿProvisioned-Managed UtilizationΓÇÖ metric in Azure Monitor measures the utilization of the deployment in one-minute increments. Provisioned-Managed deployments are also optimized so that calls accepted are processed with a consistent per-call max latency. When the workload exceeds its allocated capacity, the service returns a 429 HTTP status code until the utilization drops down below 100%. The time before retrying is provided in the `retry-after` and `retry-after-ms` response headers that provide the time in seconds and milliseconds respectively. This approach maintains the per-call latency targets while giving the developer control over how to handle high-load situations ΓÇô for example retry or divert to another experience/endpoint.
+Provisioned deployments provide you with an allocated amount of compute capacity to run a given model. The 'Provisioned-Managed Utilization' metric in Azure Monitor measures the utilization of the deployment in one-minute increments. Provisioned-Managed deployments are also optimized so that calls accepted are processed with a consistent per-call max latency. When the workload exceeds its allocated capacity, the service returns a 429 HTTP status code until the utilization drops down below 100%. The time before retrying is provided in the `retry-after` and `retry-after-ms` response headers that provide the time in seconds and milliseconds respectively. This approach maintains the per-call latency targets while giving the developer control over how to handle high-load situations; for example, retry or divert to another experience/endpoint.
### What should I do when I receive a 429 response? A 429 response indicates that the allocated PTUs are fully consumed at the time of the call. The response includes the `retry-after-ms` and `retry-after` headers that tell you the time to wait before the next call will be accepted. How you choose to handle a 429 response depends on your application requirements. Here are some considerations: - If you are okay with longer per-call latencies, implement client-side retry logic to wait the `retry-after-ms` time and retry. This approach lets you maximize the throughput on the deployment. Microsoft-supplied client SDKs already handle it with reasonable defaults. You might still need further tuning based on your use-cases.-- Consider redirecting the traffic to other models, deployments or experiences. This approach is the lowest-latency solution because this action can be taken as soon as you receive the 429 signal.
+- Consider redirecting the traffic to other models, deployments, or experiences. This approach is the lowest-latency solution because this action can be taken as soon as you receive the 429 signal.
The 429 signal isn't an unexpected error response when pushing to high utilization but instead part of the design for managing queuing and high load for provisioned deployments.

### Modifying retry logic within the client libraries
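As a minimal sketch of tuning client-side retries (assuming the `openai` Python package v1.0+; the endpoint, key, API version, and deployment name below are placeholders):

```python
from openai import AzureOpenAI

# Placeholders: endpoint, key, API version, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-06-01",
    max_retries=5,  # default is 2; retries back off and honor retry-after hints
)

response = client.chat.completions.create(
    model="<provisioned-deployment-name>",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Raising `max_retries` trades longer per-call latency for more throughput per PTU, in line with the considerations above.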
We recommend the following workflow:
1. Estimate your throughput PTUs using the capacity calculator. 1. Run a benchmark with this traffic shape for an extended period of time (10+ min) to observe the results in a steady state. 1. Observe the utilization, tokens processed and call rate values from benchmark tool and Azure Monitor.
-1. Run a benchmark with your own traffic shape and workloads using your client implementation. Be sure to implement retry logic using either an Azure Openai client library or custom logic.
+1. Run a benchmark with your own traffic shape and workloads using your client implementation. Be sure to implement retry logic using either an Azure OpenAI client library or custom logic.
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
Title: Azure OpenAI Service Provisioned Throughput Units (PTU) onboarding
description: Learn about provisioned throughput units onboarding and Azure OpenAI. Previously updated : 06/25/2024 Last updated : 08/07/2024
recommendations: false
This article walks you through the process of onboarding to [Provisioned Throughput Units (PTU)](../concepts/provisioned-throughput.md). Once you complete the initial onboarding, we recommend referring to the PTU [getting started guide](./provisioned-get-started.md).
-> [!NOTE]
-> Provisioned Throughput Units (PTU) are different from standard quota in Azure OpenAI and are not available by default. To learn more about this offering contact your Microsoft Account Team.
- ## When to use provisioned throughput units (PTU)
-You should consider switching from pay-as-you-go to provisioned throughput when you have well-defined, predictable throughput requirements. Typically, this occurs when the application is ready for production or has already been deployed in production and there is an understanding of the expected traffic. This will allow users to accurately forecast the required capacity and avoid unexpected billing.
+You should consider switching from pay-as-you-go to provisioned throughput when you have well-defined, predictable throughput requirements. Typically, this occurs when the application is ready for production or has already been deployed in production and there's an understanding of the expected traffic. This allows users to accurately forecast the required capacity and avoid unexpected billing.
### Typical PTU scenarios -- An application that is ready for production or in production.-- Application has predictable capacity/usage expectations.-- Application has real-time/latency sensitive requirements.
+- An application that is ready for production or in production.
+- An application that has predictable capacity/usage expectations.
+- An application that has real-time/latency-sensitive requirements.
> [!NOTE]
-> In function calling and agent use cases, token usage can be variable. You should understand your expected Tokens Per Minute (TPM) usage in detail prior to migrating the workloads to PTU.
+> In function calling and agent use cases, token usage can be variable. You should understand your expected Tokens Per Minute (TPM) usage in detail prior to migrating workloads to PTU.
## Sizing and estimation: provisioned managed only
The **Provisioned** option and the capacity planner are only available in certai
|Model | OpenAI model you plan to use. For example: GPT-4 |
| Version | Version of the model you plan to use, for example 0614 |
| Peak calls per min | The number of calls per minute that are expected to be sent to the model |
-| Tokens in prompt call | The number of tokens in the prompt for each call to the model. Calls with larger prompts will utilize more of the PTU deployment. Currently this calculator assumes a single prompt value so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
-| Tokens in model response | The number of tokens generated from each call to the model. Calls with larger generation sizes will utilize more of the PTU deployment. Currently this calculator assumes a single prompt value so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
+| Tokens in prompt call | The number of tokens in the prompt for each call to the model. Calls with larger prompts utilize more of the PTU deployment. Currently this calculator assumes a single prompt value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
+| Tokens in model response | The number of tokens generated from each call to the model. Calls with larger generation sizes utilize more of the PTU deployment. Currently this calculator assumes a single response value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
After you fill in the required details, select the **Calculate** button in the output column.
The values in the output column are the estimated value of PTU units required fo
:::image type="content" source="../media/how-to/provisioned-onboarding/capacity-calculator.png" alt-text="Screenshot of the Azure OpenAI Studio landing page." lightbox="../media/how-to/provisioned-onboarding/capacity-calculator.png"::: > [!NOTE]
-> The capacity planner is an estimate based on simple input criteria. The most accurate way to determine your capacity is to benchmark a deployment with a representational workload for your use case.
-
-### Understanding the provisioned throughput purchase model
-
-Unlike Azure services where you're charged based on usage, the Azure OpenAI Provisioned Throughput feature is purchased as a renewable, monthly commitment. This commitment is charged to your subscription upon creation and at each monthly renewal. When you onboard to Provisioned Throughput, you need to create a commitment on each Azure OpenAI resource where you intend to create a provisioned deployment. The PTUs you purchase in this way are available for use when creating deployments on those resources.
-
-The total number of PTUs you can purchase via commitments is limited to the amount of Provisioned Throughput quota that is assigned to your subscription. The following table compares other characteristics of Provisioned Throughput quota (PTUs) and Provisioned Throughput commitments.
-
-|Topic|Quota|Commitments|
-|---|---|---|
-|Purpose| Grants permission to create provisioned deployments, and provides the upper limit on the capacity that can be used|Purchase vehicle for Provisioned Throughput capacity|
-|Lifetime| Quota might be removed from your subscription if it isn't purchased via a commitment within five days of being granted|The minimum term is one month, with customer-selectable autorenewal behavior. A commitment isn't cancelable, and can't be moved to a new resource while it's active|
-|Scope |Quota is specific to a subscription and region, and is shared across all Azure OpenAI resources | Commitments are an attribute of an Azure OpenAI resource, and are scoped to deployments within that resource. A subscription might contain as many active commitments as there are resources.|
-|Granularity| Quota is granted specific to a model family (for example, GPT-4) but is shareable across model versions within the family| Commitments aren't model or version specific. For example, a resource's 1000 PTU commitment can cover deployments of both GPT-4 and GPT-35-Turbo|
-|Capacity guarantee| Having quota doesn't guarantee that capacity is available when you create the deployment| Capacity availability to cover committed PTUs is guaranteed as long as the commitment is active.|
-|Increases/Decreases| New quota can be requested and approved at any time, independent of your commitment renewal dates | The number of PTUs covered by a commitment can be increased at any time, but can't be decreased except at the time of renewal.|
-
-Quota and commitments work together to govern the creation of deployments within your subscriptions. To create a provisioned deployment, two criteria must be met:
--- Quota must be available for the desired model within the desired region and subscription. This means you can't exceed your subscription/region-wide limit for the model.-- Committed PTUs must be available on the resource where you create the deployment. (The capacity you assign to the deployment is paid-for).-
-### Commitment properties and charging model
-
-A commitment includes several properties.
-
-|Property|Description|When Set|
-|---|---|---|
-|Azure OpenAI Resource | The resource hosting the commitment | Commitment creation|
-|Committed PTUs| The number of PTUs covered by the commitment. | Initially set at commitment creation, and can be increased at any time, but not decreased.|
-|Term| The term of the commitment. A commitment expires one month from its creation date. The renewal policy defines what happens next. | Commitment creation |
-|Expiration Date| The expiration date of the commitment. This time of expiration is midnight UTC.| Initially, 30 days from creation. However, the expiration date changes if the commitment is renewed.|
-|Renewal Policy| There are three options for what to do upon expiration: <br><br> - Autorenew: A new commitment term begins for another 30 days at the current number of PTUs <br>- Autorenew with different settings: This setting is the same as *Autorenew*, except that the number of PTUs committed upon renewal can be decreased <br>- Don't autorenew: Upon expiration, the commitment ends and isn't renewed.| Initially set at commitment creation, and can be changed at any time.|
-
-### Commitment charges
-
-Provisioned Throughput Commitments generate charges against your Azure subscription at the following times:
--- At commitment creation. The charge is computed according to the current monthly PTU rate and the number of PTUs committed. You will receive a single up-front charge on your invoice.--- At commitment renewal. If the renewal policy is set to autorenew, a new monthly charge is generated based on the PTUs committed in the new term. This charge appears as a single up-front charge on your invoice.--- When new PTUs are added to an existing commitment. The charge is computed based on the number of PTUs added to the commitment, pro-rated hourly to the end of the existing commitment term. For example, if 300 PTUs are added to an existing commitment of 900 PTUs exactly halfway through its term, there is a charge at the time of the addition for the equivalent of 150 PTUs (300 PTUs pro-rated to the commitment expiration date). If the commitment is renewed, the following monthΓÇÖs charge will be for the new PTU total of 1,200 PTUs.-
-As long as the number of deployed PTUs in a resource is covered by the resource's commitment, then you'll only see the commitment charges. However, if the number of deployed PTUs in a resource becomes greater than the resource's committed PTUs, the excess PTUs will be charged as overage at an hourly rate. Typically, the only way this overage will happen is if a commitment expires or is reduced at its renewal while the resource contains deployments. For example, if a 300 PTU commitment is allowed to expire on a resource that has 300 PTUs deployed, the deployed PTUs are no longer covered by any commitment. Once the expiration date is reached, the subscription is charged an hourly overage fee based on the 300 excess PTUs.
-
-The hourly rate is higher than the monthly commitment rate and the charges exceed the monthly rate within a few days. There are two ways to end hourly overage charges:
-- Delete or scale-down deployments so that they don't use more PTUs than are committed.- Create a new commitment on the resource to cover the deployed PTUs.-
-## Purchasing and managing commitments
-
-### Planning your commitments
-
-Upon receiving confirmation that Provisioned Throughput Unit (PTU) quota is assigned to a subscription, you must create commitments on the target resources (or extend existing commitments) to make the quota usable for deployments.
-
-Prior to creating commitments, plan how the provisioned deployments will be used and which Azure OpenAI resources will host them. Commitments have a **one month minimum term and can't be decreased in size until the end of the term**. They also can't be moved to new resources once created. Finally, the sum of your committed PTUs can't be greater than your quota: PTUs committed on a resource are no longer available to commit on a different resource until the commitment expires. Having a clear plan for which resources will host provisioned deployments and the capacity you intend to apply to them (for at least a month) helps ensure an optimal experience with your provisioned throughput setup.
-
-For example:
-
-- Don't create a commitment and deployment on a *temporary* resource for the purpose of validation. You'll be locked into using that resource for at least a month. Instead, if the plan is to ultimately use the PTUs on a production resource, create the commitment and test deployment on that resource right from the start.
-
-- Calculate the number of PTUs to commit on a resource based on the number, model, and size of the deployments you intend to create, keeping in mind the minimum number of PTUs each model requires to create a deployment.
-
- - Example 1: GPT-4-32K requires a minimum of 200 PTUs to deploy. If you create a commitment of only 100 PTUs on a resource, you won't have enough committed PTUs to deploy GPT-4-32K there.
-
- - Example 2: If you need to create multiple deployments on a resource, sum the PTUs required for each deployment. A production resource hosting deployments for 300 PTUs of GPT-4 and 500 PTUs of GPT-4-32K will require a commitment of at least 800 PTUs to cover both deployments (see the sketch after this list).
-
-- Distribute or consolidate PTUs as needed. For example, a total quota of 1,000 PTUs can be distributed across resources as needed to support your deployments. It could be committed on a single resource to support one or more deployments adding up to 1,000 PTUs, or distributed across multiple resources (for example, a dev and a prod resource) as long as the total number of committed PTUs is less than or equal to the quota of 1,000.
+> The capacity calculator provides an estimate based on simple input criteria. The most accurate way to determine your capacity is to benchmark a deployment with a representative workload for your use case.
-- Consider operational requirements in your plan. For example:
- - Organizationally required resource naming conventions
- - Business continuity policies that require multiple deployments of a model per region, perhaps on different Azure OpenAI resources
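As a planning aid for the guidance above, here's a minimal sketch that sums planned deployments and enforces per-model minimums. Only the 200 PTU minimum for GPT-4-32K is stated in this article; the GPT-4 minimum shown is an assumed placeholder, and `commitment_needed` is an illustrative helper, not an Azure API:

```python
# Illustrative commitment-sizing helper; not an Azure API.
# Only the GPT-4-32K minimum (200 PTUs) comes from this article;
# the GPT-4 value is an assumed placeholder.
MIN_PTUS = {"gpt-4": 100, "gpt-4-32k": 200}

def commitment_needed(planned: dict[str, int]) -> int:
    """Return the smallest commitment that covers all planned deployments."""
    for model, ptus in planned.items():
        minimum = MIN_PTUS.get(model, 0)
        if ptus < minimum:
            raise ValueError(f"{model} needs at least {minimum} PTUs per deployment")
    return sum(planned.values())

# Example 2 above: 300 PTUs of GPT-4 plus 500 PTUs of GPT-4-32K
print(commitment_needed({"gpt-4": 300, "gpt-4-32k": 500}))  # 800
```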
+## Understanding the Provisioned Throughput Purchase Model
-### Managing Provisioned Throughput Commitments
+Azure OpenAI Provisioned is purchased on demand at an hourly rate based on the number of deployed PTUs, with substantial term discounts available via the purchase of Azure Reservations.
-Provisioned throughput commitments are created and managed from the **Manage Commitments** view in Azure OpenAI Studio. You can navigate to this view by selecting **Manage Commitments** from the Quota pane:
+The hourly model is useful for short-term deployment needs, such as validating new models or acquiring capacity for a hackathon. However, the discounts provided by the Azure Reservation for Azure OpenAI Provisioned are considerable, and most customers with consistent long-term usage will find a reserved model to be a better value proposition.
-
-From the Manage Commitments view, you can do several things:
-
-- Purchase new commitments or edit existing commitments.
-- Monitor all commitments in your subscription.
-- Identify and take action on commitments that might cause unexpected billing.
-
-The sections below will take you through these tasks.
-
-### Purchase a Provisioned Throughput Commitment
-
-With your commitment plan ready, the next step is to create the commitments. Commitments are created manually via Azure OpenAI Studio and require the user creating the commitment to have either the [Contributor or Cognitive Services Contributor role](./role-based-access-control.md) at the subscription level.
-
-For each new commitment you need to create, follow these steps:
-
-1. Launch the Provisioned Throughput purchase dialog by selecting **Quotas** > **Provisioned** > **Manage Commitments**.
--
-2. Select **Purchase commitment**.
-
-3. Select the Azure OpenAI resource and purchase the commitment. You will see your resources divided into resources with existing commitments, which you can edit, and resources that don't currently have a commitment.
-
-| Setting | Notes |
-||-|
-| **Select a resource** | Choose the resource where you'll create the provisioned deployment. Once you have purchased the commitment, you will be unable to use the PTUs on another resource until the current commitment expires. |
-| **Select a commitment type** | Select Provisioned. (Provisioned is equivalent to Provisioned Managed) |
-| **Current uncommitted provisioned quota** | The number of PTUs currently available for you to commit to this resource. |
-| **Amount to commit (PTU)** | Choose the number of PTUs you're committing to. **This number can be increased during the commitment term, but can't be decreased**. Enter values in increments of 50 for the commitment type Provisioned. |
-| **Commitment tier for current period** | The commitment period is set to one month. |
-| **Renewal settings** | Auto-renew at current PTUs <br> Auto-renew at lower PTUs <br> Do not auto-renew |
-
-4. Select Purchase. A confirmation dialog will be displayed. After you confirm, your PTUs will be committed, and you can use them to create a provisioned deployment.
--
-> [!IMPORTANT]
-> A new commitment is billed up-front for the entire term. If the renewal settings are set to auto-renew, then you will be billed again on each renewal date based on the renewal settings.
-
-## Edit an existing Provisioned Throughput commitment
-
-From the Manage Commitments view, you can also edit an existing commitment. There are two types of changes you can make to an existing commitment:
-
-- You can add PTUs to the commitment.
-- You can change the renewal settings.
-
-To edit a commitment, select the commitment to edit, then select Edit commitment.
-
-### Adding Provisioned Throughput Units to existing commitments
-
-Adding PTUs to an existing commitment will allow you to create larger or more numerous deployments within the resource. You can do this at any time during the term of your commitment.
--
-> [!IMPORTANT]
-> When you add PTUs to a commitment, they will be billed immediately, at a pro-rated amount from the current date to the end of the existing commitment term. Adding PTUs does not reset the commitment term.
-
-### Changing renewal settings
-
-Commitment renewal settings can be changed at any time before the expiration date of your commitment. Reasons you might want to change the renewal settings include ending your use of provisioned throughput by setting the commitment to not auto-renew, or decreasing usage of provisioned throughput by lowering the number of PTUs that will be committed in the next period.
-
-> [!IMPORTANT]
-> If you allow a commitment to expire or decrease in size such that the deployments under the resource require more PTUs than you have in your resource commitment, you will receive hourly overage charges for any excess PTUs. For example, a resource that has deployments that total 500 PTUs and a commitment for 300 PTUs will generate hourly overage charges for 200 PTUs.
+> [!NOTE]
+> Azure OpenAI Provisioned customers onboarded prior to the August self-service update use a purchase model called the Commitment model. These customers can continue to use this older purchase model alongside the Hourly/reservation purchase model. The Commitment model is not available for new customers. For details on the Commitment purchase model and options for coexistence and migration, please see the [Azure OpenAI Provisioned August Update](../concepts/provisioned-migration.md).
-## Monitor commitments and prevent unexpected billings
+## Hourly Usage
-The manage commitments pane provides a subscription-wide overview of all resources with commitments and PTU usage within a given Azure subscription. Of particular interest are:
+Provisioned Throughput deployments are charged an hourly rate ($/PTU/hr) based on the number of PTUs that have been deployed. For example, a 300 PTU deployment will be charged the hourly rate times 300. All Azure OpenAI pricing is available in the Azure Pricing Calculator.
-- **PTUs Committed, Deployed, and Usage** – These figures provide the sizes of your commitments and how much is in use by deployments. Maximize your investment by using all of your committed PTUs.
-- **Expiration policy and date** - The expiration date and policy tell you when a commitment will expire and what will happen when it does. A commitment set to auto-renew will generate a billing event on the renewal date. For commitments that are expiring, be sure to delete deployments from these resources prior to the expiration date to prevent hourly overage billing.
-- **Notifications** - Alerts regarding important conditions like unused commitments, and configurations that might result in billing overages. Billing overages can be caused by situations such as when a commitment has expired and deployments are still present, but have shifted to hourly billing.
+If a deployment exists for a partial hour, it will receive a prorated charge based on the number of minutes it was deployed during the hour. For example, a deployment that exists for 15 minutes during an hour will receive 1/4th the hourly charge.
-## Common Commitment Management Scenarios
+If the deployment size is changed, the costs of the deployment will adjust to match the new number of PTUs.
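A minimal sketch of the hourly charging and partial-hour proration described above; the rate is an assumed placeholder, and actual prices come from the Azure Pricing Calculator:

```python
# Sketch of hourly billing with partial-hour proration.
HOURLY_RATE_PER_PTU = 1.00  # assumed $/PTU/hr, illustration only

def hourly_charge(deployed_ptus: int, minutes_deployed: float) -> float:
    """Charge for one clock hour, prorated by the minutes the deployment existed."""
    return deployed_ptus * HOURLY_RATE_PER_PTU * (minutes_deployed / 60)

print(hourly_charge(300, 60))  # 300.0 -- a full hour for a 300 PTU deployment
print(hourly_charge(300, 15))  # 75.0  -- 15 minutes receives 1/4th the hourly charge
```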
-**Discontinue use of provisioned throughput**
-To end use of provisioned throughput and prevent hourly overage charges after commitment expiration, two steps must be taken:
+Paying for provisioned deployments on an hourly basis is ideal for short-term deployment scenarios. For example: quality and performance benchmarking of new models, or temporarily increasing PTU capacity to cover an event such as a hackathon.
-1. Set the renewal policy on all commitments to *Don't autorenew*.
-2. Delete the provisioned deployments using the quota.
+Customers that require long-term usage of provisioned deployments, however, might pay significantly less per month by purchasing a term discount via an Azure Reservation as discussed in the next section.
-**Move a commitment/deployment to a new resource in the same subscription/region**
+> [!NOTE]
+> It is not recommended to scale production deployments according to incoming traffic and pay for them purely on an hourly basis. There are two reasons for this:
+> * The cost savings achieved by purchasing an Azure Reservation for Azure OpenAI Provisioned are significant, and it will be less expensive in many cases to maintain a deployment sized for full production volume paid for via a reservation than it would be to scale the deployment with incoming traffic.
+> * Having unused provisioned quota (PTUs) does not guarantee that capacity will be available to support increasing the size of the deployment when required. Quota limits the maximum number of PTUs that can be deployed, but it is not a capacity guarantee. Provisioned capacity for each region and model dynamically changes throughout the day and might not be available when required. As a result, it is recommended to maintain a permanent deployment to cover your traffic needs (paid for via a reservation).
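The following back-of-envelope sketch illustrates the first point in the note above. The hourly rate and the reservation discount are assumed placeholders chosen only to show the shape of the comparison, not published pricing:

```python
# Hypothetical comparison: scaling an hourly deployment with traffic versus
# holding a reservation sized for peak. Rate and discount are placeholders.
HOURLY_RATE = 1.00           # assumed $/PTU/hr
RESERVATION_DISCOUNT = 0.70  # assumed fraction off the hourly rate

def monthly_hourly_cost(hourly_ptus: list[int]) -> float:
    """Pay-as-you-go cost for an hour-by-hour PTU profile over a month."""
    return sum(ptus * HOURLY_RATE for ptus in hourly_ptus)

def monthly_reserved_cost(peak_ptus: int, hours: int = 24 * 30) -> float:
    """Reservation sized for peak, running 24x7 at the discounted rate."""
    return peak_ptus * HOURLY_RATE * (1 - RESERVATION_DISCOUNT) * hours

# Scale between 300 PTUs (12 hours/day) and 100 PTUs overnight:
profile = ([300] * 12 + [100] * 12) * 30
print(monthly_hourly_cost(profile))  # 144000.0
print(monthly_reserved_cost(300))    # 64800.0 -- cheaper despite covering peak 24x7
```

With these assumed numbers the always-on reservation wins; with a smaller discount or a spikier traffic profile the comparison can flip, which is why it's worth rerunning the numbers with your own figures.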
-It isn't possible in Azure OpenAI Studio to directly *move* a deployment or a commitment to a new resource. Instead, a new deployment needs to be created on the target resource and traffic moved to it. A commitment needs to be purchased on the new resource to accomplish this. Because commitments are charged up-front for a 30-day period, it's necessary to time this move with the expiration of the original commitment to minimize overlap with the new commitment and "double-billing" during the overlap.
+## Azure Reservations for Azure OpenAI Provisioned
-There are two approaches that can be taken to implement this transition.
+Discounts on top of the hourly usage price can be obtained by purchasing an Azure Reservation for Azure OpenAI Provisioned. An Azure Reservation is a term-discounting mechanism shared by many Azure products, such as Compute and Cosmos DB. For Azure OpenAI Provisioned, the reservation provides a discount for committing to payment for a fixed number of PTUs for a one-month or one-year period.
-**Option 1: No-Overlap Switchover**
+* Azure Reservations are purchased via the Azure portal, not Azure OpenAI Studio.
-This option requires some downtime, but requires no extra quota and generates no extra costs.
+* Reservations are purchased regionally and can be flexibly scoped to cover usage from a group of deployments. Reservation scopes include:
-| Steps | Notes |
-|-|-|
-|Set the renewal policy on the existing commitment to expire| This will prevent the commitment from renewing and generating further charges |
-|Before expiration of the existing commitment, delete its deployment | Downtime will start at this point and will last until the new deployment is created and traffic is moved. You'll minimize the duration by timing the deletion to happen as close to the expiration date/time as possible.|
-|After expiration of the existing commitment, create the commitment on the new resource|Minimize downtime by executing this and the next step as soon after expiration as possible.|
-|Create the deployment on the new resource and move traffic to it||
+ * Individual resource groups or subscriptions
-**Option 2: Overlapped Switchover**
+ * A group of subscriptions in a Management Group
-This option has no downtime by having both existing and new deployments live at the same time. This requires having quota available to create the new deployment, and will generate extra costs for the duration of the overlapped deployments.
+ * All subscriptions in a billing account
-| Steps | Notes |
-|-|-|
-|Set the renewal policy on the existing commitment to expire| Doing so prevents the commitment from renewing and generating further charges.|
-|Before expiration of the existing commitment:<br>1. Create the commitment on the new resource.<br>2. Create the new deployment.<br>3. Switch traffic<br>4. Delete existing deployment| Ensure you leave enough time for all steps before the existing commitment expires; otherwise, overage charges will be generated (see the next section for options). |
+* New reservations can be purchased to cover the same scope as existing reservations, to allow for discounting of new provisioned deployments. The scope of existing reservations can also be updated at any time without penalty, for example to cover a new subscription.
-If the final step takes longer than expected and will finish after the existing commitment expires, there are three options to minimize overage charges.
+* Reservations can be canceled after purchase, but credits are limited.
-
-- **Take downtime**: Delete the original deployment, then complete the move.
-- **Pay overage**: Keep the original deployment and pay hourly until you have moved traffic off and deleted the deployment.
-- **Reset the original commitment** to renew one more time. This will give you time to complete the move with a known cost.
+* If the size of provisioned deployments within the scope of a reservation exceeds the amount of the reservation, the excess is charged at the hourly rate (see the sketch after this list). For example, if deployments amounting to 250 PTUs exist within the scope of a 200 PTU reservation, 50 PTUs will be charged on an hourly basis until the deployment sizes are reduced to 200 PTUs, or a new reservation is created to cover the remaining 50.
-Both paying for an overage and resetting the original commitment will generate charges beyond the original expiration date. Paying overage charges might be cheaper than a new one-month commitment if you only need a day or two to complete the move. Compare the costs of both options to find the lowest-cost approach.
+* Reservations guarantee a discounted price for the selected term. They do not reserve capacity on the service or guarantee that it will be available when a deployment is created. It is highly recommended that customers create deployments prior to purchasing a reservation to prevent over-purchasing the reservation.
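A minimal sketch of the overage split described in the list above; `split_usage` is an illustrative helper, not an Azure API:

```python
# How deployed PTUs split between reservation coverage and hourly overage.
def split_usage(deployed_ptus: int, reserved_ptus: int) -> tuple[int, int]:
    """Return (PTUs covered by the reservation, PTUs billed hourly as overage)."""
    covered = min(deployed_ptus, reserved_ptus)
    return covered, deployed_ptus - covered

print(split_usage(250, 200))  # (200, 50): 50 PTUs are billed at the hourly rate
```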
+
+> [!NOTE]
+> The Azure role and tenant policy requirements to purchase a reservation are different from those required to create a deployment or Azure OpenAI resource. See the Azure OpenAI [Provisioned reservation documentation](https://aka.ms/oai/docs/ptum-reservations) for more details.
-### Move the deployment to a new region and/or subscription
+## Important: Sizing Azure OpenAI Provisioned Reservations
-The same approaches apply when moving the commitment and deployment to a new region or subscription, except that having available quota in the new location is required in all cases.
+The PTU amounts in reservation purchases are independent of PTUs allocated in quota or used in deployments. It is possible to purchase a reservation for more PTUs than you have in quota, or than you can deploy, for the desired region, model, or version. Credits for over-purchasing a reservation are limited, and customers must take steps to ensure they maintain their reservation sizes in line with their deployed PTUs.
+
+The best practice is to always purchase a reservation after deployments have been created. This prevents purchasing a reservation and then finding out that the required capacity is not available for the desired region or model.
+
+To assist customers with purchasing the correct reservation amounts, the total number of PTUs in a subscription and region that can be covered by a reservation is listed on the Quotas page of Azure OpenAI Studio. See the message "PTUs Available for reservation."
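As a simple guard that captures the "deploy first, then reserve" practice above (illustrative only; take the deployed and reserved figures from your own subscription):

```python
# Illustrative sizing guard: reserve no more than is currently deployed.
def reservation_to_purchase(deployed_ptus: int, reserved_ptus: int) -> int:
    """PTUs still worth reserving so reservations stay in line with deployments."""
    return max(deployed_ptus - reserved_ptus, 0)

print(reservation_to_purchase(deployed_ptus=500, reserved_ptus=300))  # 200
```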
-### View and edit an existing resource
-In Azure OpenAI Studio, select **Quota** > **Provisioned** > **Manage commitments** and select a resource with an existing commitment to view/change it.
## Next steps

- [Provisioned Throughput Units (PTU) getting started guide](./provisioned-get-started.md)
-- [Provisioned Throughput Units (PTU) concepts](../concepts/provisioned-throughput.md)
+- [Provisioned Throughput Units (PTU) concepts](../concepts/provisioned-throughput.md)
+- [Provisioned Throughput reservation documentation](https://aka.ms/oai/docs/ptum-reservations)
ai-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech.md
Here's more information about neural text to speech features in the Speech servi
* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md).
-* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
+* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
* **Prebuilt neural voices**: Microsoft neural text to speech capability uses deep neural networks to overcome the limits of traditional speech synthesis regarding stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. You can use neural voices to:
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
>
> If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
>
+> Side-by-side migration comes with additional challenges compared to in-place migration. For customers who need to decide between the two options, the recommendation is to use in-place migration since there are fewer steps and less complexity. If you decide to use side-by-side migration, review the [common sources of issues when migrating using the side-by-side migration feature](#common-sources-of-issues-when-migrating-using-the-side-by-side-migration-feature) section to avoid common pitfalls.
+>
App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
App Service Environment v3 is the latest version of App Service Environment. It'
There are two automated migration features available to help you upgrade to App Service Environment v3.

-- **In-place migration feature** migrates your App Service Environment to App Service Environment v3 in-place. In-place means that your App Service Environment v3 replaces your existing App Service Environment in the same subnet. There's application downtime during the migration because a subnet can only have a single App Service Environment at a given time. For more information about this feature, see [Automated upgrade using the in-place migration feature](migrate.md).
+- **In-place migration feature** migrates your App Service Environment to App Service Environment v3 in-place and is the recommended migration option. In-place means that your App Service Environment v3 replaces your existing App Service Environment in the same subnet. There's application downtime during the migration because a subnet can only have a single App Service Environment at a given time. For more information about this feature, see [Automated upgrade using the in-place migration feature](migrate.md).
- **Side-by-side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md).
+ > [!NOTE]
+ > Side-by-side migration comes with additional challenges compared to in-place migration. For customers who need to decide between the two options, the recommendation is to use in-place migration since there are fewer steps and less complexity. If you decide to use side-by-side migration, review the [common sources of issues when migrating using the side-by-side migration feature](side-by-side-migrate.md#common-sources-of-issues-when-migrating-using-the-side-by-side-migration-feature) section to avoid common pitfalls.
+ >
- **Manual migration options** are available if you can't use the automated migration features. For more information about these options, see [Migration alternatives](migration-alternatives.md).

### Why do some customers see performance differences after migrating?
When migrating to App Service Environment v3, we map App Service plan tiers as f
### Migration path decision tree
-Use the following decision tree to determine which migration path is right for you.
+Use the following decision tree to determine which migration path is right for you. The recommendation for all customers is to use the in-place migration feature if your App Service Environment meets the criteria for an automated migration. In-place migration is the simplest and fastest way to upgrade to App Service Environment v3.
:::image type="content" source="./media/migration/migration-path-decision-tree.png" alt-text="Screenshot of the decision tree for helping decide which App Service Environment upgrade option to use." lightbox="./media/migration/migration-path-decision-tree-expanded.png":::
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md
az webapp show --resource-group <group_name> --name <app_name> --query possibleO
(Get-AzWebApp -ResourceGroup <group_name> -name <app_name>).PossibleOutboundIpAddresses
```
+For function apps, see [Function app outbound IP addresses](/azure/azure-functions/ip-addresses?tabs=azure-powershell#find-outbound-ip-addresses).
+
## Get a static outbound IP

You can control the IP address of outbound traffic from your app by using virtual network integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Virtual network integration](./overview-vnet-integration.md) is available on **Basic**, **Standard**, **Premium**, **PremiumV2**, and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md).
app-service Overview Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-tls.md
To ensure backward compatibility for TLS 1.0 and TLS 1.1, App Service will conti
The minimum TLS cipher suite includes a fixed list of cipher suites with an optimal priority order that you cannot change. Reordering or reprioritizing the cipher suites is not recommended as it could expose your web apps to weaker encryption. You also cannot add new or different cipher suites to this list. When you select a minimum cipher suite, the system automatically disables all less secure cipher suites for your web app, without allowing you to selectively disable only some weaker cipher suites.
-Follow these steps to change the Minimum TLS cipher suite:
-1. Browse to your app in the [Azure portal](https://portal.azure.com/)
-1. In the left menu, select **configuration** and then select the **General settings** tab.
-1. Under __Minimum Inbound TLS Cipher Suite__, select **change**, and then select the **Minimum TLS Cipher Suite**.
-1. Select **Ok**.
-1. Select **Save** to save the changes.
-
### What are cipher suites and how do they work on App Service?

A cipher suite is a set of instructions that contains algorithms and protocols to help secure network connections between clients and servers. By default, the front-end's OS picks the most secure cipher suite that is supported by both App Service and the client. However, if the client supports only weak cipher suites, the front-end's OS ends up picking a weak cipher suite that both support. If your organization restricts which cipher suites are allowed, you can update your web app's minimum TLS cipher suite property to ensure that the weak cipher suites are disabled for your web app.
For App Service Environments with `FrontEndSSLCipherSuiteOrder` cluster setting,
## End-to-end TLS Encryption (preview)
-End-to-end (E2E) TLS encryption is available in Standard App Service plans and higher. Front-end intra-cluster traffic between App Service front-ends and the workers running application workloads can now be encrypted. Below is a simple diagram to help you understand how it works.
-
-Follow these steps to enable end-to-end TLS encryption:
-1. Browse to your app in the [Azure portal](https://portal.azure.com/)
-1. In the left menu, select **configuration** and then select the **General settings** tab.
-1. Under __End-to-end TLS encryption__, select **on**.
-1. Save the changes.
+End-to-end (E2E) TLS encryption is available in Standard App Service plans and higher. Front-end intra-cluster traffic between App Service front-ends and the workers running application workloads can now be encrypted.
## Next steps

* [Secure a custom DNS name with a TLS/SSL binding](configure-ssl-bindings.md)
application-gateway Quickstart Create Application Gateway For Containers Byo Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-byo-deployment.md
Previously updated : 02/27/2024 Last updated : 08/12/2024
az network vnet subnet update \
ALB_SUBNET_ID=$(az network vnet subnet list --resource-group $VNET_RESOURCE_GROUP --vnet-name $VNET_NAME --query "[?name=='$ALB_SUBNET_NAME'].id" --output tsv)
echo $ALB_SUBNET_ID
```
-
-> [!NOTE]
-> The NSG for the delegated subnet can only use the default rules for incoming traffic. For example: AllowVNetInBound, AllowAzureLoadBalancerInBound, and DenyAllInbound. No other incoming NSG rule is supported.
### Delegate permissions to managed identity
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Title: Automation services in Azure - overview
description: This article tells what are the Automation services in Azure and how to compare and use it to automate the lifecycle of infrastructure and applications. keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions Previously updated : 08/03/2022 Last updated : 08/09/2024

# Choose the Automation services in Azure
-This article explains various Automation services offered in the Azure environment. These services can automate business and operational processes and solve integration problems amongst multiple services, systems, and processes. Automation services can define input, action, activity to be performed, conditions, error handling, and output generation. Using these services you can run various activities on a schedule or do a manual demand-based execution. Each service has its unique advantages and target audience.
+This article explains various Automation services offered in the Azure environment. These services can automate business and operational processes and solve integration problems among multiple services, systems, and processes. Automation services can define input, action, activity to be performed, conditions, error handling, and output generation. Using these services, you can run various activities on a schedule or do a manual demand-based execution. Each service has its unique advantages and target audience.
Using these services, you can shift effort from manually performing operational tasks towards building automation for these tasks, including:

- Reduce time to perform an action
- Reduce risk in performing the action
-- Increased human capacity for further innovation
+- Increase human capacity for further innovation
- Standardize operations

## Categories in Automation operations

Automation is required in three broad categories of operations:

-- **Deployment and management of resources** —create and configure programmatically using automation or infrastructure as code tooling to deliver repeatable and consistent deployment and management of cloud resources. For example, an Azure Network Security Group can be deployed, and security group rules are created using an Azure Resource Manager template or an automation script.
+- **Deployment and management of resources**: Create and configure programmatically using automation or infrastructure as code tooling to deliver repeatable and consistent deployment and management of cloud resources. For example, an Azure Network Security Group can be deployed, and security group rules are created using an Azure Resource Manager template or an automation script.
-- **Response to external events** —based on a critical external event such as responding to database changes, acting as per the inputs given to a web page, and so on, you can diagnose and resolve issues.
+- **Response to external events**: Based on a critical external event, such as responding to database changes, acting as per the inputs given to a web page, and so on, you can diagnose and resolve issues.
-- **Complex Orchestration** —by integrating with first or third party products, you can define an end to end automation workflows.
+- **Complex Orchestration**: By integrating with first- or third-party products, you can define end-to-end automation workflows.
## Azure services for Automation
Multiple Azure services can fulfill the above requirements. Each service has its
- Azure Resource Manager (ARM) templates with Bicep
- Azure Blueprints
- Azure Automation
- - Azure Automanage (for machine configuration and management.)
+ - Azure Automanage (for machine configuration and management)
**Responding to external events**
- Azure Functions
- Azure Automation
- - Azure Policy Guest Config (to take an action when there's a change in the compliance state of resource.)
+ - Azure Policy Guest Config (to take an action when there's a change in the compliance state of resource)
-**Complex orchestration and integration with 1st or 3rd party products**
+**Complex orchestration and integration with first- or third-party products**
- Azure Logic Apps
- - Azure Functions or Azure Automation. (Azure Logic app has over 400+ connectors to other services, including Azure Automation and Azure Functions, which could be used to meet complex automation scenarios.)
+ - Azure Functions or Azure Automation (Azure Logic app has over 400+ connectors to other services, including Azure Automation and Azure Functions, which could be used to meet complex automation scenarios)
:::image type="content" source="media/automation-services/automation-services-overview.png" alt-text="Screenshot shows an Overview of Automation services.":::
Azure Resource Manager provides a language to develop repeatable and consistent
### Bicep
-We've introduced a new language named [Bicep](../azure-resource-manager/bicep/overview.md) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend Bicep. For more information, see [What is Bicep?](../azure-resource-manager/bicep/overview.md)
+We introduced a new language named [Bicep](../azure-resource-manager/bicep/overview.md) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend Bicep. For more information, see [What is Bicep?](../azure-resource-manager/bicep/overview.md)
The following table describes the scenarios and users for ARM template and Bicep:

| **Scenarios** | **Users** |
| -- | -- |
- | Create, manage, and update infrastructure resources, such as virtual machines, networks, storage accounts, containers and so on. </br> </br> Deploy apps, add tags, assign policies, assign role-based access control all declaratively as code and integrated with your CI\CD tools. </br> </br> Manage multiple environments such as production, non-production, and disaster recovery. </br> </br> Deploy resources consistently and reliably at a scale. | Application Developers, Infrastructure Administrators, DevOps Engineers using Azure for the first time or using Azure as their primary cloud. </br> </br> IT Engineer\Cloud Architect responsible for cloud infrastructure deployment. |
+ | Create, manage, and update infrastructure resources, such as virtual machines, networks, storage accounts, containers, and so on. </br> </br> Deploy apps, add tags, assign policies, and assign role-based access control all declaratively as code and integrated with your CI\CD tools. </br> </br> Manage multiple environments such as production, nonproduction, and disaster recovery. </br> </br> Deploy resources consistently and reliably at a scale. | Application Developers, Infrastructure Administrators, DevOps Engineers using Azure for the first time or using Azure as their primary cloud. </br> </br> IT Engineer\Cloud Architect responsible for cloud infrastructure deployment. |
### Azure Blueprints (Preview)
- Azure Blueprints (Preview) define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts such as, Role assignments, Policy assignments, ARM templates and Resource groups. [Learn more](../governance/blueprints/overview.md).
+>[!Note]
+> On July 11, 2026, Azure Blueprints (Preview) will be deprecated. [Learn more](../governance/blueprints/overview.md)
+
+ Azure Blueprints (Preview) define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts such as Role assignments, Policy assignments, ARM templates, and Resource groups. [Learn more](../governance/blueprints/index.yml).
| **Scenarios** | **Users** |
| -- | -- |
The following table describes the scenarios and users for ARM template and Bicep
### [Azure Automation](./overview.md)
-Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides a persistent shared assets including variables, connections, objects that allow orchestration of complex jobs. [Learn more](./automation-runbook-gallery.md).
+Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](./automation-runbook-gallery.md).
-There are more than 3,000 modules in the PowerShell Gallery, and the PowerShell community continues to grow. Azure Automation based on PowerShell modules can work with multiple applications and vendors, both 1st party and 3rd party. As more application vendors release PowerShell modules for integration, extensibility and automation tasks, you could use an existing PowerShell script as-is to execute it as a PowerShell runbook in an automation account without making any changes.
+There are more than 3,000 modules in the PowerShell Gallery, and the PowerShell community continues to grow. Azure Automation based on PowerShell modules can work with multiple applications and vendors, both first- and third-party. As more application vendors release PowerShell modules for integration, extensibility, and automation tasks, you could use an existing PowerShell script as-is to execute it as a PowerShell runbook in an automation account without making any changes.
| **Scenarios** | **Users** |
| -- | -- |
- | Allows Automation to write an [Automation PowerShell runbook](./learn/powershell-runbook-managed-identity.md) that deploys an Azure resource by using an [Azure Resource Manager template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).</br> </br> Schedule tasks, for example ΓÇô Stop dev/test VMs or services at night and turn on during the day. </br> </br> Response to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage to automate on-premises servers such as SQL Server, Active Directory and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises. |
+ | Allows Automation to write an [Automation PowerShell runbook](./learn/powershell-runbook-managed-identity.md) that deploys an Azure resource by using an [Azure Resource Manager template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).</br> </br> Schedule tasks; for example, stop dev/test VMs or services at night and turn on during the day. </br> </br> Response to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage to automate on-premises servers such as SQL Server, Active Directory, and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python-based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises. |
### Azure Automation based in-guest management

**Configuration management**: Collects inventory and tracks changes in your environment. [Learn more](./change-tracking/overview.md).
-You can configure desired the state of your machines to discover and correct configuration drift. [Learn more](./automation-dsc-overview.md).
+You can configure the desired state of your machines to discover and correct the configuration drift. [Learn more](./automation-dsc-overview.md).
-**Update management** : Assess compliance of servers and can schedule update installation on your machines. [Learn more](./update-management/overview.md).
+**Update management**: Assess compliance of servers and schedule update installation on your machines. [Learn more](./update-management/overview.md).
| **Scenarios** | **Users** |
| -- | -- |
- | Detect and alert on software, services, file and registry changes to your machines, vigilant on everything installed in your servers. </br> </br> Assess and install updates on your servers using Azure Update management. </br> </br> Configure the desired state of your servers and ensure they stay compliant. | </br> </br> Central IT\Infrastructure Administrators\Auditors looking for regulatory requirements at scale and ensuring end state of severs looks as desired, patched and audited. |
+ | Detect and alert on software, services, file, and registry changes to your machines, vigilant on everything installed in your servers. </br> </br> Assess and install updates on your servers using Azure Update management. </br> </br> Configure the desired state of your servers and ensure they stay compliant. | </br> </br> Central IT\Infrastructure Administrators\Auditors looking for regulatory requirements at scale and ensuring that the end state of the servers looks as desired, patched, and audited. |
### Azure Automanage (Preview)
-Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady-state of VM is equal to hands-free management. [Learn more](../automanage/index.yml).
+Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady state of VM is equal to hands-free management. [Learn more](../automanage/index.yml).
**Linux and Windows support**
- - You can intelligently onboard virtual machines to select best practices Azure services.
- - It allows you to configure each service per Azure best practices automatically.
- - It supports customization of best practice services through VM Best practices template for Dev\Test and Production workload.
- - You can monitor for drift and correct it when detected.
- - It provides a simple experience (point, select, set, and forget).
+
+ - Allows you to intelligently onboard virtual machines to select best practices Azure services
+ - Allows you to configure each service as per Azure best practices automatically
+ - Supports customization of best practice services through VM Best practices template for Dev\Test and Production workload
+ - Allows you to monitor for drift and correct it when detected
+ - Provides a simple experience (point, select, set, and forget)
| **Scenarios** | **Users** |
| -- | -- |
- | Automatically configures guest operating system per Microsoft baseline configuration. </br> </br> Automatically detects for drift and corrects it across a VMΓÇÖs entire lifecycle. </br> </br> Aims at a hands-free management of machines. | The IT Administrators, Infra Administrators, IT Operations Administrators are responsible for managing server workload, day to day admin tasks such as backup, disaster recovery, security updates, responding to security threats, and so on across Azure and on-premises. </br> </br> Developers who do not wish to manage servers or spend the time on fewer priority tasks. |
+ | Automatically configures guest operating system as per Microsoft baseline configuration. </br> </br> Automatically detects for drift and corrects it across a VM's entire lifecycle. </br> </br> Aims at a hands-free management of machines. | The IT Administrators, Infra Administrators, IT Operations Administrators are responsible for managing server workload, day-to-day admin tasks, such as backup, disaster recovery, security updates, responding to security threats, and so on, across Azure and on-premises. </br> </br> Developers who don't wish to manage servers or spend time on lower priority tasks. |
## Respond to events in Automation workflow

### Azure Policy based Guest Configuration
-Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/remediation-options.md).
+Azure Policy based Guest Configuration is the next iteration of Azure Automation State Configuration. [Learn more](../governance/machine-configuration/remediation-options.md).
You can check on what is installed in:
Azure Policy based Guest configuration is the next iteration of Azure Automation
| **Scenarios** | **Users** |
| -- | -- |
- | Obtain compliance data that may include: The configuration of the operating system ΓÇô files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/remediation-options.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. |
+ | Obtain compliance data that can include: The configuration of the operating system – files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they're deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/remediation-options.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. |
### Azure Automation - Process Automation

Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. [Learn more](./automation-runbook-types.md).
- - It provides persistent shared assets, including variables, connections and objects that allows orchestration of complex jobs.
- - You can invoke a runbook on the basis of [Azure Monitor alert](./automation-create-alert-triggered-runbook.md) or through a [webhook](./automation-webhooks.md).
+ - Provides persistent shared assets, including variables, connections, and objects that allow orchestration of complex jobs
+ - Allows you to invoke a runbook based on [Azure Monitor alert](./automation-create-alert-triggered-runbook.md) or through a [webhook](./automation-webhooks.md)
| **Scenarios** | **Users** |
| -- | -- |
- | Respond to system alerts, service alerts, or high CPU/memory alerts from 1st party or 3rd party monitoring tools like Splunk or ServiceNow, create ServiceNow tickets basis alerts and so on. </br> </br> Hybrid automation scenarios where you can manage automation on on-premises servers such as SQL Server, Active Directory and so on based on an external event.</br> </br> Azure resource life-cycle management and governance that includes Resource provisioning, deprovisioning, adding correct tags, locks, NSGs and so on based on Azure monitor alerts. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. |
+ | Respond to system alerts, service alerts, or high CPU/memory alerts from first- or third-party monitoring tools such as Splunk or ServiceNow, create ServiceNow tickets based on alerts, and so on. </br> </br> Hybrid automation scenarios where you can manage automation on on-premises servers such as SQL Server, Active Directory, and so on, based on an external event.</br> </br> Azure resource life cycle management and governance that includes Resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on, based on Azure Monitor alerts. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python-based scripting. |
### Azure functions
-Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, an HTTP trigger without worrying about the underlying platform. [Learn more](../azure-functions/functions-overview.md).
+Provides a serverless, event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, an HTTP trigger without worrying about the underlying platform. [Learn more](../azure-functions/functions-overview.md).
- - You can use a variety of languages to write functions in a language of your choice such as C#, Java, JavaScript, PowerShell, or Python and focus on specific pieces of code. Functions runtime is an open source.
+ - You can use a variety of languages to write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source.
- You can choose the hosting plan according to your function app scaling requirements, functionality, and resources required. - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp).
- - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
- - When you write PowerShell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
+ - You should avoid large and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within the Function Apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
| **Scenarios** | **Users** |
| -- | -- |
- | Respond to events on resources: such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts to send the teamΓÇÖs event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | The Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless applications where Azure Functions could be part of a larger application workflow. |
+ | Respond to events on resources, such as adding tags to a resource group based on cost center when a VM is deleted, and so on. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals, and so on. </br> </br> Process Azure alerts to send the team's event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | The Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless applications where Azure Functions could be part of a larger application workflow. |
## Orchestrate complex jobs in Azure Automation
Provides a serverless event-driven compute platform for automation that allows y
Logic Apps is a platform for creating and running complex orchestration workflows that integrate your apps, data, services, and systems. [Learn more](../logic-apps/logic-apps-overview.md).
- - Allows you to build smart integrations between 1st party and 3rd party apps, services and systems running across on-premises, hybrid and cloud native.
+ - Allows you to build smart integrations between first- and third-party apps, services, and systems running across on-premises, hybrid, and cloud-native environments.
 - Allows you to use managed connectors from a growing ecosystem of 450+ Azure connectors in your workflows.
 - Provides first-class support for enterprise integration and B2B scenarios.
- - Flexibility to visually create and edit workflows - Low Code\no code approach
- - Runs only in the cloud.
- - Provides a large collection of ready made actions and triggers.
+ - Allows flexibility to visually create and edit workflows - low code\no code approach
+ - Runs only in the cloud.
+ - Provides a large collection of ready-made actions and triggers.
| **Scenarios** | **Users** |
| -- | -- |
- | Schedule and send email notifications using Office 365 when a specific event happens. For example, a new file is uploaded. </br> </br> Route and process customer orders across on-premises systems and cloud services. </br></br> Move uploaded files from an SFTP or FTP server to Azure Storage. </br> </br> Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review. | The Pro integrators and developers, IT professionals who would want to use low code/no code option for Advanced integration scenarios to external systems or APIs. |
+ | Schedule and send email notifications using Office 365 when a specific event happens, for example, when a new file is uploaded. </br> </br> Route and process customer orders across on-premises systems and cloud services. </br></br> Move uploaded files from an SFTP or FTP server to Azure Storage. </br> </br> Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review. | Pro integrators, developers, and IT professionals who want to use a low-code/no-code option for advanced integration scenarios with external systems or APIs. |
### Azure Automation - Process Automation
-Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. It provides persistent shared assets, including variables, connections, and objects that allows orchestration of complex jobs. [Learn more](./overview.md).
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](./overview.md).
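As an illustration of those shared assets, here's a minimal, hypothetical sketch of a Python runbook. It assumes execution inside Azure Automation, where the `automationassets` module is available; the asset names are invented.

```python
# Sketch of an Azure Automation Python runbook that reads shared assets.
import automationassets

# Persistent shared assets defined in the Automation account (hypothetical names)
resource_group = automationassets.get_automation_variable("TargetResourceGroup")
creds = automationassets.get_automation_credential("OpsAccount")

print("Operating on resource group:", resource_group)
print("Running as:", creds["username"])  # a credential asset is returned as a dict
```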
| **Scenarios** | **Users** |
| -- | -- |
- | Azure resource life-cycle management and governance which includes Resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on through runbooks that are triggered from ITSM alerts. </br></br> Use hybrid worker as a bridge from cloud to on-premises enabling resource\user management on-premises. </br></br> Execute complex disaster recovery workflows through Automation runbooks. </br></br> Execute automation runbooks as part of Logic apps workflow through Azure Automation Connector. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure Administrators managing on-premises infrastructure using scripts or executing long running jobs such as month-end operations on servers running on-premises. |
+ | Azure resource life cycle management and governance, which includes resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on, through runbooks that are triggered from ITSM alerts. </br></br> Use a hybrid worker as a bridge from cloud to on-premises, enabling resource/user management on-premises. </br></br> Execute complex disaster recovery workflows through Automation runbooks. </br></br> Execute Automation runbooks as part of a Logic Apps workflow through the Azure Automation connector. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python-based scripting. </br> </br> Infrastructure administrators managing on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises. |
### Azure Functions
-Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, an HTTP trigger without worrying about the underlying platform [Learn more](../azure-functions/functions-overview.md).
+Provides a serverless, event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, you can respond to an HTTP trigger without worrying about the underlying platform. [Learn more](../azure-functions/functions-overview.md).
- - You can use a variety of languages to write functions in a language of your choice such as C#, Java, JavaScript, PowerShell, or Python and focus on specific pieces of code. Functions runtime is an open source.
+ - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source.
- You can choose the hosting plan according to your function app's scaling requirements, functionality, and required resources. - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp), as shown in the sketch after this list.
- - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
- - When you write PowerShell scripts within Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
+ - You should avoid large and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within Function Apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
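As referenced above, here's a minimal, hypothetical sketch of a durable functions orchestration in Python (v2 programming model). The function and VM names are invented, and the activity body is a placeholder rather than a real compute call.

```python
import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.orchestration_trigger(context_name="context")
def stop_vms_orchestrator(context: df.DurableOrchestrationContext):
    # Fan out: stop several VMs in parallel, then collect the results.
    vm_names = ["vm-a", "vm-b", "vm-c"]  # hypothetical
    tasks = [context.call_activity("StopVm", name) for name in vm_names]
    results = yield context.task_all(tasks)
    return results

@app.activity_trigger(input_name="vmName")
def StopVm(vmName: str) -> str:
    # Placeholder: a real implementation would call the compute API here.
    return f"stopped {vmName}"
```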
| **Scenarios** | **Users** |
| -- | -- |
- | Respond to events on resources: such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts where you can send teamΓÇÖs event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br>Executes Azure Function as part of Logic apps workflow through Azure Function Connector. | Application Developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless applications where single or multiple Azure Functions could be part of a larger application workflow. |
+ | Respond to events on resources, such as adding tags to a resource group based on cost center or reacting when a VM is deleted. </br> </br> Set scheduled tasks, such as setting a pattern to stop and start a VM at a specific time or reading blob storage content at regular intervals. </br> </br> Process Azure alerts where you can send the team an event when CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Execute an Azure function as part of a Logic Apps workflow through the Azure Function Connector. | Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud architects who build serverless applications where single or multiple Azure functions could be part of a larger application workflow. |
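For the scheduled stop/start scenario in the table above, a minimal Python sketch might look like the following. The subscription ID, resource group, VM name, and schedule are hypothetical, and `DefaultAzureCredential` assumes the function app has a managed identity with permission to deallocate the VM.

```python
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

app = func.FunctionApp()

SUBSCRIPTION_ID = "<subscription-id>"  # hypothetical

@app.timer_trigger(schedule="0 0 19 * * *", arg_name="timer", run_on_startup=False)
def stop_vm_nightly(timer: func.TimerRequest) -> None:
    # Runs every day at 19:00 UTC per the CRON schedule above.
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    # Deallocate releases compute so the VM stops accruing charges.
    compute.virtual_machines.begin_deallocate("my-rg", "my-vm").result()
```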
## Next steps-- To learn on how to securely execute the automation jobs, see [best practices for security in Azure Automation](./automation-security-guidelines.md).+
+To learn how to securely execute automation jobs, see [best practices for security in Azure Automation](./automation-security-guidelines.md).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 07/16/2024 Last updated : 08/12/2024
This page is updated monthly, so revisit it regularly. If you're looking for ite
> Only Connected Machine agent versions within the last year are officially supported by the product group. Customers should update to an agent version within this window. >
-## Version 1.44 - July 2024
+## Version 1.45 - August 2024
Download for [Windows](https://aka.ms/AzureConnectedMachineAgent) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)

### Fixed
+- Fixed an issue where EnableEnd telemetry would sometimes be sent too soon.
+- The agent now sends a failed, timed-out EnableEnd telemetry log if an extension takes longer than the allowed time to complete.
+
+### New features
+
+- Azure Arc proxy now supports HTTP traffic.
+- New proxy.bypass value 'AMA' added to support AMA VM extension proxy bypass.
+
+## Version 1.44 - July 2024
+
+Download for [Windows](https://download.microsoft.com/download/d/) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### Fixed
+- Fixed a bug where the service would sometimes reject reports from an upgraded extension if the previous extension was in a failed state.
+- Setting the OPENSSL_CNF environment variable at the process level to override the build openssl.cnf path on Windows.
+- Fixed access denied errors when writing configuration files.
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 07/28/2024 Last updated : 08/12/2024 # Customer intent: As a VI admin, I want to connect my VMM management server to Azure Arc.
This Quickstart shows you how to connect your SCVMM management server to Azure A
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. **Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and the on-premises management machine for the Static gateway IP and the IP address(es) of your DNS server(s) are needed. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed.|
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. The supported storage configurations are hybrid storage (flash and HDD) and all-flash storage (SSDs or NVMe). <br/><br/> A VM network with internet access, directly or through a proxy. The appliance VM is deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. **Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and the on-premises management machine; the Static gateway IP and the IP address(es) of your DNS server(s) are needed. <br/><br/> A library share with write permission for the SCVMM admin account through which the Resource Bridge deployment is performed.|
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource bridge VM. | | **Workstation** | The workstation will be used to run the helper script. Ensure you have [64-bit Azure CLI installed](/cli/azure/install-azure-cli) on the workstation.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-arc Support Matrix For System Center Virtual Machine Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/support-matrix-for-system-center-virtual-machine-manager.md
Previously updated : 07/28/2024 Last updated : 08/12/2024 keywords: "VMM, Arc, Azure" # Customer intent: As a VI admin, I want to understand the support matrix for System Center Virtual Machine Manager.
Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. **Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and the on-premises management machine for the Static gateway IP and the IP address(es) of your DNS server(s) are needed. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed. |
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. The supported storage configurations are hybrid storage (flash and HDD) and all-flash storage (SSDs or NVMe). <br/><br/> A VM network with internet access, directly or through a proxy. The appliance VM is deployed using this VM network.<br/><br/> Only Static IP allocation is supported; Dynamic IP allocation using DHCP isn't supported. Static IP allocation can be performed by one of the following approaches:<br><br> 1. **VMM IP Pool**: Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least three IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br> <br> 2. **Custom IP range**: Ensure that your VM network has three continuous free IP addresses. If your SCVMM server is behind a firewall, all the IPs in this IP range and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. If the VM network is configured with a VLAN, the VLAN ID is required as an input. Azure Arc Resource Bridge requires internal and external DNS resolution to the required sites and the on-premises management machine; the Static gateway IP and the IP address(es) of your DNS server(s) are needed. <br/><br/> A library share with write permission for the SCVMM admin account through which the Resource Bridge deployment is performed. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource Bridge VM. | | **Workstation** | The workstation will be used to run the helper script. Ensure you have [64-bit Azure CLI installed](/cli/azure/install-azure-cli) on the workstation.<br/><br/> When you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 06/27/2024 Last updated : 08/08/2024
Azure Arc-enabled VMware vSphere currently works with vCenter Server versions 7
> [!NOTE] > Azure Arc-enabled VMware vSphere supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend that you use Arc-enabled VMware vSphere with it at this point.
->If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, see [Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md).
+If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, see [Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md).
## Supported regions
You can use Azure Arc-enabled VMware vSphere in these supported regions:
- East US 2 - West US 2 - West US 3
+- Central US
- North Central US - South Central US - Canada Central
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Title: Use Microsoft Entra ID for cache authentication
+ Title: Use Microsoft Entra for cache authentication
-description: Learn how to use Microsoft Entra ID with Azure Cache for Redis.
+description: Learn how to use Microsoft Entra with Azure Cache for Redis.
-# Use Microsoft Entra ID for cache authentication
+# Use Microsoft Entra for cache authentication
-Azure Cache for Redis offers two methods to [authenticate](cache-configure.md#authentication) to your cache instance: Access keys and Microsoft Entra ID
+Azure Cache for Redis offers two methods to [authenticate](cache-configure.md#authentication) to your cache instance: access keys and Microsoft Entra.
Although access key authentication is simple, it comes with a set of challenges around security and password management. In contrast, in this article, you learn how to use a Microsoft Entra token for cache authentication.
-Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Microsoft Entra ID)](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open source Redis.
+Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Microsoft Entra](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open-source Redis.
-To use the ACL integration, your client application must assume the identity of a Microsoft Entra entity, like service principal or managed identity, and connect to your cache. In this article, you learn how to use your service principal or managed identity to connect to your cache, and how to grant your connection predefined permissions based on the Microsoft Entra artifact being used for the connection.
+To use the ACL integration, your client application must assume the identity of a Microsoft Entra entity, like service principal or managed identity, and connect to your cache. In this article, you learn how to use your service principal or managed identity to connect to your cache. You also learn how to grant your connection predefined permissions based on the Microsoft Entra artifact that's used for the connection.
## Scope of availability
-| **Tier** | Basic, Standard, Premium | Enterprise, Enterprise Flash |
+| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash |
|:--|::|:-:|
-| **Availability** | Yes | No |
+| Availability | Yes | No |
## Prerequisites and limitations -- Microsoft Entra ID-based authentication is supported for SSL connections and TLS 1.2 or higher.-- Microsoft Entra ID-based authentication isn't supported on Azure Cache for Redis instances that [depend on Cloud Services](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic).-- Microsoft Entra ID based authentication isn't supported in the Enterprise tiers of Azure Cache for Redis Enterprise.
+- Microsoft Entra authentication is supported for SSL connections and TLS 1.2 or higher.
+- Microsoft Entra authentication isn't supported on Azure Cache for Redis instances that [depend on Azure Cloud Services](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic).
+- Microsoft Entra authentication isn't supported in the Enterprise tiers of Azure Cache for Redis Enterprise.
- Some Redis commands are blocked. For a full list of blocked commands, see [Redis commands not supported in Azure Cache for Redis](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). > [!IMPORTANT]
-> Once a connection is established using Microsoft Entra token, client applications must periodically refresh Microsoft Entra token before expiry, and send an `AUTH` command to Redis server to avoid disruption of connections. For more information, see [Configure your Redis client to use Microsoft Entra ID](#configure-your-redis-client-to-use-microsoft-entra-id).
+> After a connection is established by using a Microsoft Entra token, client applications must periodically refresh the Microsoft Entra token before expiry. Then the apps must send an `AUTH` command to the Redis server to avoid disrupting connections. For more information, see [Configure your Redis client to use Microsoft Entra](#configure-your-redis-client-to-use-microsoft-entra).
-## Enable Microsoft Entra ID authentication on your cache
+## Enable Microsoft Entra authentication on your cache
-1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to configure Microsoft Entra token-based authentication.
+1. In the Azure portal, select the Azure Cache for Redis instance where you want to configure Microsoft Entra token-based authentication.
-1. Select **Authentication** from the Resource menu.
+1. On the **Resource** menu, select **Authentication**.
-1. In the working pane, select **Enable Microsoft Entra Authentication**.
+1. On the working pane, select the **Microsoft Entra Authentication** tab.
-1. Select **Enable Microsoft Entra Authentication**, and enter the name of a valid user. The user you enter is automatically assigned _Data Owner Access Policy_ by default when you select **Save**. You can also enter a managed identity or service principal to connect to your cache instance.
+1. Select **Enable Microsoft Entra Authentication** and enter the name of a valid user. The user you enter is automatically assigned **Data Owner Access Policy** by default when you select **Save**. You can also enter a managed identity or service principal to connect to your cache instance.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-enable-microsoft-entra.png" alt-text="Screenshot showing authentication selected in the resource menu and the enable Microsoft Entra authentication checked.":::
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-enable-microsoft-entra.png" alt-text="Screenshot showing authentication selected in the resource menu and the Enable Microsoft Entra authentication checkbox.":::
-1. A popup dialog box displays asking if you want to update your configuration, and informing you that it takes several minutes. Select **Yes.**
+1. A pop-up dialog asks if you want to update your configuration and informs you that it takes several minutes. Select **Yes**.
> [!IMPORTANT]
- > Once the enable operation is complete, the nodes in your cache instance reboots to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
+ > After the enable operation is finished, the nodes in your cache instance reboot to load the new configuration. We recommend that you perform this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
-For information on using Microsoft Entra ID with Azure CLI, see the [references pages for identity](/cli/azure/redis/identity).
+For information on how to use Microsoft Entra with the Azure CLI, see the [reference pages for identity](/cli/azure/redis/identity).
## Disable access key authentication on your cache
-Using Microsoft Entra ID is the secure way to connect your cache. We recommend using Microsoft Entra ID and disabling access keys.
+Using Microsoft Entra is the secure way to connect your cache. We recommend that you use Microsoft Entra and disable access keys.
-When you disable access key Authentication for a cache, all existing client connections are terminated, whether they use access keys or Microsoft Entra ID auth-based. You're advised to follow the recommended Redis client best practices to implement proper retry mechanisms for reconnecting MS Entra-based connections, if any.
+When you disable access key authentication for a cache, all existing client connections are terminated, whether they use access keys or Microsoft Entra authentication. Follow the recommended Redis client best practices to implement proper retry mechanisms for reconnecting Microsoft Entra-based connections, if any.
Before you disable access keys: -- Before you disable access keys, Microsoft Entra ID authorization must be enabled.
+- Microsoft Entra authorization must be enabled.
- Disabling access keys is only available for Basic, Standard, and Premium tier caches.-- For geo-replicated caches, before you disable accces keys, you must: 1) unlink the caches, 2) disable access keys, and finally, 3) relink the caches.
+- For geo-replicated caches, you must:
-If you have a cache where access keys are used, and you want to disable access keys, follow this procedure.
+ 1. Unlink the caches.
+ 1. Disable access keys.
+ 1. Relink the caches.
-1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to disable access keys.
+If you have a cache where access keys are used and you want to disable access keys, follow this procedure:
-1. Select **Authentication** from the Resource menu.
+1. In the Azure portal, select the Azure Cache for Redis instance where you want to disable access keys.
-1. In the working pane, selectΓÇ»**Access keys**.
+1. On the **Resource** menu, select **Authentication**.
+
+1. On the working pane, select **Access keys**.
1. Select **Disable Access Keys Authentication**. Then, select **Save**.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-disable-access-keys.png" alt-text="Screenshot showing access keys in the working pane with a red box around Disable Access Key Authentication. ":::
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-disable-access-keys.png" alt-text="Screenshot showing access keys in the working pane with the Disable Access Keys Authentication checkbox. ":::
-1. You're asked to confirm that you want to update your configuration. SelectΓÇ»**Yes**.
+1. Confirm that you want to update your configuration by selecting **Yes**.
> [!IMPORTANT]
-> When the **Disable Access Key Authentication**" setting is changed for a cache, all existing client connections, using access keys or Microsoft Entra ID, are terminated. Follow the best practices to implement proper retry mechanisms for reconnecting MS Entra-based connections. For more information, see [Connection resilience](cache-best-practices-connection.md).
+> When the **Disable Access Keys Authentication** setting is changed for a cache, all existing client connections, using access keys or Microsoft Entra, are terminated. Follow the best practices to implement proper retry mechanisms for reconnecting Microsoft Entra-based connections. For more information, see [Connection resilience](cache-best-practices-connection.md).
-## Using data access configuration with your cache
+## Use data access configuration with your cache
-If you would like to use a custom access policy instead of Redis Data Owner, go to the **Data Access Configuration** on the Resource menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application).
+If you want to use a custom access policy instead of Redis Data Owner, go to **Data Access Configuration** on the **Resource** menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application).
-1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to add to the Data Access Configuration.
+1. In the Azure portal, select the Azure Cache for Redis instance where you want to add the data access configuration.
-1. Select **Data Access Configuration** from the Resource menu.
+1. On the **Resource** menu, select **Data Access Configuration**.
-1. Select **Add** and choose **New Redis User**.
+1. Select **Add** and then select **New Redis User**.
-1. On the **Access Policy** tab, select one the available policies in the table: **Data Owner**, **Data Contributor**, or **Data Reader**. Then, select the **Next:Redis Users**.
+1. On the **Access Policies** tab, select one of the available policies in the table: **Data Owner**, **Data Contributor**, or **Data Reader**. Then, select **Next: Redis Users**.
:::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-new-redis-user.png" alt-text="Screenshot showing the available Access Policies.":::
-1. Choose either the **User or service principal** or **Managed Identity** to determine how to assign access to your Azure Cache for Redis instance. If you select **User or service principal**, and you want to add a _user_, you must first [enable Microsoft Entra Authentication](#enable-microsoft-entra-id-authentication-on-your-cache).
+1. Choose either **User or service principal** or **Managed Identity** to determine how to assign access to your Azure Cache for Redis instance. If you select **User or service principal** and you want to add a user, you must first [enable Microsoft Entra authentication](#enable-microsoft-entra-authentication-on-your-cache).
+
+1. Choose **Select members**, choose **Select**, and then select **Next: Review + assign**.
-1. Then, select **Select members** and select **Select**. Then, select **Next : Review + Assign**.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-select-members.png" alt-text="Screenshot showing members to add as New Redis Users.":::
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-select-members.png" alt-text="Screenshot showing members to add as new Redis users.":::
-1. A dialog box displays a popup notifying you that upgrading is permanent and might cause a brief connection blip. Select **Yes.**
+1. A pop-up dialog notifies you that upgrading is permanent and might cause a brief connection blip. Select **Yes**.
> [!IMPORTANT]
- > Once the enable operation is complete, the nodes in your cache instance reboots to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
+ > After the enable operation is finished, the nodes in your cache instance reboot to load the new configuration. We recommend that you perform this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
-## Configure your Redis client to use Microsoft Entra ID
+## Configure your Redis client to use Microsoft Entra
-Because most Azure Cache for Redis clients assume that a password and access key are used for authentication, you likely need to update your client workflow to support authentication using Microsoft Entra ID. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis using a Microsoft Entra token.
+Because most Azure Cache for Redis clients assume that a password and access key are used for authentication, you likely need to update your client workflow to support authentication by using Microsoft Entra. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis by using a Microsoft Entra token.
-### Microsoft Entra Client Workflow
+### Microsoft Entra client workflow
-1. Configure your client application to acquire a Microsoft Entra token for scope, `https://redis.azure.com/.default` or `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default`, using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
+1. Configure your client application to acquire a Microsoft Entra token for the scope `https://redis.azure.com/.default` or `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` by using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
-1. Update your Redis connection logic to use following `User` and `Password`:
+1. Update your Redis connection logic to use the following `User` and `Password`:
- `User` = Object ID of your managed identity or service principal
- - `Password` = Microsoft Entra token that you acquired using MSAL
+ - `Password` = Microsoft Entra token that you acquired by using MSAL
-1. Ensure that your client executes a Redis [AUTH command](https://redis.io/commands/auth/) automatically before your Microsoft Entra token expires using:
+1. Ensure that your client executes a Redis [AUTH command](https://redis.io/commands/auth/) automatically before your Microsoft Entra token expires by using:
- `User` = Object ID of your managed identity or service principal
- `Password` = Microsoft Entra token refreshed periodically

### Client library support
-The library [`Microsoft.Azure.StackExchangeRedis`](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) is an extension of `StackExchange.Redis` that enables you to use Microsoft Entra ID to authenticate connections from a Redis client application to an Azure Cache for Redis. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days.
+The library [`Microsoft.Azure.StackExchangeRedis`](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) is an extension of `StackExchange.Redis` that enables you to use Microsoft Entra to authenticate connections from a Redis client application to an Azure Cache for Redis. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days.
-This [code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance using Microsoft Entra ID.
+[This code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance by using Microsoft Entra.
-The following table includes links to code samples, which demonstrate how to connect to your Azure Cache for Redis instance using a Microsoft Entra token. A wide variety of client libraries are included in multiple languages.
+The following table includes links to code samples. They demonstrate how to connect to your Azure Cache for Redis instance by using a Microsoft Entra token. Various client libraries are included in multiple languages.
-| **Client library** | **Language** | **Link to sample code**|
+| Client library | Language | Link to sample code|
|-|-|-| | StackExchange.Redis | .NET | [StackExchange.Redis code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) |
-| redis-py | Python | [redis-py code Sample](https://aka.ms/redis/aad/sample-code/python) |
+| redis-py | Python | [redis-py code sample](https://aka.ms/redis/aad/sample-code/python) |
| Jedis | Java | [Jedis code sample](https://aka.ms/redis/aad/sample-code/java-jedis) | | Lettuce | Java | [Lettuce code sample](https://aka.ms/redis/aad/sample-code/java-lettuce) | | Redisson | Java | [Redisson code sample](https://aka.ms/redis/aad/sample-code/java-redisson) |
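The linked samples are authoritative; as a quick orientation, here's a minimal, hypothetical sketch of the workflow above in Python using `redis-py` and `azure-identity`. The host name and object ID below are placeholders.

```python
import redis
from azure.identity import DefaultAzureCredential

SCOPE = "https://redis.azure.com/.default"
HOST = "contoso.redis.cache.windows.net"  # hypothetical cache host name
USER = "<object-id-of-identity>"          # object ID of the managed identity or service principal

credential = DefaultAzureCredential()
token = credential.get_token(SCOPE)

# User = object ID, Password = Microsoft Entra token; TLS (port 6380) is required.
r = redis.Redis(host=HOST, port=6380, ssl=True, username=USER, password=token.token)
print(r.ping())
```

A production client would also track `token.expires_on` and reissue `AUTH` with a fresh token before expiry, as the workflow steps describe.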
The following table includes links to code samples, which demonstrate how to con
### Best practices for Microsoft Entra authentication -- Configure private links or firewall rules to protect your cache from a Denial of Service attack.--- Ensure that your client application sends a new Microsoft Entra token at least 3 minutes before token expiry to avoid connection disruption.--- When calling the Redis server `AUTH` command periodically, consider adding a jitter so that the `AUTH` commands are staggered, and your Redis server doesn't receive lot of `AUTH` commands at the same time.
+- Configure private links or firewall rules to protect your cache from a denial of service attack.
+- Ensure that your client application sends a new Microsoft Entra token at least three minutes before token expiry to avoid connection disruption.
+- When you call the Redis server `AUTH` command periodically, consider adding a jitter so that the `AUTH` commands are staggered. In this way, your Redis server doesn't receive too many `AUTH` commands at the same time.
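As a small sketch of the jitter idea in Python (the lead time and jitter window below are illustrative choices, not prescribed values):

```python
import random
import time

def seconds_until_refresh(expires_on: float, lead_seconds: float = 180.0) -> float:
    """Refresh at least `lead_seconds` before token expiry, minus random jitter,
    so many clients don't send AUTH at the same moment."""
    jitter = random.uniform(0.0, 30.0)  # illustrative jitter window
    return max(0.0, expires_on - time.time() - lead_seconds - jitter)
```

A client would sleep for that duration, acquire a fresh token, and then issue `AUTH` with the same user and the new token.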
## Related content
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
The following list contains some examples of permission strings for various scen
## Configure your Redis client to use Microsoft Entra ID
-Now that you have configured Redis User and Data access policy for configuring role based access control, you need to update your client workflow to support authenticating using a specific user/password. To learn how to configure your client application to connect to your cache instance as a specific Redis User, see [Configure your Redis client to use Microsoft Entra ID](cache-azure-active-directory-for-authentication.md#configure-your-redis-client-to-use-microsoft-entra-id).
+Now that you've configured a Redis user and data access policy for role-based access control, you need to update your client workflow to support authenticating as a specific user with a password. To learn how to configure your client application to connect to your cache instance as a specific Redis user, see [Configure your Redis client to use Microsoft Entra](cache-azure-active-directory-for-authentication.md#configure-your-redis-client-to-use-microsoft-entra).
## Next steps
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_IN
Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. Use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended. - ## APPLICATIONINSIGHTS_AUTHENTICATION_STRING Enables access to Application Insights by using Microsoft Entra authentication. Use this setting when you must connect to your Application Insights workspace by using Microsoft Entra authentication. For more information, see [Microsoft Entra authentication for Application Insights](../azure-monitor/app/azure-ad-authentication.md).
azure-functions Functions Bindings Openai Assistant Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-assistant-trigger.md
This example demonstrates how to create an assistant that adds a new todo task t
This example demonstrates how to create an assistant that adds a new todo task to a database. The trigger has a static description of `Create a new todo task` used by the model. The function itself takes a string, which represents a new task to add. When executed, the function adds the task as a new todo item in a custom item store and returns a response from the store. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Assistantcreate Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-assistantcreate-output.md
This example demonstrates the creation process, where the HTTP PUT function that
This example demonstrates the creation process, where the HTTP PUT function that creates a new assistant chat bot with the specified ID. The response to the prompt is returned in the HTTP response. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Assistantpost Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-assistantpost-input.md
This example demonstrates the creation process, where the HTTP POST function tha
This example demonstrates the creation process, where the HTTP POST function that sends user prompts to the assistant chat bot. The response to the prompt is returned in the HTTP response. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Assistantquery Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-assistantquery-input.md
This example demonstrates the creation process, where the HTTP GET function that
This example demonstrates the creation process, where the HTTP GET function that queries the conversation history of the assistant chat bot. The response to the prompt is returned in the HTTP response. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Embeddings Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-embeddings-input.md
This example shows how to retrieve embeddings stored at a specified file that is
::: zone pivot="programming-language-typescript" This example shows how to generate embeddings for a raw text string. This example shows how to retrieve embeddings stored at a specified file that is accessible to the function. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Embeddingsstore Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-embeddingsstore-output.md
This example writes an HTTP input stream to a semantic document store at the pro
::: zone pivot="programming-language-typescript" This example writes an HTTP input stream to a semantic document store at the provided URL. ::: zone-end ::: zone pivot="programming-language-powershell"
azure-functions Functions Bindings Openai Semanticsearch Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-semanticsearch-input.md
This example shows how to perform a semantic search on a file.
This example shows how to perform a semantic search on a file. ::: zone-end
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
To view the library for your Python version, go to:
### Azure Functions Python worker dependencies
-The Azure Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might be unavailable to your code when it's running outside of Azure Functions. You'll find a detailed list of dependencies in the "install\_requires" section of the [*setup.py*](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
+The Azure Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might be unavailable to your code when it's running outside of Azure Functions.
> [!NOTE] > If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in the *requirements.txt* file might cause unexpected issues.
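For example, a function app's *requirements.txt* lists only your app's own dependencies; the worker never appears in it. The entries below are illustrative:

```text
azure-functions
requests==2.32.3
```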
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
Target-based scaling replaces the previous Azure Functions incremental scaling m
![Illustration of the equation: desired instances = event source length / target executions per instance.](./media/functions-target-based-scaling/target-based-scaling-formula.png)
-The default _target executions per instance_ values come from the SDKs used by the Azure Functions extensions. You don't need to make any changes for target-based scaling to work.
+In this equation, _event source length_ refers to the number of events that must be processed. The default _target executions per instance_ values come from the SDKs used by the Azure Functions extensions. You don't need to make any changes for target-based scaling to work.
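As a worked example of the formula with illustrative numbers (rounding up to whole instances is an assumption made for this sketch):

```python
import math

# Illustrative values only: 1,000 unprocessed queue messages and a
# hypothetical target of 16 executions per instance.
event_source_length = 1000
target_executions_per_instance = 16

desired_instances = math.ceil(event_source_length / target_executions_per_instance)
print(desired_instances)  # 63
```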
## Considerations
azure-functions Monitor Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions-reference.md
The following table lists the metrics available for the Microsoft.Web/sites reso
>These metrics aren't available when your function app runs on Linux in a [Consumption plan](./consumption-plan.md). [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)]
The following table lists the metrics available for the Microsoft.Web/sites reso
[!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.Web/sites The log specific to Azure Functions is **FunctionAppLogs**.
azure-functions Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md
Azure Functions offers built-in integration with Application Insights to monitor functions executions. For detailed information about how to integrate, configure, and use Application Insights to monitor Azure Functions, see the following articles: - [Monitor executions in Azure Functions](functions-monitoring.md)-- [How to configure monitoring for Azure Functions](configure-monitoring.md)
+- [Configure monitoring for Azure Functions](configure-monitoring.md)
- [Analyze Azure Functions telemetry in Application Insights](analyze-telemetry-data.md). - [Monitor Azure Functions with Application Insights](/azure/azure-monitor/app/monitor-functions)
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
Title: Azure Monitor Agent extension versions description: This article describes the version details for the Azure Monitor Agent virtual machine extension. Previously updated : 4/2/2023 Last updated : 08/12/2024 -+ # Azure Monitor Agent extension versions
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
Stateless alerts fire each time the condition is met. The alert condition for al
- All activity log alerts are stateless. - The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency: - **Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent sometime between one and six minutes.
- - **Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent sometime between 15 to 30 minutes.
+ - **Alert frequency of 5 minutes or more**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent sometime between 15 and 30 minutes.
### Stateful alerts Stateful alerts fire when the rule conditions are met, and will not fire again or trigger any more actions until the conditions are resolved.
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
If your log search alert didn't fire when it should have, check the following it
- [Custom logs tables](../agents/data-sources-custom-logs.md) haven't been created because the data flow hasn't started. - Changes in the [query language](/azure/kusto/query/) include a revised format for commands and functions, so the query provided earlier is no longer valid.
- [Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log search alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log alert rule to ensure monitoring'.
+ Azure Service Health monitors the health of your cloud resources, including log search alert rules. When a log search alert rule is healthy, the rule runs and the query executes successfully. You can use [resource health for log search alert rules](https://learn.microsoft.com/azure/azure-monitor/alerts/log-alert-rule-health) to learn about the issues affecting your log search alert rules.
1. **Was the log search alert rule disabled?**
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 12/15/2023 Last updated : 08/12/2024 # Application Insights overview
This section lists all supported platforms and frameworks.
* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/) * [Power BI for workspace-based resources](../logs/log-powerbi.md)
-### Unsupported SDKs
+### Unsupported Software Development Kits (SDKs)
Many community-supported Application Insights SDKs exist. Azure Monitor only provides support when you use the supported instrumentation options listed in this article. We're constantly assessing opportunities to expand our support for other languages. For the latest news, see [Azure updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights).
From [client webpages](./javascript-sdk.md):
* Exception details and message accompanying the error * Line & column number of error * URL where error was raised
-* Network Dependency Requests made by your app XHR and Fetch (fetch collection is disabled by default) requests, include information on:
- * Url of dependency source
- * Command & Method used to request the dependency
- * Duration of the request
- * Result code and success status of the request
- * ID (if any) of user making the request
- * Correlation context (if any) where request is made
+ * Network dependency requests made by your app by using XMLHttpRequest (XHR) and Fetch (fetch collection is disabled by default), which include information on:
+ * URL of dependency source
+ * Command & Method used to request the dependency
+ * Duration of the request
+ * Result code and success status of the request
+ * ID (if any) of user making the request
+ * Correlation context (if any) where request is made
* User information (for example, Location, network, IP) * Device information (for example, Browser, OS, version, language, model) * Session information
Use the [REST API](/rest/api/application-insights/) to run [Log Analytics](../lo
### Can I send telemetry to the Application Insights portal?
-We recommend that you use our SDKs and use the [SDK API](./api-custom-events-metrics.md). There are variants of the SDK for various [platforms](./app-insights-overview.md#supported-languages). These SDKs handle processes like buffering, compression, throttling, and retries. However, the [ingestion schema](https://github.com/microsoft/ApplicationInsights-dotnet/tree/master/BASE/Schem) are public.
+We recommend the [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md).
+
+The [ingestion schema](https://github.com/microsoft/ApplicationInsights-dotnet/tree/master/BASE/Schem) is available publicly.
### How long does it take for telemetry to be collected?
Data is sent to an Application Insights [Log Analytics workspace](../logs/log-an
#### Privacy
-Application Insights doesn't handle sensitive data by default, as long as you don't put sensitive data in URLs as plain text and ensure your custom code doesn't collect personal or other sensitive details. During development and testing, check the sent data in your IDE and browser's debugging output windows.
+Application Insights doesn't handle sensitive data by default. We recommend you don't put sensitive data in URLs as plain text and ensure your custom code doesn't collect personal or other sensitive details. During development and testing, check the sent data in your IDE and browser's debugging output windows.
-For archived information on this topic, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy).
+For archived information, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy).
### What is the Application Insights pricing model?
-Application Insights is billed through the Log Analytics workspace into which its log data ingested.
-The default Pay-as-you-go Log Analytics pricing tier includes 5 GB per month of free data allowance per billing account.
-Learn more about [Azure Monitor logs pricing options](https://azure.microsoft.com/pricing/details/monitor/).
+Application Insights is billed through the Log Analytics workspace into which its log data is ingested. The default Pay-as-you-go Log Analytics pricing tier includes 5 GB per month of free data allowance per billing account. Learn more about [Azure Monitor logs pricing options](https://azure.microsoft.com/pricing/details/monitor/).
### Are there data transfer charges between an Azure web app and Application Insights?
This answer depends on the distribution of our endpoints, *not* on where your Ap
### Do I incur network costs if my Application Insights resource is monitoring an Azure resource (that is, telemetry producer) in a different region?
-Yes, you may incur more network costs, which vary depending on the region the telemetry is coming from and where it's going.
+Yes, you can incur more network costs, which vary depending on the region the telemetry is coming from and where it's going.
Refer to [Azure bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/) for details.

## Help and support
For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
-### Microsoft Q&A questions forum
+### Microsoft Questions and Answers (Q&A) forum
-Post general questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html).
+Post general questions to the [Microsoft Questions and Answers (Q&A) forum](/answers/topics/24223/azure-monitor.html).
### Stack Overflow
Leave product feedback for the engineering team in the [Feedback Community](http
### Troubleshooting
-Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/welcome-azure-monitor) for Application Insights.
+- [OpenTelemetry Distro](opentelemetry-enable.md#troubleshooting)
+- [Application Map](app-map.md#troubleshooting-tips)
## Next steps
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Autoinstrumentation for Azure Monitor Application Insights
description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 12/15/2023 Last updated : 08/12/2024
Autoinstrumentation enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). It provides easy access to experiences such as the [application dashboard](overview-dashboard.md) and [application map](app-map.md).
-If your language and platform are supported, select the corresponding link in the [Supported environments, languages, and resource providers table](#supported-environments-languages-and-resource-providers) for more detailed information. In many cases, autoinstrumentation is enabled by default.
+The term "autoinstrumentation" is a portmanteau, a linguistic blend where parts of multiple words combine into a new word. "Autoinstrumentation" combines "auto" and "instrumentation." It sees widespread use in software observability and describes the process of adding instrumentation code to applications without manual coding by developers.
+
+The autoinstrumentation process varies by language and platform, but often involves a toggle button in the Azure portal. The following example shows a toggle button for [Azure App Service](../../app-service/getting-started.md#getting-started-with-azure-app-service) autoinstrumentation.
++
+> [!TIP]
+> *We do not provide autoinstrumentation specifics for all languages and platforms in this article.* For detailed information, select the corresponding link in the [Supported environments, languages, and resource providers table](#supported-environments-languages-and-resource-providers). In many cases, autoinstrumentation is enabled by default.
## What are the autoinstrumentation advantages?
The following table shows the current state of autoinstrumentation availability.
Links are provided to more information for each supported scenario.

> [!NOTE]
-> If your hosting environment or resource provider is not listed in the following table, autoinstrumentation is not supported. You can manually instrument your code using Application Insights SDKs or Azure Monitor OpenTelemetry Distros. For more information, see [Data Collection Basics of Azure Monitor Application Insights](opentelemetry-overview.md).
+> If your hosting environment or resource provider is not listed in the following table, then autoinstrumentation is not supported. In this case, we recommend manually instrumenting your application using the [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md). For more information, see [Data Collection Basics of Azure Monitor Application Insights](opentelemetry-overview.md).
|Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
|-|-|-|--|-|--|
Links are provided to more information for each supported scenario.
> [!NOTE]
> Autoinstrumentation was known as "codeless attach" before October 2021.
-## JavaScript (Web) SDK Loader Script injection by configuration
-
-When using supported Software Development Kits (SDKs), you can enable SDK injection in configuration to automatically inject JavaScript (Web) SDK Loader Script onto each page.
+## Frequently asked questions
+#### Should the term "autoinstrumentation" be hyphenated?
- | Language
- | : |
- | [ASP.NET Core](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications) |
- | [Node.js](./nodejs.md#browser-sdk-loader) |
- | [Java](./java-standalone-config.md#browser-sdk-loader-preview) |
+We follow the [Microsoft Style Guide](/style-guide/punctuation/dashes-hyphens/hyphens#prefixes) for product documentation published to the [Microsoft Learn](/) platform.
-For other methods to instrument your application with the Application Insights JavaScript SDK, see [Get started with the JavaScript SDK](./javascript-sdk.md).
+In general, we don't include a hyphen after the "auto" prefix.
## Next steps
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
We provide the [Debug plugin](https://github.com/microsoft/ApplicationInsights-J
Follow the steps in this section to instrument your application with the Application Insights JavaScript SDK.

> [!TIP]
-> Good news! We're making it even easier to enable JavaScript. Check out where [JavaScript (Web) SDK Loader Script injection by configuration is available](./codeless-overview.md#javascript-web-sdk-loader-script-injection-by-configuration)!
+> Good news! We're making it even easier to enable JavaScript with JavaScript (Web) SDK Loader Script injection by configuration.
+>
+> - [ASP.NET Core](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications)
+> - [Node.js](./nodejs.md#browser-sdk-loader)
+> - [Java](./java-standalone-config.md#browser-sdk-loader-preview)
### Add the JavaScript code
azure-monitor Kubernetes Monitoring Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-private-link.md
If your AKS cluster isn't in the same region as your Azure Monitor workspace, th
:::image type="content" source="media/kubernetes-monitoring-private-link/azure-monitor-workspace-data-collection-rule.png" alt-text="A screenshot show the data collection rules page for an Azure Monitor workspace." lightbox="media/kubernetes-monitoring-private-link/azure-monitor-workspace-data-collection-rule.png" :::
-## Ingestion from a private AKS cluster
+### Ingestion from a private AKS cluster
By default, a private AKS cluster can send data to Managed Prometheus and your Azure Monitor workspace over the public network using a public Data Collection Endpoint. If you choose to use an Azure Firewall to limit the egress from your cluster, you can implement one of the following:
If you choose to use an Azure Firewall to limit the egress from your cluster, yo
- `*.ingest.monitor.azure.com`
- Enable the Azure Firewall to access the Azure Monitor Private Link scope and DCE that's used for data ingestion.
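If you allow the FQDN, an Azure Firewall application rule might look like the following sketch. The firewall name, rule collection, priority, and source range are assumptions, and the command requires the `azure-firewall` CLI extension:

```azurecli
# Sketch only; firewall name, collection, priority, and source range are placeholders.
az network firewall application-rule create \
  --resource-group my-resource-group \
  --firewall-name my-firewall \
  --collection-name monitor-ingestion \
  --name allow-monitor-ingest \
  --priority 200 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.0.0/16 \
  --target-fqdns "*.ingest.monitor.azure.com"
```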
-## Private link ingestion for remote write
+### Private link ingestion for remote write
Use the following steps to set up remote write for a Kubernetes cluster over a private link virtual network and an Azure Monitor Private Link scope.

1. Create your Azure virtual network.
Data for Container insights, is stored in a [Log Analytics workspace](../logs/lo
### Cluster using managed identity authentication
-### Existing AKS Cluster
-
-**Use default Log Analytics workspace**
+**Existing AKS cluster with default Log Analytics workspace**
```azurecli
az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
```

Example:

```azurecli
az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
```
-**Use existing Log Analytics workspace**
+**Existing AKS cluster with existing Log Analytics workspace**
```azurecli
az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
```

Example:

```azurecli
az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
```
-### New AKS cluster
+**New AKS cluster**
```azurecli
az aks create --resource-group rgName --name clusterName --enable-addons monitoring --workspace-resource-id "workspaceResourceId" --ampls-resource-id "azure-monitor-private-link-scope-resource-id"
az aks create --resource-group "my-resource-group" --name "my-cluster" --enabl
```
-## Cluster using legacy authentication
+### Cluster using legacy authentication
Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is not using managed identity authentication. This requires a [private AKS cluster](/azure/aks/private-clusters).

1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters).
azure-monitor Edge Pipeline Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/edge-pipeline-configure.md
Title: Configuration of Azure Monitor pipeline for edge and multicloud
+ Title: Configuration of Azure Monitor pipeline at edge and multicloud
description: Configuration of Azure Monitor pipeline for edge and multicloud Last updated 04/25/2024
-# Configuration of Azure Monitor edge pipeline
-[Azure Monitor pipeline](./pipeline-overview.md) is a data ingestion pipeline providing consistent and centralized data collection for Azure Monitor. The [edge pipeline](./pipeline-overview.md#edge-pipeline) enables at-scale collection, and routing of telemetry data before it's sent to the cloud. It can cache data locally and sync with the cloud when connectivity is restored and route telemetry to Azure Monitor in cases where the network is segmented and data cannot be sent directly to the cloud. This article describes how to enable and configure the edge pipeline in your environment.
+# Configuration of Azure Monitor pipeline at edge
+[Azure Monitor pipeline](./pipeline-overview.md) is a data ingestion pipeline providing consistent and centralized data collection for Azure Monitor. The [pipeline at edge](./pipeline-overview.md#edge-pipeline) enables at-scale collection and routing of telemetry data before it's sent to the cloud. It can cache data locally, sync with the cloud when connectivity is restored, and route telemetry to Azure Monitor when the network is segmented and data can't be sent directly to the cloud. This article describes how to enable and configure the pipeline at edge in your environment.
## Overview
-The Azure Monitor edge pipeline is a containerized solution that is deployed on an [Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) and leverages OpenTelemetry Collector as a foundation. The following diagram shows the components of the edge pipeline. One or more data flows listen for incoming data from clients, and the pipeline extension forwards the data to the cloud, using the local cache if necessary.
+The Azure Monitor pipeline at edge is a containerized solution that is deployed on an [Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) and leverages OpenTelemetry Collector as a foundation. The following diagram shows the components of the pipeline at edge. One or more data flows listen for incoming data from clients, and the pipeline extension forwards the data to the cloud, using the local cache if necessary.
-The pipeline configuration file defines the data flows and cache properties for the edge pipeline. The [DCR](./pipeline-overview.md#data-collection-rules) defines the schema of the data being sent to the cloud pipeline, a transformation to filter or modify the data, and the destination where the data should be sent. Each data flow definition for the pipeline configuration specifies the DCR and stream within that DCR that will process that data in the cloud pipeline.
+The pipeline configuration file defines the data flows and cache properties for the pipeline at edge. The [DCR](./pipeline-overview.md#data-collection-rules) defines the schema of the data being sent to the cloud pipeline, a transformation to filter or modify the data, and the destination where the data should be sent. Each data flow definition for the pipeline configuration specifies the DCR and stream within that DCR that will process that data in the cloud pipeline.
> [!NOTE]
-> Private link is supported by edge pipeline for the connection to the cloud pipeline.
+> Private link is supported by pipeline at edge for the connection to the cloud pipeline.
-The following components and configurations are required to enable the Azure Monitor edge pipeline. If you use the Azure portal to configure the edge pipeline, then each of these components is created for you. With other methods, you need to configure each one.
+The following components and configurations are required to enable the Azure Monitor pipeline at edge. If you use the Azure portal to configure the pipeline at edge, then each of these components is created for you. With other methods, you need to configure each one.
| Component | Description |
The following components and configurations are required to enable the Azure Mon
## Supported configurations

**Supported distros**<br>
-Edge pipeline is supported on the following Kubernetes distributions:
+Azure Monitor pipeline at edge is supported on the following Kubernetes distributions:
- Canonical - Cluster API Provider for Azure
Edge pipeline is supported on the following Kubernetes distributions:
- VMware Tanzu Kubernetes Grid

**Supported locations**<br>
-Edge pipeline is supported in the following Azure regions:
+Azure Monitor pipeline at edge is supported in the following Azure regions:
- East US2
- West US2
Edge pipeline is supported in the following Azure regions:
- [Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) in your own environment with an external IP address. See [Connect an existing Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md) for details on enabling Arc for a cluster.
- The Arc-enabled Kubernetes cluster must have the custom locations feature enabled. See [Create and manage custom locations on Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/custom-locations#enable-custom-locations-on-your-cluster).
+- Log Analytics workspace in Azure Monitor to receive the data from the pipeline at edge. See [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md) for details on creating a workspace.
- The following resource providers must be registered in your Azure subscription. See [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
  - Microsoft.Insights
  - Microsoft.Monitor
Edge pipeline is supported in the following Azure regions:
## Workflow

You don't need a detailed understanding of the different steps performed by the Azure Monitor pipeline to configure it using the Azure portal. You may need a more detailed understanding if you use another method of installation or if you need to perform more advanced configuration, such as transforming the data before it's stored in its destination.
-The following tables and diagrams describe the detailed steps and components in the process for collecting data using the edge pipeline and passing it to the cloud pipeline for storage in Azure Monitor. Also included in the tables is the configuration required for each of those components.
+The following tables and diagrams describe the detailed steps and components in the process for collecting data using the pipeline at edge and passing it to the cloud pipeline for storage in Azure Monitor. Also included in the tables is the configuration required for each of those components.
| Step | Action | Supporting configuration |
|:|:|:|
The following tables and diagrams describe the detailed steps and components in
| 3. | Exporter tries to send the data to the cloud pipeline. | Exporter in the pipeline configuration includes URL of the DCE, a unique identifier for the DCR, and the stream in the DCR that defines how the data will be processed. |
| 3a. | Exporter stores data in the local cache if it can't connect to the DCE. | Persistent volume for the cache and configuration of the local cache is enabled in the pipeline configuration. |

| Step | Action | Supporting configuration |
|:|:|:|
-| 4. | Cloud pipeline accepts the incoming data. | The DCR includes a schema definition for the incoming stream that must match the schema of the data coming from the edge pipeline. |
+| 4. | Cloud pipeline accepts the incoming data. | The DCR includes a schema definition for the incoming stream that must match the schema of the data coming from the pipeline at edge. |
| 5. | Cloud pipeline applies a transformation to the data. | The DCR includes a transformation that filters or modifies the data before it's sent to the destination. The transformation may filter data, remove or add columns, or completely change its schema. The output of the transformation must match the schema of the destination table. |
| 6. | Cloud pipeline sends the data to the destination. | The DCR includes a destination that specifies the Log Analytics workspace and table where the data will be stored. |
The following tables and diagrams describe the detailed steps and components in
## Segmented network
+[Network segmentation](/azure/architecture/networking/guide/network-level-segmentation) is a model where you use software defined perimeters to create a different security posture for different parts of your network. In this model, you may have a network segment that can't connect to the internet or to other network segments. The pipeline at edge can be used to collect data from these network segments and send it to the cloud pipeline.
-
-[Network segmentation](/azure/architecture/networking/guide/network-level-segmentation) is a model where you use software defined perimeters to create a different security posture for different parts of your network. In this model, you may have a network segment that can't connect to the internet or to other network segments. The edge pipeline can be used to collect data from these network segments and send it to the cloud pipeline.
- To use Azure Monitor pipeline in a layered network configuration, you must add the following entries to the allowlist for the Arc-enabled Kubernetes cluster. See [Configure Azure IoT Layered Network Management Preview on level 4 cluster](/azure/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network?tabs=k3s#configure-layered-network-management-preview-service).
To use Azure Monitor pipeline in a layered network configuration, you must add t
## Create table in Log Analytics workspace
-Before you configure the data collection process for the edge pipeline, you need to create a table in the Log Analytics workspace to receive the data. This must be a custom table since built-in tables aren't currently supported. The schema of the table must match the data that it receives, but there are multiple steps in the collection process where you can modify the incoming data, so you the table schema doesn't need to match the source data that you're collecting. The only requirement for the table in the Log Analytics workspace is that it has a `TimeGenerated` column.
+Before you configure the data collection process for the pipeline at edge, you need to create a table in the Log Analytics workspace to receive the data. This must be a custom table since built-in tables aren't currently supported. The schema of the table must match the data that it receives, but there are multiple steps in the collection process where you can modify the incoming data, so the table schema doesn't need to match the source data that you're collecting. The only requirement for the table in the Log Analytics workspace is that it has a `TimeGenerated` column.
See [Add or delete tables and columns in Azure Monitor Logs](../logs/create-custom-table.md) for details on different methods for creating a table. For example, use the CLI command below to create a table with the three columns called `Body`, `TimeGenerated`, and `SeverityText`.
az monitor log-analytics workspace table create --workspace-name my-workspace --
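A minimal sketch of the full form of this command, assuming hypothetical resource group and table names (custom log tables require the `_CL` suffix):

```azurecli
# Sketch only; resource group, workspace, and table names are placeholders.
az monitor log-analytics workspace table create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --name MyPipelineLogs_CL \
  --columns TimeGenerated=datetime Body=string SeverityText=string
```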
## Enable cache
-Edge devices in some environments may experience intermittent connectivity due to various factors such as network congestion, signal interference, power outage, or mobility. In these environments, you can configure the edge pipeline to cache data by creating a [persistent volume](https://kubernetes.io) in your cluster. The process for this will vary based on your particular environment, but the configuration must meet the following requirements:
+Edge devices in some environments may experience intermittent connectivity due to various factors such as network congestion, signal interference, power outage, or mobility. In these environments, you can configure the pipeline at edge to cache data by creating a [persistent volume](https://kubernetes.io) in your cluster. The process for this will vary based on your particular environment, but the configuration must meet the following requirements:
- Metadata namespace must be the same as the specified instance of Azure Monitor pipeline.
- Access mode must support `ReadWriteMany`.
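As an illustration, a persistent volume claim meeting these requirements might look like the following sketch; the namespace, claim name, size, and storage class are assumptions to adapt for your environment:

```azurecli
# Sketch only; namespace, name, size, and storage class are placeholders.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipeline-cache-pvc
  namespace: my-pipeline-namespace   # must match the pipeline instance namespace
spec:
  accessModes:
    - ReadWriteMany                  # required for the pipeline cache
  resources:
    requests:
      storage: 10Gi
  storageClassName: azurefile        # any storage class that supports ReadWriteMany
EOF
```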
Once the volume is created in the appropriate namespace, configure it using para
> [!CAUTION]
> Each replica of the edge pipeline stores data in a location in the persistent volume specific to that replica. Decreasing the number of replicas while the cluster is disconnected from the cloud will prevent that data from being backfilled when connectivity is restored.
+Data is retrieved from the cache using first-in-first-out (FIFO). Any data older than 48 hours will be discarded.
+ ## Enable and configure pipeline
+
+ The current options for enabling and configuring the pipeline are detailed in the tabs below.
The settings in this tab are described in the following table.
### [CLI](#tab/CLI)

### Configure pipeline using Azure CLI
-Following are the steps required to create and configure the components required for the Azure Monitor edge pipeline using Azure CLI.
+The following steps create and configure the components required for the Azure Monitor pipeline at edge using Azure CLI.
### Edge pipeline extension
az customlocation create --name my-cluster-custom-location --resource-group my-r
### DCE
-The following ARM template creates the [data collection endpoint (DCE)](./data-collection-endpoint-overview.md) required for the edge pipeline to connect to the cloud pipeline. You can use an existing DCE if you already have one in the same region. Replace the properties in the following table before deploying the template.
+The following command creates the [data collection endpoint (DCE)](./data-collection-endpoint-overview.md) required for the pipeline at edge to connect to the cloud pipeline. You can use an existing DCE if you already have one in the same region. Replace the placeholder values before running the command.
```azurecli
az monitor data-collection endpoint create -g "myResourceGroup" -l "eastus2euap" --name "myCollectionEndpoint" --public-network-access "Enabled"
```
### DCR
-The DCR is stored in Azure Monitor and defines how the data will be processed when it's received from the edge pipeline. The edge pipeline configuration specifies the `immutable ID` of the DCR and the `stream` in the DCR that will process the data. The `immutable ID` is automatically generated when the DCR is created.
+The DCR is stored in Azure Monitor and defines how the data will be processed when it's received from the pipeline at edge. The pipeline at edge configuration specifies the `immutable ID` of the DCR and the `stream` in the DCR that will process the data. The `immutable ID` is automatically generated when the DCR is created.
Replace the properties in the following template and save them in a json file before running the CLI command to create the DCR. See [Structure of a data collection rule in Azure Monitor](./data-collection-rule-overview.md) for details on the structure of a DCR.
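Once the file is saved, a sketch of the create command might look like the following; the resource group, rule name, location, and file path are assumptions, and `--rule-file` may require a recent Azure CLI version:

```azurecli
# Sketch only; names, location, and file path are placeholders.
az monitor data-collection rule create \
  --resource-group my-resource-group \
  --name my-edge-pipeline-dcr \
  --location eastus2 \
  --rule-file ./dcr.json
```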
| - `streams` | One or more streams (defined in `streamDeclarations`). You can include multiple streams if they're being sent to the same destination. |
| - `destinations` | One or more destinations (defined in `destinations`). You can include multiple destinations if the same data is sent to each. |
| - `transformKql` | Transformation to apply to the data before sending it to the destination. Use `source` to send the data without any changes. The output of the transformation must match the schema of the destination table. See [Data collection transformations in Azure Monitor](./data-collection-transformations.md) for details on transformations. |
-| - `outputStream` | Specifies the destination table in the Log Analytics workspace. The table must already exist in the workspace. For custom tables, prefix the table name with *Custom-*. Built-in tables are not currently supported with edge pipeline. |
+| - `outputStream` | Specifies the destination table in the Log Analytics workspace. The table must already exist in the workspace. For custom tables, prefix the table name with *Custom-*. Built-in tables are not currently supported with pipeline at edge. |
```json
az role assignment create --assignee "00000000-0000-0000-0000-000000000000" --ro
```

### Edge pipeline configuration
-The edge pipeline configuration defines the details of the edge pipeline instance and deploy the data flows necessary to receive and send telemetry to the cloud.
+The edge pipeline configuration defines the details of the pipeline at edge instance and deploys the data flows necessary to receive and send telemetry to the cloud.
Replace the properties in the following table before deploying the template.
| **Exporters** | One entry for each destination. |
| `type` | Only currently supported type is `AzureMonitorWorkspaceLogs`. |
| `name` | Must be unique for the pipeline instance. The name is used in the `pipelines` section of the configuration. |
-| `dataCollectionEndpointUrl` | URL of the DCE where the edge pipeline will send the data. You can locate this in the Azure portal by navigating to the DCE and copying the **Logs Ingestion** value. |
+| `dataCollectionEndpointUrl` | URL of the DCE where the pipeline at edge will send the data. You can locate this in the Azure portal by navigating to the DCE and copying the **Logs Ingestion** value. |
| `dataCollectionRule` | Immutable ID of the DCR that defines the data collection in the cloud pipeline. From the JSON view of your DCR in the Azure portal, copy the value of the **immutable ID** in the **General** section. |
| - `stream` | Name of the stream in your DCR that will accept the data. |
| - `maxStorageUsage` | Capacity of the cache. When 80% of this capacity is reached, the oldest data is pruned to make room for more data. |
az deployment group create --resource-group my-resource-group --template-file C:
### ARM template sample to configure all components
-You can deploy all of the required components for the Azure Monitor edge pipeline using the single ARM template shown below. Edit the parameter file with specific values for your environment. Each section of the template is described below including sections that you must modify before using it.
+You can deploy all of the required components for the Azure Monitor pipeline at edge using the single ARM template shown below. Edit the parameter file with specific values for your environment. Each section of the template is described below, including sections that you must modify before using it.
| Component | Type | Description |
In the Azure portal, navigate to the **Kubernetes services** menu and select you
- \<pipeline name\>-external-service - \<pipeline name\>-service
-Click on the entry for **\<pipeline name\>-external-service** and note the IP address and port in the **Endpoints** column. This is the external IP address and port that your clients will send data to.
+Click on the entry for **\<pipeline name\>-external-service** and note the IP address and port in the **Endpoints** column. This is the external IP address and port that your clients will send data to. See [Retrieve ingress endpoint](#retrieve-ingress-endpoint) for retrieving this address from the client.
### Verify heartbeat

Each pipeline configured in your pipeline instance sends a heartbeat record to the `Heartbeat` table in your Log Analytics workspace every minute. The contents of the `OSMajorVersion` column should match the name of your pipeline instance. If there are multiple workspaces in the pipeline instance, the first one configured is used. Retrieve the heartbeat records using a log query.

## Client configuration

Once your edge pipeline extension and instance are installed, you need to configure your clients to send data to the pipeline.

### Retrieve ingress endpoint
-Each client requires the external IP address of the pipeline. Use the following command to retrieve this address:
+Each client requires the external IP address of the Azure Monitor pipeline service. Use the following command to retrieve this address:
```azurecli
kubectl get services -n <namespace where azure monitor pipeline was installed>
```
-If the application producing logs is external to the cluster, copy the *external-ip* value of the service *nginx-controller-service* with the load balancer type. If the application is on a pod within the cluster, copy the *cluster-ip* value. If the external-ip field is set to *pending*, you will need to configure an external IP for this ingress manually according to your cluster configuration.
+- If the application producing logs is external to the cluster, copy the *external-ip* value of the service *\<pipeline name\>-service* or *\<pipeline name\>-external-service* with the load balancer type.
+- If the application is on a pod within the cluster, copy the *cluster-ip* value.
+
+> [!NOTE]
+> If the external-ip field is set to *pending*, you will need to configure an external IP for this ingress manually according to your cluster configuration.
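For illustration, the output might resemble the following; the service names, IP addresses, and ports are hypothetical:

```console
# Hypothetical output; names, IPs, and ports are placeholders.
NAME                           TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)                         AGE
my-pipeline-external-service   LoadBalancer   10.0.12.34   20.51.100.20   4317:30417/TCP,4318:30418/TCP   5m
my-pipeline-service            ClusterIP      10.0.56.78   <none>         4317/TCP,4318/TCP               5m
```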
| Client | Description |
|:|:|
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
In general, there's no cost to ingest standard metrics (platform metrics) into a
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics).

> [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-preaggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../cost-usage.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
+> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data.
## Custom metric definitions
azure-monitor Azure Resource Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/azure-resource-queries.md
Title: Querying logs for Azure resources description: In Log Analytics, queries typically execute in the context of a workspace. A workspace may contain data for many resources, making it difficult to isolate data for a particular resource. Previously updated : 12/07/2021 Last updated : 08/12/2024
The `dataSources` payload filters the results further by describing which worksp
To clearly state what data such a query would return:

 - Logs for VM1 in WS1, excluding Tables.Custom from the workspace.
- - Logs for VM2, excluding SecurityEvent and SecurityBaseline, in WS2.
+ - Logs for VM2, excluding SecurityEvent and SecurityBaseline, in WS2.
azure-monitor Batch Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/batch-queries.md
Title: Batch queries description: The Azure Monitor Log Analytics API supports batching. Previously updated : 11/22/2021 Last updated : 08/12/2024
azure-monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cache.md
Title: Caching description: To improve performance, responses can be served from a cache. By default, responses are stored for 2 minutes. Previously updated : 08/06/2022 Last updated : 08/12/2024
azure-monitor Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/errors.md
Title: Azure Monitor Log Analytics API errors description: This section contains a non-exhaustive list of known common errors that can occur in the Azure Monitor Log Analytics API, their causes, and possible solutions. Previously updated : 11/29/2021 Last updated : 08/12/2024
azure-monitor Prefer Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/prefer-options.md
Title: Prefer options description: The API supports setting some request options using the Prefer header. This section describes how to set each preference and their values. Previously updated : 11/29/2021 Last updated : 08/12/2024
azure-monitor Request Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/request-format.md
Title: Request format description: The Azure Monitor Log Analytics API request format. Previously updated : 11/22/2021 Last updated : 08/12/2024
azure-monitor Response Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/response-format.md
Title: Azure Monitor Log Analytics API response format description: The Azure Monitor Log Analytics API response is JSON that contains an array of table objects. Previously updated : 11/21/2021 Last updated : 08/12/2024
azure-monitor Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/timeouts.md
Title: Timeouts of query executions description: Query execution times can vary widely based on the complexity of the query, the amount of data being analyzed, and the load on the system and workspace at the time of the query. Previously updated : 11/28/2021 Last updated : 08/12/2024
azure-monitor Basic Logs Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-azure-tables.md
All custom tables created with or migrated to the [Logs ingestion API](logs-inge
| Service | Table |
|:|:|
-| Azure Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) |
+| Azure Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADManagedIdentitySignInLogs](/azure/azure-monitor/reference/tables/AADManagedIdentitySignInLogs)<br>[AADNonInteractiveUserSignInLogs](/azure/azure-monitor/reference/tables/AADNonInteractiveUserSignInLogs)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) <br>[ADFSSignInLogs](/azure/azure-monitor/reference/tables/ADFSSignInLogs) |
| Azure Load Balancing | [ALBHealthEvent](/azure/azure-monitor/reference/tables/ALBHealthEvent) |
| Azure Databricks | [DatabricksBrickStoreHttpGateway](/azure/azure-monitor/reference/tables/databricksbrickstorehttpgateway)<br>[DatabricksDataMonitoring](/azure/azure-monitor/reference/tables/databricksdatamonitoring)<br>[DatabricksFilesystem](/azure/azure-monitor/reference/tables/databricksfilesystem)<br>[DatabricksDashboards](/azure/azure-monitor/reference/tables/databricksdashboards)<br>[DatabricksCloudStorageMetadata](/azure/azure-monitor/reference/tables/databrickscloudstoragemetadata)<br>[DatabricksPredictiveOptimization](/azure/azure-monitor/reference/tables/databrickspredictiveoptimization)<br>[DatabricksIngestion](/azure/azure-monitor/reference/tables/databricksingestion)<br>[DatabricksMarketplaceConsumer](/azure/azure-monitor/reference/tables/databricksmarketplaceconsumer)<br>[DatabricksLineageTracking](/azure/azure-monitor/reference/tables/databrickslineagetracking) |
| API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) |
azure-monitor Custom Fields Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields-migrate.md
Title: Migration of custom fields to KQL-based transformations in Azure Monitor
description: Learn how to migrate custom fields in a Log Analytics workspace in Azure Monitor with KQL-based custom columns using transformations. Previously updated : 03/31/2023 Last updated : 08/12/2024
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
Previously updated : 03/31/2023 Last updated : 08/12/2024
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
Previously updated : 05/23/2023 Last updated : 08/12/2024
azure-monitor Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/delete-workspace.md
Previously updated : 07/30/2023- Last updated : 08/12/2024 # Delete and recover an Azure Log Analytics workspace
azure-monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/functions.md
A function is a log query in Azure Monitor that can be used in other log queries
- To view or use functions, you need `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example. -- To create or edit functions, you need `microsoft.operationalinsights/workspaces/savedSearches/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example.
+- To create or edit functions, you need `microsoft.operationalinsights/workspaces/savedSearches/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
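For example, a function can be created through the Azure CLI as a saved search with a function alias. This is a sketch only; the workspace, names, and query are assumptions:

```azurecli
# Sketch only; resource group, workspace, names, and query are placeholders.
az monitor log-analytics workspace saved-search create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --name MyHeartbeatCount \
  --category MyFunctions \
  --display-name "Heartbeat count by computer" \
  --saved-query "Heartbeat | summarize count() by Computer" \
  --func-alias my_heartbeat_count
```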
## Types of functions

There are two types of functions in Azure Monitor:
azure-monitor Kql Machine Learning Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/kql-machine-learning-azure-monitor.md
Previously updated : 07/26/2023 Last updated : 08/12/2024 # Customer intent: As a data analyst, I want to use the native machine learning capabilities of Azure Monitor Logs to gain insights from my log data without having to export data outside of Azure Monitor.
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
description: This article describes a method to use Azure Logic Apps to query da
Previously updated : 07/02/2023 Last updated : 08/12/2024
azure-monitor Notebooks Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/notebooks-azure-monitor-logs.md
Previously updated : 02/28/2023 Last updated : 08/12/2024 # Customer intent: As a data scientist, I want to run custom code on data in Azure Monitor Logs to gain insights without having to export data outside of Azure Monitor.
azure-monitor Resource Manager Log Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-log-queries.md
Previously updated : 06/13/2022 Last updated : 08/12/2024 # Resource Manager template samples for log queries in Azure Monitor
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Title: Restore logs in Azure Monitor description: Restore a specific time range of data in a Log Analytics workspace for high-performance queries. Previously updated : 10/01/2022 Last updated : 08/12/2024++ # Restore logs in Azure Monitor
azure-monitor Set Up Logs Ingestion Api Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/set-up-logs-ingestion-api-prerequisites.md
Previously updated : 06/12/2023 Last updated : 08/12/2024 # Set up resources required to send data to Azure Monitor Logs using the Logs Ingestion API
azure-monitor Tutorial Workspace Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md
description: Describes how to add a custom transformation to data flowing throug
Previously updated : 07/17/2023 Last updated : 08/12/2024 # Tutorial: Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates
azure-monitor Tutorial Workspace Transformations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md
description: Describes how to add a custom transformation to data flowing throug
Previously updated : 07/17/2023 Last updated : 08/12/2024 # Tutorial: Add a transformation in a workspace data collection rule by using the Azure portal
azure-monitor Whats New Scom Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/whats-new-scom-managed-instance.md
description: This article provides details of what's new in each version of Azur
Previously updated : 05/22/2024 Last updated : 08/12/2024
This article provides details of what's new in each version of Azure Monitor SCOM Managed Instance.
+## Version 1.0.103
+
+- Bug fix in domain connectivity checks of validation to prevent timeouts.
## Version 1.0.100

- Bug fix in pre-patch/pre-scale validations.
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volumeΓÇÖs capacity, see [Monitor the capacit
## Considerations

* Resize operations on Azure NetApp Files volumes don't result in data loss.
-* Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-).
+* Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](maxfiles-concept.md).
* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md).
* Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation.
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
Previously updated : 07/23/2024 Last updated : 08/09/2024 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single large volume on dedicated capacity (preview) | 2,048 TiB | No |
| Maximum size of a single file | 16 TiB | No |
| Maximum size of directory metadata in a single directory | 320 MB | No |
-| Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
-| Maximum number of files `maxfiles` per volume | See [`maxfiles`](#maxfiles) | Yes |
+| Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](directory-sizes-concept.md#directory-limit). | No |
+| Maximum number of files `maxfiles` per volume | See [`maxfiles`](maxfiles-concept.md) | Yes |
| Maximum number of export policy rules per volume | 5 | No |
| Maximum number of quota rules per volume | 100 | No |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No |
For more information, see [Capacity management FAQs](faq-capacity-management.md)
For limits and constraints related to Azure NetApp Files network features, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
-## Determine if a directory is approaching the limit size <a name="directory-limit"></a>
-
-You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB). If you reach the maximum size limit for a single directory for Azure NetApp Files, the error `No space left on device` occurs.
-
-For a 320-MB directory, the number of blocks is 655,360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
-
-Examples:
-
-```console
-[makam@cycrh6rtp07 ~]$ stat bin
-File: 'bin'
-Size: 4096 Blocks: 8 IO Block: 65536 directory
-
-[makam@cycrh6rtp07 ~]$ stat tmp
-File: 'tmp'
-Size: 12288 Blocks: 24 IO Block: 65536 directory
-
-[makam@cycrh6rtp07 ~]$ stat tmp1
-File: 'tmp1'
-Size: 4096 Blocks: 8 IO Block: 65536 directory
-```
-
-## `Maxfiles` limits <a name="maxfiles"></a>
-
-Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. If you experience this issue, contact Microsoft technical support.
-
-The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume, where the service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size and uses the following guidelines.
--- For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.-- For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a maximum of 2,147,483,632.-- For [large volumes](large-volumes-requirements-considerations.md), the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a default maximum of 15,938,355,048.-- Each inode uses roughly 288 bytes of capacity in the volume. Having many inodes in a volume can consume a non-trivial amount of physical space overhead on top of the capacity of the actual data.
- - If a file is less than 64 bytes in size, it's stored in the inode itself and doesn't use additional capacity. This capacity is only used when files are actually allocated to the volume.
- - Files larger than 64 bytes do consume additional capacity on the volume. For instance, if there are one million files greater than 64 bytes in an Azure NetApp Files volume, then approximately 274 MiB of capacity would belong to the inodes.
--
-The following table shows examples of the relationship `maxfiles` values based on volume sizes for regular volumes.
-
-| Volume size | Estimated maxfiles limit |
-| - | - |
-| 0 ΓÇô 683 GiB | 21,251,126 |
-| 1 TiB (1,073,741,824 KiB) | 31,876,709 |
-| 10 TiB (10,737,418,240 KiB) | 318,767,099 |
-| 50 TiB (53,687,091,200 KiB) | 1,593,835,519 |
-| 100 TiB (107,374,182,400 KiB) | 2,147,483,632 |
-
-The following table shows examples of the relationship `maxfiles` values based on volume sizes for large volumes.
-
-| Volume size | Estimated maxfiles limit |
-| - | - |
-| 50 TiB (53,687,091,200 KiB) | 1,593,835,512 |
-| 100 TiB (107,374,182,400 KiB) | 3,187,671,024 |
-| 200 TiB (214,748,364,800 KiB) | 6,375,342,024 |
-| 500 TiB (536,870,912,000 KiB) | 15,938,355,048 |
-
-To see the `maxfiles` allocation for a specific volume size, check the **Maximum number of files** field in the volumeΓÇÖs overview pane.
--
-You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume.
## Request limit increase

You can create an Azure support request to increase the adjustable limits from the [Resource Limits](#resource-limits) table.
You can create an Azure support request to increase the adjustable limits from t
>[!NOTE]
> Depending on available resources in the region and the limit increase requested, Azure support may require additional information in order to determine the feasibility of the request.
-1. Go to **New Support Request** under **Support + troubleshooting**.
+1. Navigate to **Help** then **Support + troubleshooting**.
+1. Under the **How can we help you** heading, enter "regional capacity quota" in the text field then select **Go**.
+
+ :::image type="content" source="./media/azure-netapp-files-resource-limits/support-how-can-we-help.png" alt-text="Screenshot that shows the How can we help heading." lightbox="./media/azure-netapp-files-resource-limits/support-how-can-we-help.png":::
+
+ 1. Under the **Current selection** heading, search for "Azure NetApp Files" in the text field for **Which service are you having an issue with?**.
+ 1. Select **Azure NetApp Files** then **Next**.
+
+ :::image type="content" source="./media/azure-netapp-files-resource-limits/support-service.png" alt-text="Screenshot of choosing a service option." lightbox="./media/azure-netapp-files-resource-limits/support-service.png":::
+
+ 1. Under **Which resource are you having an issue with?**, locate and select your subscription. Then locate and select your resource (the NetApp account).
+
+ :::image type="content" source="./media/azure-netapp-files-resource-limits/support-resource.png" alt-text="Screenshot with the option to select your subscription and resource." lightbox="./media/azure-netapp-files-resource-limits/support-resource.png":::
+
+ 1. Under **Are you having one of the following issues?**, select **Storage: Azure NetApp Files limits** then **Next**.
+
+ :::image type="content" source="./media/azure-netapp-files-resource-limits/support-issue.png" alt-text="Screenshot showing the option to choose Azure NetApp Files limits as an issue." lightbox="./media/azure-netapp-files-resource-limits/support-issue.png":::
+
+ 1. Select **Create a support request**.
-2. Under the **Problem description** tab, provide the required information:
+1. Under the **Problem description** tab, provide the required information:
    1. For **Issue Type**, select **Service and Subscription Limits (Quotas)**.
    2. For **Subscription**, select your subscription.
    3. For **Quota Type**, select **Storage: Azure NetApp Files limits**.

    ![Screenshot that shows the Problem Description tab.](./media/shared/support-problem-descriptions.png)
-3. Under the **Additional details** tab, select **Enter details** in the Request Details field.
+
+1. Under the **Additional details** tab, select **Enter details** in the Request Details field.
![Screenshot that shows the Details tab and the Enter Details field.](./media/shared/quota-additional-details.png)
-4. To request limit increase, provide the following information in the Quota Details window that appears:
+1. To request a limit increase, provide the following information in the Quota Details window that appears:
    1. In **Quota Type**, select the type of resource you want to increase. For example:
       * *Regional Capacity Quota per Subscription (TiB)*
![Screenshot that shows how to display and request increase for regional quota.](./media/azure-netapp-files-resource-limits/quota-details-regional-request.png)
-5. Select **Save and continue**. Select **Review + create** to create the request.
+1. Select **Save and continue**. Select **Review + create** to create the request.
## Next steps
+- [Understand `maxfiles` limits](maxfiles-concept.md)
+- [Understand maximum directory sizes](directory-sizes-concept.md)
- [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
- [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md)
azure-netapp-files Directory Sizes Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/directory-sizes-concept.md
+
+ Title: Understand directory sizes in Azure NetApp Files
+description: Learn how metadata impacts directory sizes in Azure NetApp Files.
++++ Last updated : 07/23/2024++
+# Understand directory sizes in Azure NetApp Files
+
+When a file is created in a directory, an entry is added to a hidden index file within the Azure NetApp Files volume. This index file helps keep track of the existing inodes in a directory and helps expedite lookup requests for directories with a high number of files. As entries are added to this file, the file size increases (but never decreases) at a rate of approximately 512 bytes per entry, depending on the length of the file name. Longer file names add more size to the file. Symbolic links also add entries to this file. This concept is known as the directory size, which is a common element in all Linux-based file systems. Directory size isn't the maximum total number of files in a single Azure NetApp Files volume. That is determined by the [`maxfiles` value](maxfiles-concept.md).
+
+By default, when a new directory is created, it consumes 4 KiB (4,096 bytes), or eight 512-byte blocks. You can view the size of a newly created directory from a Linux client using the `stat` command.
+
+```
+# mkdir dirsize
+# stat dirsize
+File: ‘dirsize’
+Size: 4096 Blocks: 8 IO Block: 32768 directory
+```
+
+Directory sizes are specific to a single directory; the sizes of separate directories don't combine. For example, if there are 10 directories in a volume, each can approach the 320-MiB directory size limit independently.
+
+## Determine if a directory is approaching the limit size <a name="directory-limit"></a>
+
+You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MiB). If you reach the maximum size limit for a single directory in Azure NetApp Files, the error `No space left on device` occurs.
+
+For a 320-MiB directory, the number of blocks is 655,360, with each block being 512 bytes (that is, 320 x 1024 x 1024 / 512). This number translates to approximately 4-5 million files maximum for a 320-MiB directory. However, the actual maximum number of files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. For information on how to monitor directory sizes, see [Monitor `maxdirsize`](#monitor-maxdirsize).
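+
+To put a specific directory's size in context against that limit, you can compare the byte counts directly (a minimal sketch; the directory path is a placeholder):
+
+```
+# stat -c %s /mnt/volume/dir    # directory size in bytes
+# echo $((320 * 1024 * 1024))   # the 320-MiB limit in bytes
+335544320
+```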
+
+## Directory size considerations
+
+When dealing with a high-file count environment, consider the following recommendations:
+
+- Azure NetApp Files volumes support up to 320 MiB for directory sizes. This value can't be increased.
+- Once a directory's size limit has been exceeded, clients display an out-of-space error even if there's available free space in the volume.
+- For regular volumes, a 320-MiB directory size equates to roughly 4-5 million files in a single directory, depending on file name lengths.
+- [Large volumes](large-volumes-requirements-considerations.md) have a different architecture than regular volumes.
+- High file counts in a single directory can present performance problems when searching. When possible, limit the total size of a single directory to 2 MiB (roughly 27,000 files) when frequent searches are needed.
+ - If more files are needed in a single directory, adjust search performance expectations accordingly. While Azure NetApp Files indexes the directory file listings for performance, searches can take some time with high file counts.
+- When designing your file system, avoid flat directory layouts. For information about different approaches to directory layouts, see [About directory layouts](#about-directory-layouts).
+- To resolve issues where the directory size has been exceeded and new files can't be created, delete or move files out of the relevant directory.
+
+## About directory layouts
+
+The `maxdirsize` value can create concerns when you're using flat directory structures, where a single folder contains millions of files at a single level. Folder structures where files, folders, and subfolders are interspersed have a low impact on `maxdirsize`. There are several directory structure methodologies.
+
+A **flat directory structure** is a single directory with many files below the same directory.
++
+A **wide directory structure** contains many top-level directories with files spread across all directories.
++
+A **deep directory structure** contains fewer top-level directories with many subdirectories. Although this structure provides fewer files per folder, file path lengths can become an issue if directory layouts are too deep and file paths become too long. For details on file path lengths, see [Understand file path lengths in Azure NetApp Files](understand-path-lengths.md).
++
+### Impact of flat directory structures in Azure NetApp Files
+
+Flat directory structures (many files in a single directory or a few directories) have a negative effect on a wide array of file systems, Azure NetApp Files volumes included. Potential issues include:
+
+- Memory pressure
+- CPU utilization
+- Network performance/latency (especially during mass queries of files, `GETATTR` operations, `READDIR` operations)
+
+Due to the design of Azure NetApp Files large volumes, the impact of `maxdirsize` is unique. Unlike a regular volume, a large volume uses remote hard links inside Azure NetApp Files to help redirect traffic across different storage devices to provide more scale and performance. When using flat directories, there's a higher ratio of internal remote hard links to local files. These remote hard links count against the total `maxdirsize` value, so a large volume might approach its `maxdirsize` limit faster than a regular volume.
+
+For example, if a single directory has millions of files and generates roughly 85% remote hard links for the file system, you can expect `maxdirsize` to be exhausted nearly twice as fast as on a regular volume.
+
+For best results with directory sizes in Azure NetApp Files:
+
+- **Avoid flat directory structures in Azure NetApp Files**. **Wide or deep directory structures work best**, provided the [path length](understand-path-lengths.md) of the file or folder doesn't exceed NAS protocol standards.
+- If flat directory structures are unavoidable, monitor the `maxdirsize` for the directories.
+
+## Monitor `maxdirsize`
+
+For a single directory, use the `stat` command to find the directory size.
+
+```
+# stat /mnt/dir_11/c5
+```
+
+Although the `stat` command can check the size of a specific directory, running it individually against each directory is inefficient at scale. To see a list of the largest directory sizes sorted from largest to smallest, use the following command. It omits snapshot directories from the query, sorts the `find -ls` output by the size column, keeps the largest entries, and prints each directory's size (in KiB) alongside its path.
+
+```
+# find /mnt -name .snapshot -prune -o -type d -ls -links 2 -prune | sort -rn -k 7 | head | awk '{print $2 " " $11}' | sort -rn
+```
+
+>[!NOTE]
+>The directory size reported by the `stat` command is in bytes. The size reported by the `find` command is in KiB.
+
+**Example**
+```
+# stat /mnt/dir_11/c5
+  File: ‘/mnt/dir_11/c5’
+  Size: 322396160 Blocks: 632168 IO Block: 32768 directory
+
+# find /mnt -name .snapshot -prune -o -type d -ls -links 2 -prune | sort -rn -k 7 | head | awk '{print $2 " " $11}' | sort -rn
+316084 /mnt/dir_11/c5
+3792 /mnt/dir_19
+3792 /mnt/dir_16
+```
+
+In the previous example, the directory size of `/mnt/dir_11/c5` is 316,084 KiB (308.6 MiB), which approaches the 320-MiB limit. That equates to around 4.1 million files.
+
+```
+# ls /mnt/dir_11/c5 | wc -l
+4171624
+```
+
+In this case, consider corrective actions such as moving or deleting files.
+
+## More information
+
+* [Azure NetApp Files resource limits](azure-netapp-files-resource-limits.md)
+* [Understand file path lengths in Azure NetApp Files](understand-path-lengths.md)
azure-netapp-files Faq Capacity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-capacity-management.md
Azure NetApp Files provides capacity pool and volume usage metrics. You can also
## How do I determine if a directory is approaching the limit size?

You can use the `stat` command from a client to see whether a directory is approaching the [maximum size limit](azure-netapp-files-resource-limits.md#resource-limits) for directory metadata (320 MiB).
-See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#directory-limit) for the limit and calculation.
+
+See [Understand directory sizes in Azure NetApp Files](directory-sizes-concept.md) for the limit and calculation.
## Does snapshot space count towards the usable / provisioned capacity of a volume?
Yes, the [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-
## Does Azure NetApp Files support auto-grow for volumes or capacity pools?
-No, Azure NetApp Files volumes and capacity pool do not auto-grow upon filling up. See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md).
+No, Azure NetApp Files volumes and capacity pool don't auto-grow upon filling up. See [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md).
You can use the community supported [Logic Apps ANFCapacityManager tool](https://github.com/ANFTechTeam/ANFCapacityManager) to manage capacity-based alert rules. The tool can automatically increase volume sizes to prevent your volumes from running out of space.

## Does the destination volume of a replication count towards hard volume quota?
-No, the destination volume of a replication does not count towards hard volume quota.
+No, the destination volume of a replication doesn't count towards hard volume quota.
## Can I manage Azure NetApp Files through Azure Storage Explorer?
-No. Azure NetApp Files is not supported by Azure Storage Explorer.
+No. Azure NetApp Files isn't supported by Azure Storage Explorer.
## Why is volume space not freed up immediately after deleting a large amount of data in a volume?
azure-netapp-files Maxfiles Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/maxfiles-concept.md
+
+ Title: Understand maxfiles limits in Azure NetApp Files
+description: Learn about the impact of maxfiles on Azure NetApp Files quotas and how to resolve "out of space" messages.
++++ Last updated : 08/09/2024++
+# Understand `maxfiles` limits in Azure NetApp Files
+
+Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. If you experience this issue, contact Microsoft technical support.
+
+The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume. The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size, using the following guidelines:
+
+- For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.
+- For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a maximum of 2,147,483,632.
+- For [large volumes](large-volumes-requirements-considerations.md), the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a default maximum of 15,938,355,048.
+- Each inode uses roughly 288 bytes of capacity in the volume. Having many inodes in a volume can consume a nontrivial amount of physical space on top of the capacity of the actual data, as the sketch after this list illustrates.
+ - If a file is less than 64 bytes in size, it's stored in the inode itself and doesn't use additional capacity. This capacity is only used when files are actually allocated to the volume.
+ - Files larger than 64 bytes do consume additional capacity on the volume. For instance, if there are one million files greater than 64 bytes in an Azure NetApp Files volume, then approximately 274 MiB of capacity would belong to the inodes.
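+
+As a rough check of that estimate, multiply the file count by the per-inode overhead (a minimal sketch of the arithmetic described above):
+
+```
+# echo "$((1000000 * 288)) bytes, or $((1000000 * 288 / 1024 / 1024)) MiB, of inode overhead"
+288000000 bytes, or 274 MiB, of inode overhead
+```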
++
+The following table shows examples of the relationship of `maxfiles` values to volume sizes for regular volumes.
+
+| Volume size | Estimated `maxfiles` limit |
+| - | - |
+| 0 – 683 GiB | 21,251,126 |
+| 1 TiB (1,073,741,824 KiB) | 31,876,709 |
+| 10 TiB (10,737,418,240 KiB) | 318,767,099 |
+| 50 TiB (53,687,091,200 KiB) | 1,593,835,519 |
+| 100 TiB (107,374,182,400 KiB) | 2,147,483,632 |
+
+The following table shows examples of the relationship of `maxfiles` values to volume sizes for large volumes.
+
+| Volume size | Estimated `maxfiles` limit |
+| - | - |
+| 50 TiB (53,687,091,200 KiB) | 1,593,835,512 |
+| 100 TiB (107,374,182,400 KiB) | 3,187,671,024 |
+| 200 TiB (214,748,364,800 KiB) | 6,375,342,024 |
+| 500 TiB (536,870,912,000 KiB) | 15,938,355,048 |
+
+To see the `maxfiles` allocation for a specific volume size, check the **Maximum number of files** field in the volume's overview pane.
++
+You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](azure-netapp-files-resource-limits.md#request-limit-increase) for the volume.
+
+## Next steps
+
+* [Azure NetApp Files resource limits](azure-netapp-files-resource-limits.md)
+* [Understand maximum directory sizes](directory-sizes-concept.md)
azure-resource-manager Bicep Core Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md
If you need more information about a particular diagnostic code, select the **Fe
| BCP334 | Warning | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. |
| BCP335 | Warning | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. |
| BCP337 | Error | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. |
-| BCP338 | Error | Failed to evaluate parameter "{parameterName}": {message} |
+| <a id='BCP338' />[BCP338](./diagnostics/bcp338.md) | Error | Failed to evaluate parameter \<parameter-name>: \<error-message> |
| BCP339 | Error | The provided array index value of "{indexSought}" is not valid. Array index should be greater than or equal to 0. |
| BCP340 | Error | Unable to parse literal YAML value. Please ensure that it is well-formed. |
| BCP341 | Error | This expression is being used inside a function declaration, which requires a value that can be calculated at the start of the deployment. {variableDependencyChainClause}{accessiblePropertiesClause} |
| BCP342 | Error | User-defined types are not supported in user-defined function parameters or outputs. |
| BCP344 | Error | Expected an assert identifier at this location. |
| BCP345 | Error | A test declaration can only reference a Bicep File |
-| BCP0346 | Error | Expected a test identifier at this location. |
-| BCP0347 | Error | Expected a test path string at this location. |
+| BCP346 | Error | Expected a test identifier at this location. |
+| BCP347 | Error | Expected a test path string at this location. |
| BCP348 | Error | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". |
| BCP349 | Error | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". |
| BCP350 | Error | Value of type "{valueType}" cannot be assigned to an assert. Asserts can take values of type 'bool' only. |
azure-resource-manager Bicep Functions Parameters File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-parameters-file.md
Title: Bicep functions - parameters file
description: This article describes the Bicep functions to be used in Bicep parameter files. Previously updated : 03/20/2024 Last updated : 08/09/2024 # Parameters file function for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
The string value of the environment variable or a default value.
+### Remarks
+
+The following command sets the environment variable only for the PowerShell process in which it's executed. Because Visual Studio Code runs in a different process, it can't see the variable, so you get [BCP338](./diagnostics/bcp338.md) from Visual Studio Code.
+
+```powershell
+$env:testEnvironmentVariable = "Hello World!"
+```
+
+To set the environment variable at the user level, use the following command:
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('testEnvironmentVariable','Hello World!', 'User')
+```
+
+To set the environment variable at the machine level, use the following command:
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('testEnvironmentVariable','Hello World!', 'Machine')
+```
+
+For more information, see [Environment.SetEnvironmentVariable Method](/dotnet/api/system.environment.setenvironmentvariable).
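+
+If the variable might not be defined in the process that evaluates the parameters file, you can also supply a default value as the second argument of `readEnvironmentVariable`. A minimal sketch, assuming a parameter named `parTest` declared in `main.bicep` (the names are placeholders):
+
+```bicep
+using 'main.bicep'
+
+// Returns 'fallbackValue' when testEnvironmentVariable isn't defined,
+// which avoids BCP338.
+param parTest = readEnvironmentVariable('testEnvironmentVariable', 'fallbackValue')
+```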
+
+### Examples
+
+The following examples show how to retrieve the values of environment variables.
azure-resource-manager Bcp338 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp338.md
+
+ Title: BCP338
+description: Error - Failed to evaluate parameter <parameter-name>.
++ Last updated : 08/09/2024++
+# Bicep error code - BCP338
+
+This error occurs when Bicep can't evaluate a parameter value in the Bicep parameters file, for example, when `readEnvironmentVariable` can't find the specified environment variable.
+
+## Error description
+
+`Failed to evaluate parameter <parameter-name>: <error-message>`.
+
+## Solution
+
+Check the parameter value and make sure it can be evaluated, for example, that any environment variable it reads is defined.
+
+## Examples
+
+The following Bicep parameter file raises the error because _testEnvironmentVariable_ can't be found:
+
+```bicep
+using 'main.bicep'
+param parTest = readEnvironmentVariable('testEnvironmentVariable')
+```
+
+The error can occur because the environment variable isn't defined at the user or machine level. For more information, see [`readEnvironmentVariable`](../bicep-functions-parameters-file.md).
+You can fix the error by defining the environment variable, or by providing a default value as shown in the following sketch.
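+
+A minimal sketch of the default-value fix (the parameter name and fallback value are illustrative):
+
+```bicep
+using 'main.bicep'
+
+// readEnvironmentVariable returns 'fallbackValue' when the variable is undefined.
+param parTest = readEnvironmentVariable('testEnvironmentVariable', 'fallbackValue')
+```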
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
For more information, see [Custom tagged union data type](./data-types.md#custom
## Import types between Bicep files
-[Bicep CLI version 0.21.X or higher](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
-
-Only user-defined data types that bear the `@export()` decorator can be imported to other templates. Currently, this decorator can only be used on `type` statements.
+Only user-defined data types that bear the `@export()` decorator can be imported to other templates.
The following example enables you to import the two user-defined data types from other templates:
azure-signalr Signalr Concept Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-internals.md
ms.devlang: csharp Previously updated : 03/29/2023 Last updated : 08/09/2024 # Azure SignalR Service internals
Once the application server is started:
- For ASP.NET SignalR: Azure SignalR Service SDK opens five WebSocket connections per hub to SignalR Service and one per application WebSocket connection.
-The initial number of connections defaults to 5 and is configurable using the `InitialHubServerConnectionCount` option in the SignalR Service SDK. For more information, see [configuration](signalr-howto-use.md#configure-options).
+The initial number of connections defaults to 5 and is configurable using the `InitialHubServerConnectionCount` option in the SignalR Service SDK. For more information, see [configuration](signalr-howto-use.md#configure-options).
-While the application server is connected to the SignalR service, the Azure SignalR service may send load-balancing messages to the server. Then, the SDK starts new server connections to the service for better performance. Messages to and from clients are multiplexed into these connections.
+While the application server is connected to the SignalR service, the Azure SignalR service sends load-balancing messages to the server, and the SDK then starts new server connections to the service for better performance. Messages to and from clients are multiplexed into these connections.
Server connections are persistently connected to the SignalR Service. If a server connection is disconnected due to a network issue:
Server connections are persistently connected to the SignalR Service. If a serve
When you use the SignalR Service, clients connect to the service instead of the application server. There are three steps to establish persistent connections between the client and the SignalR Service.
-1. A client sends a negotiate request to the application server.
+1. The client sends a negotiate request to the application server.
1. The application server uses Azure SignalR Service SDK to return a redirect response containing the SignalR Service URL and access token.

   - For ASP.NET Core SignalR, a typical redirect response looks like:
SignalR Service transmits data from the client to the pairing application server
SignalR Service doesn't save or store customer data; all customer data received is transmitted to the target server or clients in real time.
-The Azure SignalR Service acts as a logical transport layer between application server and clients. All persistent connections are offloaded to SignalR Service. As a result, the application server only needs to handle the business logic in the hub class, without worrying about client connections.
+The Azure SignalR Service acts as a logical transport layer between application server and clients. All persistent connections are offloaded to SignalR Service. As a result, the application server only needs to handle the business logic in the hub class, without worrying about client connections.
## Next steps
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Virtual Rooms empower developers with essential security and control capabilitie
| Voice (VoIP) | ✔️ |
| Video | ✔️ |
| Client initiated dial-out to a PSTN number | ✔️ |
-| Server-side call management (Call Automation)* | ✔️ |
| Server initiated dial-out to a PSTN number** | ✔️ |
+| Server-side call management (Call Automation)* | ✔️ |
| PSTN Dial-in | ❌ |
| Async Messaging (Chat) | ❌ |
| Interoperability with Microsoft Teams | ❌ |
## When to use Virtual Rooms

The following table shows when to use Virtual Rooms.
+
| Condition | Use Rooms |
| -- | -- |
| When it is important to control who is allowed to join a call (invite-only experience). | ✔️ |
At a high level, to conduct calls in a Virtual Rooms you need to create and mana
| Get list of users invited to join a Virtual Room | ✔️ | ❌ | ❌ |
| A user initiates a Virtual Rooms call or joins an in-progress call | ❌ | ✔️ | ❌ |
| Dial-out to a PSTN user | ❌ | ✔️ | ✔️* |
-| Add/Remove VoIP participants to an in-progress call | ❌ | ✔️ | ✔️ |
-| Get list of participants who joined the in-progress call | ❌ | ✔️ | ✔️ |
+| Add/Remove VoIP participants to an in-progress call | ❌ | ✔️ | ✔️* |
+| Get list of participants who joined the in-progress call | ❌ | ✔️ | ✔️* |
| Start/Stop call captions and change captions language | ❌ | ✔️* | ❌ |
| Manage call recording | ❌ | ❌ | ✔️* |
| Send/Receive DTMF to/from PSTN participants | ❌ | ❌ | ✔️* |
-| Send announcements to participants | ❌ | ❌ | ✔️* |
+| Play audio prompts to participants | ❌ | ❌ | ✔️* |
[Calling client SDK](../voice-video-calling/calling-sdk-features.md#detailed-capabilities) provides the full list of client-side in-call operations and explains how to use them.
Developers can allow/disallow the ability for call participants to dial-out to a
1. A participant with the Presenter role adds a PSTN number to the call.
1. The PSTN user accepts and joins the room call.
+
+### Virtual Rooms API/SDKs
+
+Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in your server application for `room` operations:
+- Create
+- Modify
+- Delete
+- Set and update the list of participants
+- Set and modify the Room validity
+- Assign roles and permissions to users
|Virtual Rooms SDK | Version | State|
|-| :--: | :--: |
| Virtual Rooms SDKs | 2024-04-15 | Generally Available - Fully supported |
The *Open Room* concept is now deprecated. Going forward, *Invite Only* rooms ar
## Next steps:

- Use the [QuickStart to create, manage, and join a room](../../quickstarts/rooms/get-started-rooms.md).
- Learn how to [join a room call](../../quickstarts/rooms/join-rooms-call.md).
+- Learn how to [manage a room call](../../quickstarts/rooms/manage-rooms-call.md).
- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md).
- Analyze your Rooms data, see: [Rooms Logs](../Analytics/logs/rooms-logs.md).
- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../azure-monitor/logs/log-analytics-tutorial.md).
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
In this tutorial, you learn how to:
In this tutorial, you deploy a Hello World application that generates a hardware attestation report. You start by creating an ARM template with a container group resource to define the properties of this application. You then use this ARM template with the Azure CLI confcom tooling to generate a CCE policy for attestation.
-This tutorial uses [this ARM template](https://raw.githubusercontent.com/Azure-Samples/aci-confidential-hello-world/main/template.json?token=GHSAT0AAAAAAB5B6SJ7VUYU3G6MMQUL7KKKY7QBZBA) as an example. To view the source code for this application, see [Azure Container Instances Confidential Hello World](https://aka.ms/ccacihelloworld).
+This tutorial uses [this ARM template](https://raw.githubusercontent.com/microsoft/confidential-container-demos/main/hello-world/ACI/arm-template.json) as an example. To view the source code for this application, see [Azure Confidential Container Instances Hello World](https://github.com/microsoft/confidential-container-demos/tree/main/hello-world/ACI).
The example template adds two properties to the Container Instances resource definition to make the container group confidential:
The example template adds two properties to the Container Instances resource def
* `confidentialComputeProperties`: Enables you to pass in a custom CCE policy for attestation of your container group. If you don't add this object to the resource, the software components that run within the container group won't be validated. > [!NOTE]
-> The `ccePolicy` parameter under `confidentialComputeProperties` is blank. You'll fill it in after you generate the policy later in the tutorial.
+> The `ccePolicy` parameter under `confidentialComputeProperties` is blank. You'll fill it in when you generate the policy later in the tutorial.
Use your preferred text editor to save this ARM template on your local machine as *template.json*.
Use your preferred text editor to save this ARM template on your local machine a
}, "image": { "type": "string",
- "defaultValue": "mcr.microsoft.com/aci/aci-confidential-helloworld:v1",
+ "defaultValue": "mcr.microsoft.com/public/acc/samples/aci/helloworld:2.7",
"metadata": { "description": "Container image to deploy. Should be of the form repoName/imagename:tag for images stored in public Docker Hub, or a fully qualified URI for other registries. Images from private registries require additional registry credentials." }
With the ARM template that you crafted and the Azure CLI confcom extension, you
1. To generate the CCE policy, run the following command by using the ARM template as input: ```azurecli-interactive
- az confcom acipolicygen -a .\template.json --print-policy
+ az confcom acipolicygen -a .\template.json
```
- When this command finishes, a Base64 string generated as output should appear in the following format. This string is the CCE policy that you copy and paste into your ARM template as the value of the `ccePolicy` property.
-
- ```output
- cGFja2FnZSBwb2xpY3kKCmFwaV9zdm4gOj0gIjAuOS4wIgoKaW1wb3J0IGZ1dHVyZS5rZXl3b3Jkcy5ldmVyeQppbXBvcnQgZnV0dXJlLmtleXdvcmRzLmluCgpmcmFnbWVudHMgOj0gWwpdCgpjb250YWluZXJzIDo9IFsKICAgIHsKICAgICAgICAiY29tbWFuZCI6IFsiL3BhdXNlIl0sCiAgICAgICAgImVudl9ydWxlcyI6IFt7InBhdHRlcm4iOiAiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogdHJ1ZX0seyJwYXR0ZXJuIjogIlRFUk09eHRlcm0iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogZmFsc2V9XSwKICAgICAgICAibGF5ZXJzIjogWyIxNmI1MTQwNTdhMDZhZDY2NWY5MmMwMjg2M2FjYTA3NGZkNTk3NmM3NTVkMjZiZmYxNjM2NTI5OTE2OWU4NDE1Il0sCiAgICAgICAgIm1vdW50cyI6IFtdLAogICAgICAgICJleGVjX3Byb2Nlc3NlcyI6IFtdLAogICAgICAgICJzaWduYWxzIjogW10sCiAgICAgICAgImFsbG93X2VsZXZhdGVkIjogZmFsc2UsCiAgICAgICAgIndvcmtpbmdfZGlyIjogIi8iCiAgICB9LApdCmFsbG93X3Byb3BlcnRpZXNfYWNjZXNzIDo9IHRydWUKYWxsb3dfZHVtcF9zdGFja3MgOj0gdHJ1ZQphbGxvd19ydW50aW1lX2xvZ2dpbmcgOj0gdHJ1ZQphbGxvd19lbnZpcm9ubWVudF92YXJpYWJsZV9kcm9wcGluZyA6PSB0cnVlCmFsbG93X3VuZW5jcnlwdGVkX3NjcmF0Y2ggOj0gdHJ1ZQoKCm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQp1bm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQptb3VudF9vdmVybGF5IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnVubW91bnRfb3ZlcmxheSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpjcmVhdGVfY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfaW5fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfZXh0ZXJuYWwgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2h1dGRvd25fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNpZ25hbF9jb250YWluZXJfcHJvY2VzcyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV9tb3VudCA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmdldF9wcm9wZXJ0aWVzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmR1bXBfc3RhY2tzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJ1bnRpbWVfbG9nZ2luZyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpsb2FkX2ZyYWdtZW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNjcmF0Y2hfbW91bnQgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2NyYXRjaF91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJlYXNvbiA6PSB7ImVycm9ycyI6IGRhdGEuZnJhbWV3b3JrLmVycm9yc30K
- ```
-
-2. Save the changes to your local copy of the ARM template.
+ When this command finishes, a Base64 string is generated as output and automatically inserted into the `ccePolicy` property of the ARM template.
## Deploy the template
In the following steps, you use the Azure portal to review the properties of the
The presence of the attestation report below the Azure Container Instances logo confirms that the container is running on hardware that supports a TEE.
- If you deploy to hardware that doesn't support a TEE (for example, by choosing a region where Container Instances Confidential isn't available), no attestation report appears.
+ If you deploy to hardware that doesn't support a TEE (for example, by choosing a region where Confidential Container Instances isn't available), no attestation report appears.
## Related content
Now that you've deployed a confidential container group on Container Instances,
* [Confidential containers on Azure Container Instances](./container-instances-confidential-overview.md) * [Azure CLI confcom extension examples](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md)
-* [Confidential Hello World application](https://aka.ms/ccacihelloworld)
+* [Confidential Hello World application](https://github.com/microsoft/confidential-container-demos/tree/main/hello-world/ACI)
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/using-azure-container-registry-mi.md
When access to an Azure Container Registry (ACR) is [restricted using a private
## Limitations * Windows containers don't support system-assigned managed identity-authenticated image pulls with ACR, only user-assigned.
-* The Azure container registry must have [Public Access set to either 'Select networks' or 'None'](../container-registry/container-registry-access-selected-networks.md). To set the Azure container registry's Public Access to 'All networks', visit ACI's article on [how to authenticate with ACR with service principal based authentication](container-instances-using-azure-container-registry.md).
-
## Configure registry authentication

Your container registry must have Trusted Services enabled. To find instructions on how to enable trusted services, see [Allow trusted services to securely access a network-restricted container registry][allow-access-trusted-services].
cosmos-db Quickstart Rag Chatbot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/quickstart-rag-chatbot.md
At the end, we'll create a simple UX using Gradio to allow users to type in ques
**Important Note**: This sample requires you to set up accounts for Azure Cosmos DB for NoSQL and Azure OpenAI. To get started, visit:
- [Azure Cosmos DB for NoSQL Python Quickstart](../nosql/quickstart-python.md)
- [Azure Cosmos DB for NoSQL Vector Search](../nosql/vector-search.md)
-- [Azure OpenAI](../../ai-services/openai/toc.yml)

### 1. Install Required Packages
This quickstart guide is designed to help you set up and get running with Azure
- [Azure PostgreSQL Server pgvector Extension](../../postgresql/flexible-server/how-to-use-pgvector.md)
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data included in Cost Management. It also explains how frequently data is processed, collected, shown, and closed. Previously updated : 06/04/2024 Last updated : 08/12/2024
# Understand Cost Management data
-This article helps you better understand Azure cost and usage data included in Cost Management. It explains how frequently data is processed, collected, shown, and closed. You're billed for Azure usage monthly. Although billing cycles are monthly periods, cycle start and end dates vary by subscription type. How often Cost Management receives usage data varies based on different factors. Such factors include how long it takes to process the data and how frequently Azure services emit usage to the billing system.
+This article helps you better understand Azure cost and usage data included in Cost Management. It explains how frequently data is processed, collected, shown, and closed. You receive a bill for your Azure usage each month. Although billing cycles are monthly periods, cycle start and end dates vary by subscription type. How often Cost Management receives usage data varies based on different factors. Such factors include how long it takes to process the data and how frequently Azure services emit usage to the billing system.
Cost Management includes all usage and purchases, including commitment discounts (that is, reservations and savings plans) and third-party offerings, for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) accounts. Microsoft Online Services Agreement (MOSA) accounts only include usage from Azure and Marketplace services with applicable commitment discounts applied but don't include Marketplace or commitment discounts purchases. Support and other costs aren't included. Costs are estimated until an invoice is generated and don't factor in credits. Cost Management also includes costs associated with New Commerce products like Microsoft 365 and Dynamics 365 that are invoiced along with Azure.
Costs shown in Cost Management are rounded. Costs returned by the Query API aren
## Historical data might not match invoice
-Historical data for credit-based and pay-in-advance offers might not match your invoice. Some Azure pay-as-you-go, MSDN, and Visual Studio offers can have Azure credits and advanced payments applied to the invoice. The historical data shown in Cost Management is based on your estimated consumption charges only. Cost Management historical data doesn't include payments and credits. Historical data shown for the following offers might not match exactly with your invoice.
+Historical data for credit-based and pay-in-advance offers might not match your invoice. Some Azure pay-as-you-go, MSDN, and Visual Studio offers can have Azure credits and advanced payments applied to the invoice. The historical data (closed month data) shown in Cost Management is based on your estimated consumption charges only. For the following listed offers, Cost Management historical data doesn't include payments and credits. Additionally, price changes might affect the data. *The price shown on your invoice might differ from the price used for cost estimation.*
+
+For example, you're invoiced on January 5 for a service consumed in December, at a price of $86 per unit. On January 1, the unit price changed to $100. When you view your estimated charges in Cost Management, you see that your cost is the result of your consumed quantity * $100 (not $86, as shown in your invoice).
+
+>[!NOTE]
+>The price change might result in a price decrease, not only an increase, as explained in this example.
+
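+A sketch of that arithmetic with illustrative numbers (the consumed quantity is hypothetical; the prices come from the preceding example):
+
+```python
+consumed_quantity = 10        # hypothetical units consumed in December
+invoice_price = 86            # unit price on the January 5 invoice
+current_price = 100           # unit price after the January 1 change
+
+invoiced_cost = consumed_quantity * invoice_price   # 860, billed on the invoice
+estimated_cost = consumed_quantity * current_price  # 1000, shown in Cost Management
+```
+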
+Historical data shown for the following offers might not match exactly with your invoice.
- Azure for Students (MS-AZR-0170P) - Azure in Open (MS-AZR-0111P)
cost-management-billing Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/azure-openai.md
+
+ Title: Save costs with Microsoft Azure OpenAI Service Provisioned Reservations
+description: Learn about how to save costs with Microsoft Azure OpenAI Service Provisioned Reservations.
+++++ Last updated : 08/12/2024+
+# customer intent: As a billing administrator, I want to learn about saving costs with Microsoft Azure OpenAI Service Provisioned Reservations and buy one.
++
+# Save costs with Microsoft Azure OpenAI Service Provisioned Reservations
+
+You can save money on Azure OpenAI provisioned throughput by committing to a reservation for your provisioned throughput units (PTUs) usage for a duration of one month or one year. This article explains how you can save money with Azure OpenAI Service Provisioned Reservations. For more information about Azure OpenAI PTUs, see [Provisioned throughput units onboarding](../../ai-services/openai/how-to/provisioned-throughput-onboarding.md).
+
+To purchase an Azure OpenAI reservation, you choose an Azure region and add the Azure OpenAI SKU to your cart. Then you choose the quantity of provisioned throughput units that you want to purchase.
+
+When you purchase a reservation, the Azure OpenAI provisioned throughput usage that matches the reservation attributes is no longer charged at the pay-as-you-go rates.
+
+A reservation applies to provisioned deployments only and doesn't include other offerings such as standard deployments or fine tuning. Azure OpenAI Service Provisioned Reservations also don't guarantee capacity availability. To ensure capacity availability, the recommended best practice is to create your deployments before you buy your reservation.
+
+When the reservation expires, Azure OpenAI deployments continue to run but are billed at the pay-as-you-go rate.
+
+You can choose to enable automatic renewal of reservations by selecting the option in the renewal settings or at time of purchase. With Azure OpenAI reservation auto renewal, the reservation renews using the same reservation order ID, and a new reservation doesn't get purchased. You can also choose to replace this reservation with a new reservation purchase in renewal settings and a replacement reservation is purchased when the reservation expires. By default, the replacement reservation has the same attributes as the expiring reservation. You can optionally change the name, billing frequency, term, or quantity in the renewal settings. Any user with owner access on the reservation and the subscription used for billing can set up renewal.
+
+For pricing information, see the [Azure OpenAI Service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) page.
+
+You can buy an Azure OpenAI reservation in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). To buy a reservation:
+
+- You must have owner role or reservation purchaser role on an Azure subscription.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). If the setting is disabled, you must be an EA Admin to enable it.
+- Direct Enterprise customers can update the **Reserved Instances** policy settings in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the **Policies** menu to change settings.
+- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure OpenAI Service Provisioned Reservations.
+
+For more information about how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [Understand Azure reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md) and [Understand Azure reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md).
+
+## Choose the right size before purchase
+
+The Azure OpenAI reservation size should be based on the total provisioned throughput units that you consume via deployments. Reservation purchases are made in increments of one provisioned throughput unit.
+
+For example, assume that your total consumption of provisioned throughput units is 64 units. You want to purchase a reservation for all of it, so you should purchase a reservation quantity of 64.
+
+## Buy a Microsoft Azure OpenAI reservation
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**, and then select **Azure OpenAI**.
+ :::image type="content" source="./media/azure-openai/purchase-openai.png" border="true" alt-text="Screenshot showing the Purchase reservations page." lightbox="./media/azure-openai/purchase-openai.png" :::
+3. Select a subscription. Use the Subscription list to choose the subscription that gets used to pay for the reservation. The payment method of the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement, or pay-as-you-go (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
+ - For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
+ - For a pay-as-you-go subscription, the charges are billed to the credit card or invoice payment method on the subscription.
+4. Select a scope. Use the Scope list to choose a subscription scope. You can change the reservation scope after purchase.
+ - **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
+ - **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
+ - **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to the subscription. It continues to apply to other subscriptions in the billing context.
+ - For enterprise customers, the billing context is the EA enrollment. The reservation shared scope would include multiple Microsoft Entra tenants in an enrollment.
+ - For Microsoft Customer Agreement customers, the billing scope is the billing profile.
+ - For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator.
+ - **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. The management group scope applies to all subscriptions throughout the entire management group hierarchy. To buy a reservation for a management group, you must have at least read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription.
+5. Select a region to choose an Azure region that gets covered by the reservation and select **Add to cart**.
+ :::image type="content" source="./media/azure-openai/select-provisioned-throughput.png" border="true" alt-text="Screenshot showing the Select product to purchase page." lightbox="./media/azure-openai/select-provisioned-throughput.png" :::
+6. In the cart, choose the quantity of provisioned throughput units that you want to purchase. For example, a quantity of 64 would cover up to 64 deployed provisioned throughput units every hour.
+7. Select **Next: Review + Buy** and review your purchase choices and their prices.
+8. Select **Buy now**.
+9. After purchase, you can select **View this Reservation** to see your purchase status.
+
+## Cancel, exchange, or refund reservations
+
+You can cancel or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md). However, exchanges aren't allowed for Azure OpenAI Service Provisioned Reservations.
+
+If you want to request a refund for your Azure OpenAI reservation, you can do so by following these steps:
+
+1. Sign in to the Azure portal and go to the Reservations page.
+2. Select the Azure OpenAI reservation that you want to refund and select **Return**.
+3. On the Refund reservation page, review the refund amount and select a **Reason for return**.
+4. Select **Return reserved instance**.
+5. Review the terms and conditions and agree to them.
+
+The refund amount is based on the prorated remaining term and the current price of the reservation. The refund amount is applied as a credit to your Azure account.
+
+After you request a refund, the reservation is canceled and you can view the status of your refund request on the [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade) page in the Azure portal.
+
+The sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12-month rolling window.
+
+## How reservation discounts apply to Azure OpenAI
+
+After you buy a reservation for Azure OpenAI, the discount associated with the reservation automatically gets applied to any units you deployed in the specified region, as long as they fall within the scope of the reservation. The reservation discount applies to the usage emitted by the provisioned throughput pay-as-you-go meters.
+
+### Reservation discount application
+
+The application of the Azure OpenAI reservation is based on an hourly comparison between the reserved and deployed PTUs. The sum of deployed PTUs up to the reserved amount is covered (paid for) via the reservation, while any deployed PTUs in excess of the reserved PTUs get charged at the hourly, pay-as-you-go rate. There are a few other points to keep in mind:
+
+- PTUs for partial-hour deployments are pro-rated based on the number of minutes the deployment exists during the hour. For example, a 100 PTU deployment that exists for only 15 minutes of an hour period is considered as a 25 PTU deployment. Specifically, 15 minutes is 1/4 of an hour, so only 1/4 of the deployed PTUs are considered for billing and reservation application during that hour.
+- Deployments are matched to reservations based on the reservation scope before the reservation is applied. For example, a reservation scoped to a single subscription only covers deployments within that subscription. Deployments in other subscriptions are charged the hourly, pay-as-you-go rate, unless they're covered by other reservations that have them in scope.
+
+The reservation price assumes a 24x7 deployment of the reserved PTUs. In periods with fewer deployed PTUs than reserved PTUs, all deployed PTUs get covered by the reservation, but the excess reserved PTUs aren't used. These excess reserved PTUs are lost and don't carry over to other periods.
+
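+A minimal sketch of that hourly comparison (the function and numbers are illustrative, not a billing API):
+
+```python
+def hourly_overage_ptus(deployments, reserved_ptus):
+    """Estimate the PTUs billed at the pay-as-you-go rate for one hour.
+
+    deployments: list of (ptus, minutes_in_hour) tuples.
+    """
+    # Pro-rate each deployment by the fraction of the hour it existed;
+    # a 100 PTU deployment that exists for 15 minutes counts as 25 PTUs.
+    deployed = sum(ptus * minutes / 60 for ptus, minutes in deployments)
+    # PTUs up to the reserved amount are covered by the reservation;
+    # any excess is charged at the hourly, pay-as-you-go rate.
+    return max(0.0, deployed - reserved_ptus)
+
+print(hourly_overage_ptus([(600, 60)], reserved_ptus=200))  # 400.0
+```
+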
+### Discount examples
+
+The following examples show how the Azure OpenAI reservation discount applies, depending on the deployments.
+
+- **Example 1** - A reservation that's exactly the same size as the deployed units. For example, you purchase 100 PTUs on a reservation and you deploy 100 PTUs. In this example, you only pay the reservation price.
+- **Example 2** - A reservation that's larger than your deployed units. For example, you purchase 300 PTUs on a reservation and you only deploy 100 PTUs. In this example, the reservation discount is applied to 100 PTUs. The remaining 200 PTUs in the reservation go unused and don't carry forward to future billing periods.
+- **Example 3** - A reservation that's smaller than the deployed units. For example, you purchase 200 PTUs on a reservation and you deploy 600 PTUs. In this example, the reservation discount is applied to the 200 PTUs that were used. The remaining 400 PTUs are charged at the pay-as-you-go rate.
+- **Example 4** - A reservation that's the same size as the total of two deployments. For example, you purchase 200 PTUs on a reservation and you have two deployments of 100 PTUs each. In this example, the discount is applied to the sum of deployed units.
+
+## Increase the size of an Azure OpenAI reservation
+
+If you want to increase the size of your Azure OpenAI reservation, you can buy more Azure OpenAI Service Provisioned Reservations using the preceding steps.
+
+## Related content
+
+- To learn more about Azure reservations, see the following articles:
+ - [What are Azure Reservations?](save-compute-costs-reservations.md)
+ - [Manage Azure Reservations](manage-reserved-vm-instance.md)
+ - [Understand Azure Reservations discount](understand-reservation-charges.md)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Azure Files](../../storage/files/files-reserve-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure VMware Solution](../../azure-vmware/reserved-instance.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure Cosmos DB](../../cosmos-db/cosmos-db-reserved-capacity.md?toc=/azure/cost-management-billing/reservations/toc.json)
+- [Azure OpenAI](azure-openai.md)
- [Azure SQL Edge](prepay-sql-edge.md) - [Databricks](prepay-databricks-reserved-capacity.md) - [Data Explorer](/azure/data-explorer/pricing-reserved-capacity?toc=/azure/cost-management-billing/reservations/toc.json)
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md
Previously updated : 01/05/2024 Last updated : 07/31/2024 # Copy data from or to Azure Files by using Azure Data Factory
The Azure Files connector supports the following authentication types. See the c
- [Account key authentication](#account-key-authentication) - [Shared access signature authentication](#shared-access-signature-authentication)
+- [System-assigned managed identity authentication](#system-assigned-managed-identity-authentication)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
>[!NOTE] > If you were using Azure Files linked service with [legacy model](#legacy-model), where on ADF authoring UI shown as "Basic authentication", it is still supported as-is, while you are suggested to use the new model going forward. The legacy model transfers data from/to storage over Server Message Block (SMB), while the new model utilizes the storage SDK which has better throughput. To upgrade, you can edit your linked service to switch the authentication method to "Account key" or "SAS URI"; no change needed on dataset or copy activity.
The service supports the following properties for using shared access signature
} ```
+### System-assigned managed identity authentication
+
+A data factory or Synapse pipeline can be associated with a [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity), which represents that resource for authentication to other Azure services. You can use this system-assigned managed identity for Azure Files authentication. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+To use system-assigned managed identity authentication, follow these steps:
+
+1. [Retrieve system-assigned managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the system-assigned managed identity object ID generated along with your factory or Synapse workspace.
+
+2. Grant the managed identity permission in Azure Files. For more information on the roles, see this [article](../role-based-access-control/built-in-roles/storage.md#storage-file-data-smb-share-reader). A hedged example of this role assignment appears after these steps.
+
+ - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Reader** role.
+ - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Contributor** role.
+
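+As a hedged illustration of step 2, the role grant can also be expressed as an ARM role assignment. This is a sketch only; the role assignment name, role definition GUID, scope, and principal ID are placeholders, not values from this article:
+
+```json
+{
+    "type": "Microsoft.Authorization/roleAssignments",
+    "apiVersion": "2022-04-01",
+    "name": "<new GUID for the role assignment>",
+    "scope": "<resource ID of the storage account>",
+    "properties": {
+        "roleDefinitionId": "/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefinitions/<GUID of the Storage File Data SMB Share Reader or Contributor role>",
+        "principalId": "<object ID of the factory's system-assigned managed identity>",
+        "principalType": "ServicePrincipal"
+    }
+}
+```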
+These properties are supported for an Azure Files linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property must be set to **AzureFileStorage**. | Yes |
+| serviceEndpoint | Specify the Azure Files service endpoint with the pattern of `https://<accountName>.file.core.windows.net/`. | Yes |
+| fileShare | Specify the file share. | Yes |
+| snapshot | Specify the date of the [file share snapshot](../storage/files/storage-snapshots-files.md) if you want to copy from a snapshot. | No |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime. If not specified, it uses the default Azure Integration Runtime. |No |
+
+>[!NOTE]
+>System-assigned managed identity authentication is only supported by the Azure integration runtime.
+
+**Example:**
+
+```json
+{
+ "name": "AzureFileStorageLinkedService",
+ "properties": {
+ "type": "AzureFileStorage",
+ "typeProperties": {
+ "serviceEndpoint": "https://<accountName>.file.core.windows.net/",
+ "fileShare": "<file share name>",
+ "snapshot": "<snapshot version>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
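+
+For context, a dataset that reads through this linked service might look like the following sketch. This is an assumption-laden example (a Binary dataset with placeholder paths), not part of the connector's reference tables:
+
+```json
+{
+    "name": "AzureFileStorageDataset",
+    "properties": {
+        "type": "Binary",
+        "linkedServiceName": {
+            "referenceName": "AzureFileStorageLinkedService",
+            "type": "LinkedServiceReference"
+        },
+        "typeProperties": {
+            "location": {
+                "type": "AzureFileStorageLocation",
+                "folderPath": "<folder path>",
+                "fileName": "<file name>"
+            }
+        }
+    }
+}
+```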
+
+### User-assigned managed identity authentication
+
+A data factory can be assigned one or multiple [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity). You can use a user-assigned managed identity for Azure Files authentication, which allows you to access and copy data from or to Azure Files. To learn more about managed identities for Azure resources, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant permission in Azure Files. For more information on the roles, see this [article](../role-based-access-control/built-in-roles/storage.md#storage-file-data-smb-share-reader).
+
+ - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Reader** role.
+ - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Contributor** role.
+
+2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
+
+These properties are supported for an Azure Files linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property must be set to **AzureFileStorage**. | Yes |
+| serviceEndpoint | Specify the Azure Files service endpoint with the pattern of `https://<accountName>.file.core.windows.net/`. | Yes |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+| fileShare | Specify the file share. | Yes |
+| snapshot | Specify the date of the [file share snapshot](../storage/files/storage-snapshots-files.md) if you want to copy from a snapshot. | No |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime. |No |
+
+**Example:**
+
+```json
+{
+ "name": "AzureFileStorageLinkedService",
+ "properties": {
+ "type": "AzureFileStorage",
+ "typeProperties": {
+ "serviceEndpoint": "https://<accountName>.file.core.windows.net/",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ },
+ "fileShare": "<file share name>",
+ "snapshot": "<snapshot version>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
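+
+The `credential1` object referenced above is defined separately on the factory. As a hedged sketch (the identity resource ID is a placeholder), a user-assigned managed identity credential might be defined like this:
+
+```json
+{
+    "name": "credential1",
+    "properties": {
+        "type": "ManagedIdentity",
+        "typeProperties": {
+            "resourceId": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
+        }
+    }
+}
+```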
+### Legacy model
+
+| Property | Description | Required |
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
Previously updated : 06/17/2024 Last updated : 08/06/2024 # Copy data from ServiceNow using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources/sinks, see the [Supporte
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install any driver to use this connector.
+## Prerequisite
+
+To use this connector, you need a role with at least read access to the *sys_db_object* and *sys_dictionary* tables in ServiceNow.
+ ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
Title: What is Azure DNS?
-description: Overview of DNS hosting service on Microsoft Azure. Host your domain on Microsoft Azure.
+ Title: Azure DNS overview
+description: An overview of services provided by Azure DNS.
Previously updated : 11/30/2023 Last updated : 08/12/2024 #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
-# What is Azure DNS?
+# Azure DNS overview
-Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
+The Domain Name System (DNS) is responsible for translating (resolving) a service name to an IP address. Azure DNS provides DNS hosting, resolution, and load balancing for your applications using the Microsoft Azure infrastructure.
-You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by using [App Service domains](../app-service/manage-custom-dns-buy-domain.md#buy-and-map-an-app-service-domain) or a third-party domain name registrar. Your domains then can be hosted in Azure DNS for record management. For more information, see [Delegate a domain to Azure DNS](dns-domain-delegation.md).
+Azure DNS supports both internet-facing DNS domains and private DNS zones, and provides the following services:
+- **[Azure Public DNS](public-dns-overview.md)** is a hosting service for DNS domains. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
-The following features are included with Azure DNS.
+- **[Azure Private DNS](private-dns-overview.md)** is a DNS service for your virtual networks. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
-## Reliability and performance
+- **[Azure DNS Private Resolver](dns-private-resolver-overview.md)** is a service that enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers.
-DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers. Azure DNS uses anycast networking. Each DNS query is answered by the closest available DNS server to provide fast performance and high availability for your domain.
+- **[Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview)** is a DNS-based traffic load balancer. This service allows you to distribute traffic to your public-facing applications across the global Azure regions.
-## Security
+Azure DNS enables multiple scenarios, including:
- Azure DNS is based on Azure Resource Manager, which provides features such as:
-
-* [Azure role-based access control (Azure RBAC)](../azure-resource-manager/management/overview.md) to control who has access to specific actions for your organization.
-
-* [Activity logs](../azure-resource-manager/management/overview.md) to monitor how a user in your organization modified a resource or to find an error when troubleshooting.
-
-* [Resource locking](../azure-resource-manager/management/lock-resources.md) to lock a subscription, resource group, or resource. Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
-
-For more information, see [How to protect DNS zones and records](dns-protect-zones-recordsets.md).
-
-## DNSSEC
-
-Azure DNS does not currently support DNSSEC. In most cases, you can reduce the need for DNSSEC by consistently using HTTPS/TLS in your applications. If DNSSEC is a critical requirement for your DNS zones, you can host these zones with third-party DNS hosting providers.
-
-## Ease of use
-
- Azure DNS can manage DNS records for your Azure services and provide DNS for your external resources as well. Azure DNS is integrated in the Azure portal and uses the same credentials, support contract, and billing as your other Azure services.
-
-DNS billing is based on the number of DNS zones hosted in Azure and on the number of DNS queries received. To learn more about pricing, see [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-
-Your domains and records can be managed by using the Azure portal, Azure PowerShell cmdlets, and the cross-platform Azure CLI. Applications that require automated DNS management can integrate with the service by using the REST API and SDKs.
-
-## Customizable virtual networks with private domains
-
-Azure DNS also supports private DNS domains. This feature allows you to use your own custom domain names in your private virtual networks rather than the Azure-provided names available today.
-
-For more information, see [Use Azure DNS for private domains](private-dns-overview.md).
-
-## Alias records
-
-Azure DNS supports alias record sets. You can use an alias record set to refer to an Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or an Azure Content Delivery Network (CDN) endpoint. If the IP address of the underlying resource changes, the alias record set seamlessly updates itself during DNS resolution. The alias record set points to the service instance, and the service instance is associated with an IP address.
-
-Also, you can now point your apex or naked domain to a Traffic Manager profile or CDN endpoint using an alias record. An example is contoso.com.
-
-For more information, see [Overview of Azure DNS alias records](dns-alias.md).
+* [Host and resolve public domains](/azure/dns/dns-delegate-domain-azure-dns)
+* [Manage DNS resolution in your virtual networks](/azure/dns/private-dns-privatednszone)
+* [Enable autoregistration for VMs](/azure/dns/private-dns-autoregistration)
+* [Enable name resolution between Azure and your on-premises resources](/azure/dns/private-resolver-hybrid-dns)
+* [Secure hybrid networking](/azure/architecture/networking/architecture/azure-dns-private-resolver#use-dns-private-resolver)
+* [Monitor DNS metrics and alerts](/azure/dns/dns-alerts-metrics)
+* [Integrate with your other Azure services](/azure/dns/dns-for-azure-services)
+* [Perform Private Link and DNS integration at scale](/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale)
+* Protect your [public](/azure/dns/dns-protect-zones-recordsets) and [private](/azure/dns/dns-protect-private-zones-recordsets) DNS zones and records
+* Enable automatic [fault tolerance](/azure/dns/private-resolver-reliability) and [failover](/azure/dns/tutorial-dns-private-resolver-failover) for DNS resolution
+* [Load-balance your applications](/azure/traffic-manager/traffic-manager-how-it-works)
+* Increase application [availability](/azure/traffic-manager/traffic-manager-monitoring) and [performance](/azure/traffic-manager/traffic-manager-configure-performance-routing-method)
+* [Monitor your application traffic patterns](/azure/traffic-manager/traffic-manager-traffic-view-overview)
## Next steps
-* To learn about DNS zones and records, see [DNS zones and records overview](dns-zones-records.md).
-
-* To learn how to create a zone in Azure DNS, see [Create a DNS zone](./dns-getstarted-portal.md).
-
-* For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml).
-
-* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
+* To learn about Public DNS zones and records, see [DNS zones and records overview](dns-zones-records.md).
+* To learn about Private DNS zones, see [What is an Azure Private DNS zone](private-dns-privatednszone.md).
+* To learn about private resolver endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
+* For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml).
+* For frequently asked questions about Azure Private DNS, see the [Azure Private DNS FAQ](dns-faq-private.yml).
+* For frequently asked questions about Traffic Manager, see the [Traffic Manager FAQ](/azure/traffic-manager/traffic-manager-faqs).
+* Also see [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Title: What is Azure DNS Private Resolver? description: In this article, get started with an overview of the Azure DNS Private Resolver service.- Previously updated : 07/01/2024 Last updated : 08/09/2024 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is a new service that enables you to query Azure DNS
## How does it work?
-Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
+Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
You don't need to change any DNS client settings on your virtual machines (VMs) to use the Azure DNS Private Resolver.
The following limits currently apply to Azure DNS Private Resolver:
### Virtual network restrictions The following restrictions hold with respect to virtual networks:-- VNets with [encryption](/azure/virtual-network/virtual-network-encryption-overview) enabled do not support Azure DNS Private Resolver.
+- VNets with [encryption](/azure/virtual-network/virtual-network-encryption-overview) enabled don't support Azure DNS Private Resolver.
- A DNS resolver can only reference a virtual network in the same region as the DNS resolver. - A virtual network can't be shared between multiple DNS resolvers. A single virtual network can only be referenced by a single DNS resolver.
Outbound endpoints have the following limitations:
### Other restrictions - IPv6 enabled subnets aren't supported.-- DNS private resolver does not support Azure ExpressRoute FastPath.
+- DNS private resolver doesn't support Azure ExpressRoute FastPath.
- DNS private resolver inbound endpoint provisioning isn't compatible with [Azure Lighthouse](../lighthouse/overview.md). - To see if Azure Lighthouse is in use, search for **Service providers** in the Azure portal and select **Service provider offers**.
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
Previously updated : 01/10/2024 Last updated : 08/12/2024
No. Azure supports a single reverse DNS record for each Azure Cloud Service or P
### Can I configure reverse DNS for IPv6 PublicIpAddress resources?
-No. Azure currently supports reverse DNS only for IPv4 PublicIpAddress resources and Cloud Services.
+Yes. See [Azure support for reverse DNS](/azure/dns/dns-reverse-dns-overview#azure-support-for-reverse-dns).
### Can I send emails to external domains from my Azure Compute services?
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Title: What is Azure Private DNS? description: In this article, get started with an overview of the private DNS hosting service on Microsoft Azure.- Previously updated : 06/21/2024 Last updated : 08/09/2024 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service. # What is Azure Private DNS?
-The Domain Name System (DNS) is responsible for translating (resolving) a service name to an IP address. Azure DNS is a hosting service for domains and provides naming resolution using the Microsoft Azure infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS zones.
Azure Private DNS provides a reliable and secure DNS service for your virtual networks. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.

By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual network and across connected virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the same name.

To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. You can also enable [autoregistration](private-dns-autoregistration.md) on a [virtual network link](private-dns-virtual-network-links.md). When you enable autoregistration on a virtual network link, the DNS records for the virtual machines in that virtual network are registered in the private zone. When autoregistration is enabled, Azure DNS updates the zone record whenever a virtual machine gets created, changes its IP address, or gets deleted.
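As a hedged illustration (resource names and the apiVersion are placeholders), a virtual network link with autoregistration enabled might be declared in an ARM template like this:

```json
{
  "type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks",
  "apiVersion": "2020-06-01",
  "name": "<zoneName>/<linkName>",
  "location": "global",
  "properties": {
    "registrationEnabled": true,
    "virtualNetwork": {
      "id": "<resource ID of the virtual network>"
    }
  }
}
```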
To resolve the records of a private DNS zone from your virtual network, you must
![DNS overview](./media/private-dns-overview/scenario.png) > [!NOTE]
-> As a best practice, do not use a *.local* domain for your private DNS zone. Not all operating systems support this.
+> As a best practice, don't use a *.local* domain for your private DNS zone. Not all operating systems support this.
## Private zone resiliency
dns Public Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/public-dns-overview.md
+
+ Title: What is Azure Public DNS?
+description: Overview of DNS hosting service on Microsoft Azure. Host your domain on Microsoft Azure.
+++ Last updated : 08/09/2024+
+#Customer intent: As an administrator, I want to evaluate Azure Public DNS so I can determine if I want to use it instead of my current DNS service.
++
+# What is Azure Public DNS?
+
+Azure Public DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
+
+You can't use Azure Public DNS to buy a domain name. For an annual fee, you can buy a domain name by using [App Service domains](../app-service/manage-custom-dns-buy-domain.md#buy-and-map-an-app-service-domain) or a third-party domain name registrar. Your domains then can be hosted in Azure Public DNS for record management. For more information, see [Delegate a domain to Azure DNS](dns-domain-delegation.md).
+
+The following features are included with Azure Public DNS.
+
+## Reliability and performance
+
+DNS domains in Azure Public DNS are hosted on Azure's global network of DNS name servers. Azure Public DNS uses anycast networking. Each DNS query is answered by the closest available DNS server to provide fast performance and high availability for your domain.
+
+## Security
+
+ Azure Public DNS is based on Azure Resource Manager, which provides features such as:
+
+* [Azure role-based access control (Azure RBAC)](../azure-resource-manager/management/overview.md) to control who has access to specific actions for your organization.
+* [Activity logs](../azure-resource-manager/management/overview.md) to monitor how a user in your organization modified a resource or to find an error when troubleshooting.
+* [Resource locking](../azure-resource-manager/management/lock-resources.md) to lock a subscription, resource group, or resource. Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
+
+For more information, see [How to protect DNS zones and records](dns-protect-zones-recordsets.md).
+
+## DNSSEC
+
+Azure Public DNS doesn't currently support DNSSEC. In most cases, you can reduce the need for DNSSEC by consistently using HTTPS/TLS in your applications. If DNSSEC is a critical requirement for your DNS zones, you can host these zones with third-party DNS hosting providers.
+
+## Ease of use
+
+ Azure Public DNS can manage DNS records for your Azure services and provide DNS for your external resources as well. Azure Public DNS is integrated in the Azure portal and uses the same credentials, support contract, and billing as your other Azure services.
+
+DNS billing is based on the number of DNS zones hosted in Azure and on the number of DNS queries received. To learn more about pricing, see [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
+
+Your domains and records can be managed by using the Azure portal, Azure PowerShell cmdlets, and the cross-platform Azure CLI. Applications that require automated DNS management can integrate with the service by using the REST API and SDKs.
+
+## Customizable virtual networks with private domains
+
+Azure Public DNS also supports private DNS domains. This feature allows you to use your own custom domain names in your private virtual networks rather than the Azure-provided names available today.
+
+For more information, see [Use Azure DNS for private domains](private-dns-overview.md).
+
+## Alias records
+
+Azure Public DNS supports alias record sets. You can use an alias record set to refer to an Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or an Azure Content Delivery Network (CDN) endpoint. If the IP address of the underlying resource changes, the alias record set seamlessly updates itself during DNS resolution. The alias record set points to the service instance, and the service instance is associated with an IP address.
+
+Also, you can now point your apex or naked domain to a Traffic Manager profile or CDN endpoint using an alias record. An example is contoso.com.
+
+For more information, see [Overview of Azure DNS alias records](dns-alias.md).
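+
+As a hedged illustration, an alias record set at the zone apex can be declared in an ARM template by pointing `targetResource` at the Azure resource instead of listing IP addresses. The zone name, apiVersion, and target ID here are assumptions:
+
+```json
+{
+    "type": "Microsoft.Network/dnsZones/A",
+    "apiVersion": "2018-05-01",
+    "name": "contoso.com/@",
+    "properties": {
+        "TTL": 3600,
+        "targetResource": {
+            "id": "<resource ID of a public IP address or Traffic Manager profile>"
+        }
+    }
+}
+```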
+
+## Next steps
+
+* To learn about DNS zones and records, see [DNS zones and records overview](dns-zones-records.md).
+* To learn how to create a zone in Azure Public DNS, see [Create a DNS zone](./dns-getstarted-portal.md).
+* For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
expressroute Expressroute About Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-encryption.md
Yes. For the MACsec configuration, we support the preshared key mode only. It me
### Does traffic continue to flow if there's a mismatch in MACsec key between my devices and Microsoft's?
-No. If MACsec is configured and a key mismatch occurs, you lose connectivity to Microsoft. In other traffic doesn't fall back to an unencrypted connection, exposing your data.
+No. If MACsec is configured and a key mismatch occurs, you lose connectivity to Microsoft. In other words, traffic doesn't fall back to an unencrypted connection, which would expose your data.
### Does enabling MACsec on ExpressRoute Direct degrade network performance?
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
This article helps you create a connection to link a virtual network (virtual ne
* If you enable the ExpressRoute premium add-on, you can link virtual networks outside of the geopolitical region of the ExpressRoute circuit. The premium add-on also allows you to connect more than 10 virtual networks to your ExpressRoute circuit depending on the bandwidth chosen. Check the [FAQ](expressroute-faqs.md) for more details on the premium add-on.
-* In order to create the connection from the ExpressRoute circuit to the target ExpressRoute virtual network gateway, the number of address spaces advertised from the local or peered virtual networks needs to be equal to or less than **200**. Once the connection has been successfully created, you can add more address spaces, up to 1,000, to the local or peered virtual networks.
* Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).

## Connect a virtual network to a circuit - same subscription
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
During your migration process, you may need to migrate your Classic firewall rul
1. From the Azure portal, select your standard firewall. On the **Overview** page, select **Migrate to firewall policy**.
- :::image type="content" source="media/premium-migrate/firewall-overview-migrate.png" alt-text="Migrate to firewall policy":::
+ :::image type="content" source="media/premium-migrate/firewall-overview-migrate.png" lightbox="media/premium-migrate/firewall-overview-migrate.png" alt-text="Screenshot showing migrate to firewall policy.":::
1. On the **Migrate to firewall policy** page, select **Review + create**.
1. Select **Create**.
frontdoor Edge Locations By Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/edge-locations-by-region.md
Previously updated : 05/30/2023 Last updated : 08/12/2024
frontdoor Front Door Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain.md
Previously updated : 04/04/2023 Last updated : 08/12/2024 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
A custom domain can only be associated with one Front Door profile at a time. Ho
## Map the temporary afdverify subdomain
-When you map an existing domain that is in production, there are things consider. While you're registering your custom domain in the Azure portal, a brief period of downtime for the domain may occur. To avoid interruption of web traffic, map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain first to create a temporary CNAME mapping. Your users can access your domain without interruption when the DNS mapping occurs.
+When you map an existing domain that is in production, there are things to consider. While you're registering your custom domain in the Azure portal, a brief period of downtime for the domain might occur. To avoid interruption of web traffic, map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain first to create a temporary CNAME mapping. Your users can then access your domain without interruption while the DNS mapping occurs.
If you're using your custom domain for the first time with no production traffic, you can directly map your custom domain to your Front Door. You can skip ahead to [Map the permanent custom domain](#map-the-permanent-custom-domain).
To create a CNAME record with the afdverify subdomain:
2. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the website labeled **Domain Name**, **DNS**, or **Name server management**.
-3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary):
+3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names might vary):
| Source | Type | Destination |
|--|--|--|
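If your DNS zone happens to be hosted in Azure DNS rather than with a third-party registrar, the same temporary mapping can be declared in an ARM template. This is a hedged sketch that reuses the article's example hostnames; the zone name, TTL, and apiVersion are assumptions:

```json
{
  "type": "Microsoft.Network/dnsZones/CNAME",
  "apiVersion": "2018-05-01",
  "name": "contoso.com/afdverify.www",
  "properties": {
    "TTL": 3600,
    "CNAMERecord": {
      "cname": "afdverify.contoso-frontend.azurefd.net"
    }
  }
}
```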
For example, the procedure for the GoDaddy domain registrar is as follows:
- Type: Leave *CNAME* selected.
- - Host: Enter the subdomain of your custom domain to use, including the afdverify subdomain name. For example, afdverify.www.
+ - Host: Enter the subdomain of your custom domain to use, including the afdverify subdomain name. For example, afdverify.www.
- Points to: Enter the host name of your default Front Door frontend host, including the afdverify subdomain name. For example, afdverify.contoso-frontend.azurefd.net.
For example, the procedure for the GoDaddy domain registrar is as follows:
## Associate the custom domain with your Front Door
-After you've registered your custom domain, you can then add it to your Front Door.
+After you register your custom domain, you can then add it to your Front Door.
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the Front Door containing the frontend host that you want to map to a custom domain.
After you've registered your custom domain, you can then add it to your Front Do
## Verify the custom domain
-After you've completed the registration of your custom domain, verify that the custom domain references your default Front Door frontend host.
+After you complete the registration of your custom domain, verify that the custom domain references your default Front Door frontend host.
In your browser, navigate to the address of the file by using the custom domain. For example, if your custom domain is robotics.contoso.com, the URL to the cached file should be similar to the following URL: http:\//robotics.contoso.com/my-public-container/my-file.jpg. Verify that the result is the same as when you access the Front Door directly at *&lt;Front Door host&gt;*.azurefd.net.

## Map the permanent custom domain
-If you've verified that the afdverify subdomain has been successfully mapped to your Front Door, you can then map the custom domain directly to your default Front Door frontend host.
+After you verify that the afdverify subdomain is successfully mapped to your Front Door, you can map the custom domain directly to your default Front Door frontend host.
To create a CNAME record for your custom domain:
2. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the website labeled **Domain Name**, **DNS**, or **Name Server Management**.
-3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary):
+3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names might vary):
| Source | Type | Destination |
|--|--|--|
To create a CNAME record for your custom domain:
4. Save your changes.
-5. If you're previously created a temporary afdverify subdomain CNAME record, delete it.
+5. If you previously created a temporary afdverify subdomain CNAME record, delete it.
6. If you're using this custom domain in production for the first time, follow the steps for [Associate the custom domain with your Front Door](#associate-the-custom-domain-with-your-front-door) and [Verify the custom domain](#verify-the-custom-domain).
For example, the procedure for the GoDaddy domain registrar is as follows:
In the preceding steps, you added a custom domain to a Front Door. If you no longer want to associate your Front Door with a custom domain, you can remove the custom domain by doing these steps:
-1. Go to your DNS provider, delete the CNAME record for the custom domain or update the CNAME record for the custom domain to a non Front Door endpoint.
+1. Go to your DNS provider, delete the CNAME record for the custom domain, or update the CNAME record for the custom domain to a non-Front Door endpoint.
> [!Important]
> To prevent dangling DNS entries and the security risks they create, starting from April 9, 2021, Azure Front Door requires removal of the CNAME records to Front Door endpoints before the resources can be deleted. Resources include Front Door custom domains, Front Door endpoints, or Azure resource groups that have Front Door custom domains enabled.
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-architecture.md
Previously updated : 04/04/2023 Last updated : 08/12/2024 zone_pivot_groups: front-door-tiers
When Front Door receives an HTTP request, it uses the request's `Host` header to
::: zone-end
-The client and server perform a TLS handshake using the TLS certificate you've configured for your custom domain name, or by using the Front Door certificate when the `Host` header ends with `*.azurefd.net`.
+The client and server perform a TLS handshake using the TLS certificate you configured for your custom domain name, or by using the Front Door certificate when the `Host` header ends with `*.azurefd.net`.
## Evaluate WAF rules ::: zone pivot="front-door-standard-premium"
-If your domain has enabled the Web Application Firewall, WAF rules are evaluated.
+If your domain has Web Application Firewall enabled, WAF rules are evaluated.
::: zone-end ::: zone pivot="front-door-classic"
-If your frontend has enabled the Web Application Firewall, WAF rules are evaluated.
+If your frontend has Web Application Firewall enabled, WAF rules are evaluated.
::: zone-end
-If a rule has been violated, Front Door returns an error to the client and the request processing stops.
+If a rule is violated, Front Door returns an error to the client and request processing stops.
::: zone pivot="front-door-standard-premium"
The route specifies the [backend pool](front-door-backend-pool.md) that the requ
## Evaluate rule sets
-If you have defined [rule sets](front-door-rules-engine.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](front-door-rules-engine-actions.md#RouteConfigurationOverride) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
+If you define [rule sets](front-door-rules-engine.md) for the route, they get processed in the order they're configured. [Rule sets can override the origin group](front-door-rules-engine-actions.md#RouteConfigurationOverride) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
::: zone-end
If you have defined [rule sets](front-door-rules-engine.md) for the route, they'
## Evaluate rules engines
-If you have defined [rules engines](front-door-rules-engine.md) for the route, they're executed in the order they're configured. [Rules engines can override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) specified in a routing rule. Rules engines can also trigger a redirection response to the request instead of forwarding it to a backend.
+If you define [rules engines](front-door-rules-engine.md) for the route, they get processed in the order they're configured. [Rules engines can override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) specified in a routing rule. Rules engines can also trigger a redirection response to the request instead of forwarding it to a backend.
::: zone-end
If caching is disabled or no response is available, the request is forwarded to
Front Door selects an origin to use within the origin group. Origin selection is based on several factors, including: -- The health of each origin, which Front Door monitors by using [health probes](front-door-health-probes.md).-- The [routing method](front-door-routing-methods.md) for your origin group.-- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity)
+- Health of each origin, which Front Door monitors by using [health probes](front-door-health-probes.md).
+- [Routing method](front-door-routing-methods.md) for your origin group.
+- Whether [session affinity](front-door-routing-methods.md#affinity) is enabled.
-## Forward request to origin
+## Forward the request to the origin
Finally, the request is forwarded to the origin.
Front Door selects a backend to use within the backend pool. Backend selection is based on several factors, including: -- The health of each backend, which Front Door monitors by using [health probes](front-door-health-probes.md).-- The [routing method](front-door-routing-methods.md) for your backend pool.-- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity)
+- Health of each backend, which Front Door monitors by using [health probes](front-door-health-probes.md).
+- [Routing method](front-door-routing-methods.md) for your backend pool.
+- Whether [session affinity](front-door-routing-methods.md#affinity) is enabled.
-## Forward request to backend
+## Forward the request to the backend
Finally, the request is forwarded to the backend.
::: zone pivot="front-door-standard-premium" -- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
+- Learn how to [create an Azure Front Door profile](standard-premium/create-front-door-portal.md).
::: zone-end ::: zone pivot="front-door-classic" -- Learn how to [create a Front Door profile](quickstart-create-front-door.md).
+- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md).
::: zone-end
frontdoor Front Door Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine.md
Previously updated : 05/15/2023 Last updated : 08/12/2024 zone_pivot_groups: front-door-tiers
A rule set is a customized rules engine that groups a combination of rules into
* Add, modify, or remove request/response header to hide sensitive information or capture important information through headers.
-* Support server variables to dynamically change the request header, response headers or URL rewrite paths/query strings. For example, when a new page load or when a form gets posted. Server variable is currently supported in **[rule set actions](front-door-rules-engine-actions.md)** only.
+* Support server variables to dynamically change the request header, response headers, or URL rewrite paths/query strings. For example, when a new page loads or when a form gets posted. Server variables are currently supported in **[rule set actions](front-door-rules-engine-actions.md)** only.
## Architecture
-Rule sets handle requests at the Front Door edge. When a request arrives at your Front Door endpoint, WAF is processed first, followed by the settings configured in route. Those settings include the rule set associated to the route. Rule sets are processed in the order they appear under the routing configuration. Rules in a rule set also get processed in the order they appear. In order for all the actions in each rule to run, all the match conditions within a rule have to be met. If a request doesn't match any of the conditions in your rule set configuration, then only the default route settings get applied.
+Rule sets handle requests at the Front Door edge. When a request arrives at your Front Door endpoint, the web application firewall (WAF) is processed first, followed by the settings configured in the route. Those settings include the rule set associated with the route. Rule sets are processed in the order they appear under the routing configuration. Rules in a rule set also get processed in the order they appear. In order for all the actions in each rule to run, all the match conditions within a rule have to be met. If a request doesn't match any of the conditions in your rule set configuration, then only the default route settings get applied.
If **Stop evaluating remaining rules** is selected, any remaining rule sets associated with the route don't run.
With a Front Door rule set, you can create any combination of configurations, ea
* *Match condition*: There are many match conditions that you can configure to parse an incoming request. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. *Regular expression is supported in conditions*. A full list of match conditions can be found in [Rule set match conditions](rules-match-conditions.md).
-* *Action*: An action dictates how Front Door handles the incoming requests based on the matching conditions. You can modify caching behaviors, modify request headers, response headers, set URL rewrite and URL redirection. *Server variables are supported with Action*. A rule can contain up to five actions. A full list of actions can be found in [Rule set actions](front-door-rules-engine-actions.md).
+* *Action*: An action dictates how Front Door handles incoming requests based on the match conditions. You can modify caching behaviors, modify request and response headers, and set up URL rewrite and URL redirection. *Server variables are supported with actions*. A rule can contain up to five actions. A full list of actions can be found in [Rule set actions](front-door-rules-engine-actions.md).
## ARM template support
Rule sets can be configured using Azure Resource Manager templates. For an examp
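For illustration only, here's a minimal, hedged sketch of a single rule expressed against the Microsoft.Cdn ARM schema. The apiVersion, names, and parameter values are assumptions, not values from this article; the rule matches requests for *.jpg* files and overwrites a response header:

```json
{
  "type": "Microsoft.Cdn/profiles/ruleSets/rules",
  "apiVersion": "2023-05-01",
  "name": "<profileName>/<ruleSetName>/<ruleName>",
  "properties": {
    "order": 1,
    "conditions": [
      {
        "name": "UrlFileExtension",
        "parameters": {
          "typeName": "DeliveryRuleUrlFileExtensionMatchConditionParameters",
          "operator": "Equal",
          "matchValues": [ "jpg" ],
          "transforms": [ "Lowercase" ]
        }
      }
    ],
    "actions": [
      {
        "name": "ModifyResponseHeader",
        "parameters": {
          "typeName": "DeliveryRuleHeaderActionParameters",
          "headerAction": "Overwrite",
          "headerName": "Cache-Control",
          "value": "max-age=3600"
        }
      }
    ],
    "matchProcessingBehavior": "Continue"
  }
}
```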
## Limitations
-For information about quota limits, refer to [Front Door limits, quotas and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-service-limits).
+For information about quota limits, refer to [Front Door limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-standard-and-premium-service-limits).
## Next steps
A Rules engine configuration allows you to customize how HTTP requests get handl
## Architecture
-Rules engine handles requests at the edge. When a request enters your Azure Front Door (classic) endpoint, WAF is processed first, followed by the Rules engine configuration associated with your frontend domain. If a Rules engine configuration gets processed, that means a match condition has been met. In order for all actions in each rule to be processed, all the match conditions within a rule has to be met. If a request doesn't match any of the conditions in your Rules engine configuration, then the default routing configuration is processed.
+Rules engine handles requests at the edge. When a request enters your Azure Front Door (classic) endpoint, WAF is processed first, followed by the Rules engine configuration associated with your frontend domain. If a Rules engine configuration gets processed, that means a match condition was met. In order for all actions in each rule to be processed, all the match conditions within a rule have to be met. If a request doesn't match any of the conditions in your Rules engine configuration, then the default routing configuration is processed.
For example, in the following diagram, a Rules engine is configured to append a response header. The header changes the max-age of the cache control if the request file has an extension of *.jpg*.
In Azure Front Door (classic) you can create Rules engine configurations of many
- *Rules engine configuration*: A set of rules that are applied to single route. Each configuration is limited to 25 rules. You can create up to 10 configurations. - *Rules engine rule*: A rule composed of up to 10 match conditions and 5 actions. - *Match condition*: There are many match conditions that can be utilized to parse your incoming requests. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. For a full list of match conditions, see [Rules match conditions](rules-match-conditions.md). -- *Action*: Actions dictate what happens to your incoming requests - request/response header actions, forwarding, redirects, and rewrites are all available today. A rule can contain up to five actions; however, a rule may only contain one route configuration override. For a full list of actions, see [Rules actions](front-door-rules-engine-actions.md).
+- *Action*: Actions dictate what happens to your incoming requests - request/response header actions, forwarding, redirects, and rewrites are all available today. A rule can contain up to five actions; however, a rule might only contain one route configuration override. For a full list of actions, see [Rules actions](front-door-rules-engine-actions.md).
## Next steps
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
Previously updated : 06/01/2023 Last updated : 08/12/2024 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-Azure Front Door supports URL rewrite to change the request path being routed to your origin. URL rewrite allows you to set conditions to make sure the URL or the specified headers gets rewritten only when certain conditions get met. These conditions are based on the request and response information.
+Azure Front Door provides support for URL rewrite, enabling you to modify the request path that is routed to your origin. You can define conditions that determine when the URL or specified headers should be rewritten, based on information in the request and response.
-With this feature, you can redirect your end users to a different origin based on their device types, or the type of file requested. The URL rewrite action can be found in a rule set configuration.
+By using URL rewrite, you can direct your end users to different origins based on factors such as their device type or the type of file they're requesting. The URL rewrite action is configured within a rule set, giving you fine-grained control over your routing behavior.
:::image type="content" source="./media/front-door-url-rewrite/front-door-url-rewrite.png" alt-text="Screenshot of URL rewrite action in a rule set configuration."::: ## Source pattern
-The **source pattern** is the URL path in the initial request you want to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, you can define a forward slash (`/`) as the source pattern value.
+The **source pattern** represents the URL path in the initial request that you wish to replace. Currently, the source pattern utilizes a prefix-based matching approach. To match all URL paths, you can specify a forward slash (`/`) as the value for the source pattern.
-For the source pattern in a URL rewrite action, only the path after the *patterns to match* in the route configuration is considered. For example, you have the following incoming URL format `contoso.com/pattern-to-match/source-pattern`, only `/source-pattern` gets considered by the rule set as the source pattern to be rewritten. The format of the out going URL after URL rewrite gets applied is `contoso.com/pattern-to-match/destination`.
+In the context of a URL rewrite action, only the path after the *patterns to match* in the route configuration is taken into consideration for the source pattern. For instance, the rule set considers only `/source-pattern` as the source pattern to be rewritten if you have an incoming URL format of `contoso.com/pattern-to-match/source-pattern`. After the URL rewrite is applied, the outgoing URL format will be `contoso.com/pattern-to-match/destination`.
-For situation, when you need to remove the `/pattern-to-match` segment of the URL, set the **origin path** for the origin group in route configuration to `/`.
+In cases where you need to remove the `/pattern-to-match` segment of the URL, you can set the **origin path** for the origin group in the route configuration to `/`.
## Destination
-The destination path used to replace the source pattern. For example, if the request URL path is `contoso.com/foo/1.jpg`, the source pattern is `/foo/`, and the destination is `/bar/`, the content gets served from `contoso.com/bar/1.jpg` from the origin.
+The destination path represents the path that replaces the source pattern. For instance, if the request URL path is `contoso.com/foo/1.jpg`, and the source pattern is `/foo/`, specifying the destination as `/bar/` results in the content being served from `contoso.com/bar/1.jpg` from the origin.
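+
+For illustration, the example above could be expressed as a URL rewrite action in the underlying Microsoft.Cdn rule schema. This is a hedged sketch; the `typeName` and property names are assumptions based on that schema rather than values shown in this article:
+
+```json
+{
+    "name": "UrlRewrite",
+    "parameters": {
+        "typeName": "DeliveryRuleUrlRewriteActionParameters",
+        "sourcePattern": "/foo/",
+        "destination": "/bar/",
+        "preserveUnmatchedPath": false
+    }
+}
+```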
## Preserve unmatched path
-Preserve unmatched path allows you to append the remaining path after the source pattern to the new path. When preserve unmatched path is set to **No** (default), the remaining path after the source pattern gets removed.
+Preserve unmatched path controls how the remainder of the path after the source pattern is handled. When you set preserve unmatched path to **Yes**, the remaining path is appended to the new path. When you set it to **No** (the default), the remaining path after the source pattern is removed.
+
+Here's an example showcasing the behavior of preserve unmatched path:
| Preserve unmatched path | Source pattern | Destination | Incoming request | Content served from origin |
|--|--|--|--|--|
Preserve unmatched path allows you to append the remaining path after the source
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-Azure Front Door (classic) supports URL rewrite by configuring a **Custom forwarding path** when configuring the forward routing type rule. By default, if only a forward slash (`/*`) is defined, Front Door copies the incoming URL path to the URL used in the forwarded request. The host header used in the forwarded request is as configured for the selected backend. For more information, see [Backend host header](origin.md#origin-host-header).
-
-The robust part of URL rewrite is the custom forwarding path copies any part of the incoming path that matches the wildcard path to the forwarded path.
+Azure Front Door (classic) provides support for URL rewrite by configuring a **Custom forwarding path** when setting up the forward routing type rule. By default, if only a forward slash (`/*`) is defined, Front Door replicates the incoming URL path in the forwarded request. The host header used in the forwarded request is based on the configuration of the selected backend. For more detailed information, see the [Backend host header](origin.md#origin-host-header) documentation.
-The following table shows an example of an incoming request and the corresponding forwarded path when using a custom forwarding path of `/fwd/` for a match path with a wildcard. The **a/b/c** part of the path represents the portion replacing the wildcard.
+The key aspect of URL rewrite lies in the ability to copy any matching part of the incoming path to the forwarded path when using a custom forwarding path with a wildcard match. The following table illustrates an example of an incoming request and the corresponding forwarded path when utilizing a custom forwarding path of `/fwd/`. The section denoted as **a/b/c** represents the portion that replaces the wildcard match.
| Incoming URL path | Match path | Custom forwarding path | Forwarded path |
|--|--|--|--|
Consider a routing rule with the following combination of frontend hosts and pat
| | /foo/\* |
| | /foo/bar/\* |
-The first column in the following table shows examples of incoming requests and the second column shows what would be the **most-specific** matching route defined. The next three columns in the table are examples of *Custom forwarding paths*.
+The following table illustrates examples of incoming requests and their corresponding most-specific matching routes. It also provides examples of custom forwarding paths and the resulting forwarded paths.
+
+For instance, consider the second row of the table. If the incoming request is `www.contoso.com/sub`, and the custom forwarding path is set to `/`, then the forwarded path would be `/sub`. However, if the custom forwarding path is set to `/fwd/`, then the forwarded path would be `/fwd/sub`. The emphasized parts of the paths indicate the portions that are part of the wildcard match.
-For example, the second row reads, for an incoming request of `www.contoso.com/sub`, if the custom forwarding path is `/`, then the forwarded path would be `/sub`. If the custom forwarding path was `/fwd/`, then the forwarded path is `/fwd/sub`. The **emphasized** parts of the paths represent the portions that are part of the wildcard match.
| Incoming request | Most-specific match path | / | /fwd/ | /foo/ | /foo/bar/ |
|--|--|--|--|--|--|
For example, the second row reads, for an incoming request of `www.contoso.com/s
## Optional settings
-There are extra optional settings you can also specify for any given routing rule settings:
-
-* **Cache configuration** - If disabled or not specified, requests that match to this routing rule doesn't attempt to use cached content and instead always fetch from the backend. For more information, see [caching with Azure Front Door](front-door-caching.md).
+**Cache configuration** - If disabled or not specified, requests that match this routing rule don't attempt to use cached content and instead always fetch from the backend. For more information, see [caching with Azure Front Door](front-door-caching.md).
::: zone-end
frontdoor Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/health-probes.md
Previously updated : 05/15/2023 Last updated : 08/12/2024
frontdoor How To Configure Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-endpoints.md
Previously updated : 06/02/2023 Last updated : 08/12/2024
Before you can create a new endpoint with Front Door manager, you must have an A
* **Name** - Enter a unique name for the new Front Door endpoint. Azure Front Door generates a unique endpoint hostname based on the endpoint name in the form of `<endpointname>-hash.z01.azurefd.net`.
- * **Endpoint hostname** - A deterministic DNS name that helps prevent subdomain takeover. This name is used to access your resources through your Azure Front Door at the domainΓÇ»`<endpointname>-hash.z01.azurefd.net`.
+ * **Endpoint hostname** - A deterministic Domain Name System (DNS) name that helps prevent subdomain takeover. This name is used to access your resources through your Azure Front Door at the domain `<endpointname>-hash.z01.azurefd.net`.
* **Status** - Set as checked to enable this endpoint. ### Add a route
Before you can create a new endpoint with Front Door manager, you must have an A
* **Name** - Enter a unique name for the new route * **Enable route** - Set as checked to enable this route.
- * **Domains** - Select one or more domains that have been validated and isn't associated to another route. For more information, see [add a custom domain](standard-premium/how-to-add-custom-domain.md).
- * **Patterns to match** - Configure all URL path patterns that this route accepts. For example, you can set the pattern to match to `/images/*` to accept all requests on the URL `www.contoso.com/images/*`. Azure Front Door determines the traffic based on exact match first. If no paths match exactly, then Front Door looks for a wildcard path that matches. If no routing rules are found with a matching path, then the request get rejected and returns a 400: Bad Request error HTTP response. Patterns to match paths are not case sensitive, meaning paths with different casing are treated as duplicates. For example, you have a host using the same protocol with paths `/FOO` and `/foo`. These paths are considered duplicates, and aren't allowed in the *Patterns to match* field.
+ * **Domains** - Select one or more validated domains that aren't associated to another route. For more information, see [add a custom domain](standard-premium/how-to-add-custom-domain.md).
+ * **Patterns to match** - Configure all URL path patterns that this route accepts. For example, you can set the pattern to match to `/images/*` to accept all requests on the URL `www.contoso.com/images/*`. Azure Front Door determines the traffic based on exact match first. If no paths match exactly, then Front Door looks for a wildcard path that matches. If no routing rules are found with a matching path, then the request gets rejected and a 400: Bad Request error HTTP response is returned. Patterns to match paths aren't case sensitive, meaning paths with different casing are treated as duplicates. For example, you have a host using the same protocol with paths `/FOO` and `/foo`. These paths are considered duplicates, and aren't allowed in the *Patterns to match* field.
* **Accepted protocols** - Specify the protocols you want Azure Front Door to accept when the client is making the request. You can specify HTTP, HTTPS, or both. * **Redirect** - Specify whether HTTPS is enforced for the incoming HTTP requests. * **Origin group** - Select the origin group to forward traffic to when requests are made to the origin. For more information, see [configure an origin group](standard-premium/how-to-create-origin.md).
Before you can create a new endpoint with Front Door manager, you must have an A
* **Name** - Enter a unique name within this Front Door profile for the security policy. * **Domains** - Select one or more domains you want to apply this security policy to.
- * **WAF Policy** - Select an existing or create a new WAF policy. When you select an existing WAF policy, it must be the same tier as the Front Door profile. For more information, see [configure WAF policy for Front Door](../web-application-firewall/afds/waf-front-door-create-portal.md).
+ * **WAF (Web Application Firewall) Policy** - Select an existing WAF policy or create a new one. When you select an existing WAF policy, it must be the same tier as the Front Door profile. For more information, see [configure WAF policy for Front Door](../web-application-firewall/afds/waf-front-door-create-portal.md).
1. Select **Save** to create the security policy and associate it with the endpoint.
Before you can create a new endpoint with Front Door manager, you must have an A
## Configure origin timeout
-Origin timeout is the amount of time Azure Front Door waits until it considers the connection to origin has timed out. You can set this value on the overview page of the Azure Front Door profile. This value is applied to all endpoints in the profile.
+Origin timeout is the amount of time Azure Front Door waits before it considers the connection to the origin to have timed out. You can set this value on the overview page of the Azure Front Door profile. This value is applied to all endpoints in the profile.
:::image type="content" source="./media/how-to-configure-endpoints/origin-timeout.png" alt-text="Screenshot of the origin timeout settings on the overview page of the Azure Front Door profile.":::
Origin timeout is the amount of time Azure Front Door waits until it considers t
In order to remove an endpoint, you first have to remove any security policies associated with the endpoint. Then select **Delete endpoint** to remove the endpoint from the Azure Front Door profile. ## Next steps
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
Previously updated : 04/04/2023 Last updated : 08/12/2024 zone_pivot_groups: front-door-tiers
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Previously updated : 05/17/2023 Last updated : 08/12/2024
Azure Front Door Premium can connect to your origin using Private Link. Your ori
## How Private Link works
-When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from an Azure Front Door managed regional private network. You'll receive an Azure Front Door private endpoint request at the origin pending your approval.
+When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from an Azure Front Door managed regional private network. You receive an Azure Front Door private endpoint request at the origin pending your approval.
> [!IMPORTANT] > You must approve the private endpoint connection before traffic can pass to the origin privately. You can approve private endpoint connections by using the Azure portal, Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
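As an illustration, here's a hedged Azure CLI sketch for approving the pending connection; the resource and connection IDs are hypothetical placeholders:

```bash
# List pending private endpoint connections on the origin resource (hypothetical ID)
az network private-endpoint-connection list \
  --id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/myorigin

# Approve the connection that Azure Front Door created on your behalf
az network private-endpoint-connection approve \
  --id <private-endpoint-connection-id> \
  --description "Approved for Azure Front Door Premium"
```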
-After you enable an origin for Private Link and approve the private endpoint connection, it can take a few minutes for the connection to be established. During this time, requests to the origin will receive an Azure Front Door error message. The error message will go away once the connection is established.
+After you enable an origin for Private Link and approve the private endpoint connection, it can take a few minutes for the connection to be established. During this time, requests to the origin receive an Azure Front Door error message. The error message goes away once the connection is established.
-Once your request is approved, a private IP address gets assigned from the Azure Front Door managed virtual network. Traffic between your Azure Front Door and your origin will communicate using the established private link over the Microsoft backbone network. Incoming traffic to your origin is now secured when arriving at your Azure Front Door.
+Once your request is approved, a private IP address gets assigned from the Azure Front Door managed virtual network. Traffic between your Azure Front Door and your origin flows over the established private link on the Microsoft backbone network. Incoming traffic to your origin is now secured when arriving at your Azure Front Door.
:::image type="content" source="./media/private-link/enable-private-endpoint.png" alt-text="Screenshot of enable Private Link service checkbox from origin configuration page.":::
Once your request is approved, a private IP address gets assigned from the Azure
### Private endpoint creation
-Within a single Azure Front Door profile, if two or more Private Link enabled origins are created with the same set of Private Link, resource ID and group ID, then for all such origins only one private endpoint gets created. Connections to the backend can be enabled using this private endpoint. This setup means you only have to approve the private endpoint once because only one private endpoint gets created. If you create more Private Link enabled origins using the same set of Private Link location, resource ID and group ID, you won't need to approve anymore private endpoints.
+Within a single Azure Front Door profile, if two or more Private Link enabled origins are created with the same set of Private Link location, resource ID, and group ID, then for all such origins only one private endpoint gets created. Connections to the backend can be enabled using this private endpoint. This setup means you only have to approve the private endpoint once because only one private endpoint gets created. If you create more Private Link enabled origins using the same set of Private Link location, resource ID, and group ID, you don't need to approve any more private endpoints.
#### Single private endpoint
-For example, a single private endpoint gets created for all the different origins across different origin groups but in the same Azure Front Door profile as shown in the below table:
+For example, a single private endpoint gets created for all the different origins across different origin groups but in the same Azure Front Door profile as shown in the following table:
:::image type="content" source="./media/private-link/single-endpoint.png" alt-text="Diagram showing a single private endpoint created for origins created in the same Azure Front Door profile.":::
A new private endpoint gets created in the following scenario:
### Private endpoint removal
-When an Azure Front Door profile gets deleted, private endpoints associated with the profile will also get deleted.
+When an Azure Front Door profile gets deleted, private endpoints associated with the profile also get deleted.
#### Single private endpoint
-If AFD-Profile-1 gets deleted, then the PE1 private endpoint across all the origins will also be deleted.
+If AFD-Profile-1 gets deleted, then the PE1 private endpoint across all the origins also gets deleted.
#### Multiple private endpoints
-* If AFD-Profile-1 gets deleted, all private endpoints from PE1 through to PE4 will be deleted.
+* If AFD-Profile-1 gets deleted, all private endpoints from PE1 through PE4 get deleted.
:::image type="content" source="./media/private-link/delete-multiple-endpoints.png" alt-text="Diagram showing if AFD-Profile-1 gets deleted, all private endpoints from PE1 through PE4 gets deleted.":::
-* Deleting a Front Door profile won't affect private endpoints created for a different Front Door profile.
+* Deleting an Azure Front Door profile doesn't affect private endpoints created for a different Front Door profile.
- :::image type="content" source="./media/private-link/delete-multiple-profiles.png" alt-text="Diagram showing Azure Front Door profile getting deleted won't affect private endpoints in other Front Door profiles.":::
+ :::image type="content" source="./media/private-link/delete-multiple-profiles.png" alt-text="Diagram showing that deleting an Azure Front Door profile doesn't affect private endpoints in other Front Door profiles.":::
For example:
- * If AFD-Profile-2 gets deleted, only PE5 will be removed.
- * If AFD-Profile-3 gets deleted, only PE6 will be removed.
- * If AFD-Profile-4 gets deleted, only PE7 will be removed.
- * If AFD-Profile-5 gets deleted, only PE8 will be removed.
+ * If AFD-Profile-2 gets deleted, only PE5 is removed.
+ * If AFD-Profile-3 gets deleted, only PE6 is removed.
+ * If AFD-Profile-4 gets deleted, only PE7 is removed.
+ * If AFD-Profile-5 gets deleted, only PE8 is removed.
## Region availability
Azure Front Door private link is available in the following regions:
| East US 2 | UK South | | East Asia | | South Central US | West Europe | | | | West US 3 | Sweden Central | | |
-| US Gov Arizona |||
-| US Gov Texas |||
+| US Gov Arizona | | | |
+| US Gov Texas | | | |
## Limitations
Origin support for direct private endpoint connectivity is currently limited to:
* Web App * Internal load balancers, or any services that expose internal load balancers such as Azure Kubernetes Service, Azure Container Apps or Azure Red Hat OpenShift * Storage Static Website
-* Application Gateway (Preview only. Please do not put production workloads)
+* Application Gateway (Preview only. Don't use in production environments)
> [!NOTE] > * This feature isn't supported with Azure App Service Slots or Functions.
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
Previously updated : 06/01/2023 Last updated : 08/12/2024
This article guides you through how to configure Azure Front Door Premium to con
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Review [Secure your origin with Private Link](../private-link.md) to understand how Private Link works with Azure Front Door.
+* Review the [Secure your origin with Private Link](../private-link.md) documentation to better understand how Private Link works with Azure Front Door.
* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers. ## Enable private connectivity to an internal load balancer
In this section, you map the Private Link service to a private endpoint created
1. Navigate to your Azure Front Door Premium profile, then select **Origin groups** from under *Settings* in the left side menu pane.
-1. Select an existing or create a new origin group to add an internal load balancer origin.
+1. Select an existing origin group or create a new one for the internal load balancer origin.
1. Select **+ Add an origin** to add a new origin. Select or enter the following settings to configure the internal load balancer origin.
In this section, you map the Private Link service to a private endpoint created
:::image type="content" source="../media/how-to-enable-private-link-internal-load-balancer/private-endpoint-pending-approval.png" alt-text="Screenshot of pending approval for private link.":::
-1. The *connection state* should change to **Approved**. It may take a couple of minutes for the connection to fully establish. You can now access your internal load balancer from Azure Front Door.
+1. The *connection state* should change to **Approved**. It might take a couple of minutes for the connection to fully establish. You can now access your internal load balancer from Azure Front Door.
:::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-approved.png" alt-text="Screenshot of approved private link request.":::
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
Previously updated : 04/04/2023 Last updated : 08/12/2024
The cause of this issue can be one of three things:
### Troubleshooting steps * Send the request to your origin directly without going through Azure Front Door. See how long your origin normally takes to respond.
-* Send the request through Azure Front Door and see if you're getting any 503 responses. If not, the problem may not be a timeout issue. Create a support request to troubleshoot the issue further.
+* Send the request through Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Create a support request to troubleshoot the issue further.
* If requests going through Azure Front Door result in a 503 error response code, then configure the **Origin response timeout** for Azure Front Door. You can increase the default timeout to up to 4 minutes (240 seconds). To configure the setting, go to the overview page of the Front Door profile. Select **Origin response timeout** and enter a value between *16* and *240* seconds. > [!NOTE] > The ability to configure Origin response timeout is only available in Azure Front Door Standard/Premium.
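For CLI users, a minimal sketch of the same change; the profile and resource group names are hypothetical:

```bash
# Raise the origin response timeout to 120 seconds (valid range: 16-240)
az afd profile update \
  --profile-name my-afd-profile \
  --resource-group my-rg \
  --origin-response-timeout-seconds 120
```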
The cause of this issue can be one of three things:
The cause of this problem can be one of three things: * The backend pool is an IP address.
-* The backend server returns a certificate that doesn't match the FQDN of the Azure Front Door backend pool.
+* The backend server returns a certificate that doesn't match the fully qualified domain name (FQDN) of the Azure Front Door backend pool.
* The backend pool is an Azure Web Apps server. ### Troubleshooting steps
The cause of this problem can be one of three things:
* The backend pool is an Azure Web Apps server:
- - Check if the Azure web app is configured with IP-based SSL instead of being SNI based. If the web app is configured as IP based, it should be changed to SNI.
+ - Check if the Azure web app is configured with IP-based SSL instead of being SNI (server name indication) based. If the web app is configured as IP based, it should be changed to SNI.
- If the backend is unhealthy because of a certificate failure, a 503 error message is returned. You can verify the health of the backends on ports 80 and 443. If only 443 is unhealthy, it's likely an issue with SSL. Because the backend is configured to use the FQDN, we know it's sending SNI. Use OPENSSL to verify the certificate that's being returned. To do this check, connect to the backend by using `-servername`. It should return the SNI, which needs to match with the FQDN of the backend pool:
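A minimal sketch of that OpenSSL check, assuming a hypothetical backend host name:

```bash
# Connect with SNI and print the returned certificate's subject and SANs;
# the CN/SAN should match the FQDN configured in the backend pool.
# (-ext requires OpenSSL 1.1.1 or later)
openssl s_client -connect backend.contoso.com:443 -servername backend.contoso.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```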
The cause of this problem can be one of three things:
### Symptom * You created an Azure Front Door instance. A request to the domain or frontend host returns an HTTP 400 status code.
-* You created a DNS mapping for a custom domain to the frontend host that you configured. Sending a request to the custom domain host name returns an HTTP 400 status code. It doesn't appear to route to the backend that you configured.
+* You created a DNS (domain name system) mapping for a custom domain to the frontend host that you configured. Sending a request to the custom domain host name returns an HTTP 400 status code. It doesn't appear to route to the backend that you configured.
### Cause
This behavior is separate from the web application firewall (WAF) functionality
### Troubleshooting steps - Verify that your requests are in compliance with the requirements set out in the necessary RFCs.-- Take note of any HTML message body that's returned in response to your request. A message body often explains exactly *how* your request is noncompliant.
+- Take note of any HTML message body that gets returned in response to your request. A message body often explains exactly *how* your request is noncompliant.
## My origin is configured as an IP address.
The origin is configured as an IP address. The origin is healthy, but rejecting
### Cause
-Azure Front Door users the origin host name as the SNI header during SSL handshake. Since the origin is configured as an IP address, the failure can be caused by one of the following reasons:
+Azure Front Door uses the origin host name as the SNI header during the SSL handshake. Since the origin is configured as an IP address, the failure can be caused by one of the following reasons:
-* Certificate name check is enabled in the Front Door origin configuration. It's recommended to leave this setting enabled. Certificate name check requires the origin host name to match the certificate name or one of the entries in the subject alternative names extension.
-* If certificate name check is disabled, then the cause is likely due to the origin certificate logic rejecting any requests that don't have a valid host header in the request that matches the certificate.
+* If the certificate name check is disabled, it's possible that the cause of the issue lies in the origin certificate logic. This logic might be rejecting any requests that don't have a valid host header matching the certificate.
### Troubleshooting steps
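One hedged way to reproduce the handshake locally is to pin the expected host name to the origin's IP with curl; the host name and IP below are hypothetical:

```bash
# Present the expected SNI/Host while connecting to the raw IP address;
# a certificate-name mismatch here reproduces the failure Front Door sees
curl -v --resolve origin.contoso.com:443:203.0.113.10 https://origin.contoso.com/
```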
frontdoor Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/understanding-pricing.md
Title: Compare pricing between Azure Front Door tiers
-description: This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium and (classic) tiers.
+description: This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium, and (classic) tiers.
Previously updated : 05/30/2023 Last updated : 08/12/2024
> [!NOTE] > Prices shown in this article are examples and are for illustration purposes only. For pricing information according to your region, see the [Pricing page](https://azure.microsoft.com/pricing/details/frontdoor/)
-Azure Front Door has three tiers: Standard, Premium, and (classic). This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium and (classic) tiers. When migrating from Azure Front Door (classic) to Standard or Premium, we recommend you do a cost analysis to understand the pricing differences between the tiers. We show you how to evaluate cost that you can apply your environment.
+Azure Front Door has three tiers: Standard, Premium, and (classic). This article describes the billing model for Azure Front Door and compares the pricing for the Standard, Premium, and (classic) tiers. When migrating from Azure Front Door (classic) to Standard or Premium, we recommend you do a cost analysis to understand the pricing differences between the tiers. We show you how to evaluate costs in a way that you can apply to your environment.
## Pricing model comparison
The following are general guidance for getting the right metrics to estimate the
| Azure Front Door Standard/Premium meter | How to calculate from Azure Front Door (classic) metrics | |--|--|
- | Base fee | - If you need managed WAF rules, bot protection, or Private Link: **$330/month** </br> - If you only need custom WAF rules: **$35/month** |
+ | Base fee | - If you need managed WAF (Web Application Firewall) rules, bot protection, or Private Link: **$330/month** </br> - If you only need custom WAF rules: **$35/month** |
| Requests | **For Standard:** </br>1. Go to your Azure Front Door (classic) profile, select **Metrics** from under *Monitor* in the left side menu pane. </br>2. Select the **Request Count** from the *Metrics* drop-down menu. </br> 3. To view regional metrics, you can apply a split to the data by selecting **Client Country** or **Client Region**. </br> 4. If you select *Client Country*, you need to map them to the corresponding Azure Front Door pricing zone. </br> :::image type="content" source="./media/understanding-pricing/request-count.png" alt-text="Screenshot of the request count metric for Front Door (classic)." lightbox="./media/understanding-pricing/request-count.png"::: </br> **For Premium:** </br>You can look at the **Request Count** and the **WAF Request Count** metric in the Azure Front Door (classic) profile. </br> :::image type="content" source="./media/understanding-pricing/waf-request-count.png" alt-text="Screenshot of the Web Application Firewall request count metric for Front Door (classic)." lightbox="./media/understanding-pricing/waf-request-count.png"::: | | Egress from Azure Front Door edge to client | You can obtain this data from your Azure Front Door (classic) invoice or from the **Billable Response Size** metric in the Azure Front Door (classic) profile. To get a more accurate estimation, apply split by *Client Count* or *Client Region*.</br> :::image type="content" source="./media/understanding-pricing/billable-response-size.png" alt-text="Screenshot of the billable response size metric for Front Door (classic)." lightbox="./media/understanding-pricing/billable-response-size.png"::: | | Ingress from Azure Front Door edge to origin | You can obtain this data from your Azure Front Door (classic) invoice. Refer to the quantities for Data transfer from client to edge location as an estimation. |
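If you prefer to script the metric collection instead of using the portal steps above, here's a hedged CLI sketch; the resource ID is a hypothetical placeholder and the metric name is assumed from the portal steps:

```bash
# Pull 30 days of hourly RequestCount from the Front Door (classic) profile
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/frontdoors/my-frontdoor \
  --metric RequestCount \
  --interval PT1H \
  --offset 30d
```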
Azure Front Door Premium is ~45% cheaper than Azure Front Door (classic) for sta
|--|--|--| | Base fee | $0 | $35 | | Egress from Azure Front Door edge to client | $39,500 = (10 TB * $ 0.34/GB) + (40 TB * $ 0.29/GB) + (100 TB * $ 0.245/GB) | $12,790 = (10 TB * $ 0.109/GB) + (40 TB * $ 0.085/GB) + (100 TB * $ 0.083/GB) |
-| Egress from Azure Front Door edge to origin | $0 | $0.72= 4.5GB * $0.16/GB |
-| Ingress from client to Azure Front Door edge | $0.05 = 4.5GB * $0.01 | $0 |
-| Ingress from origin to Azure Front Door edge | $900 = 0.1 TB * $ 0/GB + 7.5TB * $ 0.12/GB | $0 |
+| Egress from Azure Front Door edge to origin | $0 | $0.72 = 4.5 GB * $0.16/GB |
+| Ingress from client to Azure Front Door edge | $0.05 = 4.5 GB * $0.01/GB | $0 |
+| Ingress from origin to Azure Front Door edge | $900 = 0.1 TB * $0/GB + 7.5 TB * $0.12/GB | $0 |
| Requests | $0 | $1.62 = 1.5 million requests * $0.0108 per 10,000 requests | | Routing rules | $43.80 = ($0.03 * 2 rules) * 730 hrs | $0 | | Total | $40,444 | $12,827.34 |
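As a quick sanity check on the sample math above (assuming 1 TB = 1,000 GB, as the table does), the totals can be reproduced with `bc`:

```bash
# Classic: egress + ingress + routing rules (matches $40,444 after rounding)
echo "39500 + 0.05 + 900 + 43.8" | bc    # 40443.85
# Standard: base fee + egress + edge-to-origin + requests (matches $12,827.34)
echo "35 + 12790 + 0.72 + 1.62" | bc     # 12827.34
```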
In this comparison, Azure Front Door Premium is ~5% more expensive than Azure Fr
| WAF managed/ defaults rule set requests processed | $20 = 20 million requests * $1 per million requests | $0 | | Total | $17,551.30 | $29,945 |
-In this comparison, Azure Front Door Premium is 1.7x more expensive than Azure Front Door (classic) because of the higher base fee for each profile. The outbound data transfer is 45% less for Azure Front Door Premium compared to Azure Front Door (classic). With Premium tier, you don't have to pay for route rules which account for $7,700 of the total cost.
+In this comparison, Azure Front Door Premium is 1.7x more expensive than Azure Front Door (classic) because of the higher base fee for each profile. The outbound data transfer is 45% less for Azure Front Door Premium compared to Azure Front Door (classic). With Premium tier, you don't have to pay for route rules, which account for $7,700 of the total cost.
#### Suggestion to reduce cost
governance Migrating From Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/whats-new/migrating-from-azure-automation.md
section outlines the expected steps for migration.
1. Unregister servers from Azure Automation State Configuration 1. Assign configurations to servers using machine configuration
-Machine configuration uses DSC version 3 with PowerShell version 7. DSC version 3 can coexist with
+Machine configuration uses DSC version 2 with PowerShell version 7. DSC version 2 can coexist with
older versions of DSC in [Windows][02] and [Linux][03]. The implementations are separate. However, there's no conflict detection.
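For example, you can list the DSC module versions installed side by side; a hedged sketch that assumes PowerShell 7 (`pwsh`) is on the PATH:

```bash
# Show every installed PSDesiredStateConfiguration version
# (v1.1, v2.x, and v3.x implementations can coexist on the same machine)
pwsh -NoProfile -Command \
  "Get-Module -ListAvailable -Name PSDesiredStateConfiguration | Select-Object Name, Version"
```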
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Previously updated : 04/11/2024 Last updated : 08/12/2024 # Log Analytics migration guide for Azure HDInsight clusters
Considering customer feedback, the Azure HDInsight team invested in integration
- Faster log delivery - Resource-based table grouping and default queries
-> [!NOTE]
-> New Azure Montitor integration is in Public Preview across all regions where HDInsight is available.
-- ## Benefits of the new Azure Monitor integration This document outlines the changes to the Azure Monitor integration and provides best-practices for using the new tables.
iot-operations Howto Deploy Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-deploy-dapr.md
Azure IoT Operations supports two of these building blocks, powered by [MQTT bro
- Publish and subscribe - State management
-To use the Dapr pluggable components, define the component spec for each of the APIs and then [register this to the cluster](https://docs.dapr.io/operations/components/pluggable-components-registration/). The Dapr components listen to a Unix domain socket placed on the shared volume. The Dapr runtime connects with each socket and discovers all services from a given building block API that the component implements.
+To use the Dapr pluggable components, define the component spec for each of the APIs and then [register with the cluster](https://docs.dapr.io/operations/components/pluggable-components-registration/). The Dapr components listen to a Unix domain socket placed on the shared volume. The Dapr runtime connects with each socket and discovers all services from a given building block API that the component implements.
## Install Dapr runtime
To create the yaml file, use the following component definitions:
> [!div class="mx-tdBreakAll"] > | Component | Description | > |-|-|
-> | `metadata.name` | The component name is important and is how a Dapr application references the component. |
-> | `metadata.annotations` | Component annotations used by Dapr sidecar injector, defining the image location and required volume mounts
-> | `spec.type` | [The type of the component](https://docs.dapr.io/operations/components/pluggable-components-registration/#define-the-component), which needs to be declared exactly as shown |
-> | `spec.metadata.keyPrefix` | Defines the key prefix used when communicating to the statestore backend. See the [Dapr documentation](https://docs.dapr.io/developing-applications/building-blocks/state-management/howto-share-state) for more information |
-> | `spec.metadata.hostname` | The MQTT broker hostname. Defaults to `aio-mq-dmqtt-frontend` |
-> | `spec.metadata.tcpPort` | The MQTT broker port number. Default is `8883` |
-> | `spec.metadata.useTls` | Define if TLS is used by the MQTT broker. Defaults to `true` |
-> | `spec.metadata.caFile` | The certificate chain path for validating the MQTT broker. Required if `useTls` is `true`. This file must be mounted in the pod with the specified volume name |
-> | `spec.metadata.satAuthFile ` | The Service Account Token (SAT) file is used to authenticate the Dapr components with the MQTT broker. This file must be mounted in the pod with the specified volume name |
+> | `metadata:name` | The component name is important and is how a Dapr application references the component. |
+> | `metadata:annotations:dapr.io/component-container` | Component annotations used by the Dapr sidecar injector, defining the image location, volume mounts, and logging configuration |
+> | `spec:type` | [The type of the component](https://docs.dapr.io/operations/components/pluggable-components-registration/#define-the-component), which needs to be declared exactly as shown |
+> | `spec:metadata:keyPrefix` | Defines the key prefix used when communicating to the statestore backend. For more information, see the [Dapr documentation](https://docs.dapr.io/developing-applications/building-blocks/state-management/howto-share-state) |
+> | `spec:metadata:hostname` | The MQTT broker hostname. Default is `aio-mq-dmqtt-frontend` |
+> | `spec:metadata:tcpPort` | The MQTT broker port number. Default is `8883` |
+> | `spec:metadata:useTls` | Define if TLS is used by the MQTT broker. Default is `true` |
+> | `spec:metadata:caFile` | The certificate chain path for validating the MQTT broker. Required if `useTls` is `true`. This file must be mounted in the pod with the specified volume name |
+> | `spec:metadata:satAuthFile` | The Service Account Token (SAT) file is used to authenticate the Dapr components with the MQTT broker. This file must be mounted in the pod with the specified volume name |
1. Save the following yaml, which contains the Azure IoT Operations component definitions, to a file named `components.yaml`:
To create the yaml file, use the following component definitions:
"volumeMounts": [ { "name": "mqtt-client-token", "mountPath": "/var/run/secrets/tokens" }, { "name": "aio-ca-trust-bundle", "mountPath": "/var/run/certs/aio-mq-ca-cert" }
+ ],
+ "env": [
+ { "name": "pubSubLogLevel", "value": "Information" },
+ { "name": "stateStoreLogLevel", "value": "Information" }
] } spec:
To configure authorization policies to MQTT broker, first you create a [BrokerAu
## Next steps
-Now that you have deployed the Dapr components, you can [Use Dapr to develop distributed applications](howto-develop-dapr-apps.md).
+Now that the Dapr components are deployed to the cluster, you can [Use Dapr to develop distributed applications](howto-develop-dapr-apps.md).
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-dapr-apps.md
The first step is to write an application that uses a Dapr SDK to publish/subscr
After you finish writing the Dapr application, build the container:
-1. To package the application into a container, run the following command:
+1. Package the application into a container with the following command:
```bash docker build . -t my-dapr-app
After you finish writing the Dapr application, build the container:
## Deploy a Dapr application
-The following [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) definition contains the volumes required to deploy the application along with the required containers. This deployment utilizes the Dapr sidecar injector to automatically add the pluggable component pod.
+The following [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) definition contains volumes for SAT authentication and TLS certificate chain, and utilizes Dapr sidecar injection to automatically add the pluggable components to the Pod.
-The yaml contains both a ServiceAccount, used to generate SATs for authentication with MQTT broker and the Dapr application Deployment.
-
-To create the yaml file, use the following definitions:
+The following definition components might require customization to your specific application:
> | Component | Description | > |-|-|
-> | `volumes.mqtt-client-token` | The System Authentication Token used for authenticating the Dapr pluggable components with the MQTT broker |
-> | `volumes.aio-ca-trust-bundle` | The chain of trust to validate the MQTT broker TLS cert. This defaults to the test certificate deployed with Azure IoT Operations |
-> | `containers.mq-dapr-app` | The Dapr application container you want to deploy |
+> | `template:metadata:annotations:dapr.io/inject-pluggable-components` | Allows the IoT Operations pluggable components to be [automatically injected](https://docs.dapr.io/operations/components/pluggable-components-registration/) into the pod |
+> | `template:metadata:annotations:dapr.io/app-port` | Tells Dapr which port your application is listening on. If your application is not using this feature (such as a pubsub subscription), then remove this line |
+> | `volumes:mqtt-client-token` | The System Authentication Token used for authenticating the Dapr pluggable components with the MQTT broker |
+> | `volumes:aio-ca-trust-bundle` | The chain of trust to validate the MQTT broker TLS cert. This defaults to the test certificate deployed with Azure IoT Operations |
+> | `containers:mq-dapr-app` | The Dapr application container you want to deploy |
+
+> [!CAUTION]
+> If your Dapr application is not listening for traffic from the Dapr sidecar, then remove the `dapr.io/app-port` and `dapr.io/app-protocol` [annotations](https://docs.dapr.io/reference/arguments-annotations-overview/); otherwise, the Dapr sidecar fails to initialize.
1. Save the following yaml to a file named `dapr-app.yaml`:
To create the yaml file, use the following definitions:
apiVersion: apps/v1 kind: Deployment metadata:
- name: mq-dapr-app
+ name: my-dapr-app
namespace: azure-iot-operations spec:
- replicas: 1
selector: matchLabels:
- app: mq-dapr-app
+ app: my-dapr-app
template: metadata: labels:
- app: mq-dapr-app
+ app: my-dapr-app
annotations: dapr.io/enabled: "true" dapr.io/inject-pluggable-components: "true"
- dapr.io/app-id: "mq-dapr-app"
+ dapr.io/app-id: "my-dapr-app"
dapr.io/app-port: "6001" dapr.io/app-protocol: "grpc" spec:
To create the yaml file, use the following definitions:
kubectl get pods -w ```
- The workload pod should report all pods running after a short interval, as shown in the following example output:
+ The pod should report three containers running after a short interval, as shown in the following example output:
```output
- pod/dapr-workload created
NAME READY STATUS RESTARTS AGE ...
- dapr-workload 3/3 Running 0 30s
+ my-dapr-app 3/3 Running 0 30s
``` ## Troubleshooting
-If the application doesn't start or you see the pods in `CrashLoopBackoff`, the logs for `daprd` are most helpful. The `daprd` is a container that automatically deploys with your Dapr application.
+If the application doesn't start or you see the containers in `CrashLoopBackoff` state, the log for the `daprd` container often contains useful information.
-Run the following command to view the logs:
+Run the following command to view the logs for the daprd component:
```bash
-kubectl logs dapr-workload daprd
+kubectl logs -l app=my-dapr-app -c daprd
``` ## Next steps
iot-operations Howto Develop Mqttnet Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-mqttnet-apps.md
Last updated 07/02/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-[MQTTnet](https://dotnet.github.io/MQTTnet/) is an open-source, high performance .NET library for MQTT based communication. This article uses a Kubernetes service account token and MQTTnet to connect to MQTT broker. You should use service account tokens to connect to in-cluster clients.
+[MQTTnet](https://dotnet.github.io/MQTTnet/) is an open-source, high performance .NET library for MQTT based communication. This article uses a Kubernetes service account token and MQTTnet to connect to MQTT broker. You should use service account tokens to connect in-cluster applications.
## Sample code The [sample code](https://github.com/Azure-Samples/explore-iot-operations/tree/main/samples/mqtt-client-dotnet/Program.cs) performs the following steps:
-1. Creates an MQTT client using the `MQTTFactory` class:
+1. Creates an MQTT client using the `MqttFactory` class:
```csharp var mqttFactory = new MqttFactory(); var mqttClient = mqttFactory.CreateMqttClient(); ```
-1. The following Kubernetes pod specification mounts the service account token to the specified path on the container file system. The mounted token is used as the password with well-known username `K8S-SAT`:
+1. The [Kubernetes pod specification](#pod-specification) mounts the service account token on the container file system. The file contents are read, and the mounted token is used as the password with the well-known username `K8S-SAT`:
```csharp
- string token_path = "/var/run/secrets/tokens/mqtt-client-token";
+ static string sat_auth_file = "/var/run/secrets/tokens/mqtt-client-token";
...-
- static async Task<int> MainAsync()
- {
- ...
-
- // Read SAT Token
- var satToken = File.ReadAllText(token_path);
+ var satToken = File.ReadAllBytes(sat_auth_file);
```
-1. All options for the MQTT client are bundled in the class named `MqttClientOptions`. It's possible to fill options manually in code via the properties but you should use the `MqttClientOptionsBuilder` as advised in the [client](https://github.com/dotnet/MQTTnet/wiki/Client) documentation. The following code shows how to use the builder with the following options:
+1. The MQTT client options are configured by using the `MqttClientOptions` class. Use the `MqttClientOptionsBuilder`, as advised in the [client](https://github.com/dotnet/MQTTnet/wiki/Client) documentation, to set the options:
```csharp
- # Create TCP based options using the builder amd connect to broker
var mqttClientOptions = new MqttClientOptionsBuilder()
- .WithTcpServer(broker, 1883)
+ .WithTcpServer(hostname, tcp_port)
.WithProtocolVersion(MqttProtocolVersion.V500)
- .WithClientId("sampleid")
- .WithCredentials("K8S-SAT", satToken)
- .Build();
+ .WithClientId("mqtt-client-dotnet")
+ .WithAuthentication("K8S-SAT", satToken);
```
-1. After setting up the MQTT client options, a connection can be established. The following code shows how to connect with a server. You can replace the *CancellationToken.None* with a valid *CancellationToken*, if needed.
+1. After setting up the MQTT client options, a connection can be established. The following code shows how to connect to a server. You can replace the `CancellationToken.None` with a valid CancellationToken if needed.
```csharp
- var response = await mqttClient.ConnectAsync(mqttClientOptions, CancellationToken.None);
+ var response = await mqttClient.ConnectAsync(mqttClientOptions.Build(), CancellationToken.None);
```
-1. MQTT messages can be created using the properties directly or via using `MqttApplicationMessageBuilder`. This class has some useful overloads that allow dealing with different payload formats. The API of the builder is a fluent API. The following code shows how to compose an application message and publish them to a topic called *sampletopic*:
+1. MQTT messages can be created using the properties directly or using `MqttApplicationMessageBuilder`. This class has overloads that allow dealing with different payload formats. The API of the builder is a fluent API. The following code shows how to compose an application message and publish it to a topic called *sampletopic*:
```csharp var applicationMessage = new MqttApplicationMessageBuilder()
The [sample code](https://github.com/Azure-Samples/explore-iot-operations/tree/m
.Build(); await mqttClient.PublishAsync(applicationMessage, CancellationToken.None);
- Console.WriteLine("The MQTT client published a message.");
``` ## Pod specification
-The `serviceAccountName` field in the pod configuration must match the service account associated with the token being used. Also, note the `serviceAccountToken.expirationSeconds` is set to **86400 seconds**, and once it expires, you need to reload the token from disk. This logic isn't currently implemented in the sample.
+The `serviceAccountName` field in the pod configuration must match the service account associated with the token being used. Also, note the `serviceAccountToken.expirationSeconds` is set to **86400 seconds**, and once it expires, you need to reload the token from disk. This logic isn't implemented in this sample.
```yaml apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: mqtt-client
+ namespace: azure-iot-operations
+---
+apiVersion: v1
kind: Pod metadata: name: mqtt-client-dotnet
- labels:
- app: publisher
+ namespace: azure-iot-operations
spec: serviceAccountName: mqtt-client
- volumes:
- # SAT token used to authenticate between the application and the MQTT broker
- - name: mqtt-client-token
- projected:
- sources:
- - serviceAccountToken:
- path: mqtt-client-token
- audience: aio-mq-dmqtt
- expirationSeconds: 86400
-
- # Certificate chain for the application to validate the MQTT broker
- - name: aio-mq-ca-cert-chain
- configMap:
- name: aio-mq-ca-cert-chain
+ volumes:
+
+ # SAT token used to authenticate between the application and the MQTT broker
+ - name: mqtt-client-token
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: mqtt-client-token
+ audience: aio-mq
+ expirationSeconds: 86400
+
+ # Certificate chain for the application to validate the MQTT broker
+ - name: aio-ca-trust-bundle
+ configMap:
+ name: aio-ca-trust-bundle-test-only
containers:
- - name: mqtt-client-dotnet
- image: ghcr.io/azure-samples/explore-iot-operations/mqtt-client-dotnet:latest
- imagePullPolicy: IfNotPresent
- volumeMounts:
- - name: mqtt-client-token
- mountPath: /var/run/secrets/tokens
- - name: aio-mq-ca-cert-chain
- mountPath: /certs/aio-mq-ca-cert/
- env:
- - name: IOT_MQ_HOST_NAME
- value: "aio-mq-dmqtt-frontend"
- - name: IOT_MQ_PORT
- value: "8883"
- - name: IOT_MQ_TLS_ENABLED
- value: "true"
+ - name: mqtt-client-dotnet
+ image: ghcr.io/azure-samples/explore-iot-operations/mqtt-client-dotnet:latest
+ volumeMounts:
+ - name: mqtt-client-token
+ mountPath: /var/run/secrets/tokens/
+ - name: aio-ca-trust-bundle
+ mountPath: /var/run/certs/aio-mq-ca-cert/
+ env:
+ - name: hostname
+ value: "aio-mq-dmqtt-frontend"
+ - name: tcpPort
+ value: "8883"
+ - name: useTls
+ value: "true"
+ - name: caFile
+ value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
+ - name: satAuthFile
+ value: "/var/run/secrets/tokens/mqtt-client-token"
```
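To try the specification, a minimal sketch, assuming you saved the YAML above as `pod.yaml`:

```bash
# Create the service account and pod, then follow the sample client's output
kubectl apply -f pod.yaml
kubectl logs mqtt-client-dotnet -n azure-iot-operations --follow
```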
-The token is mounted into the container at the path specified in `containers[].volumeMount.mountPath`
- To run the sample, follow the instructions in its [README](https://github.com/Azure-Samples/explore-iot-operations/tree/main/samples/mqtt-client-dotnet). ## Related content -- [MQTT broker overview](../manage-mqtt-broker/overview-iot-mq.md)-- [Develop with MQTT broker](edge-apps-overview.md)
+- [Publish and subscribe MQTT messages using MQTT broker](../manage-mqtt-broker/overview-iot-mq.md)
+- [Develop highly available distributed applications](edge-apps-overview.md)
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/tutorial-event-driven-with-dapr.md
In this walkthrough, you deploy a Dapr application to the cluster. The Dapr appl
The Dapr application performs the following steps: 1. Subscribes to the `sensor/data` topic for sensor data.
-1. When data is receiving on the topic, it's forwarded to the MQTT broker state store.
-1. Every **10 seconds**, it fetches the data from the state store and calculates the *min*, *max*, *mean*, *median*, and *75th percentile* values on any sensor data timestamped in the last **30 seconds**.
-1. Data older than **30 seconds** is expired from the state store.
-1. The result is published to the `sensor/window_data` topic in JSON format.
+1. When data is received on the topic, it's published to the MQTT broker state store.
+2. Every **10 seconds**, it fetches the data from the state store and calculates the *min*, *max*, *mean*, *median*, and *75th percentile* values on any sensor data timestamped in the last **30 seconds**.
+3. Data older than **30 seconds** is expired from the state store.
+4. The result is published to the `sensor/window_data` topic in JSON format.
> [!NOTE] > This tutorial [disables Dapr CloudEvents](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-raw/) which enables it to publish and subscribe using raw MQTT.
To start, create a yaml file that uses the following definitions:
1. Confirm that the application deployed successfully. The pod should report all containers are ready after a short interval, as shown with the following command: ```bash
- kubectl get pods -n azure-iot-operations
+ kubectl get pods -l app=mq-event-driven-dapr -n azure-iot-operations
``` With the following output: ```output
- NAME READY STATUS RESTARTS AGE
- ...
- mq-event-driven-dapr 3/3 Running 0 30s
+ NAME READY STATUS RESTARTS AGE
+ mq-event-driven-dapr 3/3 Running 0 30s
``` ## Deploy the simulator Simulate test data by deploying a Kubernetes workload. It simulates a sensor by sending sample temperature, vibration, and pressure readings periodically to the MQTT broker using an MQTT client on the `sensor/data` topic.
-1. Patch BrokerListener to allow unauthenticated connection, to simplify injection of simulated data:
-
- ```bash
- kubectl patch BrokerListener listener -n azure-iot-operations --type=json -p='[{ "op": "add", "path": "/spec/ports/1", "value": {"port":1883} }]'
- ```
-
-1. Deploy the simulator from the Explore IoT Operations repository:
+1. Deploy the simulator from the *Explore IoT Operations* repository:
```bash kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/tutorials/mq-event-driven-dapr/simulate-data.yaml
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
## Verify the Dapr application output
-1. Open a shell to the mosquitto client pod:
+1. Open a shell to the Mosquitto client pod:
```bash kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
## Optional - Create the Dapr application
-The above tutorial uses a prebuilt container of the Dapr application. If you would like to modify and build the code yourself, follow these steps:
+This tutorial uses a prebuilt container of the Dapr application. If you would like to modify and build the code yourself, follow these steps:
### Prerequisites
The above tutorial uses a prebuilt container of the Dapr application. If you wou
git clone https://github.com/Azure-Samples/explore-iot-operations ```
-1. Change to the Dapr tutorial directory in the [Explore IoT Operations](https://github.com/Azure-Samples/explore-iot-operations) repository:
+1. Change to the Dapr tutorial directory:
```bash cd explore-iot-operations/tutorials/mq-event-driven-dapr/src
The above tutorial uses a prebuilt container of the Dapr application. If you wou
## Troubleshooting
-If the application doesn't start or you see the pods in `CrashLoopBackoff`, the logs for `daprd` are most helpful. The `daprd` is a container that is automatically deployed with your Dapr application.
+If the application doesn't start or you see the containers in `CrashLoopBackoff`, the `daprd` container log often contains useful information.
-Run the following command to view the logs:
+Run the following command to view the logs for the daprd component:
```bash
-kubectl logs dapr-workload daprd
+kubectl logs -l app=mq-event-driven-dapr -n azure-iot-operations -c daprd
``` ## Next steps
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
The Azure portal deployment experience is a helper tool that generates a deploym
| `--disable-rsync-rules` | | Disable the resource sync rules on the deployment feature flag if you don't have **Microsoft.Authorization/roleAssignment/write** permissions in the resource group. | | `--name` | String | Provide a name for your Azure IoT Operations instance. Otherwise, a default name is assigned. You can view the `instanceName` parameter in the command output. | | `--no-progress` | | Disables the deployment progress display in the terminal. |
- | `--simulate-pc` | | Include the OPC PLC simulator that ships with the OPC UA connector. |
+ | `--simulate-plc` | | Include the OPC PLC simulator that ships with the OPC UA connector. |
| `--sp-app-id`,<br>`--sp-object-id`,<br>`--sp-secret` | Service principal app ID, service principal object ID, and service principal secret | Include all or some of these parameters to use an existing service principal, app registration, and secret instead of allowing `init` to create new ones. For more information, see [Configure service principal and Key Vault manually](howto-manage-secrets.md#configure-service-principal-and-key-vault-manually). | ### [Azure portal](#tab/portal)
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
Title: Differences between Standard and Consumption logic apps description: Learn the differences between Standard workflows (single-tenant) and Consumption workflows (multitenant) in Azure Logic Apps.-+ ms.suite: integration Previously updated : 05/31/2024 Last updated : 08/11/2024 # Differences between Standard single-tenant logic apps versus Consumption multitenant logic apps
For the **Standard** logic app workflow, these capabilities have changed, or the
* **Deployment targets**: You can't deploy a **Standard** logic app resource to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) nor to Azure deployment slots.
+* **Terraform templates**: You can't use these templates with a **Standard** logic app resource for complete infrastructure deployment. For more information, see [What is Terraform on Azure?](/azure/developer/terraform/overview)
+ * **Azure API Management**: You currently can't import a **Standard** logic app resource into Azure API Management. However, you can import a **Consumption** logic app resource. * **Authentication to backend storage**: Single-tenant Azure Logic Apps relies only on storage access keys to connect with the backend Azure Storage account. Alternative authentication methods, such as Microsoft Entra ID (Enterprise ID) and managed identity, currently aren't supported. So, when you deploy an Azure storage account alongside a **Standard** logic app, make sure that you enable storage access keys.
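A hedged CLI sketch for confirming that shared key access stays enabled on the companion storage account; the resource names are hypothetical:

```bash
# Standard logic apps require key-based access to their backend storage account
az storage account update \
  --name mylogicappstorage \
  --resource-group my-rg \
  --allow-shared-key-access true
```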
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Title: What is automated ML? AutoML
description: Learn how Azure Machine Learning can automatically generate a model by using the parameters and criteria you provide with automated machine learning. -+
machine-learning Concept Automl Forecasting At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-at-scale.md
-+
machine-learning Concept Automl Forecasting Calendar Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-calendar-features.md
-+
machine-learning Concept Automl Forecasting Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-deep-learning.md
-+
machine-learning Concept Automl Forecasting Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-evaluation.md
-+
machine-learning Concept Automl Forecasting Lags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-lags.md
-+
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
-+
machine-learning Concept Automl Forecasting Sweeping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-sweeping.md
-+
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
Title: 'What is an Azure Machine Learning compute instance?'
description: Learn about the Azure Machine Learning compute instance, a fully managed cloud-based workstation. -+
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Title: Understand compute targets
description: Learn how to designate a compute resource or environment to train or deploy your model with Azure Machine Learning. -+
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Title: Prevent overfitting and imbalanced data with Automated ML
description: Identify and manage common pitfalls of machine learning models by using Automated ML solutions in Azure Machine Learning. -+
machine-learning Deploy Jais Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/deploy-jais-models.md
Title: How to deploy JAIS models with Azure Machine Learning studio
description: Learn how to deploy JAIS models with Azure Machine Learning studio. -+ Last updated 05/02/2024
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
-+ Last updated 01/10/2024
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
-+
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
-+
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
description: Set up Azure Machine Learning automated ML to train natural languag
-+
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
-+
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
-+ Last updated 08/01/2023
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Title: Create compute clusters
description: Learn how to create compute clusters in your Azure Machine Learning workspace. Use the compute cluster as a compute target for training or inference. -+
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
-+ Last updated 03/04/2024
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
Title: Create a compute instance
description: Learn how to create an Azure Machine Learning compute instance. Use as your development environment, or as compute target for dev/test purposes. -+
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Title: Customize compute instance with a script
description: Create a customized compute instance, using a startup script. Use the compute instance as your development environment, or as compute target for dev/test purposes. -+
machine-learning How To Deploy Models Jamba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-jamba.md
Title: How to deploy Jamba models with Azure Machine Learning studio
description: How to deploy Jamba models with Azure Machine Learning studio -+ Last updated 06/19/2024
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Azure Machine Learning is composed of multiple Azure services. There are multipl
* You must be familiar with creating and working with [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md).
+## Workspace identity types
+
+The Azure Machine Learning workspace uses a __managed identity__ to communicate with other services. Multiple identity types are supported for Azure Machine Learning.
+
+| Managed identity type | Role assignment creation | Purpose |
+| - | :-: | :-: |
+| System-assigned (SAI) | Managed by Microsoft | Lifecycle tied to resource; single resource use; simple to get started |
+| System-assigned+user-assigned (SAI+UAI) | [Managed by you](#user-assigned-managed-identity) | Independent lifecycle for user-assigned identity, multi-resource use, controls least privileged access. Access data in training jobs. |
+
+Once a workspace is created with SAI identity type, it can be updated to SAI+UAI, but not back from SAI+UAI to SAI. You may assign multiple user-assigned identities to the same workspace.
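To make the SAI-to-SAI+UAI update concrete, here's a minimal sketch using the Python SDK v2 (`azure-ai-ml`). The identity type string follows the tip later in this article; the entity classes (`IdentityConfiguration`, `ManagedIdentityConfiguration`) and all names and resource IDs are assumptions and placeholders, not part of the original article.

```python
# Hypothetical sketch (not from the article): move a workspace from SAI to
# SAI+UAI with the azure-ai-ml (Python SDK v2) package. All names, IDs, and
# the exact entity classes used here are assumptions/placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    IdentityConfiguration,
    ManagedIdentityConfiguration,
    Workspace,
)

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
)

# "system_assigned,user_assigned" mirrors the SAI+UAI type string from the
# tip later in this article; remember the update is one-way (SAI -> SAI+UAI).
ws = Workspace(
    name="<WORKSPACE_NAME>",
    identity=IdentityConfiguration(
        type="system_assigned,user_assigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id=(
                    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
                    "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<UAI_NAME>"
                )
            )
        ],
    ),
)
ml_client.workspaces.begin_update(ws).result()
```

Because the identity type can't be changed back from SAI+UAI to SAI, double-check the identity list before applying the update.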
++
## Azure Container Registry and identity types

This table lists the support matrix when authenticating to __Azure Container Registry__, depending on the authentication method and the __Azure Container Registry's__ [public network access configuration](/azure/container-registry/container-registry-access-selected-networks).
Not supported currently.
> [!TIP]
> To add a new UAI, specify the new UAI ID under the `user_assigned_identities` section in addition to the existing UAIs; you must pass all of the existing UAI IDs.<br> To delete one or more existing UAIs, put the UAI IDs that need to be preserved under the `user_assigned_identities` section; the remaining UAI IDs are deleted.<br>
-To update identity type from SAI to UAI|SAI, you can change type from "user_assigned" to "system_assigned, user_assigned".
### Add a user-assigned managed identity to a workspace in addition to a system-assigned identity
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and instance segmentation.
-+ Last updated 02/18/2024
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
-+
machine-learning How To Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-instance.md
Title: Manage a compute instance
description: Learn how to manage an Azure Machine Learning compute instance. Use as your development environment, or as compute target for dev/test purposes. -+
machine-learning How To Manage Compute Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-sessions.md
-+ Last updated 1/18/2023
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
description: Image data preparation for Azure Machine Learning automated ML to train computer vision models on classification, object detection, and segmentation -+
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
-+ Last updated 12/21/2023
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
-+ Last updated 08/05/2024
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Title: Set up Automated ML for tabular data in the studio
description: Learn how to set up Automated ML training jobs for tabular data without a single line of code by using Automated ML in Azure Machine Learning studio. -+
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
Last updated 09/21/2023 -+
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
description: Set up Azure Machine Learning automated ML to train small object detection models.
-+ Last updated 03/25/2024
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
Title: Hyperparameter for AutoML computer vision tasks
description: Learn which hyperparameters are available for computer vision tasks with automated ML. -+
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
Title: JSONL format for computer vision tasks
description: Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks with the CLI v2 and Python SDK v2. -+
machine-learning Reference Automl Nlp Cli Multilabel Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-multilabel-classification.md
Title: 'CLI (v2) Automated ML NLP text classification multilabel job YAML schema
description: Reference documentation for the CLI (v2) automated ML NLP text classification multilabel job YAML schema. -+
machine-learning Reference Checkpoint Performance For Large Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-checkpoint-performance-for-large-models.md
Title: Optimize Checkpoint Performance for Large Models
description: Learn how Nebula can save time, resources, and money for large model training applications -+
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Title: 'Tutorial: AutoML- train object detection model'
description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning CLI v2 and Python SDK v2. -+
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Title: 'Tutorial: Demand forecasting & AutoML'
description: Train and deploy a demand forecasting model without writing code, using Azure Machine Learning's automated machine learning (automated ML) interface. -+
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
Title: Use a template to create a secure workspace
description: Use a template to create an Azure Machine Learning workspace and associated required Azure services inside a secure virtual network. -+
machine-learning Tutorial Create Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-vnet.md
Title: Create a secure workspace with Azure Virtual Network
description: Create an Azure Machine Learning workspace and required Azure services inside an Azure Virtual Network. -+
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
Title: 'Tutorial: AutoML- train no-code classification models'
description: Train a classification model without writing a single line of code using Azure Machine Learning automated ML in the studio UI. -+
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml.md
Title: What is automated ML? AutoML (v1)
description: Learn how automated machine learning in Azure Machine Learning can automatically generate a model by using the parameters and criteria you provide. -+
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
Title: Secure data access in the cloud v1
description: Learn how to securely connect to your data storage on Azure with Azure Machine Learning datastores and datasets v1 -+
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-network-data-access.md
Title: Network data access in studio
description: Learn how data access works with Azure Machine Learning studio when your workspace or storage is in a virtual network. -+
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
-+
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
description: Set up Azure Machine Learning automated ML to train computer vision models.
-+ Last updated 01/18/2022
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
Title: Train regression model with Automated ML (SDK v1)
description: Train a regression model to predict taxi fares with the Azure Machine Learning Python SDK by using the Azure Machine Learning Automated ML SDK (v1). -+
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md
description: Set up Azure Machine Learning automated ML to train natural language processing models.
-+
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md
-+
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train.md
-+ Last updated 01/24/2021
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-cross-validation-data-splits.md
Title: Data splits and cross-validation in automated machine learning
description: Learn how to configure training, validation, cross-validation and test data for automated machine learning experiments. -+
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md
-+ Last updated 10/21/2021
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-private-link.md
Title: Configure a private endpoint v1
description: 'Use a private endpoint to securely access your Azure Machine Learning workspace (v1) from a virtual network.' -+
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Title: Create compute clusters CLI v1
description: Learn how to create compute clusters in your Azure Machine Learning workspace with CLI v1. Use the compute cluster as a compute target for training or inference. -+
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Title: Create and manage a compute instance with CLI v1
description: Learn how to create and manage an Azure Machine Learning compute instance with CLI v1. Use as your development environment, or as compute target for dev/test purposes. -+
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-generate-automl-training-code.md
-+
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md
Title: Failover & disaster recovery
description: Learn how to plan for disaster recovery and maintain business continuity for Azure Machine Learning. -+
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
Title: Identity-based data access to storage services (v1)
description: Learn how to use identity-based data access to connect to storage services on Azure with Azure Machine Learning datastores and the Machine Learning Python SDK v1.-+
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md
description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and instance segmentation. (v1) -+ Last updated 10/18/2021
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-machine-learning-interpretability-automl.md
Title: Model explainability in automated ML (preview)
description: Learn how to get explanations for how your automated ML model determines feature importance and makes predictions when using the Azure Machine Learning SDK. -+
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
description: Image data preparation for Azure Machine Learning automated ML to train computer vision models on classification, object detection, and segmentation v1 -+
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-web-service.md
Title: Secure web services using TLS
description: Learn how to enable HTTPS with TLS version 1.2 to secure a web service that's deployed through Azure Machine Learning. -+
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-auto-ml.md
-+ Last updated 10/21/2021
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect.md
description: Set up Azure Machine Learning automated ML to train small object detection models. -+ Last updated 10/13/2021
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automlstep-in-pipelines.md
Title: Use automated ML in ML pipelines
description: The AutoMLStep allows you to use automated machine learning in your pipelines. -+
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-hyperparameters.md
Title: Hyperparameter for AutoML computer vision tasks (v1)
description: Learn which hyperparameters are available for computer vision tasks with automated ML (v1). -+
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-schema.md
Title: JSONL format for computer vision tasks (v1)
description: Learn how to format your JSONL files for data consumption in automated ML experiments for computer vision tasks (v1). -+
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models.md
Title: 'Tutorial: AutoML- train object detection model (v1)'
description: Train an object detection model to identify if an image contains certain objects with automated ML and the Azure Machine Learning Python SDK automated ML. (v1) -+
migrate Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/appcat/java.md
The tool discovers application technology usage through static code analysis, pr
This tool is open source and is based on [WindUp](https://github.com/windup), a project created by Red Hat and published under the [Eclipse Public License](https://github.com/windup/windup/blob/master/LICENSE).
-## When should I use Azure Migrate application and code assessment?
+## Overview
The tool is designed to help organizations modernize their Java applications in a way that reduces costs and enables faster innovation. The tool uses advanced analysis techniques to understand the structure and dependencies of any Java application, and provides guidance on how to refactor and migrate the applications to Azure.
When the tool assesses for Cloud Readiness and related Azure services, it can al
* Azure Key Vault
* Azure Front Door
-## How to use Azure Migrate application and code assessment for Java
+## Download
To use the `appcat` CLI, you must download the ZIP file described in the next section, and have a compatible JDK 11 or JDK 17 installation on your computer. The `appcat` CLI runs on any Java-compatible environment such as Windows, Linux, or macOS, on Intel, Arm, and Apple Silicon hardware. We recommend that you use the [Microsoft Build of OpenJDK](/java/openjdk).
-### Download
-
> [!div class="nextstepaction"]
> [Download Azure Migrate application and code assessment for Java 6.3.0.9](https://aka.ms/appcat/azure-migrate-appcat-for-java-cli-6.3.0.9-preview.zip). Updated on 2024-08-06.
The following previous releases are also available for download:
- [Azure Migrate application and code assessment for Java 6.3.0.8](https://aka.ms/appcat/azure-migrate-appcat-for-java-cli-6.3.0.8-preview.zip). Released in March 2024.
- [Azure Migrate application and code assessment for Java 6.3.0.7](https://aka.ms/appcat/azure-migrate-appcat-for-java-cli-6.3.0.7-preview.zip). Released in November 2023.
-### Get started with appcat
+## Get started
+
+To run `appcat`, make sure you have a supported JDK installed. The tool supports the following JDKs:
+
+* Microsoft Build of OpenJDK 11
+* Microsoft Build of OpenJDK 17
+* Eclipse Temurin™ JDK 11
+* Eclipse Temurin™ JDK 17
+
+After you have a valid JDK installed, make sure its installation directory is properly configured in the `JAVA_HOME` environment variable.
-Unzip the zip file in a folder of your choice. You then get the following directory structure:
+To continue, download and unzip the package in a folder of your choice. You then get the following directory structure:
``` appcat-cli-<version> # APPCAT_HOME
The following guides provide the main documentation for `appcat` for Java:
* [CLI Usage Guide](https://azure.github.io/appcat-docs/cli/)
* [Rules Development Guide](https://azure.github.io/appcat-docs/rules-development-guide/)
-## Discover technology usage and Cloud readiness without an Azure service in mind
+### Discover technology usage and Cloud readiness without an Azure service in mind
Discovery of technologies and Cloud readiness targets provide great insight into application replatform and modernization to the Cloud. The tool scans the application and its components to gain a comprehensive understanding of its structure, architecture, and dependencies. It also finds potential issues that might be challenging in a Cloud environment. The `discovery` target in particular is used to create a detailed inventory of the application and its components. This inventory serves as the basis for further analysis and planning. For more information, see the [Discovery report](#discovery-report) section.
This type of report is useful when you don't have a specific Azure service in mind.
The tool always performs the `discovery` whether or not you include that value in the `--target` parameter.
-## Assess a Java application
+### Assess a Java application
The *assessment* phase is where the `appcat` CLI analyzes the application and its components to determine its suitability for replatforming and to identify any potential challenges or limitations. This phase involves analyzing the application code and checking its compliance with the selected targets.
The complete guide for Rules Development is available at [azure.github.io/appcat
### 6.3.0.9
-This release contains the following fixes to the known issues previously on 6.3.0.8, and includes a set of new rules. For more information, see below.
+This release contains the following fixes and includes a set of new rules. For more information, see below.
- Resolved an issue with the `localhost-java-00001` rule.
- Introduced new rules for identifying technologies such as AWS S3, AWS SQS, Alibaba Cloud OSS, Alibaba Cloud SMS, Alibaba Scheduler X, Alibaba Cloud Seata, and Alibaba Rocket MQ.
migrate Start Here Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/start-here-vmware.md
products:
Last updated 05/08/2024
-# Customer intent - overview of the options for assessing an existing VMware deployment for migration
+# Customer intent - Overview of the options for assessing an existing VMware deployment for migration
mysql August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/august-2024.md
# Azure Database For MySQL - Flexible Server August 2024 maintenance
-We're pleased to announce the June 2024 maintenance of the Azure Database for MySQL Flexible Server. This maintenance updates all existing 8.0.34 and, after, engine version servers to the 8.0.37 engine version, along with several security improvements and known issue fixes.
+We're pleased to announce the August 2024 maintenance of Azure Database for MySQL Flexible Server. This maintenance updates all existing servers on engine version 8.0.34 and later to engine version 8.0.37, along with several security improvements and known issue fixes.
## Engine version changes
No new features are being introduced in this maintenance update.
## Known issues fixes

- Fix the issue that for some servers migrated from single server to flexible server, executing table partition operations leads to table corruption.
-- Fix the issue that for some servers with audit/slow log enabled, when a large number of logs are generated, these servers might be missing server metrics, and start operation might be stuck for these servers if they are in a stopped state.
+- Fix the issue that for some servers with audit/slow log enabled, when a large number of logs are generated, these servers might be missing server metrics, and the start operation might be stuck if they are in a stopped state.
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
description: Overview of Azure NAT Gateway features, resources, architecture, an
Previously updated : 04/29/2024 Last updated : 08/12/2024 #Customer intent: I want to understand what Azure NAT Gateway is and how to use it.
Azure NAT Gateway is a fully managed and highly resilient Network Address Transl
NAT Gateway provides dynamic SNAT port functionality to automatically scale outbound connectivity and reduce the risk of SNAT port exhaustion. *Figure: Azure NAT Gateway*
nat-gateway Region Move Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/region-move-nat-gateway.md
Previously updated : 01/22/2024 Last updated : 08/12/2024 # Customer intent: As a network administrator, I want to create and configure a Azure NAT Gateway after moving resources to another region.
After you move all the resources associated with the original NAT gateway instan
| **Instance details** |  |
| Name | Enter **nat-gateway**. |
| Region | Select the name of the new region. |
- | Availability Zone | Select **None**. Instead, you can select the zone of the moved resources if applicable. |
+ | Availability Zone | Select **No Zone**. Alternatively, you can select the zone of the moved resources if applicable. |
| Idle timeout (minutes) | Enter **10**. |

4. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
nat-gateway Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/resource-health.md
# Customer intent: As an IT administrator, I want to understand how to use resource health to monitor NAT gateway. Previously updated : 01/30/2024 Last updated : 08/12/2024 # Azure NAT Gateway Resource Health
openshift Howto Segregate Machinesets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-segregate-machinesets.md
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
-    machine.openshift.io/cluster-api-cluster: XXX-XXX-XXX
+    machine.openshift.io/cluster-api-cluster: <INFRASTRUCTURE_ID>
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
  name: XXX-XXX-XXX-XXX-XXX
spec:
  replicas: 1
  selector:
    matchLabels:
-      machine.openshift.io/cluster-api-cluster: XXX-XXX-XXX
-      machine.openshift.io/cluster-api-machineset: XXX-XXX-XXX-XXX-XXX
+      machine.openshift.io/cluster-api-cluster: <INFRASTRUCTURE_ID>
+      machine.openshift.io/cluster-api-machineset: <INFRASTRUCTURE_ID>-infra-<REGION><ZONE>
  template:
    metadata:
      creationTimestamp: null
      labels:
-        machine.openshift.io/cluster-api-cluster: XXX-XXX-XXX
+        machine.openshift.io/cluster-api-cluster: <INFRASTRUCTURE_ID>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
-        machine.openshift.io/cluster-api-machineset: XXX-XXX-XXX-XXX-XXX
+        machine.openshift.io/cluster-api-machineset: <INFRASTRUCTURE_ID>-infra-<REGION><ZONE>
    spec:
      metadata:
        creationTimestamp: null
        labels:
-          node-role.kubernetes.io/<role>: ""
+          node-role.kubernetes.io/<role>: "" # Example: worker, infra
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          image:
            offer: aro4
            publisher: azureopenshift
            resourceID: ""
-           sku: XXX_XX
-           version: XX.XX.XXX
-         internalLoadBalancer: ""
+           sku: <SKU>
+           version: <VERSION>
          kind: AzureMachineProviderSpec
-         location: useast
+         location: <REGION>
          metadata:
            creationTimestamp: null
          natRule: null
-         networkResourceGroup: XX-XXXXXX
+         networkResourceGroup: <NETWORK_RESOURCE_GROUP>
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
-         publicLoadBalancer: XXX-XXX-XXX
-         resourceGroup: aro-fq5v3vye
-         sshPrivateKey: ""
-         sshPublicKey: ""
-         subnet: XXX-XXX
+         publicLoadBalancer: <LOADBALANCER_NAME>
+         resourceGroup: <CLUSTER_RESOURCE_GROUP>
+         subnet: <SUBNET_NAME>
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
-         vnet: XXX-XXX
-         zone: "X"
+         vnet: <VNET_NAME>
+         zone: <ZONE>
```

### Step 5: Apply the machine set
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
Title: Best practices for Azure Operator Service Manager
description: Understand best practices for Azure Operator Service Manager to onboard and deploy a network function (NF). Previously updated : 08/09/2024 Last updated : 08/12/2024
Any user trying to install cert-manager on the cluster, as part of a workload de
### Other Configuration Changes to Consider
In addition to disabling the NfApp associated with the old user cert-manager, we have found that other changes may be needed:
-1. If any other NfApps have DependsOn references to the old user cert-manager NfApp, these will need to be removed.
-2. If any other NfApps reference the old user cert-manager namespace value, this will need to be changed to the new azurehybridnetwork namespace value.
+1. If one NfApp contains both cert-manager and the CA installation, it must be broken into two NfApps, so that the partner can disable cert-manager but still enable CA installation.
+2. If any other NfApps have DependsOn references to the old user cert-manager NfApp, these will need to be removed.
+3. If any other NfApps reference the old user cert-manager namespace value, this will need to be changed to the new azurehybridnetwork namespace value.
### Cert-Manager Version Compatibility & Management
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure SignalR](../azure-signalr/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Spring Apps](reliability-spring-apps.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
+| [Azure VMware Solution](../azure-vmware/architecture-private-clouds.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| [Azure Web PubSub](../azure-web-pubsub/concept-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Microsoft Fabric](reliability-fabric.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/glossary.md
To better understand regions and availability zones in Azure, it helps to unders
|-|-|
| Region | A geographic perimeter that contains a set of datacenters. |
| Datacenter | A facility that contains servers, networking equipment, and other hardware to support Azure resources and workloads. |
-| Availability zone | [A separated group of datacenters within a region.][availability-zones-overview] Each availability zone is independent of the others, with its own power, cooling, and networking infrastructure. [Many regions support availability zones.][azure-regions-with-availability-zone-support] |
-| Paired regions |A relationship between two Azure regions. [Some Azure regions][azure-region-pairs] are connected to another defined region to enable specific types of multi-region solutions. [Newer Azure regions aren't paired.][regions-with-availability-zones-and-no-region-pair] |
+| Availability zone | [A separated group of datacenters within a region](./availability-zones-overview.md). Each availability zone is independent of the others, with its own power, cooling, and networking infrastructure. [Many regions support availability zones](./availability-zones-service-support.md). |
+| Paired regions | A relationship between two Azure regions. [Some Azure regions](./cross-region-replication-azure.md#azure-paired-regions) are connected to another defined region to enable specific types of multi-region solutions. [Newer Azure regions aren't paired](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair). |
| Region architecture | The specific configuration of the Azure region, including the number of availability zones and whether the region is paired with another region. |
| Locally redundant deployment | A deployment model in which a resource is deployed into a single region without reference to an availability zone. In a region that supports availability zones, the resource might be deployed in any of the region's availability zones. |
| Zonal (pinned) deployment | A deployment model in which a resource is deployed into a specific availability zone. |
reliability Reliability Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md
-+ Last updated 06/12/2023
sap Compliance Bcdr Reliabilty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-bcdr-reliabilty.md
Title: Resiliency in Azure Center for SAP Solutions description: Find out about reliability in Azure Center for SAP Solutions--++
sap Compliance Cedr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-cedr.md
Title: Customer enabled disaster recovery in Azure Center for SAP Solutions description: Find out about Customer enabled disaster recovery in Azure Center for SAP Solutions--++
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
+
+ Title: Common Event Format (CEF) key and CommonSecurityLog field mapping
+description: This article maps CEF keys to the corresponding field names in the CommonSecurityLog in Microsoft Sentinel.
+++ Last updated : 08/12/2024++
+# CEF and CommonSecurityLog field mapping
+
+The following tables map Common Event Format (CEF) field names to the names they use in Microsoft Sentinel's CommonSecurityLog, and might be helpful when you're working with a CEF data source in Microsoft Sentinel. For more information, see [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md).
++
+## A - C
+
+|CEF key name |CommonSecurityLog field name |Description |
+||||
+| act | <a name="deviceaction"></a> DeviceAction | The action mentioned in the event. |
+| app | ApplicationProtocol | The protocol used in the application, such as HTTP, HTTPS, SSHv2, Telnet, POP, IMAP, IMAPS, and so on. |
+| cat | DeviceEventCategory | Represents the category assigned by the originating device. Devices often use their own categorization schema to classify events. For example: `/Monitor/Disk/Read`. |
+| cnt | EventCount | A count associated with the event, showing how many times the same event was observed. |
+
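To make the mapping concrete, here's a small illustrative Python sketch (not part of the original article) that renames parsed CEF extension keys to their CommonSecurityLog column names, using a handful of the mappings from these tables; unmapped keys pass through unchanged.

```python
# Illustrative only: rename CEF extension keys to CommonSecurityLog
# column names, using a few mappings from the tables in this article.
CEF_TO_CSL = {
    "act": "DeviceAction",
    "app": "ApplicationProtocol",
    "cat": "DeviceEventCategory",
    "cnt": "EventCount",
    "dst": "DestinationIP",
    "dpt": "DestinationPort",
    "src": "SourceIP",
    "spt": "SourcePort",
    "msg": "Message",
}

def to_common_security_log(extensions: dict) -> dict:
    """Map raw CEF extension keys to CommonSecurityLog field names.

    Unmapped keys are kept as-is so no data is dropped.
    """
    return {CEF_TO_CSL.get(key, key): value for key, value in extensions.items()}

# Example: the extension portion of a CEF message, already split into
# key/value pairs (parsing the CEF header is out of scope here).
event = {"act": "blocked", "src": "10.0.0.4", "spt": "514", "msg": "denied"}
print(to_common_security_log(event))
# {'DeviceAction': 'blocked', 'SourceIP': '10.0.0.4', 'SourcePort': '514', 'Message': 'denied'}
```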
+## D
+
+|CEF key name |CommonSecurityLog name |Description |
+||||
+|Device Vendor | DeviceVendor | String that, together with device product and version definitions, uniquely identifies the type of sending device. |
+|Device Product | DeviceProduct | String that, together with device vendor and version definitions, uniquely identifies the type of sending device. |
+|Device Version | DeviceVersion | String that, together with device product and vendor definitions, uniquely identifies the type of sending device. |
+| destinationDnsDomain | DestinationDnsDomain | The DNS part of the fully qualified domain name (FQDN). |
+| destinationServiceName | DestinationServiceName | The service that is targeted by the event. For example, `sshd`.|
+| destinationTranslatedAddress | DestinationTranslatedAddress | Identifies the translated destination referred to by the event in an IP network, as an IPv4 IP address. |
+| destinationTranslatedPort | DestinationTranslatedPort | Port, after translation, such as a firewall. <br>Valid port numbers: `0` - `65535` |
+| deviceDirection | <a name="communicationdirection"></a> CommunicationDirection | Any information about the direction the observed communication has taken. Valid values: <br>- `0` = Inbound <br>- `1` = Outbound |
+| deviceDnsDomain | DeviceDnsDomain | The DNS domain part of the fully qualified domain name (FQDN). |
+|DeviceEventClassID | DeviceEventClassID | String or integer that serves as a unique identifier per event type. |
+| deviceExternalId | DeviceExternalID | A name that uniquely identifies the device generating the event. |
+| deviceFacility | DeviceFacility | The facility generating the event.|
+| deviceInboundInterface | DeviceInboundInterface |The interface on which the packet or data entered the device. |
+| deviceNtDomain | DeviceNtDomain | The Windows domain of the device address. |
+| deviceOutboundInterface | DeviceOutboundInterface |Interface on which the packet or data left the device. |
+| devicePayloadId |DevicePayloadId |Unique identifier for the payload associated with the event. |
+| deviceProcessName | ProcessName | Process name associated with the event. <br><br>For example, in UNIX, the process generating the syslog entry. |
+| deviceTranslatedAddress | DeviceTranslatedAddress | Identifies the translated device address that the event refers to, in an IP network. <br><br>The format is an IPv4 address. |
+| dhost |DestinationHostName | The destination that the event refers to in an IP network. <br>The format should be an FQDN associated with the destination node, when a node is available. For example, `host.domain.com` or `host`. |
+| dmac | DestinationMacAddress | The destination MAC address. |
+| dntdom | DestinationNTDomain | The Windows domain name of the destination address.|
+| dpid | DestinationProcessId |The ID of the destination process associated with the event.|
+| dpriv | DestinationUserPrivileges | Defines the destination user's privileges. <br>Valid values: `Administrator`, `User`, `Guest` |
+| dproc | DestinationProcessName | The name of the event's destination process, such as `telnetd` or `sshd`. |
+| dpt | DestinationPort | Destination port. <br>Valid values: `0` - `65535` |
+| dst | DestinationIP | The destination IPv4 address that the event refers to in an IP network. |
+| dtz | DeviceTimeZone | The timezone of the device generating the event. |
+| duid |DestinationUserId | Identifies the destination user by ID. |
+| duser | DestinationUserName |Identifies the destination user by name.|
+| dvc | DeviceAddress | The IPv4 address of the device generating the event. |
+| dvchost | DeviceName | The FQDN associated with the device node, when a node is available. For example, `host.domain.com` or `host`.|
+| dvcmac | DeviceMacAddress | The MAC address of the device generating the event. |
+| dvcpid | Process ID | Defines the ID of the process on the device generating the event. |
+
+## E - I
+
+|CEF key name |CommonSecurityLog name |Description |
+||||
+|externalId | ExternalID | An ID used by the originating device. Typically, these values have increasing values that are each associated with an event. |
+|fileCreateTime | FileCreateTime | Time when the file was created. |
+|fileHash | FileHash | Hash of a file. |
+|fileId | FileID |An ID associated with a file, such as the inode. |
+| fileModificationTime | FileModificationTime |Time when the file was last modified. |
+| filePath | FilePath | Full path to the file, including the filename. For example: `C:\ProgramFiles\WindowsNT\Accessories\wordpad.exe` or `/usr/bin/zip`.|
+| filePermission |FilePermission |The file's permissions. |
+| fileType | FileType | File type, such as pipe, socket, and so on.|
+|fname | FileName| The file's name, without the path. |
+| fsize | FileSize | The size of the file. |
+|Host | Computer | Host, from Syslog |
+|in | ReceivedBytes |Number of bytes transferred inbound. |
++
+## M - P
+
+|CEF key name |CommonSecurityLog name |Description |
+||||
+|msg | Message | A message that gives more details about the event. |
+|Name | Activity | A string that represents a human-readable and understandable description of the event. |
+|oldFileCreateTime | OldFileCreateTime | Time when the old file was created. |
+|oldFileHash | OldFileHash | Hash of the old file. |
+|oldFileId | OldFileId | An ID associated with the old file, such as the inode. |
+| oldFileModificationTime | OldFileModificationTime |Time when the old file was last modified. |
+| oldFileName | OldFileName |Name of the old file. |
+| oldFilePath | OldFilePath | Full path to the old file, including the filename. <br>For example, `C:\ProgramFiles\WindowsNT\Accessories\wordpad.exe` or `/usr/bin/zip`.|
+| oldFilePermission | OldFilePermission |Permissions of the old file. |
+|oldFileSize | OldFileSize | Size of the old file.|
+| oldFileType | OldFileType | File type of the old file, such as a pipe, socket, and so on.|
+| out | SentBytes | Number of bytes transferred outbound. |
+| outcome | EventOutcome | Outcome of the event, such as `success` or `failure`.|
+|proto | Protocol | Transport protocol that identifies the Layer-4 protocol used. <br><br>Possible values include protocol names, such as `TCP` or `UDP`. |
++
+## R - T
+
+|CEF key name |CommonSecurityLog name |Description |
+||||
+| reason | Reason | The reason an audit event was generated. For example, `bad password` or `unknown user`. This could also be an error or return code, for example `0x1234`. |
+|Request | RequestURL | The URL accessed for an HTTP request, including the protocol. For example, `http://www.secure.com` |
+|requestClientApplication | RequestClientApplication | The user agent associated with the request. |
+| requestContext | RequestContext | Describes the content from which the request originated, such as the HTTP Referrer. |
+| requestCookies | RequestCookies |Cookies associated with the request. |
+| requestMethod | RequestMethod | The method used to access a URL. <br><br>Valid values include methods such as `POST`, `GET`, and so on. |
+| rt | ReceiptTime | The time at which the event related to the activity was received. |
+|Severity | <a name="logseverity"></a> LogSeverity | A string or integer that describes the importance of the event.<br><br>Valid string values: `Unknown`, `Low`, `Medium`, `High`, `Very-High` <br><br>Valid integer values are:<br> - `0`-`3` = Low <br>- `4`-`6` = Medium<br>- `7`-`8` = High<br>- `9`-`10` = Very-High |
+| shost | SourceHostName |Identifies the source that the event refers to in an IP network. The format should be a fully qualified domain name (FQDN) associated with the source node, when a node is available. For example, `host` or `host.domain.com`. |
+| smac | SourceMacAddress | Source MAC address. |
+| sntdom | SourceNTDomain | The Windows domain name for the source address. |
+| sourceDnsDomain | SourceDnsDomain | The DNS domain part of the complete FQDN. |
+| sourceServiceName | SourceServiceName | The service responsible for generating the event. |
+| sourceTranslatedAddress | SourceTranslatedAddress | Identifies the translated source that the event refers to in an IP network. |
+| sourceTranslatedPort | SourceTranslatedPort | Source port after translation, such as a firewall. <br>Valid port numbers are `0` - `65535`. |
+| spid | SourceProcessId | The ID of the source process associated with the event.|
+| spriv | SourceUserPrivileges | The source user's privileges. <br><br>Valid values include: `Administrator`, `User`, `Guest` |
+| sproc | SourceProcessName | The name of the event's source process.|
+| spt | SourcePort | The source port number. <br>Valid port numbers are `0` - `65535`. |
+| src | SourceIP |The source that an event refers to in an IP network, as an IPv4 address. |
+| suid | SourceUserID | Identifies the source user by ID. |
+| suser | SourceUserName | Identifies the source user by name. |
+| type | EventType | Event type. Valid values include: <br>- `0`: base event <br>- `1`: aggregated <br>- `2`: correlation event <br>- `3`: action event <br><br>**Note**: This value can be omitted for base events. |
++
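The `Severity` row above accepts both strings and integers. As an illustration only (not Sentinel's implementation), the following Python sketch normalizes a raw CEF severity value into the valid `LogSeverity` string buckets listed in that row.

```python
# Illustration: normalize a CEF Severity value (string, or 0-10 integer)
# to the LogSeverity string buckets listed in the table above.
def normalize_log_severity(value) -> str:
    valid = {"Unknown", "Low", "Medium", "High", "Very-High"}
    if isinstance(value, str) and value in valid:
        return value
    try:
        n = int(value)
    except (TypeError, ValueError):
        return "Unknown"
    if 0 <= n <= 3:
        return "Low"
    if 4 <= n <= 6:
        return "Medium"
    if 7 <= n <= 8:
        return "High"
    if 9 <= n <= 10:
        return "Very-High"
    return "Unknown"

assert normalize_log_severity(5) == "Medium"
assert normalize_log_severity("9") == "Very-High"
assert normalize_log_severity("High") == "High"
```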
+## Custom fields
+
+The following tables map the names of CEF keys and CommonSecurityLog fields that are available for customers to use for data that doesn't apply to any of the built-in fields.
+
+### Custom IPv6 address fields
+
+The following table maps CEF key and CommonSecurityLog names for the *IPv6* address fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| c6a1 | DeviceCustomIPv6Address1 |
+| c6a1Label | DeviceCustomIPv6Address1Label |
+| c6a2 | DeviceCustomIPv6Address2 |
+| c6a2Label | DeviceCustomIPv6Address2Label |
+| c6a3 | DeviceCustomIPv6Address3 |
+| c6a3Label | DeviceCustomIPv6Address3Label |
+| c6a4 | DeviceCustomIPv6Address4 |
+| c6a4Label | DeviceCustomIPv6Address4Label |
+| cfp1 | DeviceCustomFloatingPoint1 |
+| cfp1Label | DeviceCustomFloatingPoint1Label |
+| cfp2 | DeviceCustomFloatingPoint2 |
+| cfp2Label | DeviceCustomFloatingPoint2Label |
+| cfp3 | DeviceCustomFloatingPoint3 |
+| cfp3Label | DeviceCustomFloatingPoint3Label |
+| cfp4 | DeviceCustomFloatingPoint4 |
+| cfp4Label | DeviceCustomFloatingPoint4Label |
++
+### Custom number fields
+
+The following table maps CEF key and CommonSecurityLog names for the *number* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| cn1 | DeviceCustomNumber1 |
+| cn1Label | DeviceCustomNumber1Label |
+| cn2 | DeviceCustomNumber2 |
+| cn2Label | DeviceCustomNumber2Label |
+| cn3 | DeviceCustomNumber3 |
+| cn3Label | DeviceCustomNumber3Label |
++
+### Custom string fields
+
+The following table maps CEF key and CommonSecurityLog names for the *string* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| cs1 | DeviceCustomString1 <sup>[1](#use-sparingly)</sup> |
+| cs1Label | DeviceCustomString1Label <sup>[1](#use-sparingly)</sup> |
+| cs2 | DeviceCustomString2 <sup>[1](#use-sparingly)</sup> |
+| cs2Label | DeviceCustomString2Label <sup>[1](#use-sparingly)</sup> |
+| cs3 | DeviceCustomString3 <sup>[1](#use-sparingly)</sup> |
+| cs3Label | DeviceCustomString3Label <sup>[1](#use-sparingly)</sup> |
+| cs4 | DeviceCustomString4 <sup>[1](#use-sparingly)</sup> |
+| cs4Label | DeviceCustomString4Label <sup>[1](#use-sparingly)</sup> |
+| cs5 | DeviceCustomString5 <sup>[1](#use-sparingly)</sup> |
+| cs5Label | DeviceCustomString5Label <sup>[1](#use-sparingly)</sup> |
+| cs6 | DeviceCustomString6 <sup>[1](#use-sparingly)</sup> |
+| cs6Label | DeviceCustomString6Label <sup>[1](#use-sparingly)</sup> |
+| flexString1 | FlexString1 |
+| flexString1Label | FlexString1Label |
+| flexString2 | FlexString2 |
+| flexString2Label | FlexString2Label |
++
+> [!TIP]
+> <a name="use-sparingly"></a><sup>1</sup> We recommend that you use the **DeviceCustomString** fields sparingly and use more specific, built-in fields when possible.
+>
+### Custom timestamp fields
+
+The following table maps CEF key and CommonSecurityLog names for the *timestamp* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| deviceCustomDate1 | DeviceCustomDate1 |
+| deviceCustomDate1Label | DeviceCustomDate1Label |
+| deviceCustomDate2 | DeviceCustomDate2 |
+| deviceCustomDate2Label | DeviceCustomDate2Label |
+| flexDate1 | FlexDate1 |
+| flexDate1Label | FlexDate1Label |
++
+### Custom integer data fields
+
+The following table maps CEF key and CommonSecurityLog names for the *integer* fields available for custom data.
+
+|CEF key name |CommonSecurityLog name |
+|||
+| flexNumber1 | FlexNumber1 |
+| flexNumber1Label | FlexNumber1Label |
+| flexNumber2 | FlexNumber2 |
+| flexNumber2Label | FlexNumber2Label |
++
+## Enrichment fields
+
+The following **CommonSecurityLog** fields are added by Microsoft Sentinel to enrich the original events received from the source devices, and don't have mappings in CEF keys:
+
+### Threat intelligence fields
+
+|CommonSecurityLog field name |Description |
+|||
+| **IndicatorThreatType** | The [MaliciousIP](#MaliciousIP) threat type, according to the threat intelligence feed. |
+| <a name="MaliciousIP"></a>**MaliciousIP** | Lists any IP addresses in the message that correlate with the current threat intelligence feed. |
+| **MaliciousIPCountry** | The [MaliciousIP](#MaliciousIP) country/region, according to the geographic information at the time of the record ingestion. |
+| **MaliciousIPLatitude** | The [MaliciousIP](#MaliciousIP) latitude, according to the geographic information at the time of the record ingestion. |
+| **MaliciousIPLongitude** | The [MaliciousIP](#MaliciousIP) longitude, according to the geographic information at the time of the record ingestion. |
+| **ReportReferenceLink** | Link to the threat intelligence report. |
+| **ThreatConfidence** | The [MaliciousIP](#MaliciousIP) threat confidence, according to the threat intelligence feed. |
+| **ThreatDescription** | The [MaliciousIP](#MaliciousIP) threat description, according to the threat intelligence feed. |
+| **ThreatSeverity** | The threat severity for the [MaliciousIP](#MaliciousIP), according to the threat intelligence feed at the time of the record ingestion. |
++
+### Other enrichment fields
+
+|CommonSecurityLog field name |Description |
+|||
+|**OriginalLogSeverity** | Always empty, supported for integration with CiscoASA. <br>For details about log severity values, see the [LogSeverity](#logseverity) field. |
+|**RemoteIP** | The remote IP address. <br>This value is based on the [CommunicationDirection](#communicationdirection) field, if possible. |
+|**RemotePort** | The remote port. <br>This value is based on the [CommunicationDirection](#communicationdirection) field, if possible. |
+|**SimplifiedDeviceAction** | Simplifies the [DeviceAction](#deviceaction) value to a static set of values, while keeping the original value in the [DeviceAction](#deviceaction) field. <br>For example: `Denied` > `Deny`. |
+|**SourceSystem** | Always defined as **OpsManager**. |
++
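The enrichment itself is performed by Microsoft Sentinel at ingestion, but as a rough illustration of the `RemoteIP`/`RemotePort` derivation described above, this hypothetical Python sketch selects the remote endpoint from the `CommunicationDirection` value (`0` = Inbound, `1` = Outbound).

```python
# Hypothetical illustration of the RemoteIP/RemotePort derivation: for
# inbound traffic the remote endpoint is the source; for outbound traffic
# it's the destination. Not Sentinel's actual implementation.
def derive_remote_endpoint(record: dict) -> tuple:
    direction = record.get("CommunicationDirection")
    if direction == "0":   # Inbound: remote side is the source
        return record.get("SourceIP"), record.get("SourcePort")
    if direction == "1":   # Outbound: remote side is the destination
        return record.get("DestinationIP"), record.get("DestinationPort")
    return None, None      # Direction unknown: leave unset

record = {
    "CommunicationDirection": "0",
    "SourceIP": "203.0.113.7",
    "SourcePort": "50122",
    "DestinationIP": "10.0.0.4",
    "DestinationPort": "443",
}
print(derive_remote_endpoint(record))  # ('203.0.113.7', '50122')
```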
+## Related content
+
+- [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md)
+- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)
storage Storage Blob Delete Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-go.md
Previously updated : 08/05/2024 Last updated : 08/12/2024 ms.devlang: golang
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
[!INCLUDE [storage-dev-guide-prereqs-go](../../../includes/storage-dev-guides/storage-dev-guide-prereqs-go.md)]
The authorization mechanism must have the necessary permissions to delete a blob
## Delete a blob
+
To delete a blob, call the following method:

- [DeleteBlob](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DeleteBlob)
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Previously updated : 08/05/2024 Last updated : 08/12/2024 ms.devlang: java
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs with the [Azure Storage client library for Java](/jav). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs with the [Azure Storage client library for Java](/jav), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
## Prerequisites
This article shows how to delete blobs with the [Azure Storage client library fo
## Delete a blob
-To delete a blob, call one of these methods:
+
+To delete a blob, call either of the following methods:
- [delete](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-summary)
- [deleteIfExists](/java/api/com.azure.storage.blob.specialized.blobclientbase#method-summary)
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
## Prerequisites
This article shows how to delete blobs with the [Azure Storage client library fo
## Delete a blob
+
To delete a blob, create a [BlobClient](storage-blob-javascript-get-started.md#create-a-blobclient-object), then call either of these methods:

- [BlobClient.delete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-delete)
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Previously updated : 08/05/2024 Last updated : 08/12/2024 ms.devlang: python
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs using the [Azure Storage client library for Python](/python/api/overview/azure/storage). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs using the [Azure Storage client library for Python](/python/api/overview/azure/storage), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
To learn about deleting a blob using asynchronous APIs, see [Delete a blob asynchronously](#delete-a-blob-asynchronously).
To learn about deleting a blob using asynchronous APIs, see [Delete a blob async
## Delete a blob
+
To delete a blob, call the following method:

- [BlobClient.delete_blob](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-delete-blob)
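For reference, a minimal sketch of the calls involved, assuming the `azure-storage-blob` package; the connection string, container, and blob names are placeholders, and the restore step only succeeds if blob soft delete is enabled on the account.

```python
# Minimal sketch: delete a blob (and its snapshots), then restore it while
# blob soft delete's retention period is active. Names are placeholders.
from azure.storage.blob import BlobClient

blob_client = BlobClient.from_connection_string(
    conn_str="<CONNECTION_STRING>",
    container_name="sample-container",
    blob_name="sample-blob.txt",
)

# Delete the blob together with any snapshots it has.
blob_client.delete_blob(delete_snapshots="include")

# With soft delete enabled on the account, the blob can be restored
# during the retention period.
blob_client.undelete_blob()
```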
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
description: Learn how to delete and restore a blob with TypeScript in your Azur
Previously updated : 08/05/2024 Last updated : 08/12/2024 ms.devlang: typescript
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
## Prerequisites
This article shows how to delete blobs with the [Azure Storage client library fo
## Delete a blob + To delete a blob, create a [BlobClient](storage-blob-typescript-get-started.md#create-a-blobclient-object) then call either of these methods: - [BlobClient.delete](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-delete)
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Previously updated : 08/05/2024 Last updated : 08/12/2024 ms.devlang: csharp
[!INCLUDE [storage-dev-guide-selector-delete-blob](../../../includes/storage-dev-guides/storage-dev-guide-selector-delete-blob.md)]
-This article shows how to delete blobs with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period.
+This article shows how to delete blobs with the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage), and how to restore [soft-deleted](soft-delete-blob-overview.md) blobs during the retention period.
[!INCLUDE [storage-dev-guide-prereqs-dotnet](../../../includes/storage-dev-guides/storage-dev-guide-prereqs-dotnet.md)]
The authorization mechanism must have the necessary permissions to delete a blob
## Delete a blob
-To delete a blob, call either of these methods:
+
+To delete a blob, call any of the following methods:
- [Delete](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.delete) - [DeleteAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.deleteasync)
storage Storage C Plus Plus How To Use Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-c-plus-plus-how-to-use-files.md
Title: "Quickstart: Azure Storage Files Share library v12 - C++" description: In this quickstart, you learn how to use the Azure Storage Files Share client library version 12 for C++ to create a files share and a file. Next, you learn how to set and retrieve metadata, then download the file to your local computer.--++ Last updated 06/22/2021
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the scalability and performance targets for Azure stora
Previously updated : 05/13/2024 Last updated : 08/12/2024
Azure file share scale targets apply at the file share level.
| Provisioned size increase/decrease unit | N/A | 1 GiB |
| Maximum size of a file share | 100 TiB | 100 TiB |
| Maximum number of files in a file share | No limit | No limit |
-| Maximum request rate (Max IOPS) | <ul><li>20,000</li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 102,400</li><li>IOPS bursting: Max (10,000, 3x IOPS per GiB), up to 102,400</li></ul> |
-| Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to storage account limits</li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) |
+| Maximum request rate (Max IOPS) | 20,000 | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 102,400</li><li>IOPS bursting: Max (10,000, 3x IOPS per GiB), up to 102,400</li></ul> |
+| Throughput (ingress + egress) for a single file share (MiB/sec) | Up to storage account limits | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
| Maximum object name length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters |
| Maximum length of individual pathname component<sup>2</sup> (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
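To make the provisioned (premium) file share formulas above concrete, here's a minimal sketch that evaluates them for a hypothetical share size; the function name and the 10-TiB figure are illustrative only:

```python
import math

def premium_share_limits(provisioned_gib: int):
    # Baseline IOPS: 3000 + 1 IOPS per GiB, capped at 102,400.
    baseline_iops = min(3000 + provisioned_gib, 102_400)
    # IOPS bursting: Max(10,000, 3x IOPS per GiB), capped at 102,400.
    burst_iops = min(max(10_000, 3 * provisioned_gib), 102_400)
    # Throughput (MiB/sec): 100 + CEILING(0.04 * GiB) + CEILING(0.06 * GiB).
    throughput_mib_s = (100 + math.ceil(0.04 * provisioned_gib)
                        + math.ceil(0.06 * provisioned_gib))
    return baseline_iops, burst_iops, throughput_mib_s

# A 10-TiB (10,240 GiB) share: 13,240 baseline IOPS, 30,720 burst IOPS, 1,125 MiB/sec.
print(premium_share_limits(10_240))
```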
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
Title: 'Quickstart: Create a Synapse workspace using Azure CLI'
-description: Create an Azure Synapse workspace using Azure CLI by following the steps in this guide.
+ Title: 'Quickstart: Create an Azure Synapse Analytics workspace using Azure CLI'
+description: Create an Azure Synapse Analytics workspace using the Azure CLI by following the steps in this article.
-# Quickstart: Create an Azure synapse workspace with Azure CLI
+# Quickstart: Create an Azure Synapse Analytics workspace with the Azure CLI
The Azure CLI is Azure's command-line experience for managing Azure resources. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it from the command line.
-In this quickstart, you learn to create a Synapse workspace by using the Azure CLI.
+In this quickstart, you learn how to create an Azure Synapse Analytics workspace by using the Azure CLI.
[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] ## Prerequisites -- Download and install [jq](https://stedolan.github.io/jq/download/), a lightweight and flexible command-line JSON processor-- [Azure Data Lake Storage Gen2 storage account](../storage/common/storage-account-create.md)
+- Download and install [jq](https://stedolan.github.io/jq/download/), a lightweight and flexible command-line JSON processor.
+- [Azure Data Lake Storage Gen2 storage account](../storage/common/storage-account-create.md).
> [!IMPORTANT]
- > The Azure Synapse workspace needs to be able to read and write to the selected ADLS Gen2 account. In addition, for any storage account that you link as the primary storage account, you must have enabled **hierarchical namespace** at the creation of the storage account, as described on the [Create a Storage Account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) page.
+ > An Azure Synapse Analytics workspace needs to be able to read and write to the selected Data Lake Storage Gen2 account. In addition, for any storage account that you link as the primary storage account, you must have enabled **hierarchical namespace** at the creation of the storage account, as described in [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account).
[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-## Create an Azure Synapse workspace using the Azure CLI
+## Create an Azure Synapse Analytics workspace by using the Azure CLI
-1. Define necessary environment variables to create resources for Azure Synapse workspace.
+1. Define necessary environment variables to create resources for an Azure Synapse Analytics workspace.
- | Environment Variable Name | Description |
+ | Environment Variable name | Description |
| -- | -- |
- |StorageAccountName| Name for your existing ADLS Gen2 storage account.|
- |StorageAccountResourceGroup| Name of your existing ADLS Gen2 storage account resource group. |
+ |StorageAccountName| Name for your existing Data Lake Storage Gen2 storage account.|
+ |StorageAccountResourceGroup| Name of your existing Data Lake Storage Gen2 storage account resource group. |
|FileShareName| Name of your existing storage file system.|
- |SynapseResourceGroup| Choose a new name for your Azure Synapse resource group. |
+ |SynapseResourceGroup| Choose a new name for your Azure Synapse Analytics resource group. |
|Region| Choose one of the [Azure regions](https://azure.microsoft.com/global-infrastructure/geographies/#overview). |
- |SynapseWorkspaceName| Choose a unique name for your new Azure Synapse Workspace. |
+ |SynapseWorkspaceName| Choose a unique name for your new Azure Synapse Analytics workspace. |
|SqlUser| Choose a value for a new username.|
|SqlPassword| Choose a secure password.|
-1. Create a resource group as a container for your Azure Synapse workspace:
+1. Create a resource group as a container for your Azure Synapse Analytics workspace:
+ ```azurecli az group create --name $SynapseResourceGroup --location $Region ```
-1. Create an Azure Synapse Workspace:
+1. Create an Azure Synapse Analytics workspace:
+ ```azurecli az synapse workspace create \ --name $SynapseWorkspaceName \
In this quickstart, you learn to create a Synapse workspace by using the Azure C
--location $Region ```
-1. Get Web and Dev URL for Azure Synapse Workspace:
+1. Get the web and dev URLs for the Azure Synapse Analytics workspace:
+ ```azurecli WorkspaceWeb=$(az synapse workspace show --name $SynapseWorkspaceName --resource-group $SynapseResourceGroup | jq -r '.connectivityEndpoints | .web') WorkspaceDev=$(az synapse workspace show --name $SynapseWorkspaceName --resource-group $SynapseResourceGroup | jq -r '.connectivityEndpoints | .dev') ```
-1. Create a Firewall Rule to allow your access to Azure Synapse Workspace from your machine:
+1. Create a firewall rule to allow access to your Azure Synapse Analytics workspace from your machine:
```azurecli ClientIP=$(curl -sb -H "Accept: application/json" "$WorkspaceDev" | jq -r '.message')
In this quickstart, you learn to create a Synapse workspace by using the Azure C
az synapse workspace firewall-rule create --end-ip-address $ClientIP --start-ip-address $ClientIP --name "Allow Client IP" --resource-group $SynapseResourceGroup --workspace-name $SynapseWorkspaceName ```
-1. Open the Azure Synapse Workspace Web URL address stored in environment variable `WorkspaceWeb` to access your workspace:
+1. Open the Azure Synapse Analytics workspace web URL stored in the environment variable `WorkspaceWeb` to access your workspace:
```azurecli echo "Open your Azure Synapse Workspace Web URL in the browser: $WorkspaceWeb" ```
- [ ![Azure Synapse workspace web](media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png) ](media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png#lightbox)
+ :::image type="content" source="media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png" alt-text="Screenshot that shows the Azure Synapse Analytics workspace web." lightbox="media/quickstart-create-synapse-workspace-cli/create-workspace-cli-1.png":::
-1. Once deployed, additional permissions are required.
-- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). -- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio.-- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users.
+1. After it's deployed, more permissions are required:
+
+ - In the Azure portal, assign other users of the workspace to the Contributor role in the workspace. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+ - Assign other users the appropriate [Azure Synapse Analytics role-based access control roles](security/synapse-workspace-synapse-rbac-roles.md) by using Synapse Studio.
+ - A member of the Owner role of the Azure Storage account must assign the Storage Blob Data Contributor role to the Azure Synapse Analytics workspace managed service identity and other users.
## Clean up resources
-Follow the steps below to delete the Azure Synapse workspace.
+Follow these steps to delete the Azure Synapse Analytics workspace.
+ > [!WARNING]
-> Deleting an Azure Synapse workspace will remove the analytics engines and the data stored in the database of the contained SQL pools and workspace metadata. It will no longer be possible to connect to the SQL or Apache Spark endpoints. All code artifacts will be deleted (queries, notebooks, job definitions and pipelines).
+> Deleting an Azure Synapse Analytics workspace removes the analytics engines and the data stored in the database of the contained SQL pools and workspace metadata. It will no longer be possible to connect to the SQL or Apache Spark endpoints. All code artifacts will be deleted (queries, notebooks, job definitions, and pipelines).
>
-> Deleting the workspace will **not** affect the data in the Data Lake Store Gen2 linked to the workspace.
+> Deleting the workspace won't affect the data in the Data Lake Storage Gen2 account linked to the workspace.
-If you want to delete the Azure Synapse workspace, complete the following command:
+If you want to delete the Azure Synapse Analytics workspace, complete the following command:
```azurecli az synapse workspace delete --name $SynapseWorkspaceName --resource-group $SynapseResourceGroup ```
-## Next steps
+## Related content
Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-powershell.md
Title: 'Quickstart: Create a Synapse workspace using Azure PowerShell'
-description: Create an Azure Synapse workspace using Azure PowerShell by following the steps in this guide.
+ Title: 'Quickstart: Create an Azure Synapse Analytics workspace using PowerShell'
+description: Create an Azure Synapse Analytics workspace using Azure PowerShell by following the steps in this article.
-# Quickstart: Create an Azure synapse workspace with Azure PowerShell
+# Quickstart: Create an Azure Synapse Analytics workspace with Azure PowerShell
Azure PowerShell is a set of cmdlets for managing Azure resources directly from PowerShell. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows.
-In this quickstart, you learn to create a Synapse workspace using Azure PowerShell.
+In this quickstart, you learn to create an Azure Synapse Analytics workspace by using Azure PowerShell.
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
If you don't have an Azure subscription, create a [free Azure account](https://a
- [Azure Data Lake Storage Gen2 storage account](../storage/common/storage-account-create.md) > [!IMPORTANT]
- > The Azure Synapse workspace needs to be able to read and write to the selected ADLS Gen2
- > account. For any storage account that you link as the primary storage account, you must enable
- > **hierarchical namespace** at the creation of the storage account as described in
- > [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-powershell#create-a-storage-account).
+ > An Azure Synapse Analytics workspace needs to be able to read and write to the selected Azure Data Lake Storage Gen2 account. For any storage account that you link as the primary storage account, you must enable **hierarchical namespace** at the creation of the storage account as described in [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-powershell#create-a-storage-account).
If you choose to use Cloud Shell, see [Overview of Azure Cloud Shell](../cloud-shell/overview.md) for more information. ### Install the Azure PowerShell module locally
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
+If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
For more information about authentication with Azure PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). ### Install the Azure Synapse PowerShell module > [!IMPORTANT]
-> While the **Az.Synapse** PowerShell module is in preview, you must install it separately using the `Install-Module` cmdlet. After this PowerShell module becomes generally available, it will be part of future Az PowerShell module releases and available by default from within Azure Cloud Shell.
+> While the `Az.Synapse` PowerShell module is in preview, you must install it separately by using the `Install-Module` cmdlet. After this PowerShell module becomes generally available, it will be part of future Az PowerShell module releases and available by default from within Cloud Shell.
```azurepowershell-interactive Install-Module -Name Az.Synapse ```
-## Create an Azure Synapse workspace using Azure PowerShell
+## Create an Azure Synapse Analytics workspace by using Azure PowerShell
-1. Define necessary environment variables to create resources for Azure Synapse workspace.
+1. Define necessary environment variables to create resources for an Azure Synapse Analytics workspace.
| Variable name | Description |
| -- | -- |
- | StorageAccountName | Name for your existing ADLS Gen2 storage account. |
- | StorageAccountResourceGroup | Name of your existing ADLS Gen2 storage account resource group. |
+ | StorageAccountName | Name for your existing Azure Data Lake Storage Gen2 storage account. |
+ | StorageAccountResourceGroup | Name of your existing Azure Data Lake Storage Gen2 storage account resource group. |
| FileShareName | Name of your existing storage file system. |
- | SynapseResourceGroup | Choose a new name for your Azure Synapse resource group. |
+ | SynapseResourceGroup | Choose a new name for your Azure Synapse Analytics resource group. |
| Region | Choose one of the [Azure regions](https://azure.microsoft.com/global-infrastructure/geographies/#overview). |
- | SynapseWorkspaceName | Choose a unique name for your new Azure Synapse Workspace. |
+ | SynapseWorkspaceName | Choose a unique name for your new Azure Synapse Analytics workspace. |
| SqlUser | Choose a value for a new username. |
| SqlPassword | Choose a secure password. |
- | ClientIP | Public IP Address of the system you're running PowerShell from. |
+ | ClientIP | Public IP address of the system you're running PowerShell from. |
-1. Create a resource group as a container for your Azure Synapse workspace:
+1. Create a resource group as a container for your Azure Synapse Analytics workspace:
```azurepowershell-interactive New-AzResourceGroup -Name $SynapseResourceGroup -Location $Region ```
-1. Create an Azure Synapse Workspace:
+1. Create an Azure Synapse Analytics workspace:
```azurepowershell-interactive $Cred = New-Object -TypeName System.Management.Automation.PSCredential ($SqlUser, (ConvertTo-SecureString $SqlPassword -AsPlainText -Force))
Install-Module -Name Az.Synapse
New-AzSynapseWorkspace @WorkspaceParams ```
-1. Get Web and Dev URL for Azure Synapse Workspace:
+1. Get the web and dev URLs for the Azure Synapse Analytics workspace:
```azurepowershell-interactive $WorkspaceWeb = (Get-AzSynapseWorkspace -Name $SynapseWorkspaceName -ResourceGroupName $StorageAccountResourceGroup).ConnectivityEndpoints.web $WorkspaceDev = (Get-AzSynapseWorkspace -Name $SynapseWorkspaceName -ResourceGroupName $StorageAccountResourceGroup).ConnectivityEndpoints.dev ```
-1. Create a Firewall Rule to allow your access to Azure Synapse Workspace from your machine:
+1. Create a firewall rule to allow access to your Azure Synapse Analytics workspace from your machine:
```azurepowershell-interactive $FirewallParams = @{
Install-Module -Name Az.Synapse
New-AzSynapseFirewallRule @FirewallParams ```
-1. Open the Azure Synapse Workspace Web URL address stored in environment variable `WorkspaceWeb` to
+1. Open the Azure Synapse Analytics workspace web URL stored in the environment variable `WorkspaceWeb` to
access your workspace: ```azurepowershell-interactive Start-Process $WorkspaceWeb ```
- ![Azure Synapse workspace web](media/quickstart-create-synapse-workspace-powershell/create-workspace-powershell-1.png)
+ ![Screenshot that shows the Azure Synapse Analytics workspace web.](media/quickstart-create-synapse-workspace-powershell/create-workspace-powershell-1.png)
+1. After it's deployed, more permissions are required.
-1. Once deployed, additional permissions are required.
-- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). -- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio.-- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users.
+ - In the Azure portal, assign other users of the workspace to the Contributor role in the workspace. For instructions, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+ - Assign other users the appropriate [Azure Synapse Analytics role-based access control roles](security/synapse-workspace-synapse-rbac-roles.md) by using Synapse Studio.
+ - A member of the Owner role of the Azure Storage account must assign the Storage Blob Data Contributor role to the Azure Synapse Analytics workspace managed service identity and other users.
## Clean up resources
-Follow the steps below to delete the Azure Synapse workspace.
+Follow these steps to delete the Azure Synapse Analytics workspace.
> [!WARNING]
-> Deleting an Azure Synapse workspace will remove the analytics engines and the data stored in the
-> database of the contained SQL pools and workspace metadata. It will no longer be possible to
-> connect to the SQL or Apache Spark endpoints. All code artifacts will be deleted (queries,
-> notebooks, job definitions and pipelines). Deleting the workspace will **not** affect the data in
-> the Data Lake Store Gen2 linked to the workspace.
+> Deleting an Azure Synapse Analytics workspace removes the analytics engines and the data stored in the database of the contained SQL pools and workspace metadata. It will no longer be possible to connect to the SQL or Apache Spark endpoints. All code artifacts will be deleted (queries, notebooks, job definitions, and pipelines).
+>
+> Deleting the workspace won't affect the data in the Azure Data Lake Storage Gen2 account linked to the workspace.
-If the Azure Synapse workspace created in this article isn't needed, you can delete it by running
-the following example.
+If the Azure Synapse Analytics workspace created in this article isn't needed, you can delete it by running
+the following example:
```azurepowershell-interactive Remove-AzSynapseWorkspace -Name $SynapseWorkspaceName -ResourceGroupName $SynapseResourceGroup ```
-## Next steps
+## Related content
Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace.md
Title: 'Quickstart: create a Synapse workspace'
-description: Create an Synapse workspace by following the steps in this guide.
+ Title: 'Quickstart: Create an Azure Synapse Analytics workspace'
+description: Learn how to create an Azure Synapse Analytics workspace by following the steps in this quickstart.
-# Quickstart: Create a Synapse workspace
-This quickstart describes the steps to create an Azure Synapse workspace by using the Azure portal.
+# Quickstart: Create an Azure Synapse Analytics workspace
-## Create a Synapse workspace
+This quickstart describes the steps to create an Azure Synapse Analytics workspace by using the Azure portal.
-1. Open the [Azure portal](https://portal.azure.com), and at the top search for **Synapse**.
+## Create an Azure Synapse Analytics workspace
+
+1. Open the [Azure portal](https://portal.azure.com), and at the top, search for **Synapse**.
1. In the search results, under **Services**, select **Azure Synapse Analytics**. 1. Select **Add** to create a workspace.
-1. In the **Basics** tab, give the workspace a unique name. We'll use **mysworkspace** in this document
-1. You need an ADLSGEN2 account to create a workspace. The simplest choice is to create a new one. If you want to re-use an existing one you'll need to perform some additional configuration.
-1. OPTION 1 Creating a new ADLSGEN2 account
- 1. Under **Select Data Lake Storage Gen 2 / Account Name**, click **Create New** and provide a global unique name, such as **contosolake**.
- 1. Under **Select Data Lake Storage Gen 2 / File system name**, click **File System** and name it **users**.
-1. OPTION 2 See the [**Prepare a Storage Account**](#prepare-an-existing-storage-account-for-use-with-azure-synapse-analytics) instructions at the bottom of this document.
-1. Your Azure Synapse workspace will use this storage account as the "primary" storage account and the container to store workspace data. The workspace stores data in Apache Spark tables. It stores Spark application logs under a folder called **/synapse/workspacename**.
+1. On the **Basics** tab, give the workspace a unique name. We use **mysworkspace** in this document.
+1. You need an Azure Data Lake Storage Gen2 account to create a workspace. The simplest choice is to create a new one. If you want to reuse an existing one, you need to perform extra configuration:
+
+ - Option 1: Create a new Data Lake Storage Gen2 account:
+    1. Under **Select Data Lake Storage Gen 2** > **Account Name**, select **Create New**. Provide a globally unique name, such as **contosolake**.
+ 1. Under **Select Data Lake Storage Gen 2** > **File system name**, select **File System** and name it **users**.
+ - Option 2: See the instructions in [Prepare an existing storage account for use with Azure Synapse Analytics](#prepare-an-existing-storage-account-for-use-with-azure-synapse-analytics).
+1. Your Azure Synapse Analytics workspace uses this storage account as the primary storage account and the container to store workspace data. The workspace stores data in Apache Spark tables. It stores Spark application logs under a folder named */synapse/workspacename*.
1. Select **Review + create** > **Create**. Your workspace is ready in a few minutes. > [!NOTE]
-> After creating your Azure Synapse workspace, you will not be able to move the workspace to another Microsoft Entra tenant. If you do so through subscription migration or other actions, you may lose access to the artifacts within the workspace.
+> After you create your Azure Synapse Analytics workspace, you won't be able to move the workspace to another Microsoft Entra tenant. If you do so through subscription migration or other actions, you might lose access to the artifacts within the workspace.
## Open Synapse Studio
-After your Azure Synapse workspace is created, you have two ways to open Synapse Studio:
+After your Azure Synapse Analytics workspace is created, you have two ways to open Synapse Studio:
-* Open your Synapse workspace in the [Azure portal](https://portal.azure.com). On the top of the **Overview** section, select **Launch Synapse Studio**.
-* Go to the `https://web.azuresynapse.net` and sign in to your workspace.
+* Open your Synapse workspace in the [Azure portal](https://portal.azure.com). At the top of the **Overview** section, select **Launch Synapse Studio**.
+* Go to [Azure Synapse Analytics](https://web.azuresynapse.net) and sign in to your workspace.
## Prepare an existing storage account for use with Azure Synapse Analytics 1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to an existing ADLSGEN2 storage account
+1. Go to an existing Data Lake Storage Gen2 storage account.
1. Select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment** to open the Add role assignment page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+1. Assign the following role. For more information, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
| Setting | Value |
| -- | -- |
| Role | Owner and Storage Blob Data Owner |
| Assign access to | USER |
- | Members | your user name |
+ | Members | Your user name |
- ![Add role assignment page in Azure portal.](~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png)
+ ![Screenshot that shows the Add role assignment page in Azure portal.](~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png)
1. On the left pane, select **Containers** and create a container.
-1. You can give the container any name. In this document, we'll name the container **users**.
+1. You can give the container any name. In this document, we name the container **users**.
1. Accept the default setting **Public access level**, and then select **Create**. ### Configure access to the storage account from your workspace
-Managed identities for your Azure Synapse workspace might already have access to the storage account. Follow these steps to make sure:
+Managed identities for your Azure Synapse Analytics workspace might already have access to the storage account. Follow these steps to make sure:
1. Open the [Azure portal](https://portal.azure.com) and the primary storage account chosen for your workspace. 1. Select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment** to open the Add role assignment page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+1. Assign the following role. For more information, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
| Setting | Value |
| -- | -- |
Managed identities for your Azure Synapse workspace might already have access to
> [!NOTE] > The managed identity name is also the workspace name.
- ![Add role assignment page in Azure portal.](~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png)
+ ![Screenshot that shows the Add role assignment pane in the Azure portal.](~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-page.png)
1. Select **Save**.
-## Next steps
+## Related content
* [Create a dedicated SQL pool](quickstart-create-sql-pool-studio.md) * [Create a serverless Apache Spark pool](quickstart-create-apache-spark-pool-portal.md)
-* [Use serverless SQL pool](quickstart-sql-on-demand.md)
+* [Use a serverless SQL pool](quickstart-sql-on-demand.md)
synapse-analytics Quickstart Deployment Template Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-deployment-template-workspaces.md
Title: 'Quickstart: Create an Azure Synapse workspace Azure Resource Manager template (ARM template)'
-description: Learn how to create a Synapse workspace by using Azure Resource Manager template (ARM template).
+ Title: 'Quickstart: Create an Azure Synapse Analytics workspace using an ARM template'
+description: Learn how to create an Azure Synapse Analytics workspace by using an Azure Resource Manager template (ARM template).
Last updated 02/04/2022
-# Quickstart: Create an Azure Synapse workspace using an ARM template
+# Quickstart: Create an Azure Synapse Analytics workspace by using an ARM template
-This Azure Resource Manager (ARM) template will create an Azure Synapse workspace with underlying Data Lake Storage. The Azure Synapse workspace is a securable collaboration boundary for analytics processes in Azure Synapse Analytics.
+This Azure Resource Manager template (ARM template) creates an Azure Synapse Analytics workspace with underlying Azure Data Lake Storage. The Azure Synapse Analytics workspace is a securable collaboration boundary for analytics processes in Azure Synapse Analytics.
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select **Deploy to Azure**. The template opens in the Azure portal.
## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-To create an Azure Synapse workspace, a user must have **Azure Contributor** role and **User Access Administrator** permissions, or the **Owner** role in the subscription. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+To create an Azure Synapse Analytics workspace, you must have the Azure Contributor role and User Access Administrator permissions, or the Owner role in the subscription. For more information, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
## Review the template You can review the template by selecting the **Visualize** link. Then select **Edit template**. - The template defines two resources:
The template defines two resources:
## Deploy the template
-1. Select the following image to sign in to Azure and open the template. This template creates a Synapse workspace.
+1. Select the following image to sign in to Azure and open the template. This template creates an Azure Synapse Analytics workspace.
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FSynapse%2Fmaster%2FManage%2FDeployWorkspace%2Fazuredeploy.json":::
+ :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Screenshot that shows the button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FSynapse%2Fmaster%2FManage%2FDeployWorkspace%2Fazuredeploy.json":::
1. Enter or update the following values: - **Subscription**: Select an Azure subscription.
- - **Resource group**: Select **Create new** and enter a unique name for the resource group and select **OK**. A new resource group will facilitate resource clean up.
- - **Region**: Select a region. For example, **Central US**.
+   - **Resource group**: Select **Create new**, enter a unique name for the resource group, and then select **OK**. A new resource group facilitates resource clean-up.
+ - **Region**: Select a region. An example is **Central US**.
- **Name**: Enter a name for your workspace. - **SQL Administrator login**: Enter the administrator username for the SQL Server. - **SQL Administrator password**: Enter the administrator password for the SQL Server.
The template defines two resources:
- **Review and Create**: Select. - **Create**: Select.
-1. Once deployed, additional permissions are required.
-- In the Azure portal, assign other users of the workspace to the **Contributor** role in the workspace. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). -- Assign other users the appropriate **[Synapse RBAC roles](security/synapse-workspace-synapse-rbac-roles.md)** using Synapse Studio.-- A member of the **Owner** role of the Azure Storage account must assign the **Storage Blob Data Contributor** role to the Azure Synapse workspace MSI and other users.
+1. After it's deployed, more permissions are required:
+
+ - In the Azure portal, assign other users of the workspace to the Contributor role in the workspace. For more information, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+ - Assign other users the appropriate [Azure Synapse Analytics role-based access control roles](security/synapse-workspace-synapse-rbac-roles.md) by using Synapse Studio.
+ - A member of the Owner role of the Azure Storage account must assign the Storage Blob Data Contributor role to the Azure Synapse Analytics workspace managed service identity and other users.
-## Next steps
+## Related content
-To learn more about Azure Synapse Analytics and Azure Resource Manager,
+To learn more about Azure Synapse Analytics and Resource Manager:
-- Read an [Overview of Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)-- Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md)-- [Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+- Read an [Overview of Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
+- Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
+- [Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md).
Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
This guide provides a structured approach for users looking to upgrade their Azu
1. Recreate Spark Pool 3.3 from the ground up. 1. Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3.
+**Question:** Why can't I upgrade to 3.4 without creating a new Spark pool?
+
+**Answer:** This upgrade isn't available from the Synapse Studio UX, but you can use Azure PowerShell to update the Spark version. Use the `ForceApplySetting` parameter so that any existing clusters (with the old version) are decommissioned.
+
+**Sample query:**
+
+```azurepowershell
+# Workspaces whose Spark pools should be upgraded; replace with your workspace names.
+$_target_work_space = @("workspace1", "workspace2")
+
+Get-AzSynapseWorkspace |
+ ForEach-Object {
+ if ($_target_work_space -contains $_.Name) {
+ $_workspace_name = $_.Name
+ Write-Host "Updating workspace: $($_workspace_name)"
+ Get-AzSynapseSparkPool -WorkspaceName $_workspace_name |
+ ForEach-Object {
+ Write-Host "Updating Spark pool: $($_.Name)"
+ Write-Host "Current Spark version: $($_.SparkVersion)"
+
+                    # ForceApplySetting decommissions existing clusters that run the old Spark version.
+                    Update-AzSynapseSparkPool -WorkspaceName $_workspace_name -Name $_.Name -SparkVersion 3.4 -ForceApplySetting
+ }
+ }
+ }
+```
+ ## Related content - [Manage libraries for Apache Spark in Azure Synapse Analytics](apache-spark-azure-portal-add-libraries.md)
update-manager Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides a guide to start using Azure Update Manager (for update management) for virtual machines that are currently using Microsoft Configuration Manager (MCM).
+This article provides a guide to modernize management of servers that currently use Microsoft Configuration Manager (MCM). It focuses on Azure Update Manager, which provides an Azure-based experience for patch management, the major capability of MCM.
-Before initiating migration, you need to understand mapping between System Center components and equivalent services in Azure.
+To start, the following table lists the Azure services that provide equivalent capabilities for the different System Center components.
| **System Center Component** | **Azure equivalent service** |
| -- | -- |
| System Center Operations Manager (SCOM) | Azure Monitor SCOM Managed Instance |
-| System Center Configuration Manager (SCCM), now called Microsoft Configuration Manager (MCM) | Azure Update Manager, </br> Change Tracking and Inventory, </br> Guest Config, </br> Azure Automation, </br> Desired State Configuration (DSC), </br> Defender for Cloud |
+| System Center Configuration Manager (SCCM), now called Microsoft Configuration Manager (MCM) | Azure Update Manager, </br> Change Tracking and Inventory, </br> Azure Machine Configuration (formerly called Azure Policy Guest Configuration), </br> Azure Automation, </br> Microsoft Defender for Cloud |
| System Center Virtual Machine Manager (SCVMM) | Arc-enabled System Center VMM |
| System Center Data Protection Manager (SCDPM) | Azure Backup |
| System Center Orchestrator (SCORCH) | Azure Automation |
virtual-desktop Apply Windows License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/apply-windows-license.md
Title: Apply Windows license to session host virtual machines - Azure description: Describes how to apply the Windows license for Azure Virtual Desktop VMs.-+ Last updated 11/14/2022-++ # Apply Windows license to session host virtual machines
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Title: Azure Virtual Desktop identities and authentication - Azure description: Identities and authentication methods for Azure Virtual Desktop.-+ Last updated 07/16/2024-++ # Supported identities and authentication methods
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
Title: Migrate automatically from Azure Virtual Desktop (classic) - Azure description: How to migrate automatically from Azure Virtual Desktop (classic) to Azure Virtual Desktop by using the migration module.-+ -+ Last updated 01/31/2022-+ # Migrate automatically from Azure Virtual Desktop (classic)
virtual-desktop Autoscale Create Assign Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-create-assign-scaling-plan.md
Title: Create and assign an autoscale scaling plan for Azure Virtual Desktop description: How to create and assign an autoscale scaling plan to optimize deployment costs.-+ Last updated 04/18/2024--++ # Create and assign an autoscale scaling plan for Azure Virtual Desktop
virtual-desktop Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md
Title: Set up diagnostics for Autoscale in Azure Virtual Desktop description: How to set up diagnostic reports for the scaling service in your Azure Virtual Desktop deployment.-+ Last updated 11/01/2023-++ # Set up diagnostics for Autoscale in Azure Virtual Desktop
virtual-desktop Autoscale Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-glossary.md
Title: Azure Virtual Desktop autoscale glossary for Azure Virtual Desktop - Azure description: A glossary of terms and concepts for the Azure Virtual Desktop autoscale feature.-+ Last updated 11/01/2023-++ # Autoscale glossary for Azure Virtual Desktop
virtual-desktop Autoscale Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scenarios.md
Title: Autoscale scaling plans and example scenarios in Azure Virtual Desktop description: Information about autoscale and a collection of four example scenarios that illustrate how various parts of autoscale for Azure Virtual Desktop work.-+ Last updated 11/01/2023--++ # Autoscale scaling plans and example scenarios in Azure Virtual Desktop
virtual-desktop Azure Ad Joined Session Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-ad-joined-session-hosts.md
Title: Microsoft Entra joined session hosts in Azure Virtual Desktop description: Learn about using Microsoft Entra joined session hosts in Azure Virtual Desktop.-+ Last updated 06/04/2024-++ # Microsoft Entra joined session hosts in Azure Virtual Desktop
virtual-desktop Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-advisor-recommendations.md
Title: Azure Advisor Azure Virtual Desktop Walkthrough - Azure description: How to resolve Azure Advisor recommendations for Azure Virtual Desktop.-+ Last updated 03/31/2021-++ # How to resolve Azure Advisor recommendations
virtual-desktop Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/cli-powershell.md
Title: Use Azure CLI and Azure PowerShell with Azure Virtual Desktop description: Learn about Azure CLI and Azure PowerShell with Azure Virtual Desktop and some useful example commands you can run. -+ Last updated 01/08/2024
virtual-desktop Configure Adfs Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-adfs-sso.md
Title: Configure single sign-on for Azure Virtual Desktop using AD FS - Azure description: How to configure single sign-on for an Azure Virtual Desktop environment using Active Directory Federation Services.--++ Last updated 06/30/2021-+ # Configure single sign-on for Azure Virtual Desktop using AD FS
virtual-desktop Configure Host Pool Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-load-balancing.md
Title: Configure host pool load balancing in Azure Virtual Desktop description: How to configure the load balancing method for pooled host pools in Azure Virtual Desktop.-
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
Title: Configure personal desktop assignment in Azure Virtual Desktop - Azure description: How to configure automatic or direct assignment for an Azure Virtual Desktop personal desktop host pool.-+ Last updated 01/31/2024--++ # Configure personal desktop assignment
virtual-desktop Configure Validation Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-validation-environment.md
Title: Configure a host pool as a validation environment - Azure description: How to configure a host pool as a validation environment to test service updates before they roll out to production.-+ Last updated 03/01/2023--++ # Configure a host pool as a validation environment
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
Title: Analyze connection quality in Azure Virtual Desktop - Azure description: Connection quality for Azure Virtual Desktop users.-+ Last updated 01/05/2023--++ # Analyze connection quality in Azure Virtual Desktop
virtual-desktop Connection Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-quality-monitoring.md
Title: Collect and query Azure Virtual Desktop connection quality data (preview) - Azure description: How to set up and query the connection quality data table for Azure Virtual Desktop to diagnose connection issues.-+ Last updated 01/05/2023--++ # Collect and query connection quality data
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Title: Configure FSLogix profile container on Azure Virtual Desktop with Azure NetApp Files description: Learn how to configure FSLogix profile container on Azure Virtual Desktop with Azure NetApp Files.-+ Last updated 07/01/2020-++ # Configure FSLogix profile container on Azure Virtual Desktop with Azure NetApp Files
virtual-desktop Create Host Pools User Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-user-profile.md
Title: Azure Virtual Desktop FSLogix profile container share - Azure description: How to set up an FSLogix profile container for an Azure Virtual Desktop host pool using a virtual machine-based file share.-+ Last updated 04/08/2022-++ # Create a profile container for a host pool using a file share
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
Title: Customize feed for Azure Virtual Desktop users - Azure description: How to customize feed for Azure Virtual Desktop users using the Azure portal and PowerShell cmdlets.-+ Last updated 02/01/2024--++ # Customize the feed for Azure Virtual Desktop users
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
Title: Customize RDP properties - Azure description: How to customize RDP Properties for Azure Virtual Desktop.-+ Last updated 07/26/2022--++ # Customize Remote Desktop Protocol (RDP) properties for a host pool
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
Title: Data locations for Azure Virtual Desktop - Azure description: A brief overview of which locations Azure Virtual Desktop's data and metadata are stored in.-+ -+ Last updated 06/22/2022-+ # Data locations for Azure Virtual Desktop
virtual-desktop Delegated Access Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/delegated-access-virtual-desktop.md
Title: Delegated access in Azure Virtual Desktop - Azure description: How to delegate administrative capabilities on an Azure Virtual Desktop deployment, including examples.-+ Last updated 04/30/2020--++ # Delegated access in Azure Virtual Desktop
virtual-desktop Delete Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/delete-host-pool.md
Title: Delete Azure Virtual Desktop host pool - Azure description: How to delete a host pool in Azure Virtual Desktop.-+ Last updated 07/23/2021-++ # Delete a host pool
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
Title: Azure Virtual Desktop diagnostics log analytics - Azure description: How to use log analytics with the Azure Virtual Desktop diagnostics feature.-+ Last updated 05/27/2020-++ # Send diagnostic data to Log Analytics for Azure Virtual Desktop
virtual-desktop Fslogix Profile Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-containers.md
Title: User profile management for Azure Virtual Desktop with FSLogix profile containers description: Learn about using User profile management for Azure Virtual Desktop with FSLogix profile containers to manage user profiles and personalization.-+ Last updated 01/04/2021-++ # User profile management for Azure Virtual Desktop with FSLogix profile containers
virtual-desktop Insights Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-costs.md
Title: Estimate Azure Virtual Desktop Insights monitoring costs - Azure description: How to estimate costs and pricing for using Azure Virtual Desktop Insights.-+ Last updated 09/12/2023-++ # Estimate Azure Virtual Desktop monitoring costs
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
Title: Azure Virtual Desktop Insights glossary - Azure description: A glossary of terms and concepts related to Azure Virtual Desktop Insights.-+ Last updated 09/12/2023-++ # Azure Virtual Desktop Insights glossary
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
Title: Enable Insights to monitor Azure Virtual Desktop description: Learn how to enable Insights to monitor Azure Virtual Desktop and send diagnostic data to a Log Analytics workspace.-+ Last updated 09/12/2023-++ # Enable Insights to monitor Azure Virtual Desktop
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-office-on-wvd-master-image.md
Title: Install Office on a custom VHD image - Azure description: How to install and customize Office on an Azure Virtual Desktop custom image to Azure.-+ Last updated 05/08/2024-++ # Install Office on a custom VHD image
virtual-desktop Key Distribution Center Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/key-distribution-center-proxy.md
Title: Set up Kerberos Key Distribution Center proxy Azure Virtual Desktop - Azure description: How to set up an Azure Virtual Desktop host pool to use a Kerberos Key Distribution Center proxy.-+ Last updated 05/04/2021-++ # Configure a Kerberos Key Distribution Center proxy
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
Title: Install language packs on Windows 10 VMs in Azure Virtual Desktop - Azure description: How to install language packs for Windows 10 multi-session VMs in Azure Virtual Desktop.-+ Last updated 06/01/2022-++ # Add language packs to a Windows 10 multi-session image
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
Title: Manage session hosts with Microsoft Intune - Azure Virtual Desktop description: Recommended ways for you to manage your Azure Virtual Desktop session hosts.-+ Last updated 04/11/2023-++ # Manage session hosts with Microsoft Intune
virtual-desktop Manual Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manual-migration.md
Title: Migrate manually from Azure Virtual Desktop (classic) - Azure description: How to migrate manually from Azure Virtual Desktop (classic) to Azure Virtual Desktop.-+ Last updated 09/11/2020-++ # Migrate manually from Azure Virtual Desktop (classic)
virtual-desktop Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/move-resources.md
Title: Move Azure Virtual Desktop resources between regions - Azure description: How to move Azure Virtual Desktop resources between regions.-+ Last updated 05/13/2022-++ # Move Azure Virtual Desktop resource between regions
virtual-desktop Multimedia Redirection Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md
Title: Understanding multimedia redirection on Azure Virtual Desktop - Azure description: An overview of multimedia redirection on Azure Virtual Desktop.-+ Last updated 06/27/2024-++ # Understanding multimedia redirection for Azure Virtual Desktop
virtual-desktop Organization Internal External Commercial Purposes Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/organization-internal-external-commercial-purposes-recommendations.md
Title: Recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes description: Learn about recommendations for deploying Azure Virtual Desktop for internal or external commercial purposes, such as for your organization's workers, or for delivering software-as-a-service applications. --+++ Last updated 07/14/2021
virtual-desktop Proxy Server Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/proxy-server-support.md
Title: Proxy server guidelines Azure Virtual Desktop - Azure description: Some guidelines and recommendations for using proxy servers in Azure Virtual Desktop deployments.-+ Last updated 08/08/2022-++ # Proxy server guidelines for Azure Virtual Desktop
virtual-desktop Remotefx Graphics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remotefx-graphics-performance-counters.md
Title: Diagnose graphics performance issues Remote Desktop - Azure description: This article describes how to use RemoteFX graphics counters in remote desktop protocol sessions to diagnose performance issues with graphics in Azure Virtual Desktop.-+ Last updated 05/23/2019-++ # Diagnose graphics performance issues in Remote Desktop
virtual-desktop Scaling Automation Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/scaling-automation-logic-apps.md
Title: Scale session hosts using Azure Automation and Azure Logic Apps for Azure Virtual Desktop - Azure description: Learn about scaling Azure Virtual Desktop session hosts with Azure Automation and Azure Logic Apps.-+ Last updated 11/01/2023-++ # Scale session hosts using Azure Automation and Azure Logic Apps for Azure Virtual Desktop
virtual-desktop Session Host Status Health Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/session-host-status-health-checks.md
Title: Session host statuses and health checks in Azure Virtual Desktop description: Learn about the different statuses and health checks for session hosts in Azure Virtual Desktop.-+ Last updated 03/05/2024-++ # Session host statuses and health checks in Azure Virtual Desktop
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-customize-master-image.md
Title: Prepare and customize a VHD image of Azure Virtual Desktop - Azure description: How to prepare, customize and upload an Azure Virtual Desktop image to Azure.-+ Last updated 04/21/2023-++ # Prepare and customize a VHD image for Azure Virtual Desktop
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Title: Enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access - Azure description: How to enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access to help make it more secure.-+ Last updated 07/26/2024-++ # Enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
Title: Set up scaling of session hosts using Azure Automation and Azure Logic Apps for Azure Virtual Desktop - Azure description: How to automatically scale Azure Virtual Desktop session hosts with Azure Automation.-+ -+ Last updated 11/01/2023-+ # Set up scaling tool using Azure Automation and Azure Logic Apps for Azure Virtual Desktop
virtual-desktop Set Up Service Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-service-alerts.md
Title: Set up service alerts for Azure Virtual Desktop - Azure description: How to set up Azure Service Health to receive service notifications for Azure Virtual Desktop.-+ Last updated 06/11/2019-++ # Set up service alerts
virtual-desktop Start Virtual Machine Connect Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect-faq.md
Title: Azure Virtual Desktop Start VM Connect FAQ - Azure description: Frequently asked questions and best practices for using the Start VM on Connect feature.-+ Last updated 11/01/2023-++ # Start VM on Connect FAQ
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md
Title: Storage FSLogix profile container Azure Virtual Desktop - Azure description: Options for storing your Azure Virtual Desktop FSLogix profile on Azure Storage.-+ Last updated 10/27/2022-++ # Storage options for FSLogix profile containers in Azure Virtual Desktop
virtual-desktop Tag Virtual Desktop Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tag-virtual-desktop-resources.md
Title: Tag Azure Virtual Desktop resources - Azure description: What tagging is, and how you can use it to manage Azure service costs in Azure Virtual Desktop.-+ Last updated 11/12/2021-++ # Tag Azure Virtual Desktop resources to manage costs
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Use Microsoft Teams on Azure Virtual Desktop - Azure description: How to use Microsoft Teams on Azure Virtual Desktop.-+ Last updated 06/27/2024-++ # Use Microsoft Teams on Azure Virtual Desktop
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure description: Supported features for Microsoft Teams on Azure Virtual Desktop.-+ Last updated 07/26/2023-++ # Supported features for Microsoft Teams on Azure Virtual Desktop
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
Title: Azure Virtual Desktop terminology - Azure description: Learn about the basic elements of Azure Virtual Desktop, like host pools, application groups, and workspaces.-+ Last updated 11/01/2023-++ # Azure Virtual Desktop terminology
virtual-desktop Troubleshoot Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-authorization.md
Title: Troubleshoot Azure Files Virtual Desktop - Azure description: How to troubleshoot issues with Azure Files in Azure Virtual Desktop.-+ Last updated 08/19/2021-++ # Troubleshoot Azure Files authentication with Active Directory
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
Title: Troubleshoot connections to Microsoft Entra joined VMs - Azure Virtual Desktop description: How to resolve issues when connecting to Microsoft Entra joined VMs in Azure Virtual Desktop.-+ Last updated 08/24/2022-++ # Troubleshoot connections to Microsoft Entra joined VMs
virtual-desktop Troubleshoot Connection Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-connection-quality.md
Title: Troubleshoot Azure Virtual Desktop connection quality description: How to troubleshoot connection quality issues in Azure Virtual Desktop.-+ Last updated 09/26/2022-++ # Troubleshooting connection quality in Azure Virtual Desktop
virtual-desktop Troubleshoot Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-device-redirections.md
Title: Device redirections in Azure Virtual Desktop - Azure description: How to resolve issues with device redirections in Azure Virtual Desktop.-+ Last updated 11/14/2023-++ # Troubleshoot device redirections for Azure Virtual Desktop
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
Title: Troubleshoot Monitor Azure Virtual Desktop - Azure description: How to troubleshoot issues with Azure Virtual Desktop Insights.-+ Last updated 09/12/2023-++ # Troubleshoot Azure Virtual Desktop Insights
virtual-desktop Troubleshoot Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-management-issues.md
Title: Azure Virtual Desktop management issues - Azure description: Common management issues in Azure Virtual Desktop and how to solve them.-+ Last updated 06/19/2021-++ # Management issues
virtual-desktop Troubleshoot Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-multimedia-redirection.md
Title: Troubleshoot Multimedia redirection on Azure Virtual Desktop - Azure description: Known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop.-+ Last updated 06/27/2024-++ # Troubleshoot multimedia redirection for Azure Virtual Desktop
virtual-desktop Troubleshoot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-powershell.md
Title: Azure Virtual Desktop PowerShell - Azure description: How to troubleshoot issues with PowerShell when you set up an Azure Virtual Desktop environment.-+ Last updated 06/05/2020--++ # Azure Virtual Desktop PowerShell
virtual-desktop Troubleshoot Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-quickstart.md
Title: Troubleshoot quickstart Azure Virtual Desktop description: How to troubleshoot issues with the Azure Virtual Desktop quickstart.-+ Last updated 08/06/2021-++ # Troubleshoot the Azure Virtual Desktop quickstart
virtual-desktop Troubleshoot Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-service-connection.md
Title: Troubleshoot service connection Azure Virtual Desktop - Azure description: How to resolve issues while setting up service connections in an Azure Virtual Desktop tenant environment.-+ Last updated 10/15/2020-++ # Azure Virtual Desktop service connections
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
Title: Azure Virtual Desktop environment host pool creation - Azure description: How to troubleshoot and resolve tenant and host pool issues during setup of an Azure Virtual Desktop environment.-+ -+ Last updated 02/17/2021-+ # Host pool creation
virtual-desktop Troubleshoot Set Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md
Title: Azure Virtual Desktop troubleshooting overview - Azure description: An overview for troubleshooting issues while setting up an Azure Virtual Desktop environment.-+ Last updated 10/14/2021-++ # Troubleshooting overview, feedback, and support for Azure Virtual Desktop
virtual-desktop Troubleshoot Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-teams.md
Title: Troubleshoot Microsoft Teams on Azure Virtual Desktop - Azure description: Known issues and troubleshooting instructions for Teams on Azure Virtual Desktop.-+ Last updated 03/07/2023-++ # Troubleshoot Microsoft Teams for Azure Virtual Desktop
virtual-desktop Troubleshoot Vm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-vm-configuration.md
Title: Troubleshoot Azure Virtual Desktop session host - Azure description: How to resolve issues when you're configuring Azure Virtual Desktop session host virtual machines.-+ Last updated 05/11/2020-++ # Session host virtual machine configuration
virtual-desktop Configure Host Pool Personal Desktop Assignment Type 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-host-pool-personal-desktop-assignment-type-2019.md
Title: Azure Virtual Desktop (classic) personal desktop assignment type - Azure description: How to configure the assignment type for an Azure Virtual Desktop (classic) personal desktop host pool.-+ Last updated 05/22/2020-++ # Configure the personal desktop host pool assignment type for Azure Virtual Desktop (classic)
virtual-desktop Connect Android 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-android-2019.md
Title: Connect to Azure Virtual Desktop (classic) from Android - Azure description: How to connect to Azure Virtual Desktop (classic) using the Android client.-+ Last updated 03/30/2020-++ # Connect to Azure Virtual Desktop (classic) with the Android client
virtual-desktop Connect Ios 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-ios-2019.md
Title: Connect to Azure Virtual Desktop (classic) from iOS - Azure description: How to connect to Azure Virtual Desktop (classic) using the iOS client.-+ Last updated 03/30/2020-++ # Connect to Azure Virtual Desktop (classic) with the iOS client
virtual-desktop Connect Macos 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-macos-2019.md
Title: Connect to Azure Virtual Desktop (classic) from macOS - Azure description: How to connect to Azure Virtual Desktop (classic) using the macOS client.-+ Last updated 03/30/2020-++ # Connect to Azure Virtual Desktop (classic) with the macOS client
virtual-desktop Connect Web 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-web-2019.md
Title: Connect Azure Virtual Desktop (classic) web client - Azure description: How to connect to Azure Virtual Desktop (classic) using the web client.-+ Last updated 03/21/2022-++ # Connect to Azure Virtual Desktop (classic) with the web client
virtual-desktop Connect Windows 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-windows-2019.md
Title: Connect to Azure Virtual Desktop (classic) Windows 10 or 7 - Azure description: How to connect to Azure Virtual Desktop (classic) using the Windows Desktop client.-+ Last updated 08/08/2022-++ # Connect with the Windows Desktop (classic) client
virtual-desktop Create Host Pools Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-arm-template.md
Title: Azure Virtual Desktop (classic) host pool Azure Resource Manager - Azure description: How to create a host pool in Azure Virtual Desktop (classic) with an Azure Resource Manager template.-+ -+ Last updated 03/30/2020-+ # Create a host pool in Azure Virtual Desktop (classic) with an Azure Resource Manager template
virtual-desktop Create Host Pools Azure Marketplace 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-azure-marketplace-2019.md
Title: Azure Virtual Desktop (classic) host pool Azure Marketplace - Azure description: How to create an Azure Virtual Desktop (classic) host pool by using the Azure Marketplace.-+ Last updated 03/31/2021-++ # Tutorial: Create a host pool in Azure Virtual Desktop (classic)
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
Title: Create Azure Virtual Desktop (classic) host pool PowerShell - Azure description: How to create a host pool in Azure Virtual Desktop (classic) with PowerShell cmdlets.-+ Last updated 08/08/2022-++ # Create a host pool in Azure Virtual Desktop (classic) with PowerShell
virtual-desktop Create Service Principal Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-service-principal-role-powershell.md
Title: Azure Virtual Desktop (classic) service principal role assignment - Azure description: How to create service principals and assign roles by using PowerShell in Azure Virtual Desktop (classic).-+ Last updated 05/27/2020--++ # Tutorial: Create service principals and role assignments with PowerShell in Azure Virtual Desktop (classic)
virtual-desktop Create Validation Host Pool 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-validation-host-pool-2019.md
Title: Azure Virtual Desktop (classic) host pool service updates - Azure description: Learn to create a validation host pool in Azure Virtual Desktop (classic) to monitor service updates before rolling out updates to production.-+ Last updated 05/27/2020-++ # Tutorial: Create a host pool to validate service updates in Azure Virtual Desktop (classic)
virtual-desktop Customize Feed Virtual Desktop Users 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-feed-virtual-desktop-users-2019.md
Title: Customize feed for Azure Virtual Desktop (classic) users - Azure description: How to customize feed for Azure Virtual Desktop (classic) users with PowerShell cmdlets.-+ Last updated 03/30/2020-++ # Customize feed for Azure Virtual Desktop (classic) users
virtual-desktop Customize Rdp Properties 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-rdp-properties-2019.md
Title: Customize RDP Properties with PowerShell Azure Virtual Desktop (classic) - Azure description: How to customize RDP Properties for Azure Virtual Desktop (classic) with PowerShell cmdlets.-+ Last updated 03/30/2020-++ # Customize Remote Desktop Protocol properties for an Azure Virtual Desktop (classic) host pool
virtual-desktop Data Locations 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/data-locations-2019.md
Title: Data locations for Azure Virtual Desktop (classic) - Azure description: A brief overview of which locations Azure Virtual Desktop (classic) data and metadata are stored in.-+ Last updated 03/30/2020-++ # Data locations for Azure Virtual Desktop (classic)
virtual-desktop Delegated Access Virtual Desktop 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/delegated-access-virtual-desktop-2019.md
Title: Delegated access in Azure Virtual Desktop (classic) - Azure description: How to delegate administrative capabilities on an Azure Virtual Desktop (classic) deployment, including examples.-+ Last updated 03/30/2020-++ # Delegated access in Azure Virtual Desktop (classic)
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic).-+ -+ Last updated 12/15/2020-+ # Deploy the Azure Virtual Desktop (classic) diagnostics tool
virtual-desktop Diagnostics Log Analytics 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md
Title: Azure Virtual Desktop (classic) diagnostics log analytics - Azure description: How to use log analytics with the Azure Virtual Desktop (classic) diagnostics feature.-+ Last updated 03/30/2020-++ # Use Log Analytics for the diagnostics feature in Azure Virtual Desktop (classic)
virtual-desktop Diagnostics Role Service 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-role-service-2019.md
Title: Azure Virtual Desktop (classic) diagnose issues - Azure description: How to use the Azure Virtual Desktop (classic) diagnostics feature to diagnose issues.-+ Last updated 05/13/2020-++ # Identify and diagnose issues in Azure Virtual Desktop (classic)
virtual-desktop Environment Setup 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/environment-setup-2019.md
Title: Azure Virtual Desktop (classic) terminology - Azure description: The terminology used for basic elements of an Azure Virtual Desktop (classic) environment.-+ Last updated 03/30/2020-++ # Azure Virtual Desktop (classic) terminology
virtual-desktop Expand Existing Host Pool 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/expand-existing-host-pool-2019.md
Title: Expand existing Azure Virtual Desktop (classic) host pool with new session hosts - Azure description: How to expand an existing host pool with new session hosts in Azure Virtual Desktop (classic).-+ -+ Last updated 03/31/2021-+ # Expand an existing host pool with new session hosts in Azure Virtual Desktop (classic)
virtual-desktop Host Pool Load Balancing 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/host-pool-load-balancing-2019.md
Title: Azure Virtual Desktop (classic) host pool load-balancing - Azure description: Host pool load-balancing methods for an Azure Virtual Desktop environment.-+ Last updated 03/30/2020-++ # Host pool load-balancing methods in Azure Virtual Desktop (classic)
virtual-desktop Manage App Groups 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-app-groups-2019.md
Title: Manage application groups for Azure Virtual Desktop (classic) - Azure description: Learn how to set up Azure Virtual Desktop (classic) tenants in Microsoft Entra ID.-+ Last updated 08/16/2021-++ # Tutorial: Manage application groups for Azure Virtual Desktop (classic)
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
Title: Deploy a management tool for Azure Virtual Desktop (classic) using service principal - Azure description: How to deploy the management tool for Azure Virtual Desktop (classic) using PowerShell.-+ Last updated 03/30/2020--++ # Deploy an Azure Virtual Desktop (classic) management tool with PowerShell
virtual-desktop Manage Resources Using Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui.md
Title: Deploy management tool with an Azure Resource Manager template - Azure description: How to install a user interface tool with an Azure Resource Manager template to manage Azure Virtual Desktop (classic) resources.-+ -+ Last updated 03/30/2020-+ # Deploy an Azure Virtual Desktop (classic) management tool with an Azure Resource Manager template
virtual-desktop Publish Apps 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/publish-apps-2019.md
Title: Publish built-in apps in Azure Virtual Desktop (classic) - Azure description: How to publish built-in apps in Azure Virtual Desktop (classic).-+ Last updated 03/30/2020-++ # Publish built-in apps in Azure Virtual Desktop (classic)
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/set-up-scaling-script.md
Title: Scale session hosts Azure Automation Azure Virtual Desktop (classic) - Azure description: How to automatically scale Azure Virtual Desktop (classic) session hosts with Azure Automation.-+ Last updated 03/30/2020--++ # Scale Azure Virtual Desktop (classic) session hosts using Azure Automation
virtual-desktop Set Up Service Alerts 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/set-up-service-alerts-2019.md
Title: Set up service alerts for Azure Virtual Desktop (classic) - Azure description: How to set up Azure Service Health to receive service notifications for Azure Virtual Desktop (classic).-+ Last updated 05/27/2020-++ # Tutorial: Set up service alerts for Azure Virtual Desktop (classic)
virtual-desktop Tenant Setup Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md
Title: Create a tenant in Azure Virtual Desktop (classic) - Azure description: Describes how to set up Azure Virtual Desktop (classic) tenants in Microsoft Entra ID.-+ Last updated 03/30/2020-++ # Tutorial: Create a tenant in Azure Virtual Desktop (classic)
virtual-desktop Troubleshoot Management Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-management-tool.md
Title: Azure Virtual Desktop (classic) management tool - Azure description: How to troubleshoot issues with the Azure Virtual Desktop (classic) management tool.-+ Last updated 03/30/2020-++ # Troubleshoot the Azure Virtual Desktop (classic) management tool
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
Title: Azure Virtual Desktop (classic) PowerShell - Azure description: How to troubleshoot issues with PowerShell when you set up an Azure Virtual Desktop (classic) tenant environment.-+ Last updated 04/05/2022-++ # Azure Virtual Desktop (classic) PowerShell
virtual-desktop Troubleshoot Service Connection 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-service-connection-2019.md
Title: Troubleshoot service connection Azure Virtual Desktop (classic) - Azure description: How to resolve issues when you set up client connections in an Azure Virtual Desktop (classic) tenant environment.-+ Last updated 05/20/2020-++ # Azure Virtual Desktop (classic) service connections
virtual-desktop Troubleshoot Set Up Issues 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md
Title: Azure Virtual Desktop (classic) tenant host pool creation - Azure description: How to troubleshoot and resolve tenant and host pool issues during setup of an Azure Virtual Desktop (classic) tenant environment.-+ -+ Last updated 03/30/2020-+ # Tenant and host pool creation in Azure Virtual Desktop (classic)
virtual-desktop Troubleshoot Set Up Overview 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md
Title: Azure Virtual Desktop (classic) troubleshooting overview - Azure description: An overview for troubleshooting issues while setting up an Azure Virtual Desktop (classic) tenant environment.-+ Last updated 03/30/2020-++ # Azure Virtual Desktop (classic) troubleshooting overview, feedback, and support
virtual-desktop Troubleshoot Vm Configuration 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-vm-configuration-2019.md
Title: Troubleshoot Azure Virtual Desktop (classic) session host - Azure description: How to resolve issues when you're configuring Azure Virtual Desktop (classic) session host virtual machines.-+ Last updated 05/11/2020-++ # Azure Virtual Desktop (classic) session host virtual machine configuration
virtual-desktop Whats New Client Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md
Title: What's new in the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for Android and Chrome OS --+++ Last updated 04/11/2024
virtual-desktop Whats New Client Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-ios-ipados.md
Title: What's new in the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for iOS and iPadOS --+++ Last updated 07/08/2024
virtual-desktop Whats New Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-multimedia-redirection.md
Title: What's new in multimedia redirection (MMR)? - Azure Virtual Desktop description: New features and product updates for multimedia redirection for Azure Virtual Desktop.-+ Previously updated : 01/23/2024- Last updated : 08/12/2024++ # What's new in multimedia redirection?
The following table shows the latest available version of the MMR extension for
| Release | Latest version | Download |
||-|-|
-| Public | 1.0.2311.2004 | [MMR extension](https://aka.ms/avdmmr/msi) |
+| Public | 1.0.2024.4003 | [MMR extension](https://aka.ms/avdmmr/msi) |
+
+## Updates for version 1.0.2024.4003
+
+*Published: July 23, 2024*
+
+In this release, we've made the following changes:
+
+- Fixed a deadlock issue and improved telemetry processing.
## Updates for version 1.0.2311.2004
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
Title: Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desktop - Azure description: How to install language packs for Windows 11 Enterprise VMs in Azure Virtual Desktop.-+ Last updated 10/20/2023-++ # Add languages to a Windows 11 Enterprise image
virtual-machine-scale-sets Alert Rules Automatic Repairs Service State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/alert-rules-automatic-repairs-service-state.md
description: Learn how to use Azure Alert Rules to get notified of changes to Au
-+ Last updated 06/14/2024
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-cli.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-+ Last updated 06/14/2024
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-+ Last updated 06/14/2024
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-+ Last updated 06/14/2024
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
description: Learn how to create a Virtual Machine Scale Set in Flexible orchest
-+ Last updated 06/14/2024
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
description: Learn about the API differences between the Uniform and Flexible or
-+ Last updated 06/14/2024
virtual-machine-scale-sets Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/overview.md
description: Learn about Azure Virtual Machine Scale Sets and how to automatical
-+ Last updated 06/14/2024
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
description: Lists Azure Policy built-in policy definitions for Azure Virtual Ma
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
description: Learn how to quickly create a Windows virtual machine scale with Bi
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
description: Get started with your deployments by learning how to quickly create
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
description: Get started with your deployments by learning how to quickly create
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-powershell.md
description: Get started with your deployments by learning how to quickly create
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-linux.md
description: Learn how to quickly create a Linux virtual machine scale with an A
-+ Last updated 06/14/2024
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-windows.md
description: Learn how to quickly create a Windows virtual machine scale with an
-+ Last updated 06/14/2024
virtual-machine-scale-sets Standby Pools Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/standby-pools-create.md
Title: Create a standby pool for Virtual Machine Scale Sets (Preview)
description: Learn how to create a standby pool to reduce scale-out latency with Virtual Machine Scale Sets. -+ Last updated 06/14/2024
virtual-machine-scale-sets Standby Pools Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/standby-pools-faq.md
Title: Frequently asked questions about standby pools for Virtual Machine Scale
description: Get answers to frequently asked questions for standby pools on Virtual Machine Scale Sets. -+ Last updated 06/14/2024
virtual-machine-scale-sets Standby Pools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/standby-pools-overview.md
Title: Standby pools for Virtual Machine Scale Sets
description: Learn how to utilize standby pools to reduce scale-out latency with Virtual Machine Scale Sets. -+ Last updated 06/14/2024
virtual-machine-scale-sets Standby Pools Update Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/standby-pools-update-delete.md
Title: Delete or update a standby pool for Virtual Machine Scale Sets
description: Learn how to delete or update a standby pool for Virtual Machine Scale Sets. -+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Connect To Instances Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-connect-to-instances-cli.md
description: Learn how to use the Azure CLI to connect to instances in your Virt
-+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Connect To Instances Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-connect-to-instances-powershell.md
description: Learn how to use Azure PowerShell to connect to instances in your V
-+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
description: Learn how to use the Azure CLI to create a Virtual Machine Scale Se
-+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Create And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-powershell.md
description: Learn how to use Azure PowerShell to create a Virtual Machine Scale
-+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Modify Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
description: Learn how to modify and update an Azure Virtual Machine Scale Set A
-+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Modify Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-powershell.md
description: Learn how to modify and update an Azure Virtual Machine Scale Set u
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md
description: How to attach or detach a virtual machine to or from a Virtual Mach
-+ Last updated 06/14/2024
# Attach or detach a Virtual Machine to or from a Virtual Machine Scale Set
-## Attaching a VM to a Virtual Machine Scale Set
+## Attaching a Virtual Machine to a Virtual Machine Scale Set
> [!IMPORTANT]
-> You can only attach Virtual Machines (VMs) to a Virtual Machine Scale Set in **Flexible orchestration mode**. For more information, see [Orchestration modes for Virtual Machine Scale Sets](./virtual-machine-scale-sets-orchestration-modes.md).
+> You can only attach Virtual Machines to a Virtual Machine Scale Set in **Flexible orchestration mode**. For more information, see [Orchestration modes for Virtual Machine Scale Sets](./virtual-machine-scale-sets-orchestration-modes.md).
-There are times where you need to attach a virtual machine to a Virtual Machine Scale Set to benefit from the scale, availability, and flexibility that comes with scale sets. There are two ways to attach VMs to scale sets: manually create a new standalone VM in the scale set or attach an existing VM to the scale set.
+There are times where you need to attach a virtual machine (VM) to a Virtual Machine Scale Set to benefit from the scale, availability, and flexibility that comes with scale sets. There are two ways to attach VMs to scale sets: manually create a new standalone VM in the scale set or attach an existing VM to the scale set.
You can attach a new standalone VM to a scale set when you need a different configuration on a specific VM than what's defined in the scaling profile, or when the scale set doesn't have a scaling profile. Manually attaching VMs gives you full control over instance naming and placement into a specific availability zone or fault domain. The VM doesn't have to match the configuration in the scale set's scaling profile, so you can specify parameters like operating system, networking configuration, on-demand or Spot, and VM size. You can attach an existing VM to an existing Virtual Machine Scale Set by specifying which scale set you would like to attach to. The VM doesn't have to be the same as the VMs already running in the scale set, meaning it can have a different operating system, network configuration, priority, disk, and more.
-### Attach a new VM to a Virtual Machine Scale Set
+### Attach a new Virtual Machine to a Virtual Machine Scale Set
Attach a virtual machine to a Virtual Machine Scale Set at the time of VM creation by specifying the `virtualMachineScaleSet` property.
Attach a virtual machine to a Virtual Machine Scale Set at the time of VM creati
#### [Azure portal](#tab/portal-1) 1. Go to **Virtual Machines**.
-1. Select **Create**
+1. Select **Create**.
2. Select **Azure virtual machine**. 3. In the **Basics** tab, open the **Availability options** dropdown and select **Virtual Machine Scale Set**. 4. In the **Virtual Machine Scale Set** dropdown, select the scale set to which you want to add this virtual machine.
New-AzVm `
``` --
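If you prefer the command line, a minimal Azure CLI sketch of the same create-and-attach flow might look like the following. The resource group, VM, image, and scale set names are placeholders, and it assumes the scale set already exists in Flexible orchestration mode.

```azurecli
# Create a new standalone VM and assign it to an existing Flexible
# orchestration scale set at creation time via the --vmss parameter.
# "myResourceGroup", "myNewVM", and "myScaleSet" are placeholder names.
az vm create \
  --resource-group myResourceGroup \
  --name myNewVM \
  --image Ubuntu2204 \
  --vmss myScaleSet \
  --generate-ssh-keys
```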
-### Exceptions to attaching a new VM to a Virtual Machine Scale Set
+### Exceptions to attaching a new Virtual Machine to a Virtual Machine Scale Set
- The VM must be in the same resource group as the scale set.-- If the scale set is regional (no availability zones specified), the virtual machine must also be regional. -- If the scale set is zonal or spans multiple zones (one or more availability zones specified), the virtual machine must be created in one of the zones spanned by the scale set. For example, you can't create a virtual machine in Zone 1, and place it in a scale set that spans Zones 2 and 3.-- The scale set must be in Flexible orchestration mode, and the singlePlacementGroup property must be false.
+- Regional virtual machines (no availability zones specified) can be attached to regional scale sets.
+- Zonal virtual machines can be attached to scale sets that specify one or more zones. The virtual machine must be in one of the zones spanned by the scale set. For example, you can't create a virtual machine in Zone 1 and place it in a scale set that spans Zones 2 and 3.
+- The scale set must be in Flexible orchestration mode, and the `singlePlacementGroup` property must be `false`.
-### Attach an existing VM to a Virtual Machine Scale Set (Preview)
+### Attach an existing Virtual Machine to a Virtual Machine Scale Set
Attach an existing virtual machine to a Virtual Machine Scale Set after VM creation by specifying the `virtualMachineScaleSet` property. Attaching an existing VM to a scale set with a fault domain count of 1 doesn't require downtime.
-#### Enroll in the Preview
-
-Register for the `SingleFDAttachDetachVMToVmss` feature flag using the [az feature register](/cli/azure/feature#az-feature-register) command:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.Compute" --name "SingleFDAttachDetachVMToVmss"
-```
-
-It takes a few minutes for the feature to register. Verify the registration status by using the [az feature show](/cli/azure/feature#az-feature-register) command:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.Compute" --name "SingleFDAttachDetachVMToVmss"
-```
-- > [!NOTE] > Attaching a virtual machine to Virtual Machine Scale Set doesn't by itself update any VM networking parameters such as load balancers. If you would like this virtual machine to receive traffic from any load balancer, you must manually configure the VM network interface to receive traffic from the load balancer. Learn more about [Load balancers](../virtual-network/network-overview.md#load-balancers). >
az feature show --namespace "Microsoft.Compute" --name "SingleFDAttachDetachVMTo
2. Select the name of the virtual machine you'd like to attach to your scale set. 3. Under **Settings** select **Availability + scaling**. 4. In the **Scaling** section, select the **Get started** button. If the button is grayed out, your VM currently doesn't meet the requirements to be attached to a scale set.
-5. The **Attach to a VMSS** blade will appear on the right side of the page. Select the scale set you'd like to attach the VM to in the **Select a VMSS dropdown**.
+5. In the **Attach to a VMSS** blade on the right side of the page, select the scale set you'd like to attach the VM to in the **Select a VMSS** dropdown.
6. Select the **Attach** button at the bottom to attach the VM. #### [Azure CLI](#tab/cli-2)
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachineScaleS
```
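As a rough Azure CLI equivalent, the generic `--set` syntax can point the VM's `virtualMachineScaleSet.id` property at the target scale set. The resource names below are placeholders, and the exact property path is an assumption based on the `virtualMachineScaleSet` property described above; treat this as a sketch, not the canonical command.

```azurecli
# Look up the target scale set's full resource ID, then attach the
# existing VM by setting its virtualMachineScaleSet.id property.
# "myResourceGroup", "myScaleSet", and "myExistingVM" are placeholders.
vmssId=$(az vmss show --resource-group myResourceGroup --name myScaleSet --query id --output tsv)

az vm update \
  --resource-group myResourceGroup \
  --name myExistingVM \
  --set virtualMachineScaleSet.id="$vmssId"
```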
-### Limitations for attaching an existing VM to a scale set
+### Limitations for attaching an existing Virtual Machine to a scale set
- The scale set must use Flexible orchestration mode.-- The scale set must have a `platformFaultDomainCount` of *1*.
+- The scale set must have a `platformFaultDomainCount` of **1**.
- The VM and scale set must be in the same resource group. - The VM and target scale set must both be zonal, or they must both be regional. You can't attach a zonal VM to a regional scale set. - The VM can't be in a self-defined availability set.
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachineScaleS
- The VM must have a managed disk. - The scale set must have `singlePlacementGroup` set to `False`. - Scale sets created without a scaling profile default to `singlePlacementGroup` set to `null`. To attach VMs to a scale set without a scaling profile, `singlePlacementGroup` needs to be set to `False` at the time of the scale set's creation. -- The VM can't be an RDMA capable HB-series or N-series VM.
+- The VM can't be a Remote Direct Memory Access (RDMA) capable HB-series or N-series VM.
-## Detaching a VM from a Virtual Machine Scale Set (Preview)
+## Detaching a Virtual Machine from a Virtual Machine Scale Set
Should you need to detach a VM from a scale set, follow the steps below.
-> [!NOTE]
-> Detaching VMs created by the scale set will require the VM to be `Stopped` before the detach. VMs that were previously attached to the scale set can be detached while running.
- ### [Azure portal](#tab/portal-3) 1. Go to **Virtual Machines**.
-2. If your VM was created by the scale set, ensure the VM is `Stopped`. If the VM was created as a standalone VM, you can continue regardless of if the VM is `Running` or `Stopped`.
3. Select the name of the virtual machine you'd like to detach from your scale set. 4. Under **Settings** select **Availability + scaling**. 5. Select the **Detach from the VMSS** button at the top of the page.
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachin
```
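A corresponding Azure CLI sketch clears the association by setting `virtualMachineScaleSet.id` back to `null` (not the strings `""` or `"null"`, per the troubleshooting table below). This assumes the generic `--set` syntax parses a bare `null` as a JSON null; names are placeholders.

```azurecli
# Detach the VM by clearing its scale set association back to null.
# "myResourceGroup" and "myExistingVM" are placeholder names.
az vm update \
  --resource-group myResourceGroup \
  --name myExistingVM \
  --set virtualMachineScaleSet.id=null
```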
-### Limitations for detaching a VM from a scale set
+### Limitations for detaching a Virtual Machine from a scale set
- The scale set must use Flexible orchestration mode. - The scale set must have a `platformFaultDomainCount` of **1**.-- VMs created by the scale set must be `Stopped` prior to being detached. - Scale sets created without a scaling profile default to `singlePlacementGroup` set to `null`. To detach VMs from a scale set without a scaling profile, `singlePlacementGroup` needs to be set to `False`. - The VM can't be an RDMA capable HB-series or N-series VM.
-## Moving VMs between scale sets (Preview)
+## Moving Virtual Machines between scale sets
To move a VM from one scale set to another, use the following steps:
-1. [Detach](#detaching-a-vm-from-a-virtual-machine-scale-set-preview) the VM from scale set A.
-2. Once the detach completes, [attach](#attach-an-existing-vm-to-a-virtual-machine-scale-set-preview) the VM to scale set B.
+1. [Detach](#detaching-a-virtual-machine-from-a-virtual-machine-scale-set) the VM from scale set A.
+2. Once the detach completes, [attach](#attach-an-existing-virtual-machine-to-a-virtual-machine-scale-set) the VM to scale set B.
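Chained together, a move is just those two calls in sequence, as in this placeholder sketch; the same attach and detach limitations apply to both scale sets.

```azurecli
# Move a VM from scale set A to scale set B: detach first, then attach.
# All resource names are placeholders.
az vm update -g myResourceGroup -n myVM --set virtualMachineScaleSet.id=null

vmssBId=$(az vmss show -g myResourceGroup -n myScaleSetB --query id -o tsv)
az vm update -g myResourceGroup -n myVM --set virtualMachineScaleSet.id="$vmssBId"
```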
### Limitations
-The limitations for VMs to be [attached](#limitations-for-attaching-an-existing-vm-to-a-scale-set) or [detached](#limitations-for-detaching-a-vm-from-a-scale-set) to or from a scale set remain the same.
+The limitations for VMs to be [attached](#limitations-for-attaching-an-existing-virtual-machine-to-a-scale-set) or [detached](#limitations-for-detaching-a-virtual-machine-from-a-scale-set) to or from a scale set remain the same.
## Troubleshooting
-### Attach an existing VM to an existing scale set troubleshooting (Preview)
+### Attach an existing Virtual Machine to an existing scale set troubleshooting
| Error Message | Description | Troubleshooting Options | | - | -- | - |
-| Referenced Virtual Machine Scale Set '{vmssName}' does not support attaching an existing Virtual Machine to it. For more information, see https://aka.ms/vmo/attachdetach. | The subscription isn't enrolled in the VM Attach Detach Preview. | Ensure that your subscription is enrolled in the feature. Reference the [documentation](#enroll-in-the-preview) to check if you're enrolled. |
| The Virtual Machine Scale Set '{vmssUri}' referenced by the Virtual Machine does not exist. | The scale set resource doesn't exist, or isn't in Flexible Orchestration Mode. | Check to see if the scale set exists. If it does, check if it's using Uniform Orchestration Mode. | | This operation is not allowed because referenced Virtual Machine Scale Set '{vmssName}' does not have orchestration mode set to 'Flexible'. | The scale set isn't in Flexible Orchestration Mode. | Try attaching to another scale set with Flexible Orchestration Mode enabled. | | Referenced Virtual Machine '{vmName}' belongs to an Availability Set and attaching to a Virtual Machine Scale Set is not supported. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingExistingAvsetVM`: The VM that you attempted to attach is part of an Availability Set and can't be attached to a scale set. | VMs in an Availability Set can't be attached to a scale set. |
-| Referenced Virtual Machine Scale Set '{vmssName}' does not support attaching an existing Virtual Machine to it because the Virtual Machine Scale Set has more than 1 fault domains. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingExistingVMMultiFD`: The attach of the VM failed because the VM was trying to attach to a scale set with a platform fault domain count of more than 1.| VMs can only be attached to scale sets with a `platform fault domain count` of 1. Try attaching to a scale set with a platform fault domain count of 1, rather than a scale set with a platform fault domain count of more than 1. |
+| Referenced Virtual Machine Scale Set '{vmssName}' does not support attaching an existing Virtual Machine to it because the Virtual Machine Scale Set has more than 1 fault domains. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingExistingVMMultiFD`: The operation to attach the VM failed because the VM was trying to attach to a scale set with a platform fault domain count of more than one.| VMs can only be attached to scale sets with a `platform fault domain count` of 1. Try attaching to a scale set with a platform fault domain count of one. |
| Using a Virtual Machine '{vmName}' with unmanaged disks and attaching it to a Virtual Machine Scale Set is not supported. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingExistingVMUnmanagedDisk`: VMs with unmanaged disks can't be attached to a scale set. | To attach a VM with a disk to the scale set, ensure that the VM is using a managed disk. Visit the [documentation](../virtual-machines/windows/convert-unmanaged-to-managed-disks.md) to learn how to migrate from an unmanaged disk to a managed disk. |
-| Referenced Virtual Machine '{vmName}' belongs to a proximity placement group (PPG) and attaching to a Virtual Machine Scale Set is not supported. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingPPGVM`: The attach of the VM failed because the VM is part of a Proximity Placement Group. | VMs from a Proximity Placement Group can't be attached to a scale set. [Remove the VM from the Proximity Placement Group](../virtual-machines/windows/proximity-placement-groups.md#move-an-existing-vm-out-of-a-proximity-placement-group) and then try to attach to the scale set. See the documentation to learn about how to move a VM out of a Proximity Placement Group. |
+| Referenced Virtual Machine '{vmName}' belongs to a proximity placement group (PPG) and attaching to a Virtual Machine Scale Set is not supported. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportAttachingPPGVM`: The operation to attach the VM failed because the VM is part of a Proximity Placement Group. | VMs from a Proximity Placement Group can't be attached to a scale set. [Remove the VM from the Proximity Placement Group](../virtual-machines/windows/proximity-placement-groups.md#move-an-existing-vm-out-of-a-proximity-placement-group) and then try to attach to the scale set. See the documentation to learn about how to move a VM out of a Proximity Placement Group. |
| PropertyChangeNotAllowed Changing property virtualMachineScaleSet.id isn't allowed. | The Virtual Machine Scale Set ID can't be changed to a different Virtual Machine Scale Set ID without detaching the VM from the scale set first. | Detach the VM from the Virtual Machine Scale Set, and then attach to the new scale set. |
+| Virtual Machine Scale Set '{0}' does not support attaching an existing Virtual Machine to it because the Virtual Machine Scale Set has single placement group set to true or does not have single placement group explicitly set to false. Please see https://aka.ms/vmo/attachdetach for more information. | `VmssDoesNotSupportAttachingWithSpg`: The attach operation failed because the scale set has single placement group set to `true`, or doesn't have it explicitly set to `false`. | VMs can only be attached to scale sets with `singlePlacementGroup` set to `false`.|
+| Virtual Machine Scale Set does not support attaching Virtual Machine {0} because it uses VM Size {1} which can be only be used with a single placement group enabled Virtual Machine Scale Set. Please see https://aka.ms/vmo/attachdetach for more information. | The VM being attached is of a size that requires the scale set to use a Single Placement Group. | VMs requiring Single Placement Group can't be attached to a scale set. |
|Virtual Machine Scale Set does not support attaching RDMA capable VM Sizes such as {0}. Please see https://aka.ms/vmo/attachdetach for more information. | RDMA capable VMs can't be attached to the scale set. The attach failed because the VM is RDMA capable. | Only VMs that aren't RDMA enabled can be attached to the scale set. |
-### Detach a VM from a scale set troubleshooting (Preview)
+### Detach a Virtual Machine from a scale set troubleshooting
| Error Message | Description | Troubleshooting options | | -- | - | - |
-| Virtual Machine Scale Set does not support detaching of Virtual Machines from it. For more information, see https://aka.ms/vmo/attachdetach. | The subscription isn't enrolled in the VM Attach Detach Preview. | Ensure that your subscription is enrolled in the feature. Reference the [documentation](#enroll-in-the-preview) to check if you're enrolled. |
| The Virtual Machine Scale Set '{vmssUri}' referenced by the Virtual Machine does not exist. | The scale set resource doesn't exist, or isn't in Flexible Orchestration Mode. | Check to see if the scale set exists. If it does, check if it's using Uniform Orchestration Mode. | | This operation is not allowed because referenced Virtual Machine Scale Set '{vmssName}' does not have orchestration mode set to 'Flexible'. | The scale set isn't in Flexible Orchestration Mode. | Only scale sets with Flexible Orchestration Mode can have VMs detached from them. |
-| Virtual Machine Scale Set '{vmssName}' does not support detaching an existing Virtual Machine from it because the Virtual Machine Scale Set has more than 1 fault domains. For more information, see https://aka.ms/vmo/attachdetach. | The detach of the VM failed because the scale set it's in has more than 1 platform fault domain. | VMs can only be detached from scale sets with a `platform fault domain count` of 1. |
+| Virtual Machine Scale Set '{vmssName}' does not support detaching an existing Virtual Machine from it because the Virtual Machine Scale Set has more than 1 fault domains. For more information, see https://aka.ms/vmo/attachdetach. | The detach of the VM failed because the scale set it's in has more than one platform fault domain. | VMs can only be detached from scale sets with a `platform fault domain count` of one. |
| OperationNotAllowed, Message: This operation is not allowed because referenced Virtual Machine Scale Set '{armId}' does not have orchestration mode set to 'Flexible' | The scale set you attempted to attach to or detach from is a scale set with Uniform Orchestration Mode. | Only scale sets with Flexible Orchestration Mode can have VMs detached from them. |
-| Virtual Machine was created with a Virtual Machine Scale Set association and must be deallocated before being detached. Deallocate the virtual machine and ensure that the resource is in deallocated power state before retrying detach operation. For more information, see https://aka.ms/vmo/attachdetach. | `VmssDoesNotSupportDetachNonDeallocatedVM`: Virtual Machines created by the Virtual Machine Scale Set with Flexible Orchestration Mode must be deallocated before being detached from the scale set. | Deallocate the VM and ensure that the resource is in a `deallocated` power state before retrying the detach operation. |
| PropertyChangeNotAllowed Changing property virtualMachineScaleSet.id is not allowed. | The Virtual Machine Scale Set ID can't be changed to a different Virtual Machine Scale Set ID without detaching the VM from the scale set first. | Detach the VM from the Virtual Machine Scale Set, and then attach to the new scale set. Ensure the `virtualMachineScaleSet.id` is set to the value of `null`. Incorrect values include: `""` and `"null"`. |
+| Virtual Machine Scale Set '{0}' does not support detaching Virtual Machine from it because the Virtual Machine Scale Set has single placement group set to true. Please see https://aka.ms/vmo/attachdetach for more information. | `VmssDoesNotSupportAttachingWithSpg`: The VM couldn't be detached because the scale set has `singlePlacementGroup` set to `true`. | VMs can only be detached from scale sets with `singlePlacementGroup` set to `false`. |
+|Virtual Machine Scale Set does not support detaching RDMA capable VM Sizes such as {0}. Please see https://aka.ms/vmo/attachdetach for more information. | RDMA capable VMs can't be detached from the scale set. The detach failed because the VM is RDMA capable. | Only VMs that aren't RDMA enabled can be detached from the scale set. |
## What's next
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Previously updated : 06/14/2024 Last updated : 08/09/2024 # Automatic instance repairs for Azure Virtual Machine Scale Sets
-> [!IMPORTANT]
-> The **Reimage** and **Restart** repair actions are currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Some aspects of this feature may change prior to general availability (GA).
--
-Enabling automatic instance repairs for Azure Virtual Machine Scale Sets helps achieve high availability for applications by maintaining a set of healthy instances. If an unhealthy instance is found by [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md), automatic instance repairs will attempt to recover the instance by triggering repair actions such as deleting the unhealthy instance and creating a new one to replace it, reimaging the unhealthy instance (Preview), or restarting the unhealthy instance (Preview).
+Enabling automatic instance repairs for Azure Virtual Machine Scale Sets helps achieve high availability for applications by maintaining a set of healthy instances. If an unhealthy instance is found by the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md), automatic instance repairs attempt to recover the instance by triggering repair actions such as deleting the unhealthy instance and creating a new one to replace it, reimaging the unhealthy instance, or restarting the unhealthy instance.
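+For example, a minimal Azure CLI sketch of enabling automatic repairs on an existing scale set; the resource group and scale set names are placeholders, and the grace period (in minutes) is an illustrative value:
+
+```azurecli-interactive
+# Enable automatic instance repairs with a 30-minute grace period (placeholder names).
+az vmss update \
+    --resource-group myResourceGroup \
+    --name myScaleSet \
+    --enable-automatic-repairs true \
+    --automatic-repairs-grace-period 30
+```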
## Requirements for using automatic instance repairs
For instances marked as "Unhealthy" or "Unknown" (*Unknown* state is only availa
Automatic repairs policy is supported for compute API version 2018-10-01 or higher.
-The `repairAction` setting for Reimage (Preview) and Restart (Preview) is supported for compute API versions 2021-11-01 or higher.
+The `repairAction` setting for Reimage and Restart is supported for compute API versions 2021-11-01 or higher.
**Restrictions on resource or subscription moves**
The automatic instance repairs process goes as follows:
### Available repair actions
-> [!CAUTION]
-> The `repairAction` setting, is currently under PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
-> For more information, see [feature registration](#feature-registration).
-
-There are three available repair actions for automatic instance repairs ΓÇô Replace, Reimage (Preview), and Restart (Preview). The default repair action is Replace, but you can switch to Reimage (Preview) or Restart (Preview) by enrolling in the preview and modifying the `repairAction` setting under `automaticRepairsPolicy` object.
+There are three available repair actions for automatic instance repairs: Replace, Reimage, and Restart. The default repair action is Replace, but you can configure automatic repairs to use Reimage or Restart by modifying the `repairAction` setting under the `automaticRepairsPolicy` object.
- **Replace** deletes the unhealthy instance and creates a new instance to replace it. The latest Virtual Machine Scale Set model is used to create the new instance. This repair action is the default.
The following table compares the differences between all three repair actions:
| Repair action | VM instance ID preserved? | Private IP preserved? | Managed data disk preserved? | Managed OS disk preserved? | Local (temporary) disk preserved? |
|--|--|--|--|--|--|
-| Replace | No | No | No | No | No |
+| Replace (default) | No | No | No | No | No |
| Reimage | Yes | Yes | Yes | No | Yes |
| Restart | Yes | Yes | Yes | Yes | Yes |
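Because changing the repair action on an existing policy requires disabling automatic repairs and re-enabling them with the new action (as described later in this article), a minimal Azure CLI sketch might look like the following. The resource group and scale set names are placeholders, and this uses the CLI's generic `--set` argument rather than assuming a dedicated flag:

```azurecli-interactive
# Disable automatic repairs before changing the repair action (placeholder names).
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set automaticRepairsPolicy.enabled=false

# Re-enable automatic repairs with the Reimage repair action.
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set automaticRepairsPolicy.enabled=true automaticRepairsPolicy.repairAction=Reimage
```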
az vmss update \
-## Feature Registration
-
-Before configuring `repairAction` setting under `automaticRepairsPolicy`, register the feature providers to your subscription.
-
-### [Azure CLI](#tab/cli-3)
-
-```azurecli-interactive
-az feature register --name AutomaticRepairsWithConfigurableRepairActions --namespace Microsoft.Compute
-```
-### [Azure PowerShell](#tab/powershell-3)
-
-```azurepowershell-interactive
-Register-AzProviderFeature -FeatureName "AutomaticRepairsWithConfigurableRepairActions" -ProviderNamespace "Microsoft.Compute"
-```
--- ## Configure a repair action on automatic repairs policy
-> [!CAUTION]
-> The `repairAction` setting, is currently under PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
-> For more information, see [feature registration](#feature-registration).
- The `repairAction` setting under `automaticRepairsPolicy` allows you to specify the desired repair action performed in response to an unhealthy instance. If you're updating the repair action on an existing automatic repairs policy, you must first disable automatic repairs on the scale set and then re-enable them with the updated repair action. This process is illustrated in the examples below.

### [REST API](#tab/rest-api-4)
virtual-machine-scale-sets Virtual Machine Scale Sets Change Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-change-upgrade-policy.md
description: Learn how to change the upgrade policy on Virtual Machine Scale Set
-+ Last updated 6/6/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Configure Rolling Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-configure-rolling-upgrades.md
description: Learn about how to configure rolling upgrades on Virtual Machine Sc
-+ Last updated 7/23/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Design Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview.md
keywords: linux virtual machine,Virtual Machine Scale Sets
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-cli.md
description: Common Azure CLI commands to manage Virtual Machine Scale Sets, suc
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-powershell.md
description: Common Azure PowerShell cmdlets to manage Virtual Machine Scale Set
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Maxsurge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-maxsurge.md
description: Learn about how to utilize rolling upgrades with MaxSurge on Virtua
-+ Last updated 7/23/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md
description: Learn how to create a basic scale set template for Azure Virtual Ma
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
description: Learn how to use Flexible and Uniform orchestration modes for Virtu
-+ Last updated 06/14/2024
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Proximity Placement Groups  | Yes, when using one Availability Zone or none. Cannot be changed after deployment. Read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, when using one Availability Zone or none. Can be changed after deployment by stopping all instances. Read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes |
| Azure Dedicated Hosts  | Yes | Yes | Yes |
| Managed Identity | [User Assigned Identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) only<sup>1</sup> | System Assigned or User Assigned | N/A (can specify Managed Identity on individual instances) |
-| Add/remove existing VM to the group | No | No | No |
+| Add/remove existing VM to the group | Yes | No | No |
| Service Fabric | No | Yes | No |
| Azure Kubernetes Service (AKS) / AKE | No | Yes | No |
| UserData | Yes | Yes | UserData can be specified for individual VMs |
virtual-machine-scale-sets Virtual Machine Scale Sets Perform Manual Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-perform-manual-upgrades.md
description: Learn about how to perform a manual upgrade on Virtual Machine Scal
-+ Last updated 6/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Reimage Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-reimage-virtual-machine.md
description: Learn how to reimage a virtual machine in a scale set.
-+ Last updated 02/06/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Scaling Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scaling-profile.md
description: The virtual machine scaling profile for Virtual Machine Scale Sets
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Set Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-set-upgrade-policy.md
description: Learn about to set the upgrade policy on Virtual Machine Scale Sets
-+ Last updated 6/6/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md
description: Learn about the different upgrade policies available for Virtual Ma
-+ Last updated 6/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md
description: Learn how to modify and update an Azure Virtual Machine Scale Set w
-+ Last updated 6/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Vs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-vs-create.md
description: Deploy Virtual Machine Scale Sets using Visual Studio and a Resourc
-+ Last updated 06/14/2024
virtual-machine-scale-sets Vmss Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/vmss-support-help.md
Title: Azure Virtual Machine Scale Sets support and help options
description: How to obtain help and support for questions or problems when you create solutions using Azure Virtual Machine Scale Sets. -+ Last updated 06/14/2024
virtual-machine-scale-sets Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/whats-new.md
Title: "What's new for Virtual Machine Scale Sets" description: Learn about what's new for Virtual Machine Scale Sets in Azure. -+ Last updated 06/14/2024
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2 and about its regional availability. Previously updated : 08/09/2024 Last updated : 08/12/2024
You've now deployed a VM with a premium SSD v2.
## Adjust disk performance
-Unlike other managed disks, the performance of Premium SSD v2 disks can be configured independently of its size by using the Azure CLI and PowerShell. Making adjustments to disk performance by using the Azure portal is not currently supported. You can adjust the performance of a Premium SSD v2 disk four times within a 24 hour period.
+Unlike other managed disks, the performance of a Premium SSD v2 disk can be configured independently of its size by using the Azure portal, the Azure CLI, or PowerShell. You can adjust the performance of a Premium SSD v2 disk four times within a 24-hour period. Creating a disk counts as one of these times, so for the first 24 hours after creating a Premium SSD v2 disk you can only adjust its performance up to three times.
For conceptual information on adjusting disk performance, see [Premium SSD v2 performance](disks-types.md#premium-ssd-v2-performance).
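For example, a minimal Azure CLI sketch of raising both IOPS and throughput in one update; the resource group, disk name, and performance targets are placeholders:

```azurecli-interactive
# Adjust provisioned performance on an existing Premium SSD v2 disk (placeholder names).
# Each successful performance change counts toward the four-per-24-hours limit.
az disk update \
    --resource-group myResourceGroup \
    --name myPremiumV2Disk \
    --disk-iops-read-write 8000 \
    --disk-mbps-read-write 400
```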
Update-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName -DiskUpdate
# [Azure portal](#tab/portal)
-Currently, adjusting disk performance is only supported with the Azure CLI or Azure PowerShell module.
+1. Navigate to the disk you'd like to modify in the [Azure portal](https://portal.azure.com/).
+1. Select **Size + Performance**.
+1. Set the values for **Disk IOPS**, **Disk throughput (MB/s)**, or both to meet your needs, then select **Save**.
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including Ultra Disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 08/06/2024 Last updated : 08/12/2024
Ultra disks must be used as data disks and can only be created as empty disks. Y
Ultra Disks offer up to 100 TiB per region per subscription by default, but Ultra Disks support higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
+Ultra Disk sizes work like Premium SSD, Standard SSD, and Standard HDD sizes. When you create or modify an Ultra Disk, the size you set is billed as the next largest provisioned disk size. For example, if you deploy a 200 GiB Ultra Disk, or resize one to 200 GiB, it's billed as if it were 256 GiB, since that's the next largest provisioned disk size.
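+For example, a minimal Azure CLI sketch that creates the 200 GiB disk described above; the resource group, disk name, zone, and performance values are placeholders:
+
+```azurecli-interactive
+# Create an empty 200 GiB Ultra Disk; it's billed at the next provisioned size up (256 GiB).
+az disk create \
+    --resource-group myResourceGroup \
+    --name myUltraDisk \
+    --sku UltraSSD_LRS \
+    --size-gb 200 \
+    --zone 1 \
+    --disk-iops-read-write 4000 \
+    --disk-mbps-read-write 200
+```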
+
The following table provides a comparison of disk sizes and performance caps to help you decide which to use.

|Disk Size (GiB) |IOPS Cap |Throughput Cap (MB/s) |
virtual-machines Disks Understand Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-understand-billing.md
Title: Understand Azure Disk Storage billing
description: Learn about the billing factors that affect Azure managed disks, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 03/08/2024 Last updated : 08/12/2024
There are two kinds of snapshots offered for Azure managed disks: Full snapshots
## Ultra Disks
-The price of an Azure Ultra Disk is determined by the combination of how large the disk is (its size) and what performance you select (IOPS and throughput) for your disk. If you share an Ultra Disk between multiple VMs that can affect its price as well. The following sections focus on these factors as they relate to the price of your Ultra Disk. For more information on how these factors work, see the [Ultra disks](disks-types.md#ultra-disks) section of the [Azure managed disk types](disks-types.md) article.
+The price of an Azure Ultra Disk is determined by the combination of how large the disk is (its disk size) and what performance you select (IOPS and throughput) for your disk. If you share an Ultra Disk between multiple VMs, that can affect its price as well. The following sections focus on these factors as they relate to the price of your Ultra Disk. For more information on how these factors work, see the [Ultra disks](disks-types.md#ultra-disks) section of the [Azure managed disk types](disks-types.md) article.
### Ultra Disk size
-The size of your Ultra Disk also determines what performance caps your disk has. You have granular control of how much IOPS and throughput your disk has, up to that size's performance cap. Pricing increases as you increase your disk's size, and when you set higher IOPS and throughput. Ultra Disks offer up to 32 TiB per region per subscription by default, but support higher size by request. To request an increase in size, request a quota increase or contact Azure Support.
+Ultra Disk sizes work like Premium SSD, Standard SSD, and Standard HDD sizes. When you create or modify an Ultra Disk, the size you set is billed as the next largest provisioned disk size. For example, if you deploy a 200 GiB Ultra Disk, or resize one to 200 GiB, it's billed as if it were 256 GiB, since that's the next largest provisioned disk size.
+
+The disk size of your Ultra Disk also determines what performance caps your disk has. You have granular control over how much IOPS and throughput your disk has, up to that size's performance cap. Pricing increases as you increase your disk's size, and when you set higher IOPS and throughput. Ultra Disks offer up to 32 TiB per region per subscription by default, but support higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
The following table outlines the available disk sizes and performance caps. Pricing increases as you increase in size.
virtual-machines Ddv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/ddv4-series.md
## Feature support
-Premium Storage: Supported<br>
-Premium Storage caching: Supported<br>
+Premium Storage: Not Supported<br>
+Premium Storage caching: Not Supported<br>
Live Migration: Supported<br> Memory Preserving Updates: Supported<br> VM Generation Support: Generation 1 and 2<br>
virtual-machines Ddv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/ddv5-series.md
## Feature support
-Premium Storage: Supported<br>
-Premium Storage caching: Supported<br>
+Premium Storage: Not Supported<br>
+Premium Storage caching: Not Supported<br>
Live Migration: Supported<br> Memory Preserving Updates: Supported<br> VM Generation Support: Generation 1 and 2<br>
virtual-machines Dv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dv5-series.md
## Feature support
-Premium Storage: Supported<br>
-Premium Storage caching: Supported<br>
+Premium Storage: Not Supported<br>
+Premium Storage caching: Not Supported<br>
Live Migration: Supported<br> Memory Preserving Updates: Supported<br> VM Generation Support: Generation 1 and 2<br>
virtual-machines Spot Placement Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-placement-score.md
+
+ Title: Spot Placement Score (Preview)
+description: Learn how to use Azure Spot Placement Score to evaluate deployment success.
+++++ Last updated : 08/11/2024++++
+# Spot Placement Score (Preview)
+
+> [!IMPORTANT]
+> The Spot Placement Score feature is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Spot Placement Score evaluates the likelihood of success for individual Spot deployments by considering parameters like desired Spot Virtual Machine (VM) count, VM size, and your deployment region or zone. This feature enables you to generate a placement score for deploying a desired number of Spot VMs across various combinations of regions, zones, and VM sizes. By inputting lists of up to eight regions and five VM sizes, you can obtain placement scores categorized as High, Medium, or Low. A score of High indicates that the deployment is highly likely to succeed, while a score of Low indicates that the deployment has a low chance of success. These scores are based on analyses of Spot capacity allocation probability and the survivability of the specified number of Spot VMs within each region and VM size combination. This functionality enhances deployment planning by providing predictive insights into deployment success and optimizing resource allocation for your Spot VMs.
+
+Using Spot Placement Score, you can:
+
+- Get a clear evaluation of how likely your Spot deployment is to succeed based on specified parameters.
+
+- Identify the most suitable combination of regions and VM sizes to maximize Spot VM availability and survivability based on placement scores.
+
+- Improve the overall success rate of deploying Spot VMs by applying data-driven placement scores, reducing the risk of capacity issues or failures during deployment.
+
+## Cost
+
+There are no costs associated with this feature.
+
+## Considerations
+
+- Spot placement scores serve purely as a recommendation based on certain data points like eviction rate and VM availability. A high placement score doesn't guarantee that the Spot request will be fully or partially fulfilled.
+
+- Placement scores are only valid at the time they're requested. The same placement score isn't valid at a different time of the same day or on another day. Any similarities are purely coincidental.
+
+- The Spot Placement Score is only relevant if the Spot request has the same configuration as the Spot Placement Score configuration: desired count, VM size, location, and zone. In all other circumstances, the likelihood of getting available Spot capacity won't align with the placement score generated by the tool.
+
+- Spot Placement Scores don't consider other constraints, such as Virtual Machine Scale Sets `SinglePlacementGroup`.
+
+- A subscription's available Spot VM quota needs to be checked or requested separately.
+
+- Spot Placement Score supports both regionally and zonally scoped placement scores.
+
+- The Spot Placement Score API internally calls other GET APIs, and these calls count against your GET call quota.
+
+- A score of **High** or **Medium** doesn't guarantee allocation success or no evictions.
++
+## Configure your Spot Placement Score
+Configure your Spot Placement Score by defining your Spot-specific requirements:
+- Number of desired Spot VMs
+- Up to five VM sizes
+- Up to eight regions
+- Availability zones
+
+We recommend caching the placement score for each combination of subscription, desired count, region, zone, and VM size, so that you avoid calling the API with the same configuration frequently within a short period of time. The suggested cache TTL is a minimum of 15 minutes and a maximum of 30 minutes.
+
+### [Azure portal](#tab/portal)
+
+Find the Spot Placement Score in the Spot tab of the Virtual Machine Scale Sets creation process in the Azure portal. The following steps instruct you on how to access this feature during that process.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search bar, search for and select **Virtual Machine Scale Sets**.
+
+1. Select **Create** on the **Virtual Machine Scale Sets** page.
+
+1. In the **Spot** tab, turn on the Spot option under the **Save money with Spot** section.
+
+1. Fill out the *Size*, *Region*, *Availability Zones*, and *Initial instance count* fields in the **Your Placement Score** section.
+
+1. Select **Save + Apply** to receive your placement score for this configuration.
+
+### [REST API](#tab/rest-api)
+
+Use the following REST API to get your Spot Placement Score. The Placement Score API supports the following versions: *2024-03-01-preview* and *2024-06-01-preview*. Assign the Azure role-based access control (RBAC) role *Compute Recommendations Role* to the users or identities that call the API, scoped to the subscription they run it against. For more information, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+```
+POST https://management.azure.com/subscriptions/{subscription}/providers/Microsoft.Compute/locations/{region}/placementScores/spot/generate?api-version={api-version}
+```
+
+```json
+{
+"desiredLocations": "",
+"desiredSizes": [{
+ "sku": ""
+ }],
+"desiredCount": ""
+}
+```
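+For example, a request body sketch using the Scenario 1 inputs shown later in this article. The values are illustrative and follow the template above, assuming `desiredLocations` accepts a list of regions, since up to eight can be supplied:
+
+```json
+{
+  "desiredLocations": ["westus", "eastus"],
+  "desiredSizes": [
+    { "sku": "Standard_D2_v2" },
+    { "sku": "Standard_D4_v2" }
+  ],
+  "desiredCount": 100
+}
+```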
+Some important terminology to consider:
+
+**Restricted SKU** is returned if the Spot VM SKU isn't available for the subscription.
+
+**Data Not Found** is returned when the data necessary to generate a score or recommendation is either not found in upstream databases, or is found but the data lifespan is greater than what our service considers "fresh".
+
+### [Azure CLI 2.0](#tab/cli)
+
+Access Spot Placement Score by using the Azure CLI command [az compute-recommender spot-placement-recommender](/cli/azure/compute-recommender#az-compute-recommender-spot-placement-recommender).
+
+```azurecli-interactive
+az compute-recommender spot-placement-recommender \
+ --availability-zones <> \
+ --desired-count <> \
+ --desired-locations <> \
+ --desired-sizes <> \
+ --ids <> \
+ --location <> \
+ --subscription <>
+```
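+For instance, a sketch built from the Scenario 1 inputs shown later in this article. The exact value encoding each parameter expects (for example, whether `--desired-sizes` takes a space-separated list or JSON) is an assumption here; check the command's `--help` output for the version you have installed:
+
+```azurecli-interactive
+# Illustrative invocation; verify parameter value formats with --help before use.
+az compute-recommender spot-placement-recommender \
+    --location westus \
+    --desired-locations westus eastus \
+    --desired-sizes Standard_D2_v2 Standard_D4_v2 \
+    --desired-count 100 \
+    --availability-zones false
+```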
+
+### [Azure PowerShell](#tab/powershell)
+
+Access the Spot Placement Score by using the Azure PowerShell command [Invoke-AzSpotPlacementScore](/powershell/module/az.compute/invoke-azspotplacementscore) to call the API endpoint. Replace all parameters with your specific details:
+
+```azurepowershell-interactive
+Invoke-AzSpotPlacementScore
+ -Location <String>
+ -SubscriptionId <String>
+ -AvailabilityZone
+ -DesiredCount <Int32>
+ -DesiredLocation <String[]>
+ -DesiredSize <IResourceSize[]>
+```
+++
+## Examples
+
+The following examples include scenario assumptions and a table of the resulting scores to help you understand how Spot Placement Score works.
+
+### Scenario 1
+This table is an example of a request returning regionally scoped placement scores for multiple desired VM sizes and regions.
+
+The following scenario assumptions apply to this example:
+- **Desired locations:** `westus`, `eastus`
+- **Desired sizes:** `Standard_D2_v2`, `Standard_D4_v2`
+- **Desired count:** 100
+- **Availability zones:** False
+
+| SKU | Region | Availability zone | Is quota available? | Placement score |
+|--|--|--|--|--|
+| Standard_D2_v2 | westus | False | True | High |
+| Standard_D4_v2 | westus | False | True | Low |
+| Standard_D2_v2 | eastus | False | True | Medium |
+| Standard_D4_v2 | eastus | False | True | High |
+
+### Scenario 2
+This table is an example of a request returning zonally scoped placement scores for multiple desired VM sizes and regions.
+
+The following scenario assumptions apply to this example:
+- **Desired locations:** `westus`, `eastus`
+- **Desired sizes:** `Standard_D2_v2`, `Standard_D4_v2`
+- **Desired count:** 100
+- **Availability zones:** True
+
+| SKU | Region | Availability zone | Is quota available? | Placement score |
+|--|--|--|--|--|
+| Standard_D2_v2 | westus | 1 | True | Medium |
+| Standard_D2_v2 | westus | 2 | True | Medium |
+| Standard_D2_v2 | westus | 3 | True | Medium |
+| Standard_D4_v2 | westus | 1 | True | High |
+| Standard_D4_v2 | westus | 2 | True | High |
+| Standard_D4_v2 | westus | 3 | True | High |
+| Standard_D2_v2 | eastus | 1 | True | Low |
+| Standard_D2_v2 | eastus | 2 | True | Low |
+| Standard_D2_v2 | eastus | 3 | True | Low |
+| Standard_D4_v2 | eastus | 1 | True | Medium |
+| Standard_D4_v2 | eastus | 2 | True | Medium |
+| Standard_D4_v2 | eastus | 3 | True | Medium |
++
+## Troubleshooting
+
+| Status code | Type | Condition |
+|--|--|--|
+| 200 | Successful request | Spot Placement Score operations complete successfully. |
+| 400 | Bad request | At least one required input parameter isn't present, or the values of the provided parameters aren't valid. Produces a detailed error message about the failed request. |
+| 429 | Too many requests | Unable to generate placement score due to hitting a rate limit. |
+| 500 | Internal server error | The placement score generation failed. Produces a detailed error message about the failed request. |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Spot Priority Mix](../virtual-machine-scale-sets/spot-priority-mix.md)
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-vms.md
Previously updated : 06/14/2024- Last updated : 08/05/2024+
With variable pricing, you have the option to set a max price, in US dollars (USD),
You can see historical pricing and eviction rates per size in a region in the portal while you're creating the VM. After selecting the checkbox to **Run with Azure Spot discount**, a link appears under the size selection of the VM titled **View pricing history and compare prices in nearby regions**. By selecting that link, you can see a table or graph of spot pricing for the specified VM size. The pricing and eviction rates in the following images are only examples.

> [!TIP]
-> Eviction rates are quoted _per hour_. For example, an eviction rate of 10% means a VM has a 10% chance of being evicted within the next hour, based on historical eviction data of the last 28 days.
+> Eviction rates are quoted _per hour_. For example, an eviction rate of 10% means a VM has a 10% chance of being evicted within the next hour, based on historical eviction data of the last 7 days.
**Chart**:
virtual-machines Virtual Machine Scale Sets Maintenance Control Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-cli.md
Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure CLI description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and Azure CLI. -+ Last updated 11/22/2022
virtual-machines Virtual Machine Scale Sets Maintenance Control Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-portal.md
Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using Azure portal description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and Azure portal. -+ Last updated 11/22/2022
virtual-machines Virtual Machine Scale Sets Maintenance Control Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-powershell.md
Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using PowerShell description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and PowerShell. -+ Last updated 11/22/2022
virtual-machines Virtual Machine Scale Sets Maintenance Control Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md
Title: Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets using an Azure Resource Manager template description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control and an Azure Resource Manager (ARM) template. -+ Last updated 11/22/2022
virtual-machines Virtual Machine Scale Sets Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control.md
Title: Overview of Maintenance control for OS image upgrades on Azure Virtual Machine Scale Sets description: Learn how to control when automatic OS image upgrades are rolled out to your Azure Virtual Machine Scale Sets using Maintenance control. -+ Last updated 11/22/2022
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
Title: Filter network traffic - Azure CLI description: In this article, you learn how to filter network traffic to a subnet, with a network security group, using the Azure CLI.- - Previously updated : 03/30/2018 Last updated : 08/09/2024 # Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
A network security group contains security rules. Security rules specify a sourc
### Create application security groups
-First create a resource group for all the resources created in this article with [az group create](/cli/azure/group). The following example creates a resource group in the *eastus* location:
+First, create a resource group for all the resources created in this article with [az group create](/cli/azure/group). The following example creates a resource group in the *westus2* location:
```azurecli-interactive az group create \
- --name myResourceGroup \
- --location eastus
+ --name test-rg \
+ --location westus2
``` Create an application security group with [az network asg create](/cli/azure/network/asg). An application security group enables you to group servers with similar port filtering requirements. The following example creates two application security groups. ```azurecli-interactive az network asg create \
- --resource-group myResourceGroup \
- --name myAsgWebServers \
- --location eastus
+ --resource-group test-rg \
+ --name asg-web-servers \
+ --location westus2
az network asg create \
- --resource-group myResourceGroup \
- --name myAsgMgmtServers \
- --location eastus
+ --resource-group test-rg \
+ --name asg-mgmt-servers \
+ --location westus2
``` ### Create a network security group
-Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *myNsg*:
+Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *nsg-1*:
```azurecli-interactive # Create a network security group az network nsg create \
- --resource-group myResourceGroup \
- --name myNsg
+ --resource-group test-rg \
+ --name nsg-1
``` ### Create security rules
-Create a security rule with [az network nsg rule create](/cli/azure/network/nsg/rule). The following example creates a rule that allows traffic inbound from the internet to the *myWebServers* application security group over ports 80 and 443:
+Create a security rule with [az network nsg rule create](/cli/azure/network/nsg/rule). The following example creates a rule that allows traffic inbound from the internet to the *asg-web-servers* application security group over ports 80 and 443:
```azurecli-interactive az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNsg \
+ --resource-group test-rg \
+ --nsg-name nsg-1 \
--name Allow-Web-All \ --access Allow \ --protocol Tcp \
az network nsg rule create \
--priority 100 \ --source-address-prefix Internet \ --source-port-range "*" \
- --destination-asgs "myAsgWebServers" \
+ --destination-asgs "asg-web-servers" \
--destination-port-range 80 443 ```
-The following example creates a rule that allows traffic inbound from the Internet to the *myMgmtServers* application security group over port 22:
+The following example creates a rule that allows traffic inbound from the Internet to the *asg-mgmt-servers* application security group over port 22:
```azurecli-interactive az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNsg \
+ --resource-group test-rg \
+ --nsg-name nsg-1 \
--name Allow-SSH-All \ --access Allow \ --protocol Tcp \
az network nsg rule create \
--priority 110 \ --source-address-prefix Internet \ --source-port-range "*" \
- --destination-asgs "myAsgMgmtServers" \
+ --destination-asgs "asg-mgmt-servers" \
--destination-port-range 22 ```
-In this article, SSH (port 22) is exposed to the internet for the *myAsgMgmtServers* VM. For production environments, instead of exposing port 22 to the internet, it's recommended that you connect to Azure resources that you want to manage using a [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [private](../expressroute/expressroute-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json) network connection.
+In this article, the *asg-mgmt-servers* application security group exposes SSH (port 22) to the internet. For production environments, use a [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [private](../expressroute/expressroute-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json) network connection to manage Azure resources instead of exposing port 22 to the internet.
## Create a virtual network
-Create a virtual network with [az network vnet create](/cli/azure/network/vnet). The following example creates a virtual named *myVirtualNetwork*:
+Create a virtual network with [az network vnet create](/cli/azure/network/vnet). The following example creates a virtual network named *vnet-1*:
```azurecli-interactive az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
+ --name vnet-1 \
+ --resource-group test-rg \
--address-prefixes 10.0.0.0/16 ```
-Add a subnet to a virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). The following example adds a subnet named *mySubnet* to the virtual network and associates the *myNsg* network security group to it:
+Add a subnet to a virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). The following example adds a subnet named *subnet-1* to the virtual network and associates the *nsg-1* network security group to it:
```azurecli-interactive az network vnet subnet create \
- --vnet-name myVirtualNetwork \
- --resource-group myResourceGroup \
- --name mySubnet \
+ --vnet-name vnet-1 \
+ --resource-group test-rg \
+ --name subnet-1 \
--address-prefix 10.0.0.0/24 \
- --network-security-group myNsg
+ --network-security-group nsg-1
``` ## Create virtual machines Create two VMs in the virtual network so you can validate traffic filtering in a later step.
-Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM that will serve as a web server. The `--asgs myAsgWebServers` option causes Azure to make the network interface it creates for the VM a member of the *myAsgWebServers* application security group.
-
-The `--nsg ""` option is specified to prevent Azure from creating a default network security group for the network interface Azure creates when it creates the VM. To streamline this article, a password is used. Keys are typically used in production deployments. If you use keys, you must also configure SSH agent forwarding for the remaining steps. For more information, see the documentation for your SSH client. Replace `<replace-with-your-password>` in the following command with a password of your choosing.
+Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM that serves as a web server. The `--asgs asg-web-servers` option causes Azure to make the network interface it creates for the VM a member of the *asg-web-servers* application security group. The `--nsg ""` option is specified to prevent Azure from creating a default network security group for the network interface Azure creates when it creates the VM. The command prompts you to create a password for the VM. SSH keys aren't used in this example to facilitate the later steps in this article. In a production environment, use SSH keys for security.
```azurecli-interactive
-adminPassword="<replace-with-your-password>"
- az vm create \
- --resource-group myResourceGroup \
- --name myVmWeb \
+ --resource-group test-rg \
+ --name vm-web \
--image Ubuntu2204 \
- --vnet-name myVirtualNetwork \
- --subnet mySubnet \
+ --vnet-name vnet-1 \
+ --subnet subnet-1 \
--nsg "" \
- --asgs myAsgWebServers \
+ --asgs asg-web-servers \
--admin-username azureuser \
- --admin-password $adminPassword
+ --authentication-type password \
+ --assign-identity
``` The VM takes a few minutes to create. After the VM is created, output similar to the following example is returned:
The VM takes a few minutes to create. After the VM is created, output similar to
```output { "fqdns": "",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVmWeb",
- "location": "eastus",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-web",
+ "location": "westus2",
"macAddress": "00-0D-3A-23-9A-49", "powerState": "VM running", "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "13.90.242.231",
- "resourceGroup": "myResourceGroup"
+ "publicIpAddress": "203.0.113.24",
+ "resourceGroup": "test-rg"
} ```
-Take note of the **publicIpAddress**. This address is used to access the VM from the internet in a later step. Create a VM to serve as a management server:
+Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM that serves as a management server. The `--asgs asg-mgmt-servers` option causes Azure to make the network interface it creates for the VM a member of the *asg-mgmt-servers* application security group.
+
+The following example creates a VM and adds a user account. The `--generate-ssh-keys` parameter causes the CLI to look for an available ssh key in `~/.ssh`. If one is found, that key is used. If not, one is generated and stored in `~/.ssh`. Finally, we deploy the latest `Ubuntu 22.04` image.
```azurecli-interactive az vm create \
- --resource-group myResourceGroup \
- --name myVmMgmt \
+ --resource-group test-rg \
+ --name vm-mgmt \
--image Ubuntu2204 \
- --vnet-name myVirtualNetwork \
- --subnet mySubnet \
+ --vnet-name vnet-1 \
+ --subnet subnet-1 \
--nsg "" \
- --asgs myAsgMgmtServers \
+ --asgs asg-mgmt-servers \
--admin-username azureuser \
- --admin-password $adminPassword
+ --generate-ssh-keys \
+ --assign-identity
```
-The VM takes a few minutes to create. After the VM is created, note the **publicIpAddress** in the returned output. This address is used to access the VM in the next step. Don't continue with the next step until Azure finishes creating the VM.
+The VM takes a few minutes to create. Don't continue with the next step until Azure finishes creating the VM.
+
+## Enable Microsoft Entra ID sign-in for the virtual machines
+
+The following code example installs the extension that enables Microsoft Entra ID sign-in for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
+
+```bash
+az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group test-rg \
+ --vm-name vm-web
+
+az vm extension set \
+ --publisher Microsoft.Azure.ActiveDirectory \
+ --name AADSSHLoginForLinux \
+ --resource-group test-rg \
+ --vm-name vm-mgmt
+```
## Test traffic filters
-Use the command that follows to create an SSH session with the *myVmMgmt* VM. Replace *\<publicIpAddress>* with the public IP address of your VM. In the example above, the IP address is *13.90.242.231*.
+Using an SSH client of your choice, connect to the VMs created previously. For example, the following command can be used from a command line interface such as [Windows Subsystem for Linux](/windows/wsl/install) to create an SSH session with the *vm-mgmt* VM. In the previous steps, we enabled Microsoft Entra ID sign-in for the VMs. You can sign in to the virtual machines by using your Microsoft Entra ID credentials, or you can use the SSH key that you used to create the VMs. In the following example, we use the SSH key to sign in to the management VM, and then sign in to the web VM from the management VM with a password.
-```bash
-ssh azureuser@<publicIpAddress>
+For more information about how to SSH to a Linux VM and sign in with Microsoft Entra ID, see [Sign in to a Linux virtual machine in Azure by using Microsoft Entra ID and OpenSSH](/entra/identity/devices/howto-vm-sign-in-azure-ad-linux).
+
+### Store the IP address of the VM for use with SSH
+
+Run the following command to store the IP address of the VM as an environment variable:
+
+```bash
+export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-mgmt --query publicIps --output tsv)
```
-When prompted for a password, enter the password you entered in [Create VMs](#create-virtual-machines).
+Then open an SSH session to the management VM by using the stored IP address:
+
+```bash
+ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS
+```
-The connection succeeds, because port 22 is allowed inbound from the Internet to the *myAsgMgmtServers* application security group that the network interface attached to the *myVmMgmt* VM is in.
+The connection succeeds because the network interface attached to the *vm-mgmt* VM is in the *asg-mgmt-servers* application security group, which allows port 22 inbound from the Internet.
-Use the following command to SSH to the *myVmWeb* VM from the *myVmMgmt* VM:
+Use the following command to SSH to the *vm-web* VM from the *vm-mgmt* VM:
```bash
-ssh azureuser@myVmWeb
+ssh -o StrictHostKeyChecking=no azureuser@vm-web
```
-The connection succeeds because a default security rule within each network security group allows traffic over all ports between all IP addresses within a virtual network. You can't SSH to the *myVmWeb* VM from the Internet because the security rule for the *myAsgWebServers* doesn't allow port 22 inbound from the Internet.
+The connection succeeds because a default security rule within each network security group allows traffic over all ports between all IP addresses within a virtual network. You can't SSH to the *vm-web* VM from the Internet because the security rule for the *asg-web-servers* application security group doesn't allow port 22 inbound from the Internet.
-Use the following commands to install the nginx web server on the *myVmWeb* VM:
+Use the following commands to install the nginx web server on the *vm-web* VM:
```bash # Update package source
sudo apt-get -y update
sudo apt-get -y install nginx ```
-The *myVmWeb* VM is allowed outbound to the Internet to retrieve nginx because a default security rule allows all outbound traffic to the Internet. Exit the *myVmWeb* SSH session, which leaves you at the `username@myVmMgmt:~$` prompt of the *myVmMgmt* VM. To retrieve the nginx welcome screen from the *myVmWeb* VM, enter the following command:
+The *vm-web* VM is allowed outbound to the Internet to retrieve nginx because a default security rule allows all outbound traffic to the Internet. Exit the *vm-web* SSH session, which leaves you at the `username@vm-mgmt:~$` prompt of the *vm-mgmt* VM. To retrieve the nginx welcome screen from the *vm-web* VM, enter the following command:
```bash
-curl myVmWeb
+curl vm-web
```
-Logout of the *myVmMgmt* VM. To confirm that you can access the *myVmWeb* web server from outside of Azure, enter `curl <publicIpAddress>` from your own computer. The connection succeeds, because port 80 is allowed inbound from the Internet to the *myAsgWebServers* application security group that the network interface attached to the *myVmWeb* VM is in.
+Sign out of the *vm-mgmt* VM. To confirm that you can access the *vm-web* web server from outside of Azure, enter `curl <publicIpAddress>` from your own computer. The connection succeeds because the *asg-web-servers* application security group, which the network interface attached to the *vm-web* VM is in, allows port 80 inbound from the Internet.
## Clean up resources When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. ```azurecli-interactive
-az group delete --name myResourceGroup --yes
+az group delete \
+ --name test-rg \
+ --yes \
+ --no-wait
``` ## Next steps In this article, you created a network security group and associated it to a virtual network subnet. To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md) and [Manage a network security group](manage-network-security-group.md).
-Azure routes traffic between subnets by default. You may instead, choose to route traffic between subnets through a VM, serving as a firewall, for example. To learn how, see [Create a route table](tutorial-create-route-table-cli.md).
+Azure routes traffic between subnets by default. You can instead choose to route traffic between subnets through a VM, serving as a firewall, for example. To learn how, see [Create a route table](tutorial-create-route-table-cli.md).
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
Title: Restrict network access to PaaS resources - Azure CLI
-description: In this article, you learn how to limit and restrict network access to Azure resources, such as Azure Storage and Azure SQL Database, with virtual network service endpoints using the Azure CLI.
-
+description: This article teaches you how to use the Azure CLI to restrict network access to Azure resources like Azure Storage and Azure SQL Database with virtual network service endpoints.
- Previously updated : 03/14/2018 Last updated : 08/11/2024 # Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account.
Virtual network service endpoints enable you to limit network access to some Azu
## Create a virtual network
-Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+Before creating a virtual network, you have to create a resource group for the virtual network and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *test-rg* in the *eastus* location.
```azurecli-interactive az group create \
- --name myResourceGroup \
+ --name test-rg \
--location eastus ```
Create a virtual network with one subnet with [az network vnet create](/cli/azur
```azurecli-interactive az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
+ --name vnet-1 \
+ --resource-group test-rg \
--address-prefix 10.0.0.0/16 \
- --subnet-name Public \
+ --subnet-name subnet-public \
--subnet-prefix 10.0.0.0/24 ```
az network vnet list-endpoint-services \
--out table ```
-Create an additional subnet in the virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). In this example, a service endpoint for *Microsoft.Storage* is created for the subnet:
+Create another subnet in the virtual network with [az network vnet subnet create](/cli/azure/network/vnet/subnet). In this example, a service endpoint for `Microsoft.Storage` is created for the subnet:
```azurecli-interactive az network vnet subnet create \
- --vnet-name myVirtualNetwork \
- --resource-group myResourceGroup \
- --name Private \
+ --vnet-name vnet-1 \
+ --resource-group test-rg \
+ --name subnet-private \
--address-prefix 10.0.1.0/24 \ --service-endpoints Microsoft.Storage ``` ## Restrict network access for a subnet
-Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *myNsgPrivate*.
+Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *nsg-private*.
```azurecli-interactive az network nsg create \
- --resource-group myResourceGroup \
- --name myNsgPrivate
+ --resource-group test-rg \
+ --name nsg-private
```
-Associate the network security group to the *Private* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). The following example associates the *myNsgPrivate* network security group to the *Private* subnet:
+Associate the network security group to the *subnet-private* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). The following example associates the *nsg-private* network security group to the *subnet-private* subnet:
```azurecli-interactive az network vnet subnet update \
- --vnet-name myVirtualNetwork \
- --name Private \
- --resource-group myResourceGroup \
- --network-security-group myNsgPrivate
+ --vnet-name vnet-1 \
+ --name subnet-private \
+ --resource-group test-rg \
+ --network-security-group nsg-private
``` Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The rule that follows allows outbound access to the public IP addresses assigned to the Azure Storage service: ```azurecli-interactive az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNsgPrivate \
+ --resource-group test-rg \
+ --nsg-name nsg-private \
--name Allow-Storage-All \ --access Allow \ --protocol "*" \
Each network security group contains several [default security rules](./network-
```azurecli-interactive az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNsgPrivate \
+ --resource-group test-rg \
+ --nsg-name nsg-private \
--name Deny-Internet-All \ --access Deny \ --protocol "*" \
The following rule allows SSH traffic inbound to the subnet from anywhere. The r
```azurecli-interactive az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNsgPrivate \
+ --resource-group test-rg \
+ --nsg-name nsg-private \
--name Allow-SSH-All \ --access Allow \ --protocol Tcp \
storageAcctName="<replace-with-your-unique-storage-account-name>"
az storage account create \ --name $storageAcctName \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--sku Standard_LRS \ --kind StorageV2 ``` After the storage account is created, retrieve the connection string for the storage account into a variable with [az storage account show-connection-string](/cli/azure/storage/account). The connection string is used to create a file share in a later step.
+For the purposes of this tutorial, the connection string is used to connect to the storage account. Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a high degree of trust in the application, and carries risks that aren't present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable.
+
+For more information about connecting to a storage account using a managed identity, see [Use a managed identity to access Azure Storage](/entra/identity/managed-identities-azure-resources/tutorial-linux-managed-identities-vm-access?pivots=identity-linux-mi-vm-access-storage).
+
```azurecli-interactive
saConnectionString=$(az storage account show-connection-string \
  --name $storageAcctName \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
  --query 'connectionString' \
  --out tsv)
```
Create a file share in the storage account with [az storage share create](/cli/a
```azurecli-interactive
az storage share create \
- --name my-file-share \
+ --name file-share \
  --quota 2048 \
  --connection-string $saConnectionString > /dev/null
```

### Deny all network access to a storage account
-By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [az storage account update](/cli/azure/storage/account). Once network access is denied, the storage account is not accessible from any network.
+By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [az storage account update](/cli/azure/storage/account). Once network access is denied, the storage account isn't accessible from any network.
```azurecli-interactive
az storage account update \
  --name $storageAcctName \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
  --default-action Deny
```

### Enable network access from a subnet
-Allow network access to the storage account from the *Private* subnet with [az storage account network-rule add](/cli/azure/storage/account/network-rule).
+Allow network access to the storage account from the *subnet-private* subnet with [az storage account network-rule add](/cli/azure/storage/account/network-rule).
```azurecli-interactive
az storage account network-rule add \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--account-name $storageAcctName \
- --vnet-name myVirtualNetwork \
- --subnet Private
+ --vnet-name vnet-1 \
+ --subnet subnet-private
```

## Create virtual machines
To test network access to a storage account, deploy a VM to each subnet.
### Create the first virtual machine
-Create a VM in the *Public* subnet with [az vm create](/cli/azure/vm). If SSH keys do not already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option.
+Create a VM in the *subnet-public* subnet with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option.
```azurecli-interactive
az vm create \
- --resource-group myResourceGroup \
- --name myVmPublic \
+ --resource-group test-rg \
+ --name vm-public \
--image Ubuntu2204 \
- --vnet-name myVirtualNetwork \
- --subnet Public \
+ --vnet-name vnet-1 \
+ --subnet subnet-public \
+ --admin-username azureuser \
+ --generate-ssh-keys
```
The VM takes a few minutes to create. After the VM is created, the Azure CLI sho
```azurecli
{
  "fqdns": "",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVmPublic",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Compute/virtualMachines/vm-public",
"location": "eastus", "macAddress": "00-0D-3A-23-9A-49", "powerState": "VM running", "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "13.90.242.231",
- "resourceGroup": "myResourceGroup"
+ "publicIpAddress": "203.0.113.24",
+ "resourceGroup": "test-rg"
}
```
-Take note of the **publicIpAddress** in the returned output. This address is used to access the VM from the internet in a later step.
-
### Create the second virtual machine

```azurecli-interactive
az vm create \
- --resource-group myResourceGroup \
- --name myVmPrivate \
+ --resource-group test-rg \
+ --name vm-private \
--image Ubuntu2204 \
- --vnet-name myVirtualNetwork \
- --subnet Private \
+ --vnet-name vnet-1 \
+ --subnet subnet-private \
+ --admin-username azureuser \
+ --generate-ssh-keys
```
The VM takes a few minutes to create. After creation, take note of the **publicI
## Confirm access to storage account
-SSH into the *myVmPrivate* VM. Replace *\<publicIpAddress>* with the public IP address of your *myVmPrivate* VM.
+SSH into the *vm-private* VM.
+
+Run the following command to store the IP address of the VM as an environment variable:
+
+```bash
+export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-private --query publicIps --output tsv)
+```
```bash
-ssh <publicIpAddress>
+ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS
```

Create a folder for a mount point:

```bash
-sudo mkdir /mnt/MyAzureFileShare
+sudo mkdir /mnt/file-share
```

Mount the Azure file share to the directory you created. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account).

```bash
-sudo mount --types cifs //<storage-account-name>.file.core.windows.net/my-file-share /mnt/MyAzureFileShare --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
+sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
```
-You receive the `user@myVmPrivate:~$` prompt. The Azure file share successfully mounted to */mnt/MyAzureFileShare*.
+You receive the `user@vm-private:~$` prompt. The Azure file share is successfully mounted to */mnt/file-share*.
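+
+As an optional check that isn't part of the original steps, you can confirm the mount from within the VM:
+
+```bash
+# List CIFS mounts; /mnt/file-share should appear in the output.
+mount -t cifs
+
+# Write and read back a test file to confirm access to the share.
+echo "hello from vm-private" | sudo tee /mnt/file-share/test.txt
+cat /mnt/file-share/test.txt
+```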
Confirm that the VM has no outbound connectivity to any other public IP addresses:
```bash
ping bing.com -c 4
```
-You receive no replies, because the network security group associated to the *Private* subnet does not allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service.
+You receive no replies, because the network security group associated to the *subnet-private* subnet doesn't allow outbound access to public IP addresses other than the addresses assigned to the Azure Storage service.
-Exit the SSH session to the *myVmPrivate* VM.
+Exit the SSH session to the *vm-private* VM.
## Confirm access is denied to storage account
-Use the following command to create an SSH session with the *myVmPublic* VM. Replace `<publicIpAddress>` with the public IP address of your *myVmPublic* VM:
+SSH into the *vm-public* VM.
+
+Run the following command to store the IP address of the VM as an environment variable:
+
+```bash
+export IP_ADDRESS=$(az vm show --show-details --resource-group test-rg --name vm-public --query publicIps --output tsv)
+```
```bash
-ssh <publicIpAddress>
+ssh -o StrictHostKeyChecking=no azureuser@$IP_ADDRESS
```

Create a directory for a mount point:

```bash
-sudo mkdir /mnt/MyAzureFileShare
+sudo mkdir /mnt/file-share
```
-Attempt to mount the Azure file share to the directory you created. This article assumes you deployed the latest version of Ubuntu. If you are using earlier versions of Ubuntu, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for additional instructions about mounting file shares. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account):
+Attempt to mount the Azure file share to the directory you created. This article assumes you deployed the latest version of Ubuntu. If you're using earlier versions of Ubuntu, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for more instructions about mounting file shares. Before running the following command, replace `<storage-account-name>` with the account name and `<storage-account-key>` with the key you retrieved in [Create a storage account](#create-a-storage-account):
```bash
-sudo mount --types cifs //storage-account-name>.file.core.windows.net/my-file-share /mnt/MyAzureFileShare --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
+sudo mount --types cifs //<storage-account-name>.file.core.windows.net/file-share /mnt/file-share --options vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
```
-Access is denied, and you receive a `mount error(13): Permission denied` error, because the *myVmPublic* VM is deployed within the *Public* subnet. The *Public* subnet does not have a service endpoint enabled for Azure Storage, and the storage account only allows network access from the *Private* subnet, not the *Public* subnet.
+Access is denied, and you receive a `mount error(13): Permission denied` error, because the *vm-public* VM is deployed within the *subnet-public* subnet. The *subnet-public* subnet doesn't have a service endpoint enabled for Azure Storage, and the storage account only allows network access from the *subnet-private* subnet, not the *subnet-public* subnet.
-Exit the SSH session to the *myVmPublic* VM.
+Exit the SSH session to the *vm-public* VM.
From your computer, attempt to view the shares in your storage account with [az storage share list](/cli/azure/storage/share). Replace `<account-name>` and `<account-key>` with the storage account name and key from [Create a storage account](#create-a-storage-account):
```azurecli-interactive
az storage share list \
  --account-name <account-name> \
  --account-key <account-key>
```
-Access is denied and you receive a *This request is not authorized to perform this operation* error, because your computer is not in the *Private* subnet of the *MyVirtualNetwork* virtual network.
+Access is denied and you receive a **This request is not authorized to perform this operation** error, because your computer isn't in the *subnet-private* subnet of the *vnet-1* virtual network.
## Clean up resources

When no longer needed, use [az group delete](/cli/azure) to remove the resource group and all of the resources it contains.

```azurecli-interactive
-az group delete --name myResourceGroup --yes
+az group delete \
+ --name test-rg \
+ --yes \
+ --no-wait
```

## Next steps

In this article, you enabled a service endpoint for a virtual network subnet. You learned that service endpoints can be enabled for resources deployed with multiple Azure services. You created an Azure Storage account and limited network access to the storage account to only resources within a virtual network subnet. To learn more about service endpoints, see [Service endpoints overview](virtual-network-service-endpoints-overview.md) and [Manage subnets](virtual-network-manage-subnet.md).
-If you have multiple virtual networks in your account, you may want to connect two virtual networks together so the resources within each virtual network can communicate with each other. To learn how, see [Connect virtual networks](tutorial-connect-virtual-networks-cli.md).
+If you have multiple virtual networks in your account, you might want to connect two virtual networks together so the resources within each virtual network can communicate with each other. To learn how, see [Connect virtual networks](tutorial-connect-virtual-networks-cli.md).
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
- Title: 'Configure P2S for different user and group access: Microsoft Entra ID authentication and multi app'-
-description: Learn how to set up a Microsoft Entra tenant for P2S OpenVPN authentication and register multiple apps in Microsoft Entra ID to allow different access for different users and groups.
--- Previously updated : 07/09/2024----
-# Configure P2S for access based on users and groups - Microsoft Entra ID authentication - manual app registration
-
-When you use Microsoft Entra ID as the authentication method for point-to-site (P2S), you can configure P2S to allow different access for different users and groups. This article helps you set up a Microsoft Entra tenant for P2S Microsoft Entra authentication and create and register multiple VPN apps in Microsoft Entra ID to allow different access for different users and groups. For more information about P2S protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
-
-Considerations:
-
-* You can't create this type of granular access if you have only one VPN gateway.
-* To assign different users and groups different access, register multiple apps with Microsoft Entra ID and then link them to different VPN gateways.
-* Microsoft Entra ID authentication is supported only for OpenVPN® protocol connections and requires the Azure VPN Client.
-
-<a name='azure-ad-tenant'></a>
-
-## Microsoft Entra tenant
-
-The steps in this article require a Microsoft Entra tenant. If you don't have a Microsoft Entra tenant, you can create one using the steps in the [Create a new tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) article. Note the following fields when creating your directory:
-
-* Organizational name
-* Initial domain name
-
-<a name='create-azure-ad-tenant-users'></a>
-
-## Create Microsoft Entra tenant users
-
-1. Create two accounts in the newly created Microsoft Entra tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
-
- * Global administrator account
- * User account
-
- The global administrator account is used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
-
-1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal).
-
-## Authorize the Azure VPN application
--
-## Register additional applications
-
-In this section, you can register additional applications for various users and groups. Repeat the steps to create as many applications that are needed for your security requirements.
-
-* You must have more than one VPN gateway to configure this type of granular access.
-* Each application is associated to a different VPN gateway and can have a different set of users.
-
-### Add a scope
-
-1. In the Azure portal, select **Microsoft Entra ID**.
-1. In the left pane, select **App registrations**.
-1. At the top of the **App registrations** page, select **+ New registration**.
-1. On the **Register an application** page, enter the **Name**. For example, MarketingVPN or Group1. You can always change the name later.
- * Select the desired **Supported account types**.
- * At the bottom of the page, click **Register**.
-1. Once the new app has been registered, in the left pane, click **Expose an API**. Then click **+ Add a scope**.
- * On the **Add a scope** page, leave the default **Application ID URI**.
- * Click **Save and continue**.
-1. The page returns back to the **Add a scope** page. Fill in the required fields and ensure that **State** is **Enabled**.
-
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/add-scope.png" alt-text="Screenshot of Microsoft Entra ID add a scope page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/add-scope.png":::
-1. When you're done filling out the fields, click **Add scope**.
-
-### Add a client application
-
-1. On the **Expose an API** page, click **+ Add a client application**.
-1. On the **Add a client application** page, for **Client ID**, enter the following values depending on the cloud:
-
- * Azure Public: `41b23e61-6c1e-4545-b367-cd054e0ed4b4`
- * Azure Government: `51bb15d4-3a4f-4ebf-9dca-40096fe32426`
- * Azure Germany: `538ee9e6-310a-468d-afef-ea97365856a9`
- * Microsoft Azure operated by 21Vianet: `49f817b6-84ae-4cc0-928c-73f27289b3aa`
-1. Select the checkbox for the **Authorized scopes** to include. Then, click **Add application**.
-
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/add-application.png" alt-text="Screenshot of Microsoft Entra ID add client application page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/add-application.png":::
-
-1. Click **Add application**.
-
-### Copy Application (client) ID
-
-When you enable authentication on the VPN gateway, you'll need the **Application (client) ID** value in order to fill out the Audience value for the point-to-site configuration.
-
-1. Go to the **Overview** page.
-
-1. Copy the **Application (client) ID** from the **Overview** page and save it so that you can access this value later. You'll need this information to configure your VPN gateways.
-
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/client-id.png" alt-text="Screenshot showing Client ID value." lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png":::
-
-## Assign users to applications
-
-Assign the users to your applications. If you're specifying a group, the user must be a direct member of the group. Nested groups aren't supported.
-
-1. Go to your Microsoft Entra ID and select **Enterprise applications**.
-1. From the list, locate the application you registered and click to open it.
-1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**.
-1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](../active-directory/manage-apps/application-properties.md#enabled-for-users-to-sign-in).
-1. If you've made changes, click **Save** to save your settings.
-1. In the left pane, click **Users and groups**. On the **Users and groups** page, click **+ Add user/group** to open the **Add Assignment** page.
-1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**.
-1. After you finish selecting users and groups, click **Assign**.
-
-## Configure authentication for the gateway
-
-> [!IMPORTANT]
-> [!INCLUDE [Entra ID note for portal pages](../../includes/vpn-gateway-entra-portal-note.md)]
-
-In this step, you configure P2S Microsoft Entra authentication for the virtual network gateway.
-
-1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
-
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png":::
-
- Configure the following values:
-
- * **Address pool**: client address pool
- * **Tunnel type:** OpenVPN (SSL)
- * **Authentication type**: Microsoft Entra ID
-
- For **Microsoft Entra ID** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values.
-
- * **Tenant**: `https://login.microsoftonline.com/{TenantID}`
- * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Microsoft Entra Enterprise App - use application ID that you created and registered. If you use the application ID for the "Azure VPN" Microsoft Entra Enterprise App instead, this grants all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
- * **Issuer**: `https://sts.windows.net/{TenantID}` For the Issuer value, make sure to include a trailing **/** at the end.
-
-1. Once you finish configuring settings, click **Save** at the top of the page.
-
-## Download the Azure VPN Client profile configuration package
-
-In this section, you generate and download the Azure VPN Client profile configuration package. This package contains the settings that you can use to configure the Azure VPN Client profile on client computers.
--
-## Next steps
-
-* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](point-to-site-entra-vpn-client-windows.md).
-* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
vpn-gateway Point To Site Entra Register Custom App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-register-custom-app.md
description: Learn how to create or modify a custom audience App ID or upgrade a
Previously updated : 08/05/2024 Last updated : 08/09/2024
This article provides high-level steps. The screenshots to register an applicati
## Prerequisites
-This article assumes that you already have a Microsoft Entra tenant and the permissions to create an Enterprise Application, typically the Cloud Application administrator role or higher. For more information, see [Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant) and [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal).
+* This article assumes that you already have a Microsoft Entra tenant and the permissions to create an Enterprise Application, typically the Cloud Application administrator role or higher. For more information, see [Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant) and [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal).
+
+* This article assumes that you're using the **Microsoft-registered App ID Azure Public** audience value `c632b3df-fb67-4d84-bdcf-b95ad541b5c8` to configure your custom app. This value has global consent, which means you don't need to manually register it to provide consent for your organization. We recommend that you use this value.
+
+ * At this time, there's only one supported audience value for the Microsoft-registered app. For the other supported values, including the older manually registered IDs, see the [supported audience value table](point-to-site-about.md#entra-id).
+
+ * If the Microsoft-registered audience value isn't compatible with your configuration, you can still use the older manually registered ID values.
+
+* If you need to use a manually registered app ID value instead, you must give consent to allow the app to sign in and read user profiles before proceeding with this configuration.
+
+ 1. To grant admin consent for your organization, modify the following URL to contain the desired `client_id` value. In the example, the `client_id` value is for Azure Public. See the [table](point-to-site-about.md#entra-id) for additional supported values.
+
+ ```https://login.microsoftonline.com/common/oauth2/authorize?client_id=41b23e61-6c1e-4545-b367-cd054e0ed4b4&response_type=code&redirect_uri=https://portal.azure.com&nonce=1234&prompt=admin_consent```
+
+ 1. Copy and paste the URL that pertains to your deployment location into the address bar of your browser.
+ 1. Select the account that has the **Global administrator** role if prompted.
+ 1. On the **Permissions** requested page, select **Accept**.
[!INCLUDE [Configure custom audience](../../includes/vpn-gateway-custom-audience.md)]
vpn-gateway Point To Site Entra Users Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-users-access.md
+
+ Title: Configure P2S access based on users and groups - Microsoft Entra ID authentication
+
+description: Learn how to configure P2S access based on users and groups for Microsoft Entra ID authentication.
+++ Last updated : 08/12/2024++++
+# Scenario: Configure P2S access based on users and groups - Microsoft Entra ID authentication
+
+This article walks you through a scenario to configure access based on users and groups for point-to-site (P2S) VPN connections that use Microsoft Entra ID authentication. In this scenario, you configure this type of access by using multiple custom audience app IDs with specified permissions and multiple P2S VPN gateways. For more information about P2S protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
+
+In this scenario, users have different access based on permissions to connect to specific P2S VPN gateways. At a high level, the workflow is as follows:
+
+1. Create a custom app for each P2S VPN gateway that you want to configure for P2S VPN with Microsoft Entra ID authentication. Make a note of the custom app ID.
+1. Add the Azure VPN Client application to the custom app configuration.
+1. Assign user and group permissions per custom app.
+1. When you configure your gateway for P2S VPN Microsoft Entra ID authentication, specify the Microsoft Entra ID tenant and the custom app ID that's associated with the users that you want to allow to connect via that gateway.
+1. The Azure VPN Client profile on the client's computer is configured using the settings from the P2S VPN gateway to which the user has permissions to connect.
+1. When a user connects, they're authenticated and are able to connect only to the P2S VPN gateway for which their account has permissions.
+
+Considerations:
+
+* You can't create this type of granular access if you have only one VPN gateway.
+* Microsoft Entra ID authentication is supported only for OpenVPN® protocol connections and requires the Azure VPN Client.
+* Take care to configure each Azure VPN Client with the correct client profile configuration package settings to ensure that the user connects to the corresponding gateway to which they have permissions.
+* When you use the configuration steps in this exercise, it might be easiest to run the steps for the first custom app ID and gateway all the way through, then repeat for each subsequent custom app ID and gateway.
+
+## Prerequisites
+
+* This scenario requires a Microsoft Entra tenant. If you don't already have a tenant, [Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant). Make a note of the tenant ID. This value is needed when you configure your P2S VPN gateway for Microsoft Entra ID authentication.
+
+* This scenario requires multiple VPN gateways. You can only assign one custom app ID per gateway.
+
+ * If you don't already have at least two functioning VPN gateways that are compatible with Microsoft Entra ID authentication, see [Create and manage a VPN gateway - Azure portal](tutorial-create-gateway-portal.md) to create your VPN gateways.
+ * Some gateway options are incompatible with P2S VPN gateways that use Microsoft Entra ID authentication. Basic SKU and policy-based VPN types aren't supported. For more information about gateway SKUs, see [About gateway SKUs](about-gateway-skus.md). For more information about VPN types, see [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#vpntype).
+
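+If you'd rather create the gateways with the CLI than the portal tutorial linked in the prerequisites, a rough sketch follows. The resource names and SKU here are illustrative assumptions; any route-based VPN type with a non-Basic SKU works:
+
+```azurecli-interactive
+# Hypothetical names; assumes the virtual network and public IP address already exist.
+# Gateway deployment can take 45 minutes or more.
+az network vnet-gateway create \
+  --name vpn-gateway-1 \
+  --resource-group test-rg \
+  --vnet vnet-1 \
+  --public-ip-address vpn-gateway-1-ip \
+  --gateway-type Vpn \
+  --vpn-type RouteBased \
+  --sku VpnGw2
+```
+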
+## Register an application
+
+To create a custom audience app ID value, which is specified when you configure your VPN gateway, you must register an application. For steps, see [Register an application](point-to-site-entra-register-custom-app.md#register-an-application).
+
+* The **Name** field is user-facing. Use something intuitive that describes the users or groups that are connecting via this custom application.
+* For the rest of the settings, use the settings shown in the article.
+
+## Add a scope
+
+Add a scope. Adding a scope is part of the sequence to configure permissions for users and groups. For steps, see [Expose an API and add a scope](point-to-site-entra-register-custom-app.md#expose-an-api-and-add-a-scope). Later, you assign users and groups permissions to this scope.
+
+* Use something intuitive for the **Scope Name** field, such as Marketing-VPN-Users. Fill out the rest of the fields as necessary.
+* For **State**, select **Enabled**.
+
+## Add the Azure VPN Client application
+
+Add the Azure VPN Client application **Client ID** and specify the **Authorized scope**. When you add the application, we recommend that you use the **Microsoft-registered** Azure VPN Client app ID for Azure Public, `c632b3df-fb67-4d84-bdcf-b95ad541b5c8`, when possible. This app value has global consent, which means you don't need to manually register it. For steps, see [Add the Azure VPN Client application](point-to-site-entra-register-custom-app.md#add-the-azure-vpn-client-application).
+
+After you add the Azure VPN Client application, go to the **Overview** page and copy and save the **Application (client) ID**. You'll need this information to configure your P2S VPN gateway.
+
+## Assign users and groups
+
+Assign permissions to the users and/or groups that connect to the gateway. If you're specifying a group, the user must be a direct member of the group. Nested groups aren't supported. The following steps use the portal; an optional CLI sketch appears after the list.
+
+1. Go to your Microsoft Entra ID and select **Enterprise applications**.
+1. From the list, locate the application you registered and click to open it.
+1. Expand **Manage**, then select **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**.
+1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](/entra/identity/enterprise-apps/application-properties#enabled-for-users-to-sign-in).
+1. If you've made changes, select **Save** at the top of the page.
+1. In the left pane, select **Users and groups**. On the **Users and groups** page, select **+ Add user/group** to open the **Add Assignment** page.
+1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**.
+1. After you finish selecting users and groups, select **Assign**.
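+
+As an optional alternative to the portal steps above, the following is a hedged sketch of the same assignment using Microsoft Graph through `az rest`. The application display name and user are placeholders, and the all-zeros `appRoleId` denotes the default access role for apps that don't define custom roles:
+
+```azurecli-interactive
+# Hypothetical example: assign a user to the enterprise application from the CLI.
+spId=$(az ad sp list --display-name "MarketingVPN" --query "[0].id" --output tsv)
+userId=$(az ad user show --id user@contoso.com --query id --output tsv)
+
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/appRoleAssignedTo" \
+  --body "{\"principalId\": \"$userId\", \"resourceId\": \"$spId\", \"appRoleId\": \"00000000-0000-0000-0000-000000000000\"}"
+```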
+
+## Configure a P2S VPN
+
+After you've completed the steps in the previous sections, continue to [Configure P2S VPN Gateway for Microsoft Entra ID authentication - Microsoft-registered app](point-to-site-entra-gateway.md).
+
+* When you configure each gateway, associate the appropriate custom audience App ID (a CLI sketch follows this list).
+* Download the Azure VPN Client configuration packages to configure the Azure VPN Client for the users that have permissions to connect to the specific gateway.
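+
+The linked article uses the portal. As a non-authoritative sketch, the equivalent gateway update from the CLI might look like the following; the gateway name and the placeholder IDs are assumptions:
+
+```azurecli-interactive
+# Hypothetical example; replace the placeholders with your tenant ID and custom app ID.
+az network vnet-gateway update \
+  --name vpn-gateway-1 \
+  --resource-group test-rg \
+  --vpn-auth-type AAD \
+  --aad-tenant https://login.microsoftonline.com/<tenant-id> \
+  --aad-audience <custom-app-id> \
+  --aad-issuer https://sts.windows.net/<tenant-id>/
+```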
+
+## Configure the Azure VPN Client
+
+Use the Azure VPN Client profile configuration package to configure the Azure VPN Client on each user's computer. Verify that the client profile corresponds to the P2S VPN gateway to which you want the user to connect.
+
+## Next steps
+
+* [Configure P2S VPN Gateway for Microsoft Entra ID authentication - Microsoft-registered app](point-to-site-entra-gateway.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](point-to-site-entra-vpn-client-windows.md).
+* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).