Updates from: 08/20/2024 01:06:50
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md
You should be familiar with the concept of AI face detection. If you aren't, see
The different face detection models are optimized for different tasks. See the following table for an overview of the differences. -
-| Model | Description | Performance notes | Attributes | Landmarks |
-|||-|-|--|
-|**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, glasses, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
-|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Doesn't return face attributes. | Doesn't return face landmarks. |
-|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask, blur, and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
-
+| Model | Description | Performance notes | Landmarks |
+|-|-|-|--|
+|**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns face landmarks if they're specified in the detect call. |
+|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Doesn't return face landmarks. |
+|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns face landmarks if they're specified in the detect call. |
+
+Attributes are a set of features that can optionally be detected if they're specified in the detect call:
+
+| Model | accessories | blur | exposure | glasses | headPose | mask | noise | occlusion | qualityForRecognition |
+|-|:--:|:-:|:--:|:-:|:--:|:-:|:--:|:-:|:-:|
+|**detection_01** | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ | ✔️ | ✔️ (for recognition_03 or 04) |
+|**detection_02** | | | | | | | | | |
+|**detection_03** | | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ | ✔️ (for recognition_03 or 04) |
The best way to compare the performance of the detection models is to use them on a sample dataset. We recommend calling the [Detect] API with each detection model on a variety of images, especially images that contain many faces or faces that are difficult to see. Pay attention to the number of faces that each model returns.
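As a hedged illustration, the following PowerShell sketch calls the REST Detect endpoint once per detection model and reports the face count for the same image; the endpoint, key, and image URL are placeholders you supply.

```powershell
# Hedged sketch: count faces returned by each detection model for one image.
# The endpoint, key, and image URL below are placeholders, not real values.
$endpoint = "https://<your-resource>.cognitiveservices.azure.com"
$key      = "<your-face-key>"
$imageUrl = "https://<your-storage>/group-photo.jpg"

foreach ($model in "detection_01", "detection_02", "detection_03") {
    $uri   = "$endpoint/face/v1.0/detect?detectionModel=$model&returnFaceId=false"
    $faces = Invoke-RestMethod -Method Post -Uri $uri `
        -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
        -ContentType "application/json" `
        -Body (@{ url = $imageUrl } | ConvertTo-Json)
    Write-Output ("{0}: {1} face(s) detected" -f $model, @($faces).Count)
}
```

Comparing the counts (and, where applicable, the returned landmarks and attributes) across models on the same images gives a quick sense of which model suits your workload.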
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/whats-new.md
Learn what's new in Azure AI Vision. Check this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## August 2024
+
+### New detectable Face attributes
+
+The glasses, occlusion, blur, and exposure attributes are available with the latest Detection 03 model. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+
+## May 2024
+
+### New Face SDK 1.0.0-beta.1 (breaking changes)
+
+The Face SDK was rewritten in version 1.0.0-beta.1 to better meet the guidelines and design principles of Azure SDKs. C#, Python, Java, and JavaScript are the supported languages. Follow the [QuickStart](./quickstarts-sdk/identity-client-library.md) to get started.
+ ## February 2024 #### Multimodal embeddings GA: new multi-language model
ai-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/region-support.md
Some summarization features are only available in limited regions. More regions
|East US |✅ |✅ |✅ |
|East US 2 |✅ |✅ |❌ |
|West US |✅ |✅ |❌ |
+|USNat West |✅ |✅ |❌ |
+|USNat East |✅ |✅ |❌ |
+|USSec West |✅ |✅ |❌ |
+|USSec East |✅ |✅ |❌ |
|UK South |✅ |✅ |❌ |
|Southeast Asia |✅ |✅ |❌ |
|Australia East |✅ |✅ |❌ |
ai-studio Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/architecture.md
The top level AI Studio resources (hub and project) are based on Azure Machine L
- **AI Studio hub**: The hub is the top-level resource in AI Studio. The Azure resource provider for a hub is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Hub`. It provides the following features: - Security configuration including a managed network that spans projects and model endpoints.
- - Compute resources for interactive development, finetuning, open source, and serverless model deployments.
+ - Compute resources for interactive development, fine-tuning, open source, and serverless model deployments.
- Connections to other Azure services such as Azure OpenAI, Azure AI services, and Azure AI Search. Hub-scoped connections are shared with projects created from the hub. - Project management. A hub can have multiple child projects. - An associated Azure storage account for data upload and artifact storage.
Azure monitor and Azure Log Analytics provide monitoring and logging for the und
For more information on price and quota, use the following articles: - [Plan and manage costs](../how-to/costs-plan-manage.md)-- [Commitment tier pricing](../how-to/commitment-tier.md) - [Quota management](../how-to/quota.md) ## Next steps
ai-studio Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md
- Title: Commitment tier pricing for Azure AI-
-description: Learn how to sign up for commitment tier pricing instead of pay-as-you-go pricing.
---
- - ignite-2023
- - build-2024
- Previously updated : 5/21/2024-----
-# Commitment tier pricing for Azure AI Studio
--
-Azure AI Studio offers commitment tier pricing, each offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the Azure AI Studio hubs and features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload.
-
-## Purchase a commitment plan by updating your Azure resource
-
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure subscription.
-1. Select the existing Azure resource you want to purchase a commitment plan for.
-1. From the collapsible left menu, select **Resource Management** > **Commitment tier pricing**.
-1. Select **Change** to view the available commitments for hosted API and container usage. Choose a commitment plan for one or more of the following offerings:
- * **Web**: web-based APIs, where you send data to Azure for processing.
- * **Connected container**: Docker containers that enable you to [deploy Azure AI services on premises](../../ai-services/cognitive-services-container-support.md), and maintain an internet connection for billing and metering.
-
-1. In the window that appears, select both a **Tier** and **Auto-renewal** option.
-
- * **Commitment tier** - The commitment tier for the feature. The commitment tier is enabled immediately when you select **Purchase** and you're charged the commitment amount on a pro-rated basis.
-
- * **Auto-renewal** - Choose how you want to renew, change, or cancel the current commitment plan starting with the next billing cycle. If you decide to autorenew, the **Auto-renewal date** is the date (in your local timezone) when you'll be charged for the next billing cycle. This date coincides with the start of the calendar month.
-
- > [!CAUTION]
- > Once you select **Purchase** you will be charged for the tier you select. Once purchased, the commitment plan is non-refundable.
- >
- > Commitment plans are charged monthly, except the first month upon purchase which is pro-rated (cost and quota) based on the number of days remaining in that month. For the subsequent months, the charge is incurred on the first day of the month.
-
-## Overage pricing
-
-If you use the resource above the quota provided, you're charged for the extra usage as per the overage amount mentioned in the commitment tier.
-
-## Purchase a different commitment plan
-
-The commitment plans have a calendar month commitment period. You can purchase a commitment plan at any time from the default pay-as-you-go pricing model. When you purchase a plan, you're charged a pro-rated price for the remaining month. During the commitment period, you can't change the commitment plan for the current month. However, you can choose a different commitment plan for the next calendar month. The billing for the next month would happen on the first day of the next month.
-
-## End a commitment plan
-
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan, and not be charged for the following month.
-
-## Purchase a commitment tier pricing plan for disconnected containers
-
-Commitment plans for disconnected containers have a calendar year commitment period. These plans are different than the web and connected container commitment plans. When you purchase a commitment plan, you're charged the full price immediately. During the commitment period you can't change your commitment plan. However, you can purchase more units at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
-
-You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
-
-## Overage pricing for disconnected containers
-
-To use a disconnected container beyond the quota initially purchased with your disconnected container commitment plan, you can purchase more quota by updating your commitment plan at any time.
-
-To purchase more quota, go to your resource in Azure portal and adjust the "unit count" of your disconnected container commitment plan using the slider. This adds more monthly quota and you're charged a pro-rated price based on the remaining days left in the current billing cycle.
-
-## See also
-
-* [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
ai-studio Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Cohere Command chat models + In this article, you learn about Cohere Command chat models and how to use them. The Cohere family of models includes various models optimized for different use cases, including chat completions, embeddings, and rerank. Cohere models are optimized for various use cases that include reasoning, summarization, and question answering.
ai-studio Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-embed.md
zone_pivot_groups: azure-ai-model-catalog-samples-embeddings
# How to use Cohere Embed V3 models with Azure AI Studio + In this article, you learn about Cohere Embed V3 models and how to use them with Azure AI Studio. The Cohere family of models includes various models optimized for different use cases, including chat completions, embeddings, and rerank. Cohere models are optimized for various use cases that include reasoning, summarization, and question answering.
ai-studio Deploy Models Jais https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jais.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Jais chat models + In this article, you learn about Jais chat models and how to use them. JAIS 30b Chat is an autoregressive bi-lingual LLM for **Arabic** & **English**. The tuned versions use supervised fine-tuning (SFT). The model is fine-tuned with both Arabic and English prompt-response pairs. The fine-tuning datasets included a wide range of instructional data across various domains. The model covers a wide range of common tasks including question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, the Core42 team developed an in-house Arabic dataset and translated some open-source English instructions into Arabic.
ai-studio Deploy Models Jamba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jamba.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Jamba-Instruct chat models + In this article, you learn about Jamba-Instruct chat models and how to use them. The Jamba-Instruct model is AI21's production-grade Mamba-based large language model (LLM) which uses AI21's hybrid Mamba-Transformer architecture. It's an instruction-tuned version of AI21's hybrid structured state space model (SSM) transformer Jamba model. The Jamba-Instruct model is built for reliable commercial use with respect to quality and performance.
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Meta Llama chat models + In this article, you learn about Meta Llama chat models and how to use them. Meta Llama 2 and 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF).
ai-studio Deploy Models Mistral Nemo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-nemo.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Mistral Nemo chat model + In this article, you learn about the Mistral Nemo chat model and how to use it. Mistral AI offers two categories of models. Premium models including [Mistral Large and Mistral Small](deploy-models-mistral.md), available as serverless APIs with pay-as-you-go token-based billing. Open models including [Mistral Nemo](deploy-models-mistral-nemo.md), [Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01](deploy-models-mistral-open.md); available to also download and run on self-hosted managed endpoints.
ai-studio Deploy Models Mistral Open https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-open.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Mistral-7B and Mixtral chat models + In this article, you learn about Mistral-7B and Mixtral chat models and how to use them. Mistral AI offers two categories of models. Premium models including [Mistral Large and Mistral Small](deploy-models-mistral.md), available as serverless APIs with pay-as-you-go token-based billing. Open models including [Mistral Nemo](deploy-models-mistral-nemo.md), [Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01](deploy-models-mistral-open.md); available to also download and run on self-hosted managed endpoints.
ai-studio Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md
zone_pivot_groups: azure-ai-model-catalog-samples-chat
# How to use Mistral premium chat models + In this article, you learn about Mistral premium chat models and how to use them. Mistral AI offers two categories of models. Premium models including [Mistral Large and Mistral Small](deploy-models-mistral.md), available as serverless APIs with pay-as-you-go token-based billing. Open models including [Mistral Nemo](deploy-models-mistral-nemo.md), [Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01](deploy-models-mistral-open.md); available to also download and run on self-hosted managed endpoints.
api-management Genai Gateway Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/genai-gateway-capabilities.md
+
+ Title: GenAI gateway capabilities in Azure API Management
+description: Learn about Azure API Management's policies and features to manage generative AI APIs, such as token rate limiting, load balancing, and semantic caching.
++++++ Last updated : 08/13/2024+++
+# Overview of generative AI gateway capabilities in Azure API Management
++
+This article introduces capabilities in Azure API Management to help you manage generative AI APIs, such as those provided by [Azure OpenAI Service](../ai-services/openai/overview.md). Azure API Management provides a range of policies, metrics, and other features to enhance security, performance, and reliability for the APIs serving your intelligent apps. Collectively, these features are called *generative AI (GenAI) gateway capabilities* for your generative AI APIs.
+
+> [!NOTE]
+> * This article focuses on capabilities to manage APIs exposed by Azure OpenAI Service. Many of the GenAI gateway capabilities apply to other large language model (LLM) APIs, including those available through [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md).
+> * Generative AI gateway capabilities are features of API Management's existing API gateway, not a separate API gateway. For more information on API Management, see [Azure API Management overview](api-management-key-concepts.md).
+
+## Challenges in managing generative AI APIs
+
+One of the main resources you have in generative AI services is *tokens*. Azure OpenAI Service assigns quota for your model deployments, expressed in tokens-per-minute (TPM), which is then distributed across your model consumers - for example, different applications, developer teams, and departments within the company.
+
+Azure makes it easy to connect a single app to Azure OpenAI Service: you can connect directly using an API key with a TPM limit configured directly on the model deployment level. However, when you start growing your application portfolio, you're presented with multiple apps calling single or even multiple Azure OpenAI Service endpoints deployed as pay-as-you-go or [Provisioned Throughput Units](../ai-services/openai/concepts/provisioned-throughput.md) (PTU) instances. That comes with certain challenges:
+
+* How is token usage tracked across multiple applications? Can cross-charges be calculated for multiple applications/teams that use Azure OpenAI Service models?
+* How do you ensure that a single app doesn't consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI Service models?
+* How is the API key securely distributed across multiple applications?
+* How is load distributed across multiple Azure OpenAI endpoints? Can you ensure that the committed capacity in PTUs is exhausted before falling back to pay-as-you-go instances?
+
+The rest of this article describes how Azure API Management can help you address these challenges.
+
+## Import Azure OpenAI Service resource as an API
+
+[Import an API from an Azure OpenAI Service endpoint](azure-openai-api-from-specification.md) to Azure API Management using a single-click experience. API Management streamlines the onboarding process by automatically importing the OpenAPI schema for the Azure OpenAI API and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Within the same user-friendly experience, you can preconfigure policies for [token limits](#token-limit-policy) and [emitting token metrics](#emit-token-metric-policy).
++
+## Token limit policy
+
+Configure the [Azure OpenAI token limit policy](azure-openai-token-limit-policy.md) to manage and enforce limits per API consumer based on the usage of Azure OpenAI Service tokens. With this policy you can set limits, expressed in tokens-per-minute (TPM).
++
+This policy provides flexibility to assign token-based limits on any counter key, such as subscription key, originating IP address, or an arbitrary key defined through a policy expression. The policy also enables precalculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI Service backend if the prompt already exceeds the limit.
+
+The following basic example demonstrates how to set a TPM limit of 500 per subscription key:
+
+```xml
+<azure-openai-token-limit counter-key="@(context.Subscription.Id)"
+ tokens-per-minute="500" estimate-prompt-tokens="false" remaining-tokens-variable-name="remainingTokens">
+</azure-openai-token-limit>
+```
+
+> [!TIP]
+> To manage and enforce token limits for LLM APIs available through the Azure AI Model Inference API, API Management provides the equivalent [llm-token-limit](llm-token-limit-policy.md) policy.
++
+## Emit token metric policy
+
+The [Azure OpenAI emit token metric](azure-openai-emit-token-metric-policy.md) policy sends metrics to Application Insights about consumption of LLM tokens through Azure OpenAI Service APIs. The policy helps provide an overview of the utilization of Azure OpenAI Service models across multiple applications or API consumers. This policy could be useful for chargeback scenarios, monitoring, and capacity planning.
++
+This policy captures prompt, completions, and total token usage metrics and sends them to an Application Insights namespace of your choice. Moreover, you can configure or select from predefined dimensions to split token usage metrics, so you can analyze metrics by subscription ID, IP address, or a custom dimension of your choice.
+
+For example, the following policy sends metrics to Application Insights split by client IP address, API, and user:
+
+```xml
+<azure-openai-emit-token-metric namespace="openai">
+ <dimension name="Client IP" value="@(context.Request.IpAddress)" />
+ <dimension name="API ID" value="@(context.Api.Id)" />
+ <dimension name="User ID" value="@(context.Request.Headers.GetValueOrDefault("x-user-id", "N/A"))" />
+</azure-openai-emit-token-metric>
+```
+
+> [!TIP]
+> To send metrics for LLM APIs available through the Azure AI Model Inference API, API Management provides the equivalent [llm-emit-token-metric](llm-emit-token-metric-policy.md) policy.
+
+## Backend load balancer and circuit breaker
+
+One of the challenges when building intelligent applications is to ensure that the applications are resilient to backend failures and can handle high loads. By configuring your Azure OpenAI Service endpoints using [backends](backends.md) in Azure API Management, you can balance the load across them. You can also define circuit breaker rules to stop forwarding requests to the Azure OpenAI Service backends if they're not responsive.
+
+The backend [load balancer](backends.md#backends-in-api-management) supports round-robin, weighted, and priority-based load balancing, giving you flexibility to define a load distribution strategy that meets your specific requirements. For example, define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs.
++
+The backend [circuit breaker](backends.md#circuit-breaker) features dynamic trip duration, applying values from the Retry-After header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends.
++
+## Semantic caching policy
+
+Configure [Azure OpenAI semantic caching](azure-openai-enable-semantic-caching.md) policies to optimize token consumption by using semantic caching, which stores completions for prompts with similar meaning.
++
+In API Management, enable semantic caching by using Azure Redis Enterprise or another [external cache](api-management-howto-cache-external.md) compatible with RediSearch and onboarded to Azure API Management. By using the Azure OpenAI Service Embeddings API, the [azure-openai-semantic-cache-store](azure-openai-semantic-cache-store-policy.md) and [azure-openai-semantic-cache-lookup](azure-openai-semantic-cache-lookup-policy.md) policies store and retrieve semantically similar prompt completions from the cache. This approach ensures reuse of completions, resulting in reduced token consumption and improved response performance.
+
+> [!TIP]
+> To enable semantic caching for LLM APIs available through the Azure AI Model Inference API, API Management provides the equivalent [llm-semantic-cache-store-policy](llm-semantic-cache-store-policy.md) and [llm-semantic-cache-lookup-policy](llm-semantic-cache-lookup-policy.md) policies.
++
+## Labs and samples
+
+* [Labs for the GenAI gateway capabilities of Azure API Management](https://github.com/Azure-Samples/AI-Gateway)
+* [Azure API Management (APIM) - Azure OpenAI Sample (Node.js)](https://github.com/Azure-Samples/genai-gateway-apim)
+* [Python sample code for using Azure OpenAI with API Management](https://github.com/Azure-Samples/openai-apim-lb/blob/main/docs/sample-code.md)
+* [AI hub gateway landing zone accelerator](https://github.com/Azure-Samples/ai-hub-gateway-solution-accelerator)
+
+## Architecture and design considerations
+
+* [GenAI gateway reference architecture using API Management](/ai/playbook/technology-guidance/generative-ai/dev-starters/genai-gateway/reference-architectures/apim-based)
+* [Designing and implementing a gateway solution with Azure OpenAI resources](/ai/playbook/technology-guidance/generative-ai/dev-starters/genai-gateway/)
+* [Use a gateway in front of multiple Azure OpenAI deployments or instances](/azure/architecture/ai-ml/guide/azure-openai-gateway-multi-backend)
+
+## Related content
+
+* [Blog: Introducing GenAI capabilities in Azure API Management](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/introducing-genai-gateway-capabilities-in-azure-api-management/ba-p/4146525)
+* [Blog: Integrating Azure Content Safety with API Management for Azure OpenAI Endpoints](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/integrating-azure-content-safety-with-api-management-for-azure/ba-p/4202505)
+* [Smart load balancing for OpenAI endpoints and Azure API Management](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/smart-load-balancing-for-openai-endpoints-and-azure-api/ba-p/3991616)
+* [Authenticate and authorize access to Azure OpenAI APIs using Azure API Management](api-management-authenticate-authorize-azure-openai.md)
app-service App Service Undelete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-undelete.md
The inputs for command are:
- **TargetName**: Target app for the deleted app to be restored to - **TargetAppServicePlanName**: App Service plan linked to the app - **Name**: Name for the app, should be globally unique.-- **ResourceGroupName**: Original resource group for the deleted app
+- **ResourceGroupName**: Original resource group for the deleted app. You can get it by running `Get-AzDeletedWebApp -Name <your_deleted_app> -Location <your_deleted_app_location>`
- **Slot**: Slot for the deleted app - **RestoreContentOnly**: By default `Restore-AzDeletedWebApp` restores both your app configuration as well any content. If you want to only restore content, you can use the `-RestoreContentOnly` flag with this commandlet.
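A hedged sketch of these inputs in use follows; all angle-bracket values are placeholders, and the `ResourceGroupName` property read from the `Get-AzDeletedWebApp` output is an assumption to verify in your environment.

```powershell
# Hedged sketch: find the deleted app, then restore it to a target app.
# All angle-bracket values are placeholders; the ResourceGroupName property
# read from the Get-AzDeletedWebApp output is an assumption to verify.
$deleted = Get-AzDeletedWebApp -Name "<your_deleted_app>" -Location "<your_deleted_app_location>"

$restoreParams = @{
    ResourceGroupName        = $deleted.ResourceGroupName    # original resource group
    Name                     = "<your_deleted_app>"
    TargetAppServicePlanName = "<target_app_service_plan>"
    TargetName               = "<target_app_name>"
    RestoreContentOnly       = $true                          # optional: content only
}
Restore-AzDeletedWebApp @restoreParams
```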
app-service Configure Language Java Deploy Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md
You don't need to incrementally add instances (scaling out), you can add multipl
<a id="jboss-eap-hardware-options"></a>
-JBoss EAP is only available on the Premium v3 and Isolated v2 App Service Plan types. Customers that created a JBoss EAP site on a different tier during the public preview should scale up to Premium or Isolated hardware tier to avoid unexpected behavior.
+JBoss EAP is available in the following pricing tiers: **F1**,
+**P0v3**, **P1mv3**, **P2mv3**, **P3mv3**, **P4mv3**, and **P5mv3**.
::: zone-end
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Here's an example procedure for checking DNS resolution:
1. Run the [kubectl exec](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec) command to connect to the pod by using PowerShell: ```bash
- kubectl exec -it dnsutil-win powershell
+ kubectl exec -it dnsutil-win -- powershell
``` 1. Run the [Resolve-DnsName](/powershell/module/dnsclient/resolve-dnsname) cmdlet in PowerShell to check whether the DNS resolution is working for the endpoint:
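For example, a minimal check from that PowerShell session might look like the following sketch; the endpoint shown is only an example, so substitute the endpoint you're diagnosing.

```powershell
# Run inside the dnsutil-win pod's PowerShell session.
# "management.azure.com" is an example endpoint; use the one you're diagnosing.
Resolve-DnsName -Name "management.azure.com"
```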
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
For example, the `connection` property for an Azure Blob trigger definition migh
### Configure an identity-based connection
-Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you're connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
-
+Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the runtime version and the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you're connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
> [!NOTE] > When running in a Consumption or Elastic Premium plan, your app uses the [`WEBSITE_AZUREFILESCONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring) and [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) settings when connecting to Azure Files on the storage account used by your function app. Azure Files doesn't support using managed identity when accessing the file share. For more information, see [Azure Files supported authentication scenarios](../storage/files/storage-files-active-directory-overview.md#supported-authentication-scenarios)
+Identity-based connections are supported only on Functions 4.x. If you're using version 1.x, you must first [migrate to version 4.x](./migrate-version-1-version-4.md).
The following components support identity-based connections:
An identity-based connection for an Azure service accepts the following common p
Other options may be supported for a given connection type. Refer to the documentation for the component making the connection.
+##### Azure SDK Environment Variables
+
+> [!CAUTION]
+> Use of the Azure SDK's [`EnvironmentCredential`][environment-credential] environment variables is not recommended due to the potentially unintentional impact on other connections. They also are not fully supported when deployed to Azure Functions.
+
+The environment variables associated with the Azure SDK's [`EnvironmentCredential`][environment-credential] can also be set, but these are not processed by the Functions service for scaling in Consumption plans. These environment variables are not specific to any one connection and apply as a default unless the corresponding property is set for a given connection. For example, if `AZURE_CLIENT_ID` is set, it's used as if `<CONNECTION_NAME_PREFIX>__clientId` had been configured. Explicitly setting `<CONNECTION_NAME_PREFIX>__clientId` overrides this default.
+
+[environment-credential]: /dotnet/api/azure.identity.environmentcredential
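As a hypothetical sketch of that precedence, the following sets connection-specific properties for a connection prefix named `MyStorage`; the prefix, app, and identity values are placeholders, and the explicit `clientId` would take precedence over any `AZURE_CLIENT_ID` default.

```powershell
# Hypothetical sketch: connection-specific settings for the "MyStorage" prefix.
# The function app, resource group, and identity values are placeholders.
Update-AzFunctionAppSetting -Name "<function-app>" -ResourceGroupName "<resource-group>" -AppSetting @{
    "MyStorage__accountName" = "<storage-account-name>"
    "MyStorage__credential"  = "managedidentity"
    "MyStorage__clientId"    = "<user-assigned-identity-client-id>"  # overrides AZURE_CLIENT_ID for this connection
}
```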
+ ##### Local development with identity-based connections > [!NOTE]
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
N/A
## Update > [!NOTE]
-> The recommendation is to enable [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) which may take **up to 5 weeks** after a new extension version is released for it to update installed extensions to the released (latest) version across all regions. Upgrades are issued in batches, so you may see some of your virtual machines, scale-sets or Arc-enabled servers get upgraded before others. If you need to upgrade an extension immediately, you may use the manual instructions below.
-
+> The recommendation is to enable [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) to update installed extensions to the released (latest) version across all regions. Upgrades are issued in batches, so you may see some of your virtual machines, scale-sets or Arc-enabled servers get upgraded before others. If you need to upgrade an extension immediately, you may use the manual instructions below.
#### [Portal](#tab/azure-portal) To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
function GetArcServersWithLogAnalyticsAgentExtensionInstalled {
$serverParallelThrottleLimit = $serversCount }
+ $serverGroups = @()
+ if($serversCount -eq 1) { $serverGroups += ,($serverList[0])
The script reports the total VMs, VMSSs, or Arc enabled servers seen in the subsc
## Step 4 Uninstall inventory This script iterates through the list of VM, Virtual Machine Scale Sets, and Arc enabled servers and uninstalls the legacy agent. If the VM, Virtual Machine Scale Sets, or Arc enabled server is not running you won't be able to remove the agent. ``` PowerShell
- .\MMAUnistallUtilityScript.ps1 UninstallMMAExtension
+ .\MMAUnistallUtilityScript.ps1 UninstallExtension
``` Once the script is complete you'll be able to see the removal status for your VM, Virtual Machine Scale Sets, and Arc enabled servers in the MMAInventory.csv file.
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
There are two methods to instrument your application:
**Autoinstrumentation** enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. See [Autoinstrumentation supported environments and languages](codeless-overview.md). When autoinstrumentation is available, it's the easiest way to enable Azure Monitor Application Insights.
-> [!TIP]
-> Currently, [Microsoft Entra authentication](azure-ad-authentication.md) is not available with autoinstrumentation. If you require Microsoft Entra auth, you'll need to use manual instrumentation.
- **Manual instrumentation** is coding against the Application Insights or OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. This means that you have to manage the updates to the latest package version by yourself. You can use this option if you need to make custom dependency calls or API calls that are not captured by default with autoinstrumentation. There are two options for manual instrumentation: - [Application Insights SDKs](asp-net-core.md)
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
description: Learn how to enable Profiler on your ASP.NET Core web application h
ms.devlang: csharp Previously updated : 09/22/2023 Last updated : 08/19/2024 # Customer Intent: As a .NET developer, I'd like to enable Application Insights Profiler for my .NET web application hosted in Linux
By using Profiler, you can track how much time is spent in each method of your live ASP.NET Core web apps that are hosted in Linux on Azure App Service. This article focuses on web apps hosted in Linux. You can also experiment by using Linux, Windows, and Mac development environments. In this article, you:--- Set up and deploy an ASP.NET Core web application hosted on Linux.-- Add Application Insights Profiler to the ASP.NET Core web application.
+> [!div class="checklist"]
+> - Set up and deploy an ASP.NET Core web application hosted on Linux.
+> - Add Application Insights Profiler to the ASP.NET Core web application.
## Prerequisites
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
Title: Profile Azure Functions app with Application Insights Profiler
+ Title: Enable Profiler for Azure Functions apps
description: Enable Application Insights Profiler for Azure Functions app. ms.contributor: charles.weininger- Previously updated : 09/22/2023+ Last updated : 08/16/2024
-# Profile live Azure Functions app with Application Insights
+# Enable Profiler for Azure Functions apps
In this article, you'll use the Azure portal to: - View the current app settings for your Functions app.
From your Functions app overview page in the Azure portal:
The app settings now show up in the table:
- :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration pane.":::
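A hedged PowerShell alternative to adding these values in the portal might look like the following sketch; the setting names and values shown are the ones commonly documented for enabling Profiler on Functions apps, and the app and resource group names are placeholders, so confirm them against the article's table.

```powershell
# Hedged sketch: set the Profiler-related app settings with Az PowerShell
# instead of the portal. App and resource group names are placeholders;
# confirm the setting names and values against the article's table.
Update-AzFunctionAppSetting -Name "<function-app>" -ResourceGroupName "<resource-group>" -AppSetting @{
    "APPINSIGHTS_PROFILERFEATURE_VERSION"  = "1.0.0"
    "DiagnosticServices_EXTENSION_VERSION" = "~3"
}
```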
> [!NOTE]
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
reviewer: cweining- Previously updated : 09/22/2023+ Last updated : 08/19/2024
When you use [Application Insights Profiler](./profiler-overview.md) or [Snapsho
- Processing and analysis. - Encryption-at-rest and lifetime management policies.
-Meanwhile, when you configure your own storage account (BYOS), artifacts are uploaded into a storage account that only you control and cover the cost for:
+Meanwhile, when you "bring your own storage" (BYOS), artifacts are uploaded into a storage account that only you control, and you cover the cost for:
- The encryption-at-rest policy and the Lifetime management policy. - Network access.
In this guide, you learn how to:
## Prerequisites -- Verify you've created your storage account in the same location as your Application Insights resource.-- If you've enabled [Private Link](../logs/private-link-security.md), allow connection to our Trusted Microsoft Service from your virtual network.
+- Verify you created your storage account in the same location as your Application Insights resource.
+- If you enabled [Private Link](../logs/private-link-security.md), allow connection to our Trusted Microsoft Service from your virtual network.
## Grant Diagnostic Services access to your storage account
A BYOS storage account is linked to an Application Insights resource. Start by g
| Assign access to | User, group, or service principal | | Members | Diagnostic Services Trusted Storage Access |
- :::image type="content" source="media/profiler-bring-your-own-storage/add-role-assignment-page.png" alt-text="Screenshot that shows the Add role assignment page in the Azure portal.":::
+ :::image type="content" source="media/profiler-bring-your-own-storage/add-role-assignment-page.png" alt-text="Screenshot that shows the role assignment page in the Azure portal.":::
Once assigned, you can see the role under the **Role assignments** section. :::image type="content" source="media/profiler-bring-your-own-storage/figure-11.png" alt-text="Screenshot that shows the IAM screen after Role assignments.":::
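If you prefer scripting the assignment rather than using the portal, a hedged PowerShell sketch follows; the role name shown (Storage Blob Data Contributor) is an assumption to verify against the article's role table, and the scope values are placeholders.

```powershell
# Hedged sketch: assign a storage role to the Diagnostic Services principal.
# The role name is an assumption to verify; the scope values are placeholders.
$principal = Get-AzADServicePrincipal -DisplayName "Diagnostic Services Trusted Storage Access"

New-AzRoleAssignment -ObjectId $principal.Id `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```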
Before you begin, [install the Azure CLI](/cli/azure/install-azure-cli).
## Troubleshooting
-This section offers troubleshooting tips for common issues in configuring BYOS.
+Troubleshoot common issues in configuring BYOS.
- For general Profiler troubleshooting, see the [Profiler troubleshooting documentation](profiler-troubleshooting.md). - For general Snapshot Debugger troubleshooting, see the [Snapshot Debugger troubleshooting documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot). ### Scenario: Template schema '{schema_uri}' isn't supported
-You've received an error similar to the following example:
+You received an error similar to the following example:
```powershell New-AzResourceGroupDeployment : 11:53:49 AM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'Template schema
New-AzResourceGroupDeployment : 11:53:49 AM - Error: Code=InvalidTemplate; Messa
### Scenario: No registered resource provider found for location '{location}'
-You've received an error similar to the following example:
+You received an error similar to the following example:
```powershell New-AzResourceGroupDeployment : 6:18:03 PM - Resource microsoft.insights/components 'byos-test-westus2-ai' failed with message '{
australiasoutheast'."
### Scenario: Storage account location should match Application Insights component location
-You've received an error similar to the following example:
+You received an error similar to the following example:
```powershell New-AzResourceGroupDeployment : 1:01:12 PM - Resource microsoft.insights/components/linkedStorageAccounts 'byos-test-centralus-ai/serviceprofiler' failed with message '{
Make sure that the location of the Application Insights resource is the same as
This section provides answers to common questions about configuring BYOS for Profiler and Snapshot Debugger.
-### If I've enabled Profiler/Snapshot Debugger and BYOS, is my data migrated into my storage account?
+### If I enabled Profiler/Snapshot Debugger and BYOS, is my data migrated into my storage account?
No, it won't.
This section provides answers to common questions about configuring BYOS for Pro
Yes, it's possible.
-### If I've enabled BYOS, can I go back to using Diagnostic Services storage accounts to store my collected data?
+### If I enabled BYOS, can I go back to using Diagnostic Services storage accounts to store my collected data?
Yes, you can, but we don't currently support data migration from your BYOS.
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
Title: Enable Profiler for Azure Cloud Services | Microsoft Docs
+ Title: Enable Profiler for Azure Cloud Services
description: Profile Azure Cloud Services in real time with Application Insights Profiler.-+ Previously updated : 09/22/2023 Last updated : 08/16/2024 # Enable Profiler for Azure Cloud Services
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
Title: Profile Azure containers with Application Insights Profiler
description: Learn how to enable the Application Insights Profiler for your ASP.NET Core application running in Azure containers. ms.contributor: charles.weininger Previously updated : 09/22/2023 Last updated : 08/19/2024 # Customer Intent: As a .NET developer, I'd like to learn how to enable Profiler on my ASP.NET Core application running in my container.
You can enable the Application Insights Profiler for ASP.NET Core application ru
- Set up the Application Insights instrumentation key. In this article, you learn about the various ways that you can:--- Install the NuGet package in the project.-- Set the environment variable via the orchestrator (like Kubernetes).-- Learn security considerations around production deployment, like protecting your Application Insights instrumentation key.
+> [!div class="checklist"]
+> - Install the NuGet package in the project.
+> - Set the environment variable via the orchestrator (like Kubernetes).
+> - Learn security considerations around production deployment, like protecting your Application Insights instrumentation key.
## Prerequisites
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
Title: Enable Profiler for Azure Service Fabric applications description: Profile live Azure Service Fabric apps with Application Insights.-+ Previously updated : 09/22/2023 Last updated : 08/16/2024 # Enable Profiler for Azure Service Fabric applications Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template (ARM template) for your Azure Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json).
-In this article, you:
--- Add the Application Insights Profiler property to your ARM template.-- Deploy your Service Fabric cluster with the Application Insights Profiler instrumentation key.-- Enable Application Insights on your Service Fabric application.-- Redeploy your Service Fabric cluster to enable Profiler.
+In this guide, you learn how to:
+> [!div class="checklist"]
+> - Add the Application Insights Profiler property to your ARM template.
+> - Deploy your Service Fabric cluster with the Application Insights Profiler instrumentation key.
+> - Enable Application Insights on your Service Fabric application.
+> - Redeploy your Service Fabric cluster to enable Profiler.
## Prerequisites
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
Title: Write code to track requests with Application Insights | Microsoft Docs description: Write code to track requests with Application Insights so you can get profiles for your requests.-+ Previously updated : 09/22/2023 Last updated : 08/19/2024
For other applications (like Azure Cloud Services worker roles and Azure Service
To manually track requests:
- 1. Early in the application lifetime, add the following code:
+1. Early in the application lifetime, add the following code:
- ```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- ...
- // Replace with your own Application Insights instrumentation key.
- TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";
- ```
+ ```csharp
+ using Microsoft.ApplicationInsights.Extensibility;
+ ...
+ // Replace with your own Application Insights instrumentation key.
+ TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";
+ ```
- For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md).
+ For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md).
- 1. For any piece of code that you want to instrument, add a `StartOperation<RequestTelemetry>` **using** statement around it, as shown in the following example:
+1. For any piece of code that you want to instrument, add a `StartOperation<RequestTelemetry>` **using** statement around it, as shown in the following example:
- ```csharp
- using Microsoft.ApplicationInsights;
- using Microsoft.ApplicationInsights.DataContracts;
- ...
- var client = new TelemetryClient();
- ...
- using (var operation = client.StartOperation<RequestTelemetry>("Insert_Your_Custom_Event_Unique_Name"))
- {
- // ... Code I want to profile.
- }
- ```
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.DataContracts;
+ ...
+ var client = new TelemetryClient();
+ ...
+ using (var operation = client.StartOperation<RequestTelemetry>("Insert_Your_Custom_Event_Unique_Name"))
+ {
+ // ... Code I want to profile.
+ }
+ ```
- Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
+1. Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
- ```csharp
- using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
- {
- try
- {
- ProductDetail details = new ProductDetail() { Id = productId };
- getDetailsOperation.Telemetry.Properties["ProductId"] = productId.ToString();
-
- // By using DependencyTelemetry, 'GetProductPrice' is correctly linked as part of the 'GetProductDetails' request.
- using (var getPriceOperation = client.StartOperation<DependencyTelemetry>("GetProductPrice"))
- {
- double price = await _priceDataBase.GetAsync(productId);
- if (IsTooCheap(price))
- {
- throw new PriceTooLowException(productId);
- }
- details.Price = price;
- }
-
- // Similarly, note how 'GetProductReviews' doesn't establish another RequestTelemetry.
- using (var getReviewsOperation = client.StartOperation<DependencyTelemetry>("GetProductReviews"))
- {
- details.Reviews = await _reviewDataBase.GetAsync(productId);
- }
-
- getDetailsOperation.Telemetry.Success = true;
- return details;
- }
- catch(Exception ex)
- {
- getDetailsOperation.Telemetry.Success = false;
-
- // This exception gets linked to the 'GetProductDetails' request telemetry.
- client.TrackException(ex);
- throw;
- }
- }
- ```
+ ```csharp
+    using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
+ {
+ try
+ {
+ ProductDetail details = new ProductDetail() { Id = productId };
+ getDetailsOperation.Telemetry.Properties["ProductId"] = productId.ToString();
+
+ // By using DependencyTelemetry, 'GetProductPrice' is correctly linked as part of the 'GetProductDetails' request.
+ using (var getPriceOperation = client.StartOperation<DependencyTelemetry>("GetProductPrice"))
+ {
+ double price = await _priceDataBase.GetAsync(productId);
+ if (IsTooCheap(price))
+ {
+ throw new PriceTooLowException(productId);
+ }
+ details.Price = price;
+ }
+
+ // Similarly, note how 'GetProductReviews' doesn't establish another RequestTelemetry.
+ using (var getReviewsOperation = client.StartOperation<DependencyTelemetry>("GetProductReviews"))
+ {
+ details.Reviews = await _reviewDataBase.GetAsync(productId);
+ }
+
+ getDetailsOperation.Telemetry.Success = true;
+ return details;
+ }
+ catch(Exception ex)
+ {
+ getDetailsOperation.Telemetry.Success = false;
+
+ // This exception gets linked to the 'GetProductDetails' request telemetry.
+ client.TrackException(ex);
+ throw;
+ }
+ }
+ ```
[!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
Title: Troubleshoot Application Insights Profiler description: Walk through troubleshooting steps and information to enable and use Application Insights Profiler. Previously updated : 07/10/2023 Last updated : 08/19/2024
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
Title: Enable Profiler for web apps on an Azure virtual machine description: Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler- Previously updated : 09/22/2023+ Last updated : 08/19/2024
In this article, you learn how to run Application Insights Profiler on your Azur
- PowerShell - Azure Resource Explorer
-With any of these methods, you:
+Select your preferred method tab.
-- Configure the Azure Diagnostics extension to run Profiler.-- Install the Application Insights SDK on a VM.-- Deploy your application.-- View Profiler traces via the Application Insights instance in the Azure portal.
+In this guide, you learn how to:
+> [!div class="checklist"]
+> - Configure the Azure Diagnostics extension to run Profiler.
+> - Install the Application Insights SDK on a VM.
+> - Deploy your application.
+> - View Profiler traces via the Application Insights instance in the Azure portal.
## Prerequisites
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
Title: Enable Profiler for Azure App Service apps | Microsoft Docs description: Profile live apps on Azure App Service with Application Insights Profiler.-+ Last updated 08/15/2024 # Enable Profiler for Azure App Service apps
-[Application Insights Profiler](./profiler-overview.md) is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher. Follow these steps, even if you included the Application Insights SDK in your application at build time.
+[Application Insights Profiler](./profiler-overview.md) is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher.
Codeless installation of Application Insights Profiler: - Follows [the .NET Core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps
## Enable Application Insights and Profiler
-The following sections show you how to enable Application Insights for the same subscription or different subscriptions.
+You can enable Profiler either when:
+- [Your Application Insights resource and App Service resource are in the same subscription](#for-application-insights-and-app-service-in-the-same-subscription), or
+- [Your Application Insights resource and App Service resource are in separate subscriptions](#for-application-insights-and-app-service-in-different-subscriptions).
### For Application Insights and App Service in the same subscription
azure-netapp-files Performance Benchmarks Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-azure-vmware-solution.md
Previously updated : 03/15/2023 Last updated : 02/07/2024 # Azure NetApp Files datastore performance benchmarks for Azure VMware Solution
azure-netapp-files Performance Benchmarks Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-benchmarks-linux.md
Previously updated : 09/29/2021 Last updated : 03/24/2024 # Azure NetApp Files performance benchmarks for Linux
This section describes performance benchmarks of Linux workload throughput and w
### Linux workload throughput
-The graph below represents a 64-kibibyte (KiB) sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
+This graph represents a 64 kibibyte (KiB) sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s pure sequential writes and ~4,500 MiB/s pure sequential reads.
The graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
The graph illustrates decreases in 10% at a time, from pure read to pure write.
### Linux workload IOPS
-The following graph represents a 4-kibibyte (KiB) random workload and a 1 TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
+The following graph represents a 4-KiB random workload and a 1 TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random writes and ~460,000 pure random reads.
This graph illustrates decreases in 10% at a time, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
This graph illustrates decreases in 10% at a time, from pure read to pure write.
The graphs in this section show the validation testing results for the client-side mount option with NFSv3. For more information, see [`nconnect` section of Linux mount options](performance-linux-mount-options.md#nconnect).
-The graphs compare the advantages of `nconnect` to a non-`connected` mounted volume. In the graphs, FIO generated the workload from a single D32s_v4 instance in the us-west2 Azure region using a 64-KiB sequential workload – the largest I/O size supported by Azure NetApp Files at the time of the testing represented here. Azure NetApp Files now supports larger I/O sizes. For more details, see [`rsize` and `wsize` section of Linux mount options](performance-linux-mount-options.md#rsize-and-wsize).
+The graphs compare the advantages of `nconnect` to a non-`connected` mounted volume. In the graphs, FIO generated the workload from a single D32s_v4 instance in the us-west2 Azure region using a 64-KiB sequential workload – the largest I/O size supported by Azure NetApp Files at the time of the testing represented here. Azure NetApp Files now supports larger I/O sizes. For more information, see [`rsize` and `wsize` section of Linux mount options](performance-linux-mount-options.md#rsize-and-wsize).
### Linux read throughput
The following graphs show 64-KiB sequential reads of ~3,500 MiB/s reads with `nc
### Linux write throughput
-The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v4 instance egress limit.
+The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. The sequential write volume upper limit is approximately 1,500 MiB/s; the D32s_v4 instance egress limit is also approximately 1,500 MiB/s.
![Linux write throughput](./media/performance-benchmarks-linux/performance-benchmarks-linux-write-throughput.png)
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
Previously updated : 08/22/2022 Last updated : 02/07/2024 # Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes
azure-netapp-files Performance Large Volumes Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-large-volumes-linux.md
na Previously updated : 05/01/2023 Last updated : 08/01/2024 # Azure NetApp Files large volume performance benchmarks for Linux
azure-netapp-files Performance Linux Direct Io https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-direct-io.md
Previously updated : 07/02/2021 Last updated : 03/02/2024 # Linux direct I/O best practices for Azure NetApp Files
This article helps you understand direct I/O best practices for Azure NetApp Fil
## Direct I/O
- The most common parameter used in storage performance benchmarking is direct I/O. It is supported by FIO and Vdbench. DISKSPD offers support for the similar construct of memory-mapped I/O. With direct I/O, the filesystem cache is bypassed, operations for direct memory access copy are avoided, and storage tests are made fast and simple.
+ The most common parameter used in storage performance benchmarking is direct I/O. It's supported by FIO and Vdbench. DISKSPD offers support for the similar construct of memory-mapped I/O. With direct I/O, the filesystem cache is bypassed, operations for direct memory access copy are avoided, and storage tests are made fast and simple.
-Using the direct I/O parameter makes storage testing easy. No data is read from the filesystem cache on the client. As such, the test is truly stressing the storage protocol and service itself, rather than memory access speeds. Also, without the DMA memory copies, read and write operations are efficient from a processing perspective.
+Using the direct I/O parameter makes storage testing easy. No data is read from the filesystem cache on the client. As such, the test is truly stressing the storage protocol and service itself, rather than memory access speeds. Without the DMA memory copies, read and write operations are efficient from a processing perspective.
-Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads with the blocks already in memory are not retrieved from storage. Reads resulting in a buffer cache miss end up being read from storage using NFS read-ahead with varying results, depending on factors as mount `rsize` and client read-ahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and uses a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. But in fact, the operating system, untuned, favors writes over reads and uses more parallelism of it.
+Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads with the blocks already in memory aren't retrieved from storage. Reads resulting in a buffer cache miss end up being read from storage using NFS read-ahead with varying results, depending on factors such as mount `rsize` and client read-ahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and uses a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. But in fact, the untuned operating system favors writes over reads and applies more parallelism to them.
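To make the distinction concrete, here's a minimal sketch contrasting buffered and direct I/O with `dd` (the file path, block size, and count are placeholders; `oflag=direct` and `iflag=direct` are the flags that bypass the buffer cache):

```bash
# Buffered write: all I/O passes through the Linux buffer cache (write-behind).
dd if=/dev/zero of=/mnt/anf/testfile bs=64k count=16384

# Direct write: oflag=direct bypasses the buffer cache, so the test exercises
# the NFS protocol and the storage service rather than client memory.
dd if=/dev/zero of=/mnt/anf/testfile bs=64k count=16384 oflag=direct

# Direct read: iflag=direct prevents reads from being served by the client cache.
dd if=/mnt/anf/testfile of=/dev/null bs=64k iflag=direct
```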
-Aside from database, few applications use direct I/O. Instead, they choose to leverage the advantages of a large memory cache for repeated reads and a write behind cache for asynchronous writes. In short, using direct I/O turns the test into a micro benchmark *if* the application being synthesized uses the filesystem cache.
+Aside from databases, few applications use direct I/O. Instead, they leverage the advantages of a large memory cache for repeated reads and a write-behind cache for asynchronous writes. In short, using direct I/O turns the test into a micro benchmark *if* the application being synthesized uses the filesystem cache.
The following are some databases that support direct I/O:
The following are some databases that support direct I/O:
## Best practices
-Testing with `directio` is an excellent way to understand the limits of the storage service and client. To get a better understanding for how the application itself will behave (if the application doesn't use `directio`), you should also run tests through the filesystem cache.
+Testing with `directio` is an excellent way to understand the limits of the storage service and client. To better understand how the application behaves (if the application doesn't use `directio`), you should also run tests through the filesystem cache.
## Next steps
azure-netapp-files Performance Linux Filesystem Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-filesystem-cache.md
Previously updated : 07/02/2021 Last updated : 03/02/2024 # Linux filesystem cache best practices for Azure NetApp Files
-This article helps you understand filesystem cache best practices for Azure NetApp Files.
+This article helps you understand filesystem cache best practices for Azure NetApp Files.
## Filesystem cache tunables You need to understand the following factors about filesystem cache tunables:
-* Flushing a dirty buffer leaves the data in a clean state usable for future reads until memory pressure leads to eviction.
+* Flushing a dirty buffer leaves the data in a clean state usable for future reads until memory pressure leads to eviction.
* There are three triggers for an asynchronous flush operation:
  * Time based: When a buffer reaches the age defined by this tunable, it must be marked for cleaning (that is, flushing, or writing to storage).
  * Memory pressure: See [`vm.dirty_ratio | vm.dirty_bytes`](#vmdirty_ratio--vmdirty_bytes) for details.
  * Close: When a file handle is closed, all dirty buffers are asynchronously flushed to storage.
-These factors are controlled by four tunables. Each tunable can be tuned dynamically and persistently using `tuned` or `sysctl` in the `/etc/sysctl.conf` file. Tuning these variables improves performance for applications.
+These factors are controlled by four tunables. Each tunable can be tuned dynamically and persistently using `tuned` or `sysctl` in the `/etc/sysctl.conf` file. Tuning these variables improves performance for applications.
> [!NOTE]
-> Information discussed in this article was uncovered during SAS GRID and SAS Viya validation exercises. As such, the tunables are based on lessons learned from the validation exercises. Many applications will similarly benefit from tuning these parameters.
+> Information discussed in this article was uncovered during SAS GRID and SAS Viya validation exercises. As such, the tunables are based on lessons learned from the validation exercises. Many applications similarly benefit from tuning these parameters.
### `vm.dirty_ratio | vm.dirty_bytes`
-These two tunables define the amount of RAM made usable for data modified but not yet written to stable storage. Whichever tunable is set automatically sets the other tunable to zero; RedHat advises against manually setting either of the two tunables to zero. The option `vm.dirty_ratio` (the default of the two) is set by Redhat to either 20% or 30% of physical memory depending on the OS, which is a significant amount considering the memory footprint of modern systems. Consideration should be given to setting `vm.dirty_bytes` instead of `vm.dirty_ratio` for a more consistent experience regardless of memory size. For example, ongoing work with SAS GRID determined 30 MiB an appropriate setting for best overall mixed workload performance.
+These two tunables define the amount of RAM made usable for data modified but not yet written to stable storage. Whichever tunable is set automatically sets the other tunable to zero; Red Hat advises against manually setting either of the two tunables to zero. The option `vm.dirty_ratio` (the default of the two) is set by Red Hat to either 20% or 30% of physical memory depending on the OS, which is a significant amount considering the memory footprint of modern systems. Consider setting `vm.dirty_bytes` instead of `vm.dirty_ratio` for a more consistent experience regardless of memory size. For example, ongoing work with SAS GRID determined 30 MiB an appropriate setting for best overall mixed workload performance.
### `vm.dirty_background_ratio | vm.dirty_background_bytes`
-These tunables define the starting point where the Linux write-back mechanism begins flushing dirty blocks to stable storage. Redhat defaults to 10% of physical memory, which, on a large memory system, is a significant amount of data to start flushing. Taking SAS GRID for example, historically the recommendation has been to set `vm.dirty_background` to 1/5 size of `vm.dirty_ratio` or `vm.dirty_bytes`. Considering how aggressively the `vm.dirty_bytes` setting is set for SAS GRID, no specific value is being set here.
+These tunables define the starting point where the Linux write-back mechanism begins flushing dirty blocks to stable storage. Redhat defaults to 10% of physical memory, which, on a large memory system, is a significant amount of data to start flushing. Taking SAS GRID for example, historically the recommendation was to set `vm.dirty_background` to 1/5 size of `vm.dirty_ratio` or `vm.dirty_bytes`. Considering how aggressively the `vm.dirty_bytes` setting is set for SAS GRID, no specific value is being set here.
### `vm.dirty_expire_centisecs`
-This tunable defines how old a dirty buffer can be before it must be tagged for asynchronously writing out. Take SAS Viya's CAS workload for example. An ephemeral write-dominant workload found that setting this value to 300 centiseconds (3 seconds) was optimal, with 3000 centiseconds (30 seconds) being the default.
+This tunable defines how old a dirty buffer can be before it must be tagged for asynchronously writing out. Take SAS Viya's CAS workload for example. An ephemeral write-dominant workload found that setting this value to 300 centiseconds (3 seconds) was optimal, with 3000 centiseconds (30 seconds) being the default.
-SAS Viya shares CAS data into multiple small chunks of a few megabytes each. Rather than closing these file handles after writing data to each shard, the handles are left open and the buffers within are memory-mapped by the application. Without a close, there will be no flush until either memory pressure or 30 seconds has passed. Waiting for memory pressure proved suboptimal as did waiting for a long timer to expire. Unlike SAS GRID, which looked for the best overall throughput, SAS Viya looked to optimize write bandwidth.
+SAS Viya shares CAS data into multiple small chunks of a few megabytes each. Rather than closing these file handles after writing data to each shard, the handles are left open and the buffers within are memory-mapped by the application. Without a close, there's no flush until either memory pressure or 30 seconds has passed. Waiting for memory pressure proved suboptimal as did waiting for a long timer to expire. Unlike SAS GRID, which looked for the best overall throughput, SAS Viya looked to optimize write bandwidth.
### `vm.dirty_writeback_centisecs`
-The kernel flusher thread is responsible for asynchronously flushing dirty buffers between each flush thread sleeps. This tunable defines the amount spent sleeping between flushes. Considering the 3-second `vm.dirty_expire_centisecs` value used by SAS Viya, SAS set this tunable to 100 centiseconds (1 second) rather than the 500 centiseconds (5 seconds) default to find the best overall performance.
+The kernel flusher thread is responsible for asynchronously flushing dirty buffers. This tunable defines the amount of time the flusher thread sleeps between flushes. Considering the 3-second `vm.dirty_expire_centisecs` value used by SAS Viya, SAS set this tunable to 100 centiseconds (1 second) rather than the 500 centiseconds (5 seconds) default to find the best overall performance.
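As a sketch of how these four tunables might be set persistently, consider the following excerpt (the `vm.dirty_bytes`, `vm.dirty_expire_centisecs`, and `vm.dirty_writeback_centisecs` values come from the SAS examples above; the `vm.dirty_background_bytes` value is a hypothetical placeholder, since no specific value is recommended here):

```bash
# /etc/sysctl.conf (excerpt) -- example values only, validate for your workload
vm.dirty_bytes = 31457280              # 30 MiB; setting dirty_bytes zeroes dirty_ratio
vm.dirty_background_bytes = 10485760   # hypothetical 10 MiB background-flush start point
vm.dirty_expire_centisecs = 300        # flush dirty buffers older than 3 seconds
vm.dirty_writeback_centisecs = 100     # wake the flusher thread every 1 second

# Apply the settings from /etc/sysctl.conf without a reboot
sudo sysctl -p
```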
## Impact of an untuned filesystem cache
-Considering the default virtual memory tunables and the amount of RAM in modern systems, write-back potentially slows down other storage-bound operations from the perspective of the specific client driving this mixed workload. The following symptoms may be expected from an untuned, write-heavy, cache-laden Linux machine.
+Considering the default virtual memory tunables and the amount of RAM in modern systems, write-back potentially slows down other storage-bound operations from the perspective of the specific client driving this mixed workload. The following symptoms may be expected from an untuned, write-heavy, cache-laden Linux machine.
* Directory lists `ls` take long enough as to appear unresponsive. * Read throughput against the filesystem decreases significantly in comparison to write throughput. * `nfsiostat` reports write latencies **in seconds or higher**.
-You might experience this behavior only by *the Linux machine* performing the mixed write-heavy workload. Further, the experience is degraded against all NFS volumes mounted against a single storage endpoint. If the mounts come from two or more endpoints, only the volumes sharing an endpoint exhibit this behavior.
+You might experience this behavior only on *the Linux machine* performing the mixed write-heavy workload. Further, the experience is degraded against all NFS volumes mounted against a single storage endpoint. If the mounts come from two or more endpoints, only the volumes sharing an endpoint exhibit this behavior.
Setting the filesystem cache parameters as described in this section has been shown to address the issues. ## Monitoring virtual memory
-To understand what is going with virtual memory and the write-back, consider the following code snippet and output. *Dirty* represents the amount dirty memory in the system, and *writeback* represents the amount of memory actively being written to storage.
+To understand what is going on with virtual memory and write-back, consider the following code snippet and output. *Dirty* represents the amount of dirty memory in the system, and *writeback* represents the amount of memory actively being written to storage.
`# while true; do echo "###" ;date ; egrep "^Cached:|^Dirty:|^Writeback:|file" /proc/meminfo; sleep 5; done`
-The following output comes from an experiment where the `vm.dirty_ratio` and the `vm.dirty_background` ratio were set to 2% and 1% of physical memory respectively. In this case, flushing began at 3.8 GiB, 1% of the 384-GiB memory system. Writeback closely resembled the write throughput to NFS.
+The following output comes from an experiment where the `vm.dirty_ratio` and the `vm.dirty_background` ratio were set to 2% and 1% of physical memory respectively. In this case, flushing began at 3.8 GiB, 1% of the 384-GiB memory system. Writeback closely resembled the write throughput to NFS.
``` Cons
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
Previously updated : 12/07/2022 Last updated : 03/07/2024 # Linux NFS mount options best practices for Azure NetApp Files
This article helps you understand mount options and the best practices for using
## `Nconnect`
-Using the `nconnect` mount option allows you to specify the number of connections (network flows) that should be established between the NFS client and NFS endpoint up to a limit of 16. Traditionally, an NFS client uses a single connection between itself and the endpoint. By increasing the number of network flows, the upper limits of I/O and throughput are increased significantly. Testing has found `nconnect=8` to be the most performant.
+Using the `nconnect` mount option allows you to specify the number of connections (network flows) that should be established between the NFS client and NFS endpoint up to a limit of 16. Traditionally, an NFS client uses a single connection between itself and the endpoint. Increasing the number of network flows increases the upper limits of I/O and throughput significantly. Testing has found `nconnect=8` to be the most performant.
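For illustration, a hypothetical mount using that setting might look like the following (endpoint, export, and mount point are placeholders):

```bash
# Dynamic mount with eight network flows between the client and the NFS endpoint
sudo mount -t nfs -o rw,vers=3,nconnect=8 10.10.10.10:/volume1 /mnt/volume1

# Equivalent persistent entry in /etc/fstab
# 10.10.10.10:/volume1  /mnt/volume1  nfs  rw,vers=3,nconnect=8  0 0
```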
When preparing a multi-node SAS GRID environment for production, you might notice a repeatable 30% reduction in run time going from 8 hours to 5.5 hours:
When you use `nconnect`, keep the following rules in mind:
| Ubuntu | Ubuntu18.04 | | > [!NOTE]
- > SLES15SP2 is the minimum SUSE release in which `nconnect` is supported by Azure NetApp Files for NFSv4.1. All other releases as specified are the first releases that introduced the `nconnect` feature.
+ > SLES15SP2 is the minimum SUSE release in which `nconnect` is supported by Azure NetApp Files for NFSv4.1. All other releases as specified are the first releases that introduced the `nconnect` feature.
-* All mounts from a single endpoint will inherit the `nconnect` setting of the first export mounted, as shown in the following scenarios:
+* All mounts from a single endpoint inherit the `nconnect` setting of the first export mounted, as shown in the following scenarios:
Scenario 1: `nconnect` is used by the first mount. Therefore, all mounts against the same endpoint use `nconnect=8`.
When you use `nconnect`, keep the following rules in mind:
* `mount 10.10.10.10:/volume2 /mnt/volume2` * `mount 10.10.10.10:/volume3 /mnt/volume3`
- Scenario 2: `nconnect` is not used by the first mount. Therefore, no mounts against the same endpoint use `nconnect` even though `nconnect` may be specified thereon.
+ Scenario 2: `nconnect` isn't used by the first mount. Therefore, no mounts against the same endpoint use `nconnect` even though `nconnect` may be specified thereon.
* `mount 10.10.10.10:/volume1 /mnt/volume1` * `mount 10.10.10.10:/volume2 /mnt/volume2 -o nconnect=8` * `mount 10.10.10.10:/volume3 /mnt/volume3 -o nconnect=8`
- Scenario 3: `nconnect` settings are not propagated across separate storage endpoints. `nconnect` is used by the mount coming from `10.10.10.10` but not by the mount coming from `10.12.12.12`.
+ Scenario 3: `nconnect` settings aren't propagated across separate storage endpoints. `nconnect` is used by the mount coming from `10.10.10.10` but not by the mount coming from `10.12.12.12`.
* `mount 10.10.10.10:/volume1 /mnt/volume1 -o nconnect=8` * `mount 10.12.12.12:/volume2 /mnt/volume2`
For details, see [Linux concurrency best practices for Azure NetApp Files](perfo
Examples in this section provide information about how to approach performance tuning. You might need to make adjustments to suit your specific application needs.
-The `rsize` and `wsize` flags set the maximum transfer size of an NFS operation. If `rsize` or `wsize` are not specified on mount, the client and server negotiate the largest size supported by the two. Currently, both Azure NetApp Files and modern Linux distributions support read and write sizes as large as 1,048,576 Bytes (1 MiB). However, for best overall throughput and latency, Azure NetApp Files recommends setting both `rsize` and `wsize` no larger than 262,144 Bytes (256 K). You might observe that both increased latency and decreased throughput when using `rsize` and `wsize` larger than 256 KiB.
+The `rsize` and `wsize` flags set the maximum transfer size of an NFS operation. If `rsize` or `wsize` aren't specified on mount, the client and server negotiate the largest size supported by the two. Currently, both Azure NetApp Files and modern Linux distributions support read and write sizes as large as 1,048,576 Bytes (1 MiB). However, for best overall throughput and latency, Azure NetApp Files recommends setting both `rsize` and `wsize` no larger than 262,144 Bytes (256 K). You might observe both increased latency and decreased throughput when using `rsize` and `wsize` larger than 256 KiB.
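As a sketch of that recommendation (placeholder endpoint and mount point; combine with whatever other options your environment needs):

```bash
# Cap NFS read and write transfer sizes at 256 KiB (262,144 bytes)
sudo mount -t nfs -o rw,vers=3,rsize=262144,wsize=262144 10.10.10.10:/volume1 /mnt/volume1
```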
For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md#mount-the-azure-netapp-files-volumes) shows the 256-KiB `rsize` and `wsize` maximum as follows:
For example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](
The following considerations apply to the use of `rsize` and `wsize`:
-* Random I/O operation sizes are often smaller than the `rsize` and `wsize` mount options. As such, in effect, they will not be constrained thereby.
+* Random I/O operation sizes are often smaller than the `rsize` and `wsize` mount options. As such, `rsize` and `wsize` don't constrain random I/O in practice.
* When using the filesystem cache, sequential I/O will occur at the size predicated by the `rsize` and `wsize` mount options, unless the file size is smaller than `rsize` and `wsize`.
-* Operations bypassing the filesystem cache, although still constrained by the `rsize` and `wsize` mount options, will not necessarily issue as large as the maximum specified by `rsize` or `wsize`. This consideration is important when you use workload generators that have the `directio` option.
+* Operations bypassing the filesystem cache, although still constrained by the `rsize` and `wsize` mount options, aren't necessarily issued at the maximum size specified by `rsize` or `wsize`. This consideration is important when you use workload generators that have the `directio` option.
*As a best practice with Azure NetApp Files, for best overall throughput and latency, set `rsize` and `wsize` no larger than 262,144 Bytes.* ## Close-to-open consistency and cache attribute timers
-NFS uses a loose consistency model. The consistency is loose because the application does not have to go to shared storage and fetch data every time to use it, a scenario that would have a tremendous impact to application performance. There are two mechanisms that manage this process: cache attribute timers and close-to-open consistency.
+NFS uses a loose consistency model. The consistency is loose because the application doesn't have to go to shared storage and fetch data every time to use it, a scenario that would have a tremendous impact to application performance. There are two mechanisms that manage this process: cache attribute timers and close-to-open consistency.
-*If the client has complete ownership of data, that is, it is not shared between multiple nodes or systems, there is guaranteed consistency.* In that case, you can reduce the `getattr` access operations to storage and speed up the application by turning off close-to-open (`cto`) consistency (`nocto` as a mount option) and by turning up the timeouts for the attribute cache management (`actimeo=600` as a mount option changes the timer to 10m versus the defaults `acregmin=3,acregmax=30,acdirmin=30,acdirmax=60`). In some testing, `nocto` reduces approximately 65-70% of the `getattr` access calls, and adjusting `actimeo` reduces these calls another 20-25%.
+*If the client has complete ownership of data, that is, it's not shared between multiple nodes or systems, there is guaranteed consistency.* In that case, you can reduce the `getattr` access operations to storage and speed up the application by turning off close-to-open (`cto`) consistency (`nocto` as a mount option) and by turning up the timeouts for the attribute cache management (`actimeo=600` as a mount option changes the timer to 10m versus the defaults `acregmin=3,acregmax=30,acdirmin=30,acdirmax=60`). In some testing, `nocto` reduces approximately 65-70% of the `getattr` access calls, and adjusting `actimeo` reduces these calls another 20-25%.
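A hedged example of such a mount (placeholder endpoint and mount point; appropriate only when the client owns the data or staleness is acceptable, as described above):

```bash
# Turn off close-to-open consistency and extend the attribute cache timers to 10 minutes
sudo mount -t nfs -o rw,vers=3,nocto,actimeo=600 10.10.10.10:/tools /mnt/tools
```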
### How attribute cache timers work
-The attributes `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` control the coherency of the cache. The former two attributes control how long the attributes of files are trusted. The latter two attributes control how long the attributes of the directory file itself are trusted (directory size, directory ownership, directory permissions). The `min` and `max` attributes define minimum and maximum duration over which attributes of a directory, attributes of a file, and cache content of a file are deemed trustworthy, respectively. Between `min` and `max`, an algorithm is used to define the amount of time over which a cached entry is trusted.
+The attributes `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` control the coherency of the cache. The former two attributes control how long the attributes of files are trusted. The latter two attributes control how long the attributes of the directory file itself are trusted (directory size, directory ownership, directory permissions). The `min` and `max` attributes define minimum and maximum duration over which attributes of a directory, attributes of a file, and cache content of a file are deemed trustworthy, respectively. Between `min` and `max`, an algorithm is used to define the amount of time over which a cached entry is trusted.
-For example, consider the default `acregmin` and `acregmax` values, 3 and 30 seconds, respectively. For instance, the attributes are repeatedly evaluated for the files in a directory. After 3 seconds, the NFS service is queried for freshness. If the attributes are deemed valid, the client doubles the trusted time to 6 seconds, 12 seconds, 24 seconds, then as the maximum is set to 30, 30 seconds. From that point on, until the cached attributes are deemed out of date (at which point the cycle starts over), trustworthiness is defined as 30 seconds being the value specified by `acregmax`.
+For example, consider the default `acregmin` and `acregmax` values, 3 and 30 seconds, respectively. Suppose the attributes are repeatedly evaluated for the files in a directory. After 3 seconds, the NFS service is queried for freshness. If the attributes are deemed valid, the client doubles the trusted time to 6 seconds, 12 seconds, 24 seconds, and then, because the maximum is set to 30, to 30 seconds. From that point on, until the cached attributes are deemed out of date (at which point the cycle starts over), trustworthiness is defined as 30 seconds, the value specified by `acregmax`.
-There are other cases that can benefit from a similar set of mount options, even when there's no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
+There are other cases that can benefit from a similar set of mount options, even when there's no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are few reads and no writes. There are many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
-In these cases, there's a known lag in picking up new content and the application still works with potentially out-of-date data. In these cases, `nocto` and `actimeo` can be used to control the period where out-of-data date can be managed. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates timely as they're editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there's content pushed to multiple file systems, `actimeo=60` might be acceptable.
+In these cases, there's a known lag in picking up new content, and the application still works with potentially out-of-date data. Here, `nocto` and `actimeo` can be used to control the period during which out-of-date data is tolerated. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates timely as they're editing their sites, `actimeo=10` might be acceptable. For large-scale web sites where there's content pushed to multiple file systems, `actimeo=60` might be acceptable.
Using these mount options significantly reduces the workload to storage in these cases. (For example, a recent EDA experience reduced IOPs to the tool volume from >150 K to ~6 K.) Applications can run significantly faster because they can trust the data in memory. (Memory access time is nanoseconds vs. hundreds of microseconds for `getattr`/access on a fast network.) ### Close-to-open consistency
-Close-to-open consistency (the `cto` mount option) ensures that no matter the state of the cache, on open the most recent data for a file is always presented to the application.
+Close-to-open consistency (the `cto` mount option) ensures that no matter the state of the cache, on open the most recent data for a file is always presented to the application.
-* When a directory is crawled (`ls`, `ls -l` for example) a certain set of RPCs (remote procedure calls) are issued.
- The NFS server shares its view of the filesystem. As long as `cto` is used by all NFS clients accessing a given NFS export, all clients will see the same list of files and directories therein. The freshness of the attributes of the files in the directory is controlled by the [attribute cache timers](#how-attribute-cache-timers-work). In other words, as long as `cto` is used, files appear to remote clients as soon as the file is created and the file lands on the storage.
-* When a file is opened, the content of the file is guaranteed fresh from the perspective of the NFS server.
- If there's a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using a tail `-f` from Machine 2 when the file is still being written to from Machine 1.
+* When a directory is crawled (`ls`, `ls -l` for example) a certain set of RPCs (remote procedure calls) are issued.
+ The NFS server shares its view of the filesystem. As long as `cto` is used by all NFS clients accessing a given NFS export, all clients see the same list of files and directories therein. The freshness of the attributes of the files in the directory is controlled by the [attribute cache timers](#how-attribute-cache-timers-work). In other words, as long as `cto` is used, files appear to remote clients as soon as the file is created and the file lands on the storage.
+* When a file is opened, the content of the file is guaranteed fresh from the perspective of the NFS server.
+ If there's a race condition where the content hasn't finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 only receives the data present on the server at the time of the open. In this case, Machine 2 doesn't retrieve more data from the file until the `acreg` timer is reached, and Machine 2 checks its cache coherency from the server again. This scenario can be observed using a tail `-f` from Machine 2 when the file is still being written to from Machine 1.
### No close-to-open consistency
-When no close-to-open consistency (`nocto`) is used, the client will trust the freshness of its current view of the file and directory until the cache attribute timers have been breached.
+When no close-to-open consistency (`nocto`) is used, the client trusts the freshness of its current view of the file and directory until the cache attribute timers have been breached.
-* When a directory is crawled (`ls`, `ls -l` for example) a certain set of RPCs (remote procedure calls) are issued.
- The client will only issue a call to the server for a current listing of files when the `acdir` cache timer value has been breached. In this case, recently created files and directories will not appear and recently removed files and directories will still appear.
+* When a directory is crawled (`ls`, `ls -l` for example) a certain set of RPCs (remote procedure calls) are issued.
+ The client only issues a call to the server for a current listing of files when the `acdir` cache timer value has been breached. In this case, recently created files and directories don't appear. Recently removed files and directories do appear.
* When a file is opened, as long as the file is still in the cache, its cached content (if any) is returned without validating consistency with the NFS server.
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
Previously updated : 09/29/2022 Last updated : 03/07/2024 # Linux NFS read-ahead best practices for Azure NetApp Files This article helps you understand filesystem cache best practices for Azure NetApp Files.
-NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be equivalent of 15 times the mounted filesystems `rsize`.
+NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It's designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be the equivalent of 15 times the mounted filesystem's `rsize`.
The following table shows the default read-ahead values for each given `rsize` mount option.
The following table shows the default read-ahead values for each given `rsize` m
|-|-| | 64 KiB | 960 KiB | | 256 KiB | 3,840 KiB |
-| 1024 KiB | 15,360 KiB |
+| 1,024 KiB | 15,360 KiB |
-RHEL 8.3 and Ubuntu 18.04 introduced changes that might negatively impact client sequential read performance. Unlike earlier releases, these distributions set read-ahead to a default of 128 KiB regardless of the `rsize` mount option used. Upgrading from releases with the larger read-ahead value to those with the 128-KiB default experienced decreases in sequential read performance. However, read-ahead values may be tuned upward both dynamically and persistently. For example, testing with SAS GRID found the 15,360-KiB read value optimal compared to 3,840 KiB, 960 KiB, and 128 KiB. Not enough tests have been run beyond 15,360 KiB to determine positive or negative impact.
+RHEL 8.3 and Ubuntu 18.04 introduced changes that might negatively impact client sequential read performance. Unlike earlier releases, these distributions set read-ahead to a default of 128 KiB regardless of the `rsize` mount option used. Workloads upgraded from releases with the larger read-ahead value to releases with the 128-KiB default have experienced decreases in sequential read performance. However, read-ahead values can be tuned upward both dynamically and persistently. For example, testing with SAS GRID found the 15,360-KiB read-ahead value optimal compared to 3,840 KiB, 960 KiB, and 128 KiB. Not enough tests have been run beyond 15,360 KiB to determine positive or negative impact.
The following table shows the default read-ahead values for each currently available distribution.
The following table shows the default read-ahead values for each currently avail
## How to work with per-NFS filesystem read-ahead
-NFS read-ahead is defined at the mount point for an NFS filesystem. The default setting can be viewed and set both dynamically and persistently. For convenience, the following bash script written by Red Hat has been provided for viewing or dynamically setting read-ahead for amounted NFS filesystem.
+NFS read-ahead is defined at the mount point for an NFS filesystem. The default setting can be viewed and set both dynamically and persistently. For convenience, the following bash script written by Red Hat is provided for viewing or dynamically setting read-ahead for a mounted NFS filesystem.
-Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
+Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
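The Red Hat script itself isn't reproduced in this digest. The following is a minimal sketch of the same idea, assuming the mount point is passed as the first argument; read-ahead for an NFS mount is exposed through its backing device entry under `/sys/class/bdi`:

```bash
#!/bin/bash
# readahead.sh -- show or dynamically set read-ahead (in KiB) for a mounted NFS filesystem.
# Usage: ./readahead.sh <mount-point> [new-value-in-KiB]
MOUNTPOINT="$1"
NEWSIZE="$2"

# mountpoint -d prints the MAJOR:MINOR device ID backing the mount,
# which names the BDI entry that holds the read_ahead_kb setting.
BDI="/sys/class/bdi/$(mountpoint -d "$MOUNTPOINT")/read_ahead_kb"

if [ -n "$NEWSIZE" ]; then
    echo "$NEWSIZE" > "$BDI"      # requires root; change is dynamic, not persistent
fi
echo "read_ahead_kb for $MOUNTPOINT: $(cat "$BDI")"
```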
## How to show or set read-ahead values
azure-netapp-files Performance Oracle Single Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-single-volumes.md
Title: Oracle database performance on Azure NetApp Files single volume | Microsoft Docs
-description: Describes performance test results of a Azure NetApp Files single volume on Oracle database.
+description: Describes performance test results of an Azure NetApp Files single volume on Oracle database.
Previously updated : 08/04/2022 Last updated : 02/04/2024 # Oracle database performance on Azure NetApp Files single volumes
azure-netapp-files Snapshots Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-delete.md
Previously updated : 09/16/2021 Last updated : 03/16/2024
azure-netapp-files Snapshots Edit Hide Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-edit-hide-path.md
Previously updated : 09/16/2021 Last updated : 03/16/2024
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
Previously updated : 11/22/2022 Last updated : 06/03/2024 # How Azure NetApp Files snapshots work
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
Previously updated : 05/18/2023 Last updated : 03/18/2024
azure-netapp-files Snapshots Restore File Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-client.md
Previously updated : 09/16/2021 Last updated : 03/16/2024
azure-netapp-files Snapshots Restore File Single https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-file-single.md
Previously updated : 05/04/2023 Last updated : 03/04/2024 # Restore individual files using single-file snapshot restore
-If you do not want to [restore the entire snapshot to a new volume](snapshots-restore-new-volume.md) or [copy large files across the network](snapshots-restore-file-client.md), you can use the single-file snapshot restore feature to recover individual files directly within a volume from a snapshot. This option does not require an external client data copy.
+If you don't want to [restore the entire snapshot to a new volume](snapshots-restore-new-volume.md) or [copy large files across the network](snapshots-restore-file-client.md), you can use the single-file snapshot restore feature to recover individual files directly within a volume from a snapshot. This option doesn't require an external client data copy.
The single-file snapshot restore feature enables you to restore a single file or a list of files (up to 10 files at a time) from a snapshot. You can specify a specific destination location or folder for the files to be restored to.
The single-file snapshot restore feature enables you to restore a single file or
* If you use this feature to restore files to be new files, ensure that the volume has enough logical free space to accommodate the files. * You can restore up to 10 files at a time, specified in a total length of 1024 characters. * All the directories in the destination path that you specify must be present in the active file system.
-The restore operation does not create directories in the process. If the specified destination path is invalid (doesn't exist in Active file system), the restore operation will fail.
-* If you don't specify a destination path, the files will be restored to the original file location. If the files already exist in the original location, they will be overwritten by the files restored from the snapshot.
+The restore operation doesn't create directories in the process. If the specified destination path is invalid (doesn't exist in the active file system), the restore operation fails.
+* If you don't specify a destination path, the files are restored to the original file location. If the files already exist in the original location, they're overwritten by the files restored from the snapshot.
* A volume can have only one active file-restore operation. If you want to restore additional files, you must wait until the current restore operation is complete before triggering another restore operation. * *During the file restore operation*, the following restrictions apply: * You can't create new snapshots on the volume.
The restore operation does not create directories in the process. If the specifi
1. Navigate to the volume that has the snapshot to use for restoring files.
-2. Click **Snapshots** to display the list of volume snapshots.
+2. Select **Snapshots** to display the list of volume snapshots.
-3. Right-click the snapshot that you want to use for restoring files, and then select **Restore Files** from the menu.
+3. Right-click the snapshot that you want to use for restoring files, then select **Restore Files** from the menu.
[ ![Snapshot that shows how to access the Restore Files menu item.](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png) ](./media/snapshots-restore-file-single/snapshot-restore-files-menu.png#lightbox)
The restore operation does not create directories in the process. If the specifi
* Regardless of the volume's protocol type (NFS, SMB, or dual protocol), directories in the path must be specified using forward slashes (`/`) and not backslashes (`\`). 2. In the **Destination Path** field, provide the location in the volume where the specified files are to be restored to.
- * If you don't specify a destination path, the files are restored to their original location. If files with the same names already exist in the original location, they are overwritten by the files restored from the snapshot.
+ * If you don't specify a destination path, the files are restored to their original location. If files with the same names already exist in the original location, they're overwritten by the files restored from the snapshot.
* If you specify a destination path: * Ensure that all directories in the path are present in the active file system. Otherwise, the restore operation fails. For example, if you specify `/CurrentCopy/contoso` as the destination path, the `/CurrentCopy/contoso` path must already exist. * By specifying a destination path, all files specified in the File Paths field are restored to the destination path (folder). * Regardless of the volume's protocol type (NFS, SMB, or dual protocol), directories in the path must be specified using forward slashes (`/`) and not backslashes (`\`).
- 3. Click **Restore** to begin the restore operation.
+ 3. Select **Restore** to initiate the restore operation.
![Snapshot the Restore Files window.](./media/snapshots-restore-file-single/snapshot-restore-files-window.png)
The path `/volume-azure-nfs/currentCopy/contoso` must be valid in the active fil
From the Azure portal:
-1. Click **Snapshots**. Right-click the snapshot `daily-10-min-past-12am.2021-09-08_0010`.
-2. Click **Restore Files**.
+1. Select **Snapshots**. Right-click the snapshot `daily-10-min-past-12am.2021-09-08_0010`.
+2. Select **Restore Files**.
3. Specify **`/contoso/vm-8976.vmdk`** in File Paths. 4. Specify **`/currentCopy/contoso`** in Destination Path.
Destination path in the active file system:
The path `N:\currentCopy\contoso` must be valid in the active file system. From the Azure portal:
-1. Click **Snapshots**. Select the snapshot `daily-10-min-past-12am.2021-09-08_0010`.
-2. Click **Restore Files**.
+1. Select **Snapshots**. Select the snapshot `daily-10-min-past-12am.2021-09-08_0010`.
+2. Select **Restore Files**.
3. Specify **`/contoso/vm-9981.vmdk`** in File Paths. 4. Specify **`/currentCopy/contoso`** in Destination Path.
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
Previously updated : 02/22/2023 Last updated : 03/22/2024
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
Previously updated : 02/28/2023 Last updated : 03/28/2024 # Revert a volume using snapshot revert with Azure NetApp Files
-The [snapshot](snapshots-introduction.md) revert functionality enables you to quickly revert a volume to the state it was in when a particular snapshot was taken. In most cases, reverting a volume is much faster than restoring individual files from a snapshot to the active file system. It is also more space efficient compared to restoring a snapshot to a new volume.
+The [snapshot](snapshots-introduction.md) revert functionality enables you to quickly revert a volume to the state it was in when a particular snapshot was taken. In most cases, reverting a volume is faster than restoring individual files from a snapshot to the active file system. It's also more space efficient compared to restoring a snapshot to a new volume.
You can find the Revert Volume option in the Snapshots menu of a volume. After you select a snapshot for reversion, Azure NetApp Files reverts the volume to the data and timestamps that it contained when the selected snapshot was taken.
The revert functionality is also available in configurations with volume replica
## Considerations
-* Reverting a volume using snapshot revert is not supported on [Azure NetApp Files volumes that have backups](backup-requirements-considerations.md).
+* Reverting a volume using snapshot revert isn't supported on [Azure NetApp Files volumes that have backups](backup-requirements-considerations.md).
* In configurations with a volume replication relationship, a SnapMirror snapshot is created to synchronize between the source and destination volumes. This snapshot is created in addition to any user-created snapshots. **When reverting a source volume with an active volume replication relationship, only snapshots that are more recent than this SnapMirror snapshot can be used in the revert operation.** ## Steps
The revert functionality is also available in configurations with volume replica
![Screenshot that describes the right-click menu of a snapshot.](./media/shared/snapshot-right-click-menu.png)
-2. In the Revert Volume to Snapshot window,
-type the name of the volume, and click **Revert**.
+2. In the Revert Volume to Snapshot window, enter the name of the volume then select **Revert**.
The volume is now restored to the point in time of the selected snapshot.
azure-netapp-files Terraform Manage Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/terraform-manage-volume.md
Previously updated : 12/20/2023 Last updated : 03/20/2024 # Update Terraform-managed Azure resources outside of Terraform
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 6/12/2024
Microsoft regularly applies important updates to the Azure VMware Solution for new features and software lifecycle management. You should receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](architecture-private-clouds.md#host-maintenance-and-lifecycle-management).
+## August 2024
+
+All new Azure VMware Solution private clouds are being deployed with VMware vSphere version 8.0. [Learn more](/azure/azure-vmware/architecture-private-clouds)
+ ## May 2024 Azure VMware Solution is now generally available in the Central India, UAE North, and Italy North regions, increasing the total region count to 33. [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&rar=true&regions=all)
backup Active Directory Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/active-directory-backup-restore.md
Title: Back up and restore Active Directory description: Learn how to back up and restore Active Directory domain controllers. Previously updated : 08/09/2023 Last updated : 08/20/2024
This article outlines the proper procedures for backing up and restoring Active
## Best practices
+Before you start protecting Active Directory, review the following best practices:
+ - Make sure at least one domain controller is backed up. If you back up more than one domain controller, make sure all the ones holding the [FSMO (Flexible Single Master Operation) roles](/windows-server/identity/ad-ds/plan/planning-operations-master-role-placement) are backed up. - Back up Active Directory frequently. The backup age should never be older than the tombstone lifetime (TSL) because objects older than the TSL will be "tombstoned" and no longer considered valid.
This article outlines the proper procedures for backing up and restoring Active
> >For information about performing an authoritative restore of SYSVOL, see [this article](/windows-server/identity/ad-ds/manage/ad-forest-recovery-authoritative-recovery-sysvol).
-## Backing up Azure VM domain controllers
+## Back up Azure VM domain controllers
If the domain controller is an Azure VM, you can back up the server using [Azure VM Backup](backup-azure-vms-introduction.md). Read about [operational considerations for virtualized domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controllers-hyper-v#operational-considerations-for-virtualized-domain-controllers) to ensure successful backups (and future restores) of your Azure VM domain controllers.
-## Backing up on-premises domain controllers
+## Back up on-premises domain controllers
To back up an on-premises domain controller, you need to back up the server's System State data.
To back up an on-premises domain controller, you need to back up the server's Sy
>[!NOTE] > Restoring on-premises domain controllers (either from system state or from VMs) to the Azure cloud is not supported. If you would like the option of failover from an on-premises Active Directory environment to Azure, consider using [Azure Site Recovery](../site-recovery/site-recovery-active-directory.md).
-## Restoring Active Directory
+## Restore Active Directory
Active Directory data can be restored in one of two modes: **authoritative** or **nonauthoritative**. In an authoritative restore, the restored Active Directory data will override the data found on the other domain controllers in the forest.
During the restore, the server will be started in Directory Services Restore Mod
>[!NOTE] >If the DSRM password is forgotten, you can reset it using [these instructions](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc754363(v=ws.11)).
-### Restoring Azure VM domain controllers
+### Restore Azure VM domain controllers
To restore an Azure VM domain controller, see [Restore domain controller VMs](backup-azure-arm-restore-vms.md#restore-domain-controller-vms).
If you're restoring the last remaining domain controller in the domain, or resto
>[!NOTE] > Virtualized domain controllers, from Windows 2012 onwards use [virtualization based safeguards](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100#virtualization-based-safeguards). With these safeguards, Active directory understands if the VM restored is a domain controller, and performs the necessary steps to restore the Active Directory data.
-### Restoring on-premises domain controllers
+### Restore on-premises domain controllers
To restore an on-premises domain controller, follow the directions in for restoring system state to Windows Server, using the guidance for [special considerations for system state recovery on a domain controller](backup-azure-restore-system-state.md#special-considerations-for-system-state-recovery-on-a-domain-controller).
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md
Title: Back up Azure Managed Disks using Azure CLI description: Learn how to back up Azure Managed Disks using Azure CLI. - Previously updated : 08/25/2023+ Last updated : 08/20/2024
This article describes how to back up [Azure Managed Disk](../virtual-machines/m
> [!IMPORTANT] > Support for Azure Managed Disks backup and restore via CLI is in preview and available as an extension in Az 2.15.0 version and later. The extension is automatically installed when you run the **az dataprotection** commands. [Learn more](/cli/azure/azure-cli-extensions-overview) about extensions.
-In this article, you'll learn how to:
-
-- Create a Backup vault
-- Create a Backup policy
-- Configure Backup of an Azure Disk
-- Run an on-demand backup job
-
For information on the Azure Disk backup region availability, supported scenarios and limitations, see the [support matrix](disk-backup-support-matrix.md).

## Create a Backup vault
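As a rough sketch of the vault creation step (the resource group, vault name, and region are placeholders), creating a Backup vault with the Azure CLI typically looks like this:

```azurecli
# Create a Backup vault with a system-assigned identity and locally redundant vault storage
# (placeholder names and region).
az dataprotection backup-vault create \
    --resource-group contoso-rg \
    --vault-name contoso-backup-vault \
    --location eastus \
    --type SystemAssigned \
    --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
```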
Trigger an on-demand backup using the [az dataprotection backup-instance adhoc-b
az dataprotection backup-instance adhoc-backup --name "diskrg-CLITestDisk-3df6ac08-9496-4839-8fb5-8b78e594f166" --rule-name "BackupDaily" --resource-group "000pikumar" --vault-name "PratikPrivatePreviewVault1" --retention-tag-override "default" ```
-## Tracking jobs
+## Track jobs
Track all the jobs using the [az dataprotection job list](/cli/azure/dataprotection/job#az-dataprotection-job-list) command. You can list all jobs and fetch a particular job detail.
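For example, to list the jobs for a vault as a table (the resource group and vault names are placeholders):

```azurecli
# List all backup and restore jobs for a Backup vault (placeholder names).
az dataprotection job list \
    --resource-group contoso-rg \
    --vault-name contoso-backup-vault \
    --output table
```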
You can also use Az.ResourceGraph to track all jobs across all Backup vaults. Us
az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --status Completed ```
-## Next steps
+## Next step
[Restore Azure Managed Disks using Azure CLI](restore-managed-disks-cli.md)
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
description: Provides an overview of the BareMetal Infrastructure on Azure. Previously updated : 07/01/2023 Last updated : 08/15/2024 # What is BareMetal Infrastructure on Azure?
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI. Previously updated : 04/01/2023 Last updated : 08/15/2024 # Connect BareMetal Infrastructure instances in Azure
baremetal-infrastructure Know Baremetal Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/know-baremetal-terms.md
description: Know the terms of Azure BareMetal Infrastructure. Previously updated : 04/01/2023 Last updated : 08/15/2024 # Know the terms for BareMetal Infrastructure
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it off
Previously updated : 7/19/2024 Last updated : 8/15/2024
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
description: Learn about the architecture of several configurations of BareMetal
Previously updated : 7/19/2024 Last updated : 08/15/2024
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md
description: Questions frequently asked about NC2 on Azure
Previously updated : 05/21/2024 Last updated : 08/15/2024
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azu
Previously updated : 7/19/2024 Last updated : 8/15/2024
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/applications-dont-support-tls-1-2.md
tag: top-support-issue -+ Last updated 07/23/2024
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/automation-manage-cloud-services.md
Title: Manage Azure Cloud Services (classic) using Azure Automation | Microsoft Docs description: Learn about how the Azure Automation service can be used to manage Azure cloud services at scale. -+ Last updated 07/23/2024
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md
Title: Troubleshooting Cloud Service (classic) allocation failures | Microsoft Docs description: Troubleshoot an allocation failure when you deploy Azure Cloud Services. Learn how allocation works and why allocation can fail. -+ Last updated 07/23/2024
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-certs-create.md
Title: Cloud Services (classic) and management certificates | Microsoft Docs description: Learn about how to create and deploy certificates for cloud services and for authenticating with the management API in Azure. -+ Last updated 07/23/2024
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-choose-me.md
Title: What is Azure Cloud Services (classic) | Microsoft Docs description: Learn about what Azure Cloud Services is, specifically its design to support applications that are scalable, reliable, and inexpensive to operate. -+ Last updated 07/23/2024
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
Title: Configure TLS for a cloud service | Microsoft Docs description: Learn how to specify an HTTPS endpoint for a web role and how to upload a TLS/SSL certificate to secure your application. These examples use the Azure portal. -+ Last updated 07/23/2024
cloud-services Cloud Services Connect To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-connect-to-custom-domain.md
Title: Connect a Cloud Service (classic) to a custom Domain Controller | Microsoft Docs description: Learn how to connect your web/worker roles to a custom AD Domain using PowerShell and AD Domain Extension -+ Last updated 07/23/2024
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-custom-domain-name-portal.md
Title: Configure a custom domain name in Cloud Services (classic) | Microsoft Docs description: Learn how to expose your Azure application or data to the internet on a custom domain by configuring Domain Name System (DNS) settings. These examples use the Azure portal. -+ Last updated 07/23/2024
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-diagnostics-powershell.md
Title: Enable diagnostics in Azure Cloud Services (classic) using PowerShell | Microsoft Docs description: Learn how to use PowerShell to enable collecting diagnostic data from an Azure Cloud Service with the Azure Diagnostics extension. -+ Last updated 07/23/2024
cloud-services Cloud Services Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-disaster-recovery-guidance.md
Title: Handling an Azure service disruption that impacts Azure Cloud Services (classic) description: Learn what to do if an Azure service disruption that impacts Azure Cloud Services. -+ Last updated 07/23/2024
cloud-services Cloud Services Dotnet Diagnostics Trace Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md
Title: Trace the flow in Cloud Services (classic) Application with Azure Diagnostics description: Add tracing messages to an Azure application to help debugging, measuring performance, monitoring, traffic analysis, and more. -+ Last updated 07/23/2024
cloud-services Cloud Services Dotnet Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-diagnostics.md
Title: How to use Azure diagnostics (.NET) with Cloud Services (classic) | Microsoft Docs description: Using Azure diagnostics to gather data from Azure cloud Services for debugging, measuring performance, monitoring, traffic analysis, and more. -+ Last updated 07/23/2024
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-get-started.md
Title: Get started with Azure Cloud Services (classic) and ASP.NET | Microsoft Docs description: Learn how to create a multi-tier app using ASP.NET Model-View-Controller (MVC) and Azure. The app runs in a cloud service, with web role and worker role. It uses Entity Framework, SQL Database, and Azure Storage queues and blobs. -+ Last updated 07/23/2024
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
Title: Install .NET on Azure Cloud Services (classic) roles description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles. -+ Last updated 07/23/2024
cloud-services Cloud Services Enable Communication Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-enable-communication-role-instances.md
Title: Communication for Roles in Cloud Services (classic) | Microsoft Docs description: Role instances in Cloud Services can have endpoints (http, https, tcp, udp) defined for them that communicate with the outside or between other role instances. -+ Last updated 07/23/2024
cloud-services Cloud Services Guestos Family 2 3 4 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family-2-3-4-retirement.md
Title: Guest OS family 2, 3, and 4 retirement notice | Microsoft Docs description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if their retirement affects you. -+ -+ Last updated 07/23/2024
cloud-services Cloud Services Guestos Family1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family1-retirement.md
Title: Guest OS family 1 retirement notice | Microsoft Docs description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if its retirement affects you. -+ -+ Last updated 07/23/2024
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS. -+ ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77-+ Last updated 07/31/2024
cloud-services Cloud Services Guestos Retirement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-retirement-policy.md
Title: Supportability and retirement policy guide for Azure Guest OS | Microsoft Docs description: Provides information about what Microsoft supports regarding the Azure Guest OS used by Cloud Services. -+ ms.assetid: 919dd781-4dc6-4e50-bda8-9632966c5458-+ Last updated 07/23/2024
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. -+ ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34-+ Last updated 07/31/2024
cloud-services Cloud Services How To Create Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md
Title: How to create and deploy a cloud service (classic) | Microsoft Docs description: Learn how to use the Quick Create method to create a cloud service and use Upload to upload and deploy a cloud service package in Azure. -+ Last updated 07/23/2024
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-manage-portal.md
Title: Common cloud service management tasks | Microsoft Docs description: Learn how to manage Cloud Services in the Azure portal. These examples use the Azure portal. -+ Last updated 07/23/2024
cloud-services Cloud Services How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-monitor.md
Title: Monitor an Azure Cloud Service (classic) | Microsoft Docs description: Describes what monitoring an Azure Cloud Service involves and what some of your options are. -+ Last updated 07/23/2024
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-portal.md
Title: Auto scale a cloud service (classic) in the portal | Microsoft Docs description: Learn how to use the portal to configure auto scale rules for a cloud service (classic) roles in Azure. -+ Last updated 07/23/2024
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-how-to-scale-powershell.md
Title: Scale an Azure cloud service (classic) in Windows PowerShell | Microsoft Docs description: Learn how to use PowerShell to scale a web role or worker role in or out in Azure cloud services (classic). -+ Last updated 07/23/2024
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Title: Node.js application using Socket.io - Azure description: Socket.IO is now natively supported on Azure. This old tutorial shows how to self-host a socket.IO-based chat application on Azure. The latest recommendation is to let Socket.IO provide real time communication for a Node.js server and clients, and let Azure manage scaling client connections. -+ Last updated 07/23/2024
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
Title: Node.js Getting Started Guide description: Learn how to create a Node.js web application and deploy it to an Azure cloud service. -+ Last updated 07/23/2024
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
Title: Build and deploy a Node.js Express app to Azure Cloud Services (classic) description: Use this tutorial to create a new application using the Express module, which provides a Model-View-Control (MVC) framework for creating Node.js web applications. -+ Last updated 07/23/2024
cloud-services Cloud Services Performance Testing Visual Studio Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md
Title: Profiling a Cloud Service (classic) Locally in the Compute Emulator | Microsoft Docs description: Investigate performance issues in cloud services with the Visual Studio profiler -+ Last updated 07/23/2024
cloud-services Cloud Services Php Create Web Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-php-create-web-role.md
Title: Create Azure web and worker roles for PHP
description: A guide to creating PHP web and worker roles in an Azure cloud service, and configuring the PHP runtime. ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b-+ ms.devlang: php Last updated 07/23/2024
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
Title: Create a cloud service (classic) container with PowerShell | Microsoft Docs description: This article explains how to create a cloud service container with PowerShell. The container hosts web and worker roles. -+ Last updated 07/23/2024
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
Title: Use the classic deployment model (Python) - feature guide description: Learn how to programmatically perform common service management tasks from Python. -+ Last updated 07/23/2024
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
Title: Get started with Python and Azure Cloud Services (classic)| Microsoft Docs description: Overview of using Python Tools for Visual Studio to create Azure cloud services including web roles and worker roles. -+ Last updated 07/23/2024
cloud-services Cloud Services Role Enable Remote Desktop New Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md
Title: Use the portal to enable Remote Desktop for a Role description: How to configure your Azure cloud service application to allow remote desktop connections through the Azure portal. -+ Last updated 07/23/2024
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Title: Use PowerShell to enable Remote Desktop for a Role description: How to configure your Azure cloud service application using PowerShell to allow remote desktop connections through PowerShell. -+ Last updated 07/23/2024
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
Title: Using Visual Studio, enable Remote Desktop for a Role (Azure Cloud Services classic) description: How to configure your Azure cloud service application to allow remote desktop connections through Visual Studio. -+ Last updated 07/23/2024
cloud-services Cloud Services Role Lifecycle Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md
Title: Handle Cloud Service (classic) lifecycle events | Microsoft Docs description: Learn how to use the lifecycle methods of a Cloud Service role in .NET, including RoleEntryPoint, which provides methods to respond to lifecycle events. -+ Last updated 07/23/2024
cloud-services Cloud Services Startup Tasks Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks-common.md
Title: Common startup tasks for Cloud Services (classic) | Microsoft Docs description: Provides some examples of common startup tasks you may want to perform in your cloud services web role or worker role. -+ Last updated 07/23/2024
cloud-services Cloud Services Startup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-startup-tasks.md
Title: Run Startup Tasks in Azure Cloud Services (classic) | Microsoft Docs description: Startup tasks help prepare your cloud service environment for your app. This article teaches you how startup tasks work and how to make them -+ Last updated 07/23/2024
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
Title: Common causes of Cloud Service (classic) roles recycling | Microsoft Docs description: A cloud service role that suddenly recycles can cause significant downtime. Here are some common issues that cause roles to be recycled, which may help you reduce downtime. -+ Last updated 07/23/2024
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
description: This article shows how to resolve a ConstrainedAllocationFailed exc
-+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
Title: Default TEMP folder size is too small for a role | Microsoft Docs description: A cloud service role has a limited amount of space for the TEMP folder. This article provides some suggestions on how to avoid running out of space. -+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
Title: Troubleshoot cloud service (classic) deployment problems | Microsoft Docs description: There are a few common problems you may run into when deploying a cloud service to Azure. This article provides solutions to some of them. -+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
description: This article shows how to resolve a FabricInternalServerError or Se
-+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
description: This article shows how to resolve a LocationNotFoundForRoleSize exc
-+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
description: This article shows how to resolve an OverconstrainedAllocationReque
-+ Last updated 07/24/2024
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
Title: Troubleshoot roles that fail to start | Microsoft Docs description: Here are some common reasons why a Cloud Service role may fail to start. Solutions to these problems are also provided. -+ Last updated 07/24/2024
cloud-services Cloud Services Update Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-update-azure-service.md
Title: How to update a cloud service (classic) | Microsoft Docs description: Learn how to update cloud services in Azure. Learn how an update on a cloud service proceeds to ensure availability. -+ Last updated 07/24/2024
cloud-services Cloud Services Workflow Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-workflow-process.md
Title: Workflow of Microsoft Azure Virtual Machine (VM) Architecture | Microsoft Docs description: This article provides overview of the workflow processes when you deploy a service. -+ Last updated 07/24/2024
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-extension-to-storage.md
Title: Store and View Diagnostic Data in Azure Storage
description: Learn how to collect Azure diagnostics data in an Azure Storage account so you can view it with one of several available tools. -+ Last updated 07/24/2024
cloud-services Diagnostics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/diagnostics-performance-counters.md
Title: Collect on Performance Counters in Azure Cloud Services (classic) | Microsoft Docs description: Learn how to discover, use, and create performance counters in Cloud Services with Azure Diagnostics and Application Insights. -+ Last updated 07/24/2024
cloud-services Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md
tags: azure-resource-manager keywords: spectre,meltdown,specter-+ vm-windows Last updated 07/24/2024
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority is tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the call is dropped. ### Three routes example:
-If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`, and created a third route with `^+1(\d[10])$` with `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 nor sbc2 are unavailable, the route with lower priority is tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of a second route are available, the third route is tried. If sbc5 is also not available, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`, and it isn't available, the call is dropped.
+If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`, and created a third route with the pattern `^\+1(\d{10})$` and `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority is tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the third route is tried. If sbc5 is also unavailable, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`; if that SBC isn't available, the call is dropped.
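To sanity-check the patterns themselves, here's a small local sketch using `grep -E`. The phone numbers are made up, and `\d` is written as `[0-9]` because POSIX extended regular expressions don't support `\d`; this only illustrates which numbers each pattern captures and isn't part of any Azure tooling.

```bash
# Routes 1 and 2: +1 followed by 425 or 206 and seven more digits.
echo "+14255550123" | grep -E '^\+1(425|206)[0-9]{7}$'   # matches
echo "+13215550123" | grep -E '^\+1(425|206)[0-9]{7}$'   # no match

# Route 3: +1 followed by any ten digits, so it also catches +1 321 numbers.
echo "+13215550123" | grep -E '^\+1[0-9]{10}$'           # matches
```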
> [!NOTE] > Failover to the next SBC in voice routing works only for response codes 408, 503, and 504.
container-registry Tutorial Rotate Revoke Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-rotate-revoke-customer-managed-keys.md
If you configure the registry for manual updating for a new key version, run the
> [!TIP] > When you run `az acr encryption rotate-key`, you can pass either a versioned key ID or an unversioned key ID. If you use an unversioned key ID, the registry is then configured to automatically detect later key version updates.
-To update a customer-managed key version manually, you have two options:
+To update a customer-managed key version manually, you have three options:
-- Rotate the key and use a user-assigned identity.
+- Rotate the key and use the client ID of a managed identity.
- If you're using the key from a different key vault, verify that `principal-id-user-assigned-identity` has the `get`, `wrap`, and `unwrap` permissions on that key vault.
+  If you're using the key from a different key vault, verify that the `identity` has the `get`, `wrap`, and `unwrap` permissions on that key vault.
```azurecli az acr encryption rotate-key \ --name <registry-name> \ --key-encryption-key <new-key-id> \
- --identity <principal-id-user-assigned-identity>
+ --identity <client ID of a managed identity>
```
+- Rotate the key and use a user-assigned identity.
+
+Before you use the user-assigned identity, verify that the `get`, `wrap`, and `unwrap` permissions are assigned to it.
+
+ ```azurecli
+ az acr encryption rotate-key \
+ --name <registry-name> \
+ --key-encryption-key <new-key-id> \
+ --identity <id of user assigned identity>
+ ```
+
- Rotate the key and use a system-assigned identity.
- Before you use the system-assigned identity, verify that the `get`, `wrap`, and `unwrap` permissions are assigned to it.
+Before you use the system-assigned identity, verify that the `get`, `wrap`, and `unwrap` permissions are assigned to it.
```azurecli az acr encryption rotate-key \
dev-box How To Access Dev Box Task View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-access-dev-box-task-view.md
+
+ Title: Access a dev box with Task view
+
+description: Learn how to connect to your dev box using Task view in Windows for enhanced multitasking and organization.
++++ Last updated : 08/1/2024++
+#customer intent: As a dev box user, I want to connect to my dev box with Task view, so that I can swap between my local machine and my dev box quickly.
++
+# Connect to a dev box by using Task view
+
+This article shows you how to connect to your dev box by using Task view.
+
+## Prerequisites
+
+To complete the steps in this article, you must have:
+- Access to a dev box through the developer portal.
+- Windows App installed.
+ - If you don't have Windows App installed, see [Get started with Windows App to connect to devices and apps](/windows-app/get-started-connect-devices-desktops-apps?context=/azure/dev-box/context/context&pivots=dev-box)
+
+## Use Task view
+
+Task view is a feature in Windows 11 (and Windows 10) that enhances multitasking and organization. Task view lets you quickly switch between your local machine and your dev boxes. You access it by selecting the Task view button in the taskbar or using the Windows key + Tab keyboard shortcut. In Task view, you see a list of your dev boxes, and you can easily switch to a different one.
+
+### Add a dev box to Task view
+
+1. Open Windows App.
+1. For the dev box you want to configure, select **(...)** > **Add to Task view**.
+
+ :::image type="content" source="media/how-to-access-dev-box-task-view/windows-app-options-add-task-view.png" alt-text="Screenshot of the dev box tile options menu with Add to task view highlighted." lightbox="media/how-to-access-dev-box-task-view/windows-app-options-add-task-view.png":::
+
+1. On the taskbar, select Task view.
+
+ :::image type="content" source="media/how-to-access-dev-box-task-view/taskbar-task-view.png" alt-text="Screenshot of the task bar with Task view highlighted." lightbox="media/how-to-access-dev-box-task-view/taskbar-task-view.png":::
+
+1. To connect, select your dev box.
+
+ :::image type="content" source="media/how-to-access-dev-box-task-view/task-view-local.png" alt-text="Screenshot of Task view showing the available desktops with the dev box highlighted." lightbox="media/how-to-access-dev-box-task-view/task-view-local.png":::
+
+### Switch between machines
+
+1. To switch between your dev box and your local machine, on the taskbar, select Task view, and then select **local desktops**.
+
+ :::image type="content" source="media/how-to-access-dev-box-task-view/task-view-dev-box.png" alt-text="Screenshot of Task view showing the available desktops with Local desktops highlighted." lightbox="media/how-to-access-dev-box-task-view/task-view-dev-box.png":::
+
+### Remove dev box from Task view
+
+1. Open Windows App.
+1. For the dev box you want to configure, select **(...)** > **Remove from Task view**.
+
+ :::image type="content" source="media/how-to-access-dev-box-task-view/windows-app-options-remove-task-view.png" alt-text="Screenshot of Windows App devices page, with Remove from Task view highlighted." lightbox="media/how-to-access-dev-box-task-view/windows-app-options-remove-task-view.png":::
+
+The dev box is removed from Task view.
+
+### Troubleshoot Task view
+
+If you added a dev box to Task view but no longer have access to it, you might want to remove it from the Task view. Attempting to remove the stale dev box by selecting the [**Remove from Task view**](#remove-dev-box-from-task-view) option in Windows App might fail.
+
+There are two troubleshooting options for removing a stale dev box from Task view. First, uninstall and reinstall Windows App.
+
+If the unwanted dev box still shows in Task view after reinstalling Windows App, you can remove it by deleting the registry entry for the stale dev box.
+
+To remove the registry entry:
+
+1. Open the Registry Editor app.
+1. Navigate to `Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RemoteSystemProviders\`
+1. Open the subfolder that has a title containing "Windows365".
+1. Delete the registry key that is titled as your stale dev box.
+1. Restart your local machine.
+
+## Related content
+
+- Learn how to [configure multiple monitors in Windows App](/windows-app/device-actions?tabs=windows)
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
dev-box How To Connect To Dev Box With Windows App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-connect-to-dev-box-with-windows-app.md
- Title: 'Connect to a dev box by using the Windows App'-
-description: Step-by-step guide to connect to your dev box using the Windows App, configure multiple monitors and quickly switch between machines by using Task view.
---- Previously updated : 06/15/2024--
-#customer intent: As a dev box user, I want to be aware of the features of Windows App, so I can decide if I want to use it to connect to my dev boxes.
--
-# Connect to a dev box by using the Windows App
-
-This article shows you how to connect to your dev box by using the Windows App. You learn how to install the Windows App, connect to a dev box, and configure multiple monitors. You also learn how to access your dev box through Windows Task view.
-
-The Windows App securely connects you to your Dev Box, and enables you to quickly switch between multiple dev boxes.
--
-## Prerequisites
--- To complete the steps in this article, you must have access to a dev box through the developer portal.-
-## Install the Windows App
-
-The Windows App might be pre-installed on your computer. If not, you can download and install it from the Microsoft Store:
-
-1. From the Start menu, open Microsoft Store.
-1. In the search bar at the top-right, enter *Windows App*, and then press ENTER.
-1. From the search results, select **Windows App**.
-1. Select **Get**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/microsoft-store-windows-app.png" alt-text="Screenshot of the Microsoft Store Windows App page, with Get highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/microsoft-store-windows-app.png":::
-
-
-## Connect to a dev box
-
-The Windows App shows all your available virtual desktops, including your dev boxes. To connect to a dev box by using the Windows App:
-
-1. Once installed, select **Open** in the Microsoft Store. Alternatively, find the app in the Start menu.
-1. Agree to the license terms.
-1. Read the Welcome screens, and select **Next**, **Next**, and then **Done** to progress.
-1. Select **Go to devices**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-home.png" alt-text="Screenshot of the Windows App home page." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-home.png":::
-
-1. On the dev box you want to connect to, select **Connect**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-connect.png" alt-text="Screenshot of the Windows App devices page, with Connect highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-connect.png":::
-
-## Configure multiple monitors
-
-Making the most of your multiple monitors can enhance your productivity. In the Windows App, you can configure your dev box to use all available displays, a single display, or select displays.
-
-1. Open the Windows App.
-1. For the dev box you want to configure, select **(...)** > **Settings**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-settings.png" alt-text="Screenshot of the dev box options menu with Settings highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-settings.png":::
-
-1. On the settings pane, turn off **Use default settings**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-default-settings.png" alt-text="Screenshot of the dev box display settings with Default settings off highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-default-settings.png":::
-
-1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
-
- | Value | Description | Options |
- ||||
- | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
- | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution on when a window is resized. |
- | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-display-settings.png" alt-text="Screenshot of the dev box display settings with all display settings highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-display-settings.png":::
-
-1. Close the **Display** pane.
-1. On the dev box tile, select **Connect**.
-
-## Use Task view
-
-Task view is a feature in Windows 11 (and Windows 10) that enhances multitasking and organization. Task view lets you quickly switch between your local machine and your dev boxes. You access it by selecting the Task view button in the taskbar or using the Windows key + Tab keyboard shortcut. In Task view, you see a list of your dev boxes, and you can easily switch to a different one.
-
-### Add a dev box to Task view
-
-1. Open the Windows App.
-1. For the dev box you want to configure, select **(...)** > **Add to Task view**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-add-task-view.png" alt-text="Screenshot of the dev box tile options menu with Add to task view highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-add-task-view.png":::
-
-1. On the taskbar, select Task view.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/taskbar-task-view.png" alt-text="Screenshot of the task bar with Task view highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/taskbar-task-view.png":::
-
-1. To connect, select your dev box.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/task-view-local.png" alt-text="Screenshot of Task view showing the available desktops with the dev box highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/task-view-local.png":::
-
-### Switch between machines
-
-1. To switch between your dev box and your local machine, on the taskbar, select Task view, and then select **local desktops**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/task-view-dev-box.png" alt-text="Screenshot of Task view showing the available desktops with Local desktops highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/task-view-dev-box.png":::
-
-### Remove dev box from Task view
-
-1. Open the Windows App.
-1. For the dev box you want to configure, select **(...)** > **Remove from Task view**.
-
- :::image type="content" source="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-remove-task-view.png" alt-text="Screenshot of the Windows App devices page, with Remove from Task view highlighted." lightbox="media/how-to-connect-to-dev-box-with-windows-app/windows-app-options-remove-task-view.png":::
-
-The dev box is removed from Task view.
-
-### Troubleshooting Task view
-
-If you added a dev box to Task view but no longer have access to it, you might want to remove it from the Task view. Attempting to remove the stale dev box by selecting the [**Remove from Task view**](#remove-dev-box-from-task-view) option in the Windows App might fail.
-
-There are two troubleshooting options for removing a stale dev box from Task view: first, uninstall and reinstall the Windows App.
-
-If the unwanted dev box still shows in Task view after reinstalling the Windows App, you can remove it by deleting the registry entry for the stale dev box.
-
-To remove the registry entry:
-
-1. Open the Registry Editor app.
-1. Navigate to `Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RemoteSystemProviders\`
-1. Open the subfolder that has a title containing "Windows365".
-1. Delete the registry key that is titled as your stale dev box.
-1. Restart your local machine.
-
-## Related content
--- Learn how to [configure multiple monitors](./tutorial-configure-multiple-monitors.md) for your Remote Desktop client.-- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
education-hub Add Student Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/add-student-api.md
Last updated 03/11/2023
-# Add students to a lab in the Azure Education Hub
+# Add students to a lab in the Azure Education Hub with REST APIs
This article walks through how to add students to a lab in the Azure Education Hub by using REST APIs.
education-hub Create Lab Education Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-lab-education-hub.md
Last updated 03/11/2023
-# Create a lab in the Azure Education Hub
+# Create a lab in the Azure Education Hub with REST APIs
This article walks you through how to create a lab and verify its creation by using REST APIs.
education-hub Delete Lab Education Hub Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/delete-lab-education-hub-apis.md
Last updated 1/24/2022
-# Delete a lab in the Azure Education Hub
+# Delete a lab in the Azure Education Hub with REST APIs
This article walks you through how to delete a lab in the Azure Education Hub by using REST APIs. Before you delete a lab, you must delete all students from it.
hdinsight Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-monitor-agent.md
Title: Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters
description: Learn how to migrate to Azure Monitor Agent (AMA) in Azure HDInsight clusters. Previously updated : 07/31/2024 Last updated : 08/14/2024 # Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters
Activate the new integration by going to your cluster's portal page and scrollin
1. Then, select Enable and you can choose the Log Analytics workspace that you want your logs to be sent to. :::image type="content" source="./media/azure-monitor-agent/monitor-integration.png" alt-text=" Screenshot showing Azure monitor integration." border="true" lightbox="./media/azure-monitor-agent/monitor-integration.png":::
-1. Enable Azure Monitor Agent Integration with Log Analytics and select your workspace (existing workspace when you're migrating from your previous image to newer image)
+1. Enable Azure Monitor Agent Integration with Log Analytics and select your workspace (use the existing workspace when you're migrating from your previous image to a newer image).
1. Once you confirm the workspace selection, precondition steps commence. :::image type="content" source="./media/azure-monitor-agent/pre-condition.png" alt-text="Screenshot showing preconditions." border="true" lightbox="./media/azure-monitor-agent/pre-condition.png":::
-1. Select Save once precondition steps are complete.
+1. Select Save once precondition steps are complete.
+
+### Enable Azure Monitor Agent logging for Spark cluster
+
+Azure HDInsight Spark clusters control AMA integration through the Spark configuration `spark.hdi.ama.enabled`; by default, the value is set to false. This configuration controls whether Spark-specific logs appear in the Log Analytics workspace. If you want to enable AMA in your Spark clusters and retrieve the Spark event logs in your Log Analytics workspace, you need to perform an extra step to enable AMA for Spark-specific logs.
+
+The following steps describe how to enable the new Azure Monitor Agent logging for your Spark workloads.
+
+1. Go to Ambari -> Spark Configs.
+
+1. Navigate to **Custom Spark defaults** and search for the `spark.hdi.ama.enabled` configuration; the default value is false. Set the value to **true**.
+
+ :::image type="content" source="./media/azure-monitor-agent/enable-spark.png" alt-text="Screenshot showing how to enable Azure Monitor Agent logging for Spark cluster." border="true" lightbox="./media/azure-monitor-agent/enable-spark.png":::
+
+1. Select **Save**, and then restart the Spark services on all nodes.
+
+1. Access the tables in the Log Analytics workspace.
### Access the new tables
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
-
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Manage Azure IoT Central applications from the CSP portal
-description: As a CSP, learn how to create and manage an Azure IoT Central application on behalf of your customer.
---- Previously updated : 06/13/2023----
-# Create and manage an Azure IoT Central application from the CSP portal
-
-The Microsoft Cloud Solution Provider (CSP) program is a Microsoft Reseller program. Its intent is to provide our channel partners with a one-stop program to resell all Microsoft Commercial Online Services. Learn more about the [Cloud Solution Provider program](https://partner.microsoft.com/cloud-solution-provider).
--
-As a CSP, you can create and manage Microsoft Azure IoT Central applications on behalf of your customers through the [Microsoft Partner Center](https://partnercenter.microsoft.com/partner/home). When Azure IoT Central applications are created on behalf of customers by CSPs, just like with other CSP managed Azure services, CSPs manage billing for customers. A charge for Azure IoT Central appears in your total bill in the Microsoft Partner Center.
-
-To get started, sign-in to your account on the Microsoft Partner Portal and select a customer for whom you want to create an Azure IoT Central application. Navigate to **Service Management** for the customer from the left nav.
-
-![Microsoft Partner Center, customer view](media/howto-create-and-manage-applications-csp/image1.png)
-
-Azure IoT Central is listed as a service available to administer. Select the **Azure IoT Central** link on the page to create new applications or manage existing applications for this customer.
-
-![Azure IoT Central available to manage](media/howto-create-and-manage-applications-csp/image2.png)
-
-You land on the **Azure IoT Central Application Manager** page. Azure IoT Central keeps context that you came from the Microsoft Partner Center and that you came to manage that particular customer. The **Application Manager** page header shows the Microsoft Partner Center context. From here, you can either navigate to an existing application you created earlier for this customer to manage or create a new application for the customer.
-
-![Create Manager for CSPs](media/howto-create-and-manage-applications-csp/image3.png)
--
-To create an Azure IoT Central application, select **Build** in the left menu. Choose one of the industry templates, or choose **Custom app** to create an application from scratch. You must complete all the fields on the **Application Create** page and then choose **Create**.
-
-## Application name
-
-The name of your application is displayed on the **Application Manager** page and within each Azure IoT Central application. You can choose any name for your Azure IoT Central application. Choose a name that makes sense to you and to others in your organization.
-
-## Application URL
-
-The application URL is the link to your application. You can save a bookmark to it in your browser or share it with others.
-
-When you enter the name for your application, your application URL is autogenerated. If you prefer, you can choose a different URL for your application. Each Azure IoT Central URL must be unique within Azure IoT Central. You see an error message if the URL you choose has already been taken.
-
-## Directory
-
-Azure IoT Central knows the customer you selected in the Microsoft Partner Portal, so you see just the Microsoft Entra tenant for that customer in the **Directory** field.
-
-A Microsoft Entra tenant contains user identities, credentials, and other organizational information. Multiple Azure subscriptions can be associated with a single Microsoft Entra tenant.
-
-To learn more, see [Microsoft Entra ID](../../active-directory/index.yml).
-
-## Azure subscription
-
-An Azure subscription enables you to create instances of Azure services. Azure IoT Central automatically finds all Azure Subscriptions of the customer to which you have access, and displays them in a dropdown on the **Create Application** page. Choose an Azure subscription to create a new Azure IoT Central Application.
-
-If you don't have an Azure subscription, you can create one in the Microsoft Partner Center. After you create the Azure subscription, navigate back to the **Create Application** page. Your new subscription appears in the **Azure Subscription** drop-down.
-
-To learn more, see [Azure subscriptions](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing).
-
-## Location
-
-**Location** is where you'd like to create the application. Typically, you should choose the location that's physically closest to your devices to get optimal performance. Currently, you can create an IoT Central application in the **Australia East**, **Canada Central**, **Central US**, **East US**, **East US 2**, **Japan East**, **North Europe**, **South Central US**, **Southeast Asia**, **UK South**, **West Europe**, and **West US** regions. Once you choose a location, you can't later move your application to a different location.
-
-## Application template
-
-Choose the application template you want to use for your application.
-
-## Next steps
-
-Now that you have learned how to create an Azure IoT Central application as a CSP, here's the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Administer your application](howto-administer.md)
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Whichever approach you choose, the configuration options are the same, and the p
Other approaches, not described in this article include: - [Use the REST API to create and manage IoT Central applications.](../core/howto-manage-iot-central-with-rest-api.md).-- [Create and manage an Azure IoT Central application from the Microsoft Cloud Solution Provider portal](howto-create-and-manage-applications-csp.md).
+- [Create and manage an Azure IoT Central application from the Microsoft Cloud Solution Provider portal](https://partner.microsoft.com/cloud-solution-provider).
## Parameters
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
If you restrict access to your virtual network, you need to [configure your virt
- Your Azure account has the [Network Contributor](/azure/role-based-access-control/built-in-roles#network-contributor) role, or a parent of this role, on the virtual network. See [Check access for a user to Azure resources](/azure/role-based-access-control/check-access) to verify your permissions. - The subnet you use for Azure Load Testing must have enough unassigned IP addresses to accommodate the number of load test engines for your test. Learn more about [configuring your test for high-scale load](./how-to-high-scale-load.md). - The subnet shouldn't be delegated to any other Azure service. For example, it shouldn't be delegated to Azure Container Instances (ACI). Learn more about [subnet delegation](/azure/virtual-network/subnet-delegation-overview).
+- The subnet shouldn't have IPv6 enabled. Azure Load Testing doesn't support IPv6-enabled subnets. Learn more about [IPv6 for Azure Virtual Network](/azure/virtual-network/ip-services/ipv6-overview).
- Azure CLI version 2.2.0 or later (if you're using CI/CD). Run `az --version` to find the version that's installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). ## Configure virtual network
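Before configuring the test, you can confirm that the subnet isn't delegated and check its address space with the Azure CLI. This is a sketch with placeholder resource names:

```azurecli
# Show the subnet's address prefix and any service delegations (placeholder names).
# An empty delegations list and enough free addresses satisfy the prerequisites above.
az network vnet subnet show \
    --resource-group contoso-rg \
    --vnet-name contoso-vnet \
    --name loadtest-subnet \
    --query "{addressPrefix:addressPrefix, delegations:delegations[].serviceName}"
```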
To configure the load test with your virtual network settings, update the [YAML
1. After the CI/CD workflow triggers, your load test starts, and can now access the privately hosted application endpoint in your virtual network.
+## Troubleshooting
+
+To troubleshoot issues in creating and running load tests against private endpoints, see [how to troubleshoot private endpoint tests](./troubleshoot-private-endpoint-tests.md).
## Next steps - Learn more about the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).-- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
+- Learn how to [troubleshoot private endpoint tests](./troubleshoot-private-endpoint-tests.md).
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
Title: Create and manage registries
-description: Learn how create registries with the CLI, REST API, Azure portal and Azure Machine Learning studio
+description: Learn how to create registries with the CLI, REST API, Azure portal, and Azure Machine Learning studio
Previously updated : 08/24/2023 Last updated : 08/19/2024
Azure Machine Learning entities can be grouped into two broad categories: * Assets such as __models__, __environments__, __components__, and __datasets__ are durable entities that are _workspace agnostic_. For example, a model can be registered with any workspace and deployed to any endpoint.
-* Resources such as __compute__, __job__, and __endpoints__ are _transient entities that are workspace specific_. For example, an online endpoint has a scoring URI that is unique to a specific instance in a specific workspace. Similarly, a job runs for a known duration and generates logs and metrics each time it's run.
+* Resources such as __compute__, __job__, and __endpoints__ are _transient entities that are workspace specific_. For example, an online endpoint has a scoring URI that is unique to a specific instance in a specific workspace. Similarly, a job runs for a known duration and generates logs and metrics each run.
Assets lend themselves to being stored in a central repository and used in different workspaces, possibly in different regions. Resources are workspace specific.
You need to decide the following information carefully before proceeding to crea
### Choose a name Consider the following factors before picking a name.
-* Registries are meant to facilitate sharing of ML assets across teams within your organization across all workspaces. Choose a name that is reflective of the sharing scope. The name should help identify your group, division or organization.
-* Registry name is unique with your organization (Microsoft Entra tenant). It's recommended to prefix your team or organization name and avoid generic names.
-* Registry names can't be changed once created because they're used in IDs of models, environments and components that are referenced in code.
+* Registries are meant to facilitate sharing of ML assets across teams within your organization across all workspaces. Choose a name that is reflective of the sharing scope. The name should help identify your group, division, or organization.
+* The registry name is unique within your organization (Microsoft Entra tenant). For example, you might prefix the name with your team or organization name and avoid generic names.
+* Registry names can't be changed once created because they're used in IDs of models, environments, and components that are referenced in code.
* Length can be 2-32 characters. * Alphanumerics, underscore, hyphen are allowed. No other special characters. No spaces - registry names are part of model, environment, and component IDs that can be referenced in code. * Name can contain underscore or hyphen but can't start with an underscore or hyphen. Needs to start with an alphanumeric. ### Choose Azure regions
-Registries enable sharing of assets across workspaces. To do so, a registry replicates content across multiple Azure regions. You need to define the list of regions that a registry supports when creating the registry. Create a list of all regions in which you have workspaces today and plan to add in near future. This list is a good set of regions to start with. When creating a registry, you define a primary region and a set of additional regions. The primary region can't be changed after registry creation, but the additional regions can be updated at a later point.
+Registries enable sharing of assets across workspaces. To do so, a registry replicates content across multiple Azure regions. You need to define the list of regions that a registry supports when creating the registry. Create a list of all regions in which you have workspaces today and plan to add in near future. This list is a good set of regions to start with. When creating a registry, you define a primary region and a set of other regions. The primary region can't be changed after registry creation, but the other regions can be updated at a later point.
### Check permissions
You can create registries in Azure Machine Learning studio using the following s
:::image type="content" source="./media/how-to-manage-registries/studio-registry-select-regions.png" alt-text="Screenshot of the registry region selection":::
-1. Review the information you provided, and then select __Create__. You can track the progress of the create operation in the Azure portal. Once the registry is successfully created, you can find it listed in the __Manage Registries__ tab.
+1. Review the information you provided, and then select __Create__. You can track the progress of the operation in the Azure portal. Once the registry is successfully created, you can find it listed in the __Manage Registries__ tab.
:::image type="content" source="./media/how-to-manage-registries/studio-create-registry-review.png" alt-text="Screenshot of the create + review tab."::: # [Azure portal](#tab/portal) 1. From the [Azure portal](https://portal.azure.com), navigate to the Azure Machine Learning service. You can get there by searching for __Azure Machine Learning__ in the search bar at the top of the page or going to __All Services__ looking for __Azure Machine Learning__ under the __AI + machine learning__ category.
-1. Select __Create__, and then select __Azure Machine Learning registry__. Enter the registry name, select the subscription, resource group and primary region, then select __Next__.
+1. Select __Create__, and then select __Azure Machine Learning registry__. Enter the registry name, select the subscription, resource group, and primary region, then select __Next__.
-1. Select the additional regions the registry must support, then select __Next__ until you arrive at the __Review + Create__ tab.
+1. Select the other regions the registry must support, then select __Next__ until you arrive at the __Review + Create__ tab.
:::image type="content" source="./media/how-to-manage-registries/create-registry-review.png" alt-text="Screenshot of the review + create tab.":::
To create a registry, use the following command. You can edit the JSON to change
> We recommend using the latest API version when working with the REST API. For a list of the current REST API versions for Azure Machine Learning, see the [Machine Learning REST API reference](/rest/api/azureml/). The current API versions are listed in the table of contents on the left side of the page. ```bash
-curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2023-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
+curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2024-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
{ "properties": {
replication_locations:
## Add users to the registry
-Decide if you want to allow users to only use assets (models, environments and components) from the registry or both use and create assets in the registry. Review [steps to assign a role](../role-based-access-control/role-assignments-steps.md) if you aren't familiar how to manage permissions using [Azure role-based access control](../role-based-access-control/overview.md).
+Decide whether you want to allow users to only use assets (models, environments, and components) from the registry, or to both use and create assets in the registry. Review [steps to assign a role](../role-based-access-control/role-assignments-steps.md) if you aren't familiar with how to manage permissions using [Azure role-based access control](../role-based-access-control/overview.md).
### Allow users to use assets from the registry
Microsoft.MachineLearningServices/registries/assets/read | Allows the user to br
### Allow users to create and use assets from the registry
-To let the user both read and create or delete assets, grant the following write permission in addition to the above read permissions.
+To let the user both read and create or delete assets, grant the following write permission in addition to the previous read permissions.
Permission | Description --|--
Microsoft.MachineLearningServices/registries/assets/delete| Delete assets in reg
### Allow users to create and manage registries
-To let users create, update and delete registries, grant them the built-in __Contributor__ or __Owner__ role. If you don't want to use built in roles, create a custom role with the following permissions, in addition to all the above permissions to read, create and delete assets in registry.
+To let users create, update, and delete registries, grant them the built-in __Contributor__ or __Owner__ role. If you don't want to use built-in roles, create a custom role with the following permissions, in addition to all the previous permissions to read, create, and delete assets in the registry.
Permission | Description --|--
Microsoft.MachineLearningServices/registries/delete | Allows the user to delete
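As an illustration only, the asset permissions above could be combined into a custom role with the Azure CLI. The role name, scope, and the exact action list below are assumptions to adapt to your own requirements; the write action in particular is inferred from the permissions table and should be verified against the registry's documented actions.

```azurecli
# Hypothetical custom role combining the registry asset permissions discussed above
cat > registry-asset-contributor.json <<'EOF'
{
  "Name": "AzureML Registry Asset Contributor (example)",
  "Description": "Read, create, and delete assets in Azure Machine Learning registries.",
  "Actions": [
    "Microsoft.MachineLearningServices/registries/read",
    "Microsoft.MachineLearningServices/registries/assets/read",
    "Microsoft.MachineLearningServices/registries/assets/write",
    "Microsoft.MachineLearningServices/registries/assets/delete"
  ],
  "AssignableScopes": [
    "/subscriptions/<your-subscription-id>"
  ]
}
EOF

# Create the role, then assign it to a user or group on the registry's resource group
az role definition create --role-definition @registry-asset-contributor.json
az role assignment create \
  --role "AzureML Registry Asset Contributor (example)" \
  --assignee <user-or-group-object-id> \
  --scope /subscriptions/<your-subscription-id>/resourceGroups/<registry-resource-group>
```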
## Next steps
-* [Learn how to share models, components and environments across workspaces with registries](./how-to-share-models-pipelines-across-workspaces-with-registries.md)
+* [Learn how to share models, components, and environments across workspaces with registries](./how-to-share-models-pipelines-across-workspaces-with-registries.md)
* [Network isolation with registries](./how-to-registry-network-isolation.md)
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-ingest-adf.md
Previously updated : 08/17/2022 Last updated : 08/19/2024 #Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
# Data ingestion with Azure Data Factory
-In this article, you learn about the available options for building a data ingestion pipeline with [Azure Data Factory](../../data-factory/introduction.md). This Azure Data Factory pipeline is used to ingest data for use with [Azure Machine Learning](../overview-what-is-azure-machine-learning.md). Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data has been transformed and loaded into storage, it can be used to train your machine learning models in Azure Machine Learning.
+In this article, you learn about the available options for building a data ingestion pipeline with [Azure Data Factory](../../data-factory/introduction.md). This Azure Data Factory pipeline is used to ingest data for use with [Azure Machine Learning](../overview-what-is-azure-machine-learning.md). Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data is transformed and loaded into storage, it can be used to train your machine learning models in Azure Machine Learning.
Simple data transformation can be handled with native Data Factory activities and instruments such as [data flow](../../data-factory/control-flow-execute-data-flow-activity.md). When it comes to more complicated scenarios, the data can be processed with some custom code. For example, Python or R code.
The function is invoked with the [Azure Data Factory Azure Function activity](..
* Advantages: * The data is processed on a serverless compute with a relatively low latency
- * Data Factory pipeline can invoke a [Durable Azure Function](../../azure-functions/durable/durable-functions-overview.md) that may implement a sophisticated data transformation flow
+ * Data Factory pipeline can invoke a [Durable Azure Function](../../azure-functions/durable/durable-functions-overview.md) that can implement a sophisticated data transformation flow
* The details of the data transformation are abstracted away by the Azure Function that can be reused and invoked from other places * Disadvantages: * The Azure Functions must be created before use with ADF
This method is recommended for [Machine Learning Operations (MLOps) workflows](c
Each time the Data Factory pipeline runs, 1. The data is saved to a different location in storage.
-1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](../concept-ml-pipelines.md). When calling the ML pipeline, the data location and job ID are sent as parameters.
+1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](../concept-ml-pipelines.md). When the Data Factory pipeline calls the Azure Machine Learning pipeline, the data location and job ID are sent as parameters.
1. The ML pipeline can then create an Azure Machine Learning datastore and dataset with the data location. Learn more in [Execute Azure Machine Learning pipelines in Data Factory](../../data-factory/transform-data-machine-learning-service.md). ![Diagram shows an Azure Data Factory pipeline and an Azure Machine Learning pipeline and how they interact with raw data and prepared data. The Data Factory pipeline feeds data to the Prepared Data database, which feeds a data store, which feeds datasets in the Machine Learning workspace.](media/how-to-data-ingest-adf/aml-dataset.png)
adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(
client_id=client_id, # client id of service principal ```
-Next, create a dataset to reference the file(s) you want to use in your machine learning task.
+Next, create a dataset to reference the files you want to use in your machine learning task.
The following code creates a TabularDataset from a csv file, `prepared-data.csv`. Learn more about [dataset types and accepted file formats](how-to-create-register-datasets.md#dataset-types).
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-pipelines.md
Testing scripts locally is a great way to debug major code fragments and complex
### Logging options and behavior
-The following table provides information for different debug options for pipelines. It isn't an exhaustive list, as other options exist besides just the Azure Machine Learning, Python, and OpenCensus ones shown here.
+The following table provides information for different debug options for pipelines. It isn't an exhaustive list, as other options exist besides just the Azure Machine Learning and Python ones shown here.
| Library | Type | Example | Destination | Resources | |-|--||-|| | Azure Machine Learning SDK | Metric | `run.log(name, val)` | Azure Machine Learning Portal UI | [How to track experiments](how-to-log-view-metrics.md)<br>[azureml.core.Run class](/python/api/azureml-core/azureml.core.run%28class%29) | | Python printing/logging | Log | `print(val)`<br>`logging.info(message)` | Driver logs, Azure Machine Learning designer | [How to track experiments](how-to-log-view-metrics.md)<br><br>[Python logging](https://docs.python.org/2/library/logging.html) |
-| OpenCensus Python | Log | `logger.addHandler(AzureLogHandler())`<br>`logging.log(message)` | Application Insights - traces | [Debug pipelines in Application Insights](./how-to-log-pipelines-application-insights.md)<br><br>[OpenCensus Azure Monitor Exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)<br>[Python logging cookbook](https://docs.python.org/3/howto/logging-cookbook.html) |
+ #### Logging options example
The following table provides information for different debug options for pipelin
import logging from azureml.core.run import Run
-from opencensus.ext.azure.log_exporter import AzureLogHandler
run = Run.get_context()
logger.info("I am a plain info statement, I will be sent to the driver logs.")
handler = AzureLogHandler(connection_string='<connection string>') logger.addHandler(handler)
-# Python logging with OpenCensus AzureLogHandler
-logger.warning("I am an OpenCensus warning statement, find me in Application Insights!")
-logger.error("I am an OpenCensus error statement with custom dimensions", {'step_id': run.id})
``` ## Azure Machine Learning designer
You can also find the log files for specific runs in the pipeline run detail pag
> [!IMPORTANT] > To update a pipeline from the pipeline run details page, you must **clone** the pipeline run to a new pipeline draft. A pipeline run is a snapshot of the pipeline. It's similar to a log file, and cannot be altered.
-## Application Insights
-For more information on using the OpenCensus Python library in this manner, see this guide: [Debug and troubleshoot machine learning pipelines in Application Insights](./how-to-log-pipelines-application-insights.md)
- ## Interactive debugging with Visual Studio Code In some cases, you may need to interactively debug the Python code used in your ML pipeline. By using Visual Studio Code (VS Code) and debugpy, you can attach to the code as it runs in the training environment. For more information, visit the [interactive debugging in VS Code guide](how-to-debug-visual-studio-code.md#debug-and-troubleshoot-machine-learning-pipelines).
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-pipelines-application-insights.md
- Title: 'Monitor & collect pipeline log files'-
-description: Add logging to your training and batch scoring pipelines and view the logged results in Application Insights.
----- Previously updated : 10/21/2021----
-# Collect machine learning pipeline log files in Application Insights for alerts and debugging
--
-The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
-
-Having your logs in once place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.
-
-## Prerequisites
-
-* Follow the steps to create an [Azure Machine Learning workspace](../quickstart-create-resources.md) and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
-* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK.
-* Install the [OpenCensus Azure Monitor Exporter](https://pypi.org/project/opencensus-ext-azure/) package locally:
- ```python
- pip install opencensus-ext-azure
- ```
-* Create an [Application Insights instance](/previous-versions/azure/azure-monitor/app/opencensus-python) (this doc also contains information on getting the connection string for the resource)
-
-## Getting Started
-
-This section is an introduction specific to using OpenCensus from an Azure Machine Learning pipeline. For a detailed tutorial, see [OpenCensus Azure Monitor Exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
-
-Add a PythonScriptStep to your Azure Machine Learning Pipeline. Configure your [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) with the dependency on opencensus-ext-azure. Configure the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.
-
-```python
-from azureml.core.conda_dependencies import CondaDependencies
-from azureml.core.runconfig import RunConfiguration
-from azureml.pipeline.core import Pipeline
-from azureml.pipeline.steps import PythonScriptStep
-
-# Connecting to the workspace and compute target not shown
-
-# Add pip dependency on OpenCensus
-dependencies = CondaDependencies()
-dependencies.add_pip_package("opencensus-ext-azure>=1.0.1")
-run_config = RunConfiguration(conda_dependencies=dependencies)
-
-# Add environment variable with Application Insights Connection String
-# Replace the value with your own connection string
-run_config.environment.environment_variables = {
- "APPLICATIONINSIGHTS_CONNECTION_STRING": 'InstrumentationKey=00000000-0000-0000-0000-000000000000'
-}
-
-# Configure step with runconfig
-sample_step = PythonScriptStep(
- script_name="sample_step.py",
- compute_target=compute_target,
- runconfig=run_config
-)
-
-# Submit new pipeline run
-pipeline = Pipeline(workspace=ws, steps=[sample_step])
-pipeline.submit(experiment_name="Logging_Experiment")
-```
-
-Create a file called `sample_step.py`. Import the AzureLogHandler class to route logs to Application Insights. You'll also need to import the Python Logging library.
-
-```python
-from opencensus.ext.azure.log_exporter import AzureLogHandler
-import logging
-```
-
-Next, add the AzureLogHandler to the Python logger.
-
-```python
-logger = logging.getLogger(__name__)
-logger.setLevel(logging.DEBUG)
-logger.addHandler(logging.StreamHandler())
-
-# Assumes the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING is already set
-logger.addHandler(AzureLogHandler())
-logger.warning("I will be sent to Application Insights")
-```
-
-## Logging with Custom Dimensions
-
-By default, logs forwarded to Application Insights won't have enough context to trace back to the run or experiment. To make the logs actionable for diagnosing issues, more fields are needed.
-
-To add these fields, Custom Dimensions can be added to provide context to a log message. One example is when someone wants to view logs across multiple steps in the same pipeline run.
-
-Custom Dimensions make up a dictionary of key-value (stored as string, string) pairs. The dictionary is then sent to Application Insights and displayed as a column in the query results. Its individual dimensions can be used as [query parameters](#other-helpful-queries).
-
-### Helpful Context to include
-
-| Field | Reasoning/Example |
-|--|--|
-| parent_run_id | Can query logs for ones with the same parent_run_id to see logs over time for all steps, instead of having to dive into each individual step |
-| step_id | Can query logs for ones with the same step_id to see where an issue occurred with a narrow scope to just the individual step |
-| step_name | Can query logs to see step performance over time. Also helps to find a step_id for recent runs without diving into the portal UI |
-| experiment_name | Can query across logs to see experiment performance over time. Also helps find a parent_run_id or step_id for recent runs without diving into the portal UI |
-| run_url | Can provide a link directly back to the run for investigation. |
-
-**Other helpful fields**
-
-These fields might require extra code instrumentation, and aren't provided by the run context.
-
-| Field | Reasoning/Example |
-|-|--|
-| build_url/build_version | If using CI/CD to deploy, this field can correlate logs to the code version that provided the step and pipeline logic. This link can further help to diagnose issues, or identify models with specific traits (log/metric values) |
-| run_type | Can differentiate between different model types, or training vs. scoring runs |
-
-### Creating a Custom Dimensions dictionary
-
-```python
-from azureml.core import Run
-
-run = Run.get_context(allow_offline=False)
-
-custom_dimensions = {
- "parent_run_id": run.parent.id,
- "step_id": run.id,
- "step_name": run.name,
- "experiment_name": run.experiment.name,
- "run_url": run.parent.get_portal_url(),
- "run_type": "training"
-}
-
-# Assumes AzureLogHandler was already registered above
-logger.info("I will be sent to Application Insights with Custom Dimensions", extra= {"custom_dimensions":custom_dimensions})
-```
-
-## OpenCensus Python logging considerations
-
-The OpenCensus AzureLogHandler is used to route Python logs to Application Insights. As a result, Python logging nuances should be considered. When a logger is created, it has a default log level and will show logs greater than or equal to that level. A good reference for using Python logging features is the [Logging Cookbook](https://docs.python.org/3/howto/logging-cookbook.html).
-
-The `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable is needed for the OpenCensus library. We recommend setting this environment variable instead of passing it in as a pipeline parameter to avoid passing around plaintext connection strings.
-
-## Querying logs in Application Insights
-
-The logs routed to Application Insights will show up under 'traces' or 'exceptions'. Be sure to adjust your time window to include your pipeline run.
-
-![Application Insights Query result](../media/how-to-debug-pipelines-application-insights/traces-application-insights-query.png)
-
-The result in Application Insights will show the log message and level, file path, and code line number. It will also show any custom dimensions included. In this image, the customDimensions dictionary shows the key/value pairs from the previous [code sample](#creating-a-custom-dimensions-dictionary).
-
-### Other helpful queries
-
-Some of the queries below use 'customDimensions.Level'. These severity levels correspond to the level the Python log was originally sent with. For more query information, see [Azure Monitor Log Queries](/azure/data-explorer/kusto/query/).
-
-| Use case | Query |
-||-|
-| Log results for specific custom dimension, for example 'parent_run_id' | <pre>traces \| <br>where customDimensions.parent_run_id == '931024c2-3720-11ea-b247-c49deda841c1</pre> |
-| Log results for all training runs over the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.run_type == 'training'</pre> |
-| Log results with severityLevel Error from the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.Level == 'ERROR' |
-| Count of log results with severityLevel Error over the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.Level == 'ERROR' \| <br>summarize count()</pre> |
-
-## Next Steps
-
-Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](../../azure-monitor/alerts/alerts-overview.md) based on query results.
-
-You can also add results from queries to an [Azure Dashboard](../../azure-monitor/app/overview-dashboard.md#add-a-logs-query) for more insights.
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
To use a service principal (SP), you must first create the SP. Then grant it acc
> When using a service principal, grant it the __minimum access required for the task__ it is used for. For example, you would not grant a service principal owner or contributor access if all it is used for is reading the access token for a web deployment. > > The reason for granting the least access is that a service principal uses a password to authenticate, and the password may be stored as part of an automation script. If the password is leaked, having the minimum access required for a specific task minimizes the malicious use of the SP.
+>
+> You should rotate secrets such as the service principal password on a regular basis.
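As a hedged example of such a rotation, the Azure CLI can generate a new password for an existing service principal; the application ID below is a placeholder.

```azurecli
# Reset the service principal's credentials; a new password is generated and returned.
# Update any automation that stores the old secret, since the reset invalidates it by default.
az ad sp credential reset --id <your-service-principal-app-id>
```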
The easiest way to create an SP and grant access to your workspace is by using the [Azure CLI](/cli/azure/install-azure-cli). To create a service principal and grant it access to your workspace, use the following steps:
operator-nexus Howto Cluster Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md
The Cluster Manager is deployed in the operator's Azure subscription to manage t
## Before you begin
-You'll need:
+Ensure you have the following information:
- **Azure Subscription ID** - The Azure subscription ID where Cluster Manager needs to be created (should be the same subscription ID of the Network Fabric Controller).-- **Network Fabric Controller ID** - Network Fabric Controller and Cluster Manager have a 1:1 association. You'll need the resource ID of the Network Fabric Controller associated with the Cluster Manager.
+- **Network Fabric Controller ID** - Network Fabric Controller and Cluster Manager have a 1:1 association. You need the resource ID of the Network Fabric Controller to be associated with the Cluster Manager.
- **Log Analytics Workspace ID** - The resource ID of the Log Analytics Workspace used for the logs collection. - **Azure Region** - The Cluster Manager should be created in the same Azure region as the Network Fabric Controller. This Azure region should be used in the `Location` field of the Cluster Manager and all associated Operator Nexus instances.
Some arguments that are available for every Azure CLI command
- **--query** - uses the JMESPath query language to filter the output returned from Azure services. - **--verbose** - prints information about resources created in Azure during an operation, and other useful information
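For instance, `--query` accepts a JMESPath expression that trims command output to just the fields you need. The following sketch filters the Azure regions list, which can be handy when choosing a location for the Cluster Manager; the `contains` filter is only an example.

```azurecli
# List a filtered set of Azure regions as a table, using a JMESPath --query expression
az account list-locations \
  --query "[?contains(name, 'us')].{Region:name, DisplayName:displayName}" \
  --output table
```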
-## Cluster Manager elements
+## Cluster Manager properties
-| Elements | Description |
+| Property Name | Description |
| | - | | Name, ID, location, tags, type | Name: User friendly name <br> ID: < Resource ID > <br> Location: Azure region where the Cluster Manager is created. Values from: `az account list -locations`.<br> Tags: Resource tags <br> Type: Microsoft.NetworkCloud/clusterManagers | | managerExtendedLocation | The ExtendedLocation associated with the Cluster Manager | | managedResourceGroupConfiguration | Information about the Managed Resource Group |
-| fabricControllerId | A reference to the Network Fabric Controller that is 1:1 with this Cluster Manager |
-| analyticsWorkspaceId | This workspace will be where any logs that 's relevant to the customer will be relayed. |
-| clusterVersions[] | List of ClusterAvailableVersions objects. <br> Cluster versions that the manager supports. Will be used as an input in the cluster clusterVersion property. |
-| provisioningState | Succeeded, Failed, Canceled, Provisioning, Accepted, Updating |
-| detailedStatus | Detailed statuses that provide additional information about the status of the Cluster Manager. |
-| detailedStatusMessage | Descriptive message about the current detailedStatus. |
+| fabricControllerId | The reference to the Network Fabric Controller that is 1:1 with this Cluster Manager |
+| analyticsWorkspaceId | The Log Analytics workspace where logs that are relevant to the customer will be relayed. |
+| clusterVersions[] | The list of Cluster versions that the Cluster Manager supports. It is used as an input in the cluster clusterVersion property. |
+| provisioningState | The provisioning status of the latest operation on the Cluster Manager. One of: Succeeded, Failed, Canceled, Provisioning, Accepted, Updating |
+| detailedStatus | The detailed statuses that provide additional information about the status of the Cluster Manager. |
+| detailedStatusMessage | The descriptive message about the current detailed status. |
+
+## Cluster Manager Identity
+
+Starting with the 2024-06-01-preview API version, a customer can assign a managed identity to a Cluster Manager. Both System-assigned and User-assigned managed identities are supported.
+
+If a Cluster Manager is created with a User-assigned managed identity, a customer is required to provision access to that identity for the Nexus platform.
+Specifically, the `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` permission needs to be added on the User-assigned identity for the `AFOI-NC-MGMT-PME-PROD` Microsoft Entra ID application. This is a known limitation of the platform that will be addressed in the future.
+
+The role assignment can be done via the Azure portal:
+
+- Open the Azure portal and locate the User-assigned identity in question.
+  - If you expect to provision multiple managed identities, the role can instead be added at the resource group or subscription level.
+- Under `Access control (IAM)`, select `Add role assignment`
+- Select Role: `Managed Identity Operator`. See the [permissions](../role-based-access-control/built-in-roles/identity.md#managed-identity-operator) that the role provides.
+- Assign access to: User, group, or service principal
+- Select Member: `AFOI-NC-MGMT-PME-PROD` application
+- Review and assign
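If you prefer scripting the same assignment, a rough Azure CLI equivalent is sketched below. The object ID of the `AFOI-NC-MGMT-PME-PROD` service principal is a placeholder you must look up in your tenant, and the scope reuses the identity variables from the examples later in this article.

```azurecli
# Grant the Managed Identity Operator role on the User-assigned identity
# to the service principal of the AFOI-NC-MGMT-PME-PROD application
az role assignment create \
  --role "Managed Identity Operator" \
  --assignee-object-id <object-id-of-the-AFOI-NC-MGMT-PME-PROD-service-principal> \
  --assignee-principal-type ServicePrincipal \
  --scope "/subscriptions/$SUB_ID/resourceGroups/$UAI_RESOURCE_GROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$UAI_NAME"
```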
## Create a Cluster Manager
-### Create the Cluster Manager using AZ CLI:
+### Create the Cluster Manager using Azure CLI:
Use the `az networkcloud clustermanager create` command to create a Cluster Manager. This command creates a new Cluster Manager or updates the properties of the Cluster Manager if it exists. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command.
az networkcloud clustermanager create \
- **wait/--no-wait** - Wait for command to complete or don't wait for the long-running operation to finish. - **--tags** - Space-separated tags: key[=value] [key[=value]...]. Use '' to clear existing tags - **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.-
+ - **--mi-system-assigned** - Enable System-assigned managed identity. Once added, the identity can only be removed via an API call at this time.
+ - **--mi-user-assigned** - Space-separated resource IDs of the User-assigned managed identities to be added. Once added, the identity can only be removed via an API call at this time.
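For illustration, a create call that enables a System-assigned identity might look like the following sketch. All values are placeholders, and the `--fabric-controller-id` and `--analytics-workspace-id` argument names are assumptions based on the properties described earlier; check `az networkcloud clustermanager create --help` for the exact parameter set in your CLI extension version.

```azurecli
az networkcloud clustermanager create \
  --name "<cluster-manager-name>" \
  --resource-group "<resource-group>" \
  --location "<azure-region>" \
  --fabric-controller-id "<network-fabric-controller-resource-id>" \
  --analytics-workspace-id "<log-analytics-workspace-resource-id>" \
  --mi-system-assigned
```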
### Create the Cluster Manager using Azure Resource Manager template editor: An alternate way to create a Cluster Manager is with the ARM template editor.
-In order to create the cluster this way, you will need to provide a template file (clusterManager.jsonc) and a parameter file (clusterManager.parameters.jsonc).
+In order to create the Cluster Manager this way, you need to provide a template file (clusterManager.jsonc) and a parameter file (clusterManager.parameters.jsonc).
You can find examples of these two files here:
az networkcloud clustermanager update \
- **--IDs** - One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource ID' arguments. - **--resource-group -g** - Name of resource group. You can configure the default group using `az configure --defaults group=<name>`. - **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+ - **--mi-system-assigned** - Enable System-assigned managed identity. Once added, the identity can only be removed via an API call at this time.
+ - **--mi-user-assigned** - Space-separated resource IDs of the User-assigned managed identities to be added. Once added, the identity can only be removed via an API call at this time.
+
+### Update Cluster Manager Identities via APIs
+
+Cluster Manager managed identities can be assigned via the CLI. Removing the identities is done via API calls.
+Note that `<APIVersion>` is API version 2024-06-01-preview or newer.
+
+- To remove all managed identities, execute:
+
+ ```azurecli
+ az rest --method PATCH --url /subscriptions/$SUB_ID/resourceGroups/$CLUSTER_MANAGER_RG/providers/Microsoft.NetworkCloud/clusterManagers/$CLUSTER_MANAGER_NAME?api-version=<APIVersion> --body "{\"identity\":{\"type\":\"None\"}}"
+ ```
+
+- If both User-assigned and System-assigned managed identities were added, the User-assigned can be removed by updating the `type` to `SystemAssigned`:
+
+ ```azurecli
+ az rest --method PATCH --url /subscriptions/$SUB_ID/resourceGroups/$CLUSTER_MANAGER_RG/providers/Microsoft.NetworkCloud/clusterManagers/$CLUSTER_MANAGER_NAME?api-version=<APIVersion> --body @~/uai-body.json
+ ```
+
+ The request body (uai-body.json) example:
+
+    ```json
+    {
+      "identity": {
+        "type": "SystemAssigned"
+      }
+    }
+    ```
+
+- If both User-assigned and System-assigned managed identities were added, the System-assigned can be removed by updating the `type` to `UserAssigned`:
+
+ ```azurecli
+ az rest --method PATCH --url /subscriptions/$SUB_ID/resourceGroups/$CLUSTER_MANAGER_RG/providers/Microsoft.NetworkCloud/clusterManagers/$CLUSTER_MANAGER_NAME?api-version=<APIVersion> --body @~/uai-body.json
+ ```
+
+ The request body (uai-body.json) example:
+
+    ```json
+    {
+      "identity": {
+        "type": "UserAssigned",
+        "userAssignedIdentities": {
+          "/subscriptions/$SUB_ID/resourceGroups/$UAI_RESOURCE_GROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$UAI_NAME": {}
+        }
+      }
+    }
+    ```
+
+- If multiple User-assigned managed identities were added, one of them can be removed by executing:
+
+ ```azurecli
+ az rest --method PATCH --url /subscriptions/$SUB_ID/resourceGroups/$CLUSTER_MANAGER_RG/providers/Microsoft.NetworkCloud/clusterManagers/$CLUSTER_MANAGER_NAME?api-version=<APIVersion> --body @~/uai-body.json
+ ```
+
+ The request body (uai-body.json) example:
+
+    ```json
+    {
+      "identity": {
+        "type": "UserAssigned",
+        "userAssignedIdentities": {
+          "/subscriptions/$SUB_ID/resourceGroups/$UAI_RESOURCE_GROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/$UAI_NAME": null
+        }
+      }
+    }
+    ```
## Delete Cluster Manager
az networkcloud clustermanager delete \
## Next steps
-After you successfully create an NFC and Cluster Manager, the next step is to create a [Network Fabric](./howto-configure-network-fabric.md).
+After you have successfully created the Network Fabric Controller and the Cluster Manager, the next step is to create a [Network Fabric](./howto-configure-network-fabric.md).
operator-service-manager Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md
This page hosts release notes for Azure Operator Service Manager (AOSM).
The following release notes are generally available (GA): * Release Notes for Version 2.0.2763-119
+* Release Notes for Version 2.0.2777-132
### Release Attestation These releases are produced in compliance with Microsoft's Secure Development Lifecycle. This lifecycle includes processes for authorizing software changes, antimalware scanning, and scanning and mitigating security bugs and vulnerabilities.
Through Microsoft's Secure Future Initiative (SFI), this release delivers the
* NFO - A dedicated service account for the preupgrade job to safeguard against modifications to the existing network function extension service account. * RP - The service principals (SPs) used for deploying site & Network Function (NF) now require the "Microsoft.ExtendedLocation/customLocations/read" permission. The SPs that deploy the day-N scenario now require the "Microsoft.Kubernetes/connectedClusters/listClusterUserCredentials/action" permission. This change can result in failed SNS deployments if not properly reconciled. * CVE - A total of five CVEs are addressed in this release.++
+## Release 2.0.2777-132
+
+Document Revision 1.1
+
+### Release Summary
+Azure Operator Service Manager is a cloud orchestration service that enables automation of operator network-intensive workloads and mission-critical applications hosted on Azure Operator Nexus. Azure Operator Service Manager unifies infrastructure, software, and configuration management with a common model into a single interface, both based on trusted Azure industry standards. This August 7, 2024 Azure Operator Service Manager release updates the NFO version to 2.0.2777-132; the details are outlined in the remainder of this document.
+
+### Release Details
+* Release Version: 2.0.2777-132
+* Release Date: August 7, 2024
+* Is NFO update required: YES
+
+### Release Installation
+This release can be installed as an update on top of release 2.0.2763-119.
+
+### Issues Resolved in This Release
+
+#### Bugfix Related Updates
+The following bug fixes, or other defect resolutions, are delivered with this release, for either Network Function Operator (NFO) or resource provider (RP) components.
+
+* NFO - Added taint tolerations to all NFO pods and scheduled them on system nodes. DaemonSet pods continue to run on all nodes of the cluster.
+
+#### Security Related Updates
+
+* CVE - A total of five CVEs are addressed in this release.
+
sentinel Notebooks Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-troubleshoot.md
- Title: Troubleshoot Jupyter notebooks - Microsoft Sentinel
-description: Troubleshoot errors for Jupyter notebooks in Microsoft Sentinel.
---- Previously updated : 04/04/2022--
-# Troubleshoot Jupyter notebooks
-
-Usually, a notebook creates or attaches to a kernel seamlessly, and you don't need to make any manual changes. If you get errors, or the notebook doesn't seem to be running, you might need to check the version and state of the kernel.
-
-If you run into issues with your notebooks, see the [Azure Machine Learning notebook troubleshooting](../machine-learning/how-to-run-jupyter-notebooks.md#troubleshooting).
-
-## Force caching for user accounts and credentials between notebook runs
-
-By default, user accounts and credentials are not cached between notebook runs, even for the same session.
-
-**To force caching for the duration of your session**:
-
-1. Authenticate using Azure CLI. In an empty notebook cell, enter and run the following code:
-
- ```python
- !az login
- ```
-
- The following output appears:
-
- ```python
- To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the 9-digit device code to authenticate.
- ```
-
-1. Select and copy the nine-character token from the output, and select the `devicelogin` URL to go to the indicated page.
-
-1. Paste the token into the dialog and continue with signing in as prompted.
-
- When sign-in successfully completes, you see the following output:
-
- ```python
- Subscription <subscription ID> 'Sample subscription' can be accessed from tenants <tenant ID>(default) and <tenant ID>. To select a specific tenant when accessing this subscription, use 'az login --tenant TENANT_ID'.
-
-> [!NOTE]
-> The following tenants don't contain accessible subscriptions. Use 'az login --allow-no-subscriptions' to have tenant level access.
->
-> ```
-> <tenant ID> 'foo'
-><tenant ID> 'bar'
->[
-> {
-> "cloudName": "AzureApp",
-> "homeTenantId": "<tenant ID>",
-> "id": "<ID>",
-> "isDefault": true,
-> "managedByTenants": [
-> ....
->```
->
-## Error: *Runtime dependency of PyGObject is missing*
-
-If the *Runtime dependency of PyGObject is missing* error appears when you load a query provider, try troubleshooting using the following steps:
-
-1. Proceed to the cell with the following code and run it:
-
- ```python
- qry_prov = QueryProvider("AzureSentinel")
- ```
-
- A warning similar to the following message is displayed, indicating a missing Python dependency (`pygobject`):
-
- ```output
- Runtime dependency of PyGObject is missing.
-
- Depends on your Linux distribution, you can install it by running code similar to the following:
- sudo apt install python3-gi python3-gi-cairo gir1.2-secret-1
-
- If necessary, see PyGObject's documentation: https://pygobject.readthedocs.io/en/latest/getting_started.html
-
- Traceback (most recent call last):
- File "/anaconda/envs/azureml_py36/lib/python3.6/site-packages/msal_extensions/libsecret.py", line 21, in <module>
- import gi # https://github.com/AzureAD/microsoft-authentication-extensions-for-python/wiki/Encryption-on-Linux
- ModuleNotFoundError: No module named 'gi'
- ```
-
-1. Use the [aml-compute-setup.sh](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/tutorials-and-examples/how-tos/aml-compute-setup.sh) script, located in the Microsoft Sentinel Notebooks GitHub repository, to automatically install the `pygobject` in all notebooks and Anaconda environments on the Compute instance.
-
-> [!TIP]
-> You can also fix this Warning by running the following code from a notebook:
->
-> ```python
-> !conda install --yes -c conda-forge pygobject
-> ```
->
-
-## Next steps
-
-We welcome feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Previously updated : 03/13/2024 Last updated : 07/10/2024
On the configuration server, go to the folder _%ProgramData%\ASR\home\svsystems\
Installer file | Operating system (64-bit only) |
-`Microsoft-ASR_UA_version_Windows_GA_date_release.exe` | Windows Server 2016 </br> Windows Server 2012 R2 </br> Windows Server 2012 </br> Windows Server 2008 R2 SP1
+`Microsoft-ASR_UA_version_Windows_GA_date_release.exe` | Windows Server 2016 </br> Windows Server 2012 R2 </br> Windows Server 2012 </br> Windows Server 2008 R2 SP1 <br> Windows Server 2019 <br> Windows Server 2022
[To be downloaded and placed in this folder manually](#rhel-5-or-centos-5-server) | Red Hat Enterprise Linux (RHEL) 5 </br> CentOS 5 `Microsoft-ASR_UA_version_RHEL6-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 6 </br> CentOS 6 `Microsoft-ASR_UA_version_RHEL7-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 7 </br> CentOS 7
storage Analyze Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/analyze-files-metrics.md
Previously updated : 05/08/2024 Last updated : 08/19/2024
The following example shows how to read metric data on the metric supporting mul
You can use Azure Monitor to analyze workloads that utilize Azure Files. Follow these steps.
-1. Go to your storage account in the [Azure portal](https://portal.azure.com).
-1. From the left navigation, select **Data storage** > **File shares**. Select the file share you want to monitor.
-1. From the left navigation, select **Monitoring** > **Metrics**.
-1. When using Azure Monitor for Azure Files, itΓÇÖs important to always select the **Files** metric namespace. Select **Add metric**.
-1. Under **Metric namespace** select **File**.
+1. Navigate to your storage account in the [Azure portal](https://portal.azure.com).
+1. In the service menu, under **Monitoring**, select **Metrics**.
+1. Under **Metric namespace**, select **File**.
:::image type="content" source="media/analyze-files-metrics/add-metric-namespace-file.png" alt-text="Screenshot showing how to select the Files metric namespace." lightbox="media/analyze-files-metrics/add-metric-namespace-file.png":::
+Now you can select a metric depending on what you want to monitor.
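If you'd rather pull the same numbers programmatically, a minimal Azure CLI sketch is shown below. The storage account resource ID is a placeholder, and the query targets the `fileServices/default` sub-resource that carries the Files metric namespace described above.

```azurecli
# Pull hourly Transactions totals for the file service of a storage account
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default" \
  --metric "Transactions" \
  --interval PT1H \
  --aggregation Total
```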
+ ### Monitor availability In Azure Monitor, the **Availability** metric can be useful when something is visibly wrong from either an application or user perspective, or when troubleshooting alerts.
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-data-operations-portal.md
description: When you access file data using the Azure portal, the portal makes
Previously updated : 11/15/2023 Last updated : 08/19/2024
When you attempt to access file data in the Azure portal, the portal first check
You can change the authentication method for individual file shares. By default, the portal uses the current authentication method. To determine the current authentication method, follow these steps.
-1. Navigate to your storage account in the Azure portal and select **Data storage** > **File shares** from the left navigation.
+1. Navigate to your storage account in the Azure portal.
+1. In the service menu, under **Data storage**, select **File shares**.
1. Select a file share. 1. Select **Browse**. 1. The **Authentication method** indicates whether you're currently using the storage account access key or your Microsoft Entra account to authenticate and authorize file share operations. If you're currently authenticating using the storage account access key, you'll see **Access Key** specified as the authentication method, as in the following image. If you're authenticating using your Microsoft Entra account, you'll see **Microsoft Entra user account** specified instead.
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
Next, create an SMB Azure file share.
So far, you've created an Azure storage account and a file share with one file in it. Next, create an Azure VM to represent the on-premises server.
-1. Expand the menu on the left side of the portal and select **Create a resource** in the upper left-hand corner of the Azure portal.
-1. Under **Popular services** select **Virtual machine**.
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Under **Popular services**, select **Virtual machine**.
1. In the **Basics** tab, under **Project details**, select the resource group you created earlier. :::image type="content" source="media/storage-files-quick-create-use-windows/vm-resource-group-and-subscription.png" alt-text="Screenshot of the Basic tab with VM information filled out.":::
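If you prefer scripting this step instead of using the portal, a rough Azure CLI equivalent for creating a small Windows Server VM is sketched below; the names, size, image alias, and credentials are placeholders or assumptions to adjust for your environment.

```azurecli
az vm create \
  --resource-group <your-resource-group> \
  --name qsVM \
  --image Win2022Datacenter \
  --size Standard_B2s \
  --admin-username <admin-username> \
  --admin-password <admin-password>
```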
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Because standard file shares only show transaction information at the storage ac
To see previous transactions:
-1. Go to your storage account and select **Metrics** in the left navigation bar.
-2. Select **Scope** as your storage account name, **Metric Namespace** as "File", **Metric** as "Transactions", and **Aggregation** as "Sum".
-3. Select **Apply Splitting**.
-4. Select **Values** as "API Name". Select your desired **Limit** and **Sort**.
-5. Select your desired time period.
+1. Navigate to your storage account in the Azure portal.
+1. In the service menu, under **Monitoring**, select **Metrics**.
+1. Select **Scope** as your storage account name, **Metric Namespace** as "File", **Metric** as "Transactions", and **Aggregation** as "Sum".
+1. Select **Apply Splitting**.
+1. Select **Values** as "API Name". Select your desired **Limit** and **Sort**.
+1. Select your desired time period.
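As an alternative to the portal steps, an equivalent query can be sketched with the Azure CLI. The resource ID and time window are placeholders, and the `ApiName` dimension filter mirrors the splitting applied above.

```azurecli
# Sum Transactions over a chosen window, split by API name via a dimension filter
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default" \
  --metric "Transactions" \
  --aggregation Total \
  --interval PT1H \
  --filter "ApiName eq '*'" \
  --start-time "<start-time>" \
  --end-time "<end-time>"
```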
> [!NOTE] > Make sure you view transactions over a period of time to get a better idea of average number of transactions. Ensure that the chosen time period doesn't overlap with initial provisioning. Multiply the average number of transactions during this time period to get the estimated transactions for an entire month.