Updates from: 07/18/2024 01:08:05
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
Azure OpenAI offers three types of deployments. These provide a varied level of
| **Best suited for** | Applications that don't require data residency. Recommended starting place for customers. | For customers with data residency requirements. Optimized for low to medium volume. | Real-time scoring for large consistent volume. Includes the highest commitments and limits. |
| **How it works** | Traffic may be routed anywhere in the world | | |
| **Getting started** | [Model deployment](./create-resource.md) | [Model deployment](./create-resource.md) | [Provisioned onboarding](./provisioned-throughput-onboarding.md) |
-| **Cost** | [Baseline](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
+| **Cost** | [Global deployment pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
| **What you get** | Easy access to all new models with highest default pay-per-call limits.<br><br> Customers with high volume usage may see higher latency variability | Easy access with [SLA on availability](https://azure.microsoft.com/support/legal/sl#estimate-provisioned-throughput-and-cost) | |
| **What you don’t get** | ❌ Data residency guarantees | ❌ High volume w/consistent low latency | ❌ Pay-per-call flexibility |
| **Per-call Latency** | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time. |
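The deployment type is chosen when you create the deployment; inference calls are the same for all three types. Below is a minimal sketch with the `openai` Python package, where the endpoint, key, API version, and deployment name are placeholders you'd replace with your own values:

```python
from openai import AzureOpenAI

# Placeholders: point these at your own Azure OpenAI resource and deployment.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-06-01",
)

# The model argument is the deployment name. Global, standard (regional), and
# provisioned deployments are all called the same way; only routing, data
# residency, and billing differ.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```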
ai-services Fast Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/fast-transcription-create.md
Last updated 7/12/2024
Fast transcription API is used to transcribe audio files, returning results synchronously and much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as: - Quick audio or video transcription, subtitles, and editing. -- Video dubbing
+- Video translation
> [!TIP] > Try out fast transcription in [Azure AI Studio](https://aka.ms/fasttranscription/studio).
The response will include `duration`, `channel`, and more. The `combinedPhrases`
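The following is a minimal sketch of calling the fast transcription endpoint with Python `requests`, assuming the `2024-05-15-preview` api-version and multipart form fields named `audio` and `definition`; the region, key, and file name are placeholders, so check the REST reference for the exact request shape:

```python
import json
import requests

# Placeholders: your Speech resource region and key, plus a local audio file.
url = "https://<your-region>.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe"
params = {"api-version": "2024-05-15-preview"}
headers = {"Ocp-Apim-Subscription-Key": "<your-speech-key>"}

# Only one locale can be specified per transcription request.
definition = {"locales": ["en-US"]}

with open("audio.wav", "rb") as audio:
    files = {
        "audio": ("audio.wav", audio, "audio/wav"),
        "definition": (None, json.dumps(definition), "application/json"),
    }
    response = requests.post(url, params=params, headers=headers, files=files)

response.raise_for_status()
result = response.json()

# The response carries duration, per-channel details, and combinedPhrases text.
for phrase in result.get("combinedPhrases", []):
    print(phrase.get("channel"), phrase.get("text"))
```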
## Related content -- [Speech to text quickstart](./get-started-speech-to-text.md)-- [Batch transcription API](./batch-transcription.md)
+- [Fast transcription REST API reference](/rest/api/speechtotext/transcriptions/transcribe)
+- [Speech to text supported languages](./language-support.md?tabs=stt)
+- [Batch transcription](./batch-transcription.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
To improve Speech to text recognition accuracy, customization is available for s
These are the locales that support the [display text format feature](./how-to-custom-speech-display-text-format.md): da-DK, de-DE, en-AU, en-CA, en-GB, en-HK, en-IE, en-IN, en-NG, en-NZ, en-PH, en-SG, en-US, es-ES, es-MX, fi-FI, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nb-NO, nl-NL, pl-PL, pt-BR, pt-PT, sv-SE, tr-TR, zh-CN, zh-HK.
+### Fast transcription
+
+The supported locales for the [fast transcription API](fast-transcription-create.md) are: en-US, es-ES, es-MX, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, pt-BR, and zh-CN. You can only specify one locale per transcription request.
+ # [Text to speech](#tab/tts) The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
With [real-time speech to text](get-started-speech-to-text.md), the audio is tra
Fast transcription API is used to transcribe audio files, returning results synchronously and much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as: - Quick audio or video transcription, subtitles, and editing. -- Video dubbing
+- Video translation
> [!NOTE] > Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview and later.
ai-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text.md
Speech to text REST API is used for [batch transcription](batch-transcription.md
> Speech to text REST API v3.0 will be retired on April 1st, 2026. For more information about upgrading, see the Speech to text REST API [v3.0 to v3.1](migrate-v3-0-to-v3-1.md) and [v3.1 to v3.2](migrate-v3-1-to-v3-2.md) migration guides. > [!div class="nextstepaction"]
-> [See the Speech to text REST API v3.2 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-v3.2&preserve-view=true)
+> [See the Speech to text REST API 2024-05-15 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-2024-05-15-preview&preserve-view=true)
> [!div class="nextstepaction"]
-> [See the Speech to text REST API v3.1 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-v3.1&preserve-view=true)
+> [See the Speech to text REST API v3.2 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-v3.2&preserve-view=true)
> [!div class="nextstepaction"]
-> [See the Speech to text REST API v3.0 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-v3.0&preserve-view=true)
+> [See the Speech to text REST API v3.1 reference documentation](/rest/api/speechtotext/operation-groups?view=rest-speechtotext-v3.1&preserve-view=true)
Use Speech to text REST API to: -- [Custom speech](custom-speech-overview.md): With custom speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- [Fast transcription](fast-transcription-create.md): Transcribe audio files, returning results synchronously and much faster than real-time audio. Use the fast transcription API ([/speechtotext/transcriptions:transcribe](/rest/api/speechtotext/transcriptions/transcribe)) in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as quick audio or video transcription or video translation.
+- [Custom speech](custom-speech-overview.md): Upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
- [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container. Speech to text REST API includes such features as:
ai-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md
The following table describes each supported `style` attribute:
|`style="customerservice"`|Expresses a friendly and helpful tone for customer support.| |`style="depressed"`|Expresses a melancholic and despondent tone with lower pitch and energy.| |`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.|
-|`style="documentary-narration"`|Narrates documentaries in a relaxed, interested, and informative style suitable for dubbing documentaries, expert commentary, and similar content.|
+|`style="documentary-narration"`|Narrates documentaries in a relaxed, interested, and informative style suitable for documentaries, expert commentary, and similar content.|
|`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.| |`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and th
Fast transcription API is used to transcribe audio files, returning results synchronously and much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as: - Quick audio or video transcription, subtitles, and editing. -- Video dubbing
+- Video translation
> [!NOTE] > Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview and later.
ai-services Video Translation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/video-translation-overview.md
Video translation is a feature in Azure AI Speech that enables you to seamlessly translate and generate videos in multiple languages automatically. This feature is designed to help you localize your video content to cater to diverse audiences around the globe. You can efficiently create immersive, localized videos across various use cases such as vlogs, education, news, enterprise training, advertising, film, TV shows, and more.
-The process of replacing the original language of a video with audio recorded in a different language is often relied upon to cater to diverse audiences. Traditionally achieved through human recording and manual post-production, dubbing is essential for ensuring that viewers can enjoy video content in their native language. However, this process comes with key pain points, including its high cost, lengthy duration, and inability to replicate the original speaker's voice accurately. Video translation in Azure AI Speech addresses these challenges by providing an automated, efficient, and cost-effective solution for creating localized videos.
+The process of replacing the original language of a video with audio recorded in a different language is often relied upon to cater to diverse audiences. Traditionally achieved through human recording and manual post-production, translation is essential for ensuring that viewers can enjoy video content in their native language. However, this process comes with key pain points, including its high cost, lengthy duration, and inability to replicate the original speaker's voice accurately. Video translation in Azure AI Speech addresses these challenges by providing an automated, efficient, and cost-effective solution for creating localized videos.
## Use case
We support video translation between various languages, enabling you to tailor y
- **Translation from language A to B and large language model (LLM) reformulation.** Translates the transcribed content from the original language (Language A) to the target language (Language B) using advanced language processing techniques. Enhances translation quality and refines gender-aware translated text through LLM reformulation. -- **Automatic dubbing – voice generation in other language.**
+- **Automatic translation – voice generation in other languages.**
- Utilizes AI-powered text-to-speech technology to automatically generate human-like voices in the target language. These voices are precisely synchronized with the video, ensuring a flawless dubbing experience. This includes utilizing prebuilt neural voices for high-quality output and offering options for personal voice.
+ Utilizes AI-powered text-to-speech technology to automatically generate human-like voices in the target language. These voices are precisely synchronized with the video, ensuring a flawless translation experience. This includes utilizing prebuilt neural voices for high-quality output and offering options for personal voice.
- **Human in the loop for content editing.** Allows for human intervention to review and edit the translated content, ensuring accuracy and cultural appropriateness before finalizing the dubbed video.
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Some models in the **Curated by Azure AI** and **Open models from the Hugging Fa
Model Catalog offers two distinct ways to deploy models from the catalog for your use: managed compute and serverless APIs. The deployment options available for each model vary; learn more about the features of the deployment options, and the options available for specific models, in the following tables. Learn more about [data processing](concept-data-privacy.md) with the deployment options. <!-- docutune:disable -->
-Features | Managed compute | serverless API (pay-as-you-go)
+Features | Managed compute | Serverless API (pay-as-you-go)
--|--|--
-Deployment experience and billing | Model weights are deployed to dedicated Virtual Machines with Managed Online Endpoints. The managed online endpoint, which can have one or more deployments, makes available a REST API for inference. You're billed for the Virtual Machine core hours used by the deployments. | Access to models is through a deployment that provisions an API to access the model. The API provides access to the model hosted and managed by Microsoft, for inference. This mode of access is referred to as "Models as a Service". You're billed for inputs and outputs to the APIs, typically in tokens; pricing information is provided before you deploy.
+Deployment experience and billing | Model weights are deployed to dedicated Virtual Machines with Managed Online Endpoints. The managed online endpoint, which can have one or more deployments, makes available a REST API for inference. You're billed for the Virtual Machine core hours used by the deployments. | Access to models is through a deployment that provisions an API to access the model. The API provides access to the model hosted and managed by Microsoft, for inference. You're billed for inputs and outputs to the APIs, typically in tokens; pricing information is provided before you deploy.
| API authentication | Keys and Microsoft Entra ID authentication.| Keys only.
-Content safety | Use Azure Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters may be billed separately.
-Network isolation | [Configure managed networks for Azure AI Studio hubs.](configure-managed-network.md) | MaaS endpoint will follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section.
+Content safety | Use Azure Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters are billed separately.
+Network isolation | [Configure managed networks for Azure AI Studio hubs.](configure-managed-network.md) | Endpoints will follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section.
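For the serverless (pay-as-you-go) column, the following is a hedged sketch of consuming such an endpoint with key authentication via the `azure-ai-inference` Python package; the endpoint URL and key are placeholders, and this package is one client option rather than the only supported one:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholders: the scoring endpoint and key shown on the serverless deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-serverless-endpoint>",
    credential=AzureKeyCredential("<your-endpoint-key>"),
)

# You're billed for inputs and outputs, typically in tokens.
response = client.complete(messages=[UserMessage(content="Give me one sentence about Azure.")])
print(response.choices[0].message.content)
```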
Model | Managed compute | Serverless API (pay-as-you-go)
--|--|--
Prompt flow offers a great experience for prototyping. You can use models deploy
## Serverless APIs with Pay-as-you-go billing
-Certain models in the Model Catalog can be deployed as serverless APIs with pay-as-you-go billing; this method of deployment is called Models-as-a Service (MaaS), providing a way to consume them as an API without hosting them on your subscription. Models available through MaaS are hosted in infrastructure managed by Microsoft, which enables API-based access to the model provider's model. API based access can dramatically reduce the cost of accessing a model and significantly simplify the provisioning experience. Most MaaS models come with token-based pricing.
+Certain models in the Model Catalog can be deployed as serverless APIs with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription. Models are hosted in infrastructure managed by Microsoft, which enables API-based access to the model provider's model. API based access can dramatically reduce the cost of accessing a model and significantly simplify the provisioning experience.
-### How are third-party models made available in MaaS?
+Models that are available for deployment as serverless APIs with pay-as-you-go billing are offered by the model provider but hosted in Microsoft-managed Azure infrastructure and accessed via API. Model providers define the license terms and set the price for use of their models, while Azure Machine Learning service manages the hosting infrastructure, makes the inference APIs available, and acts as the data processor for prompts submitted and content output by models deployed via MaaS. Learn more about data processing for MaaS at the [data privacy](concept-data-privacy.md) article.
:::image type="content" source="../media/explore/model-publisher-cycle.png" alt-text="A diagram showing model publisher service cycle." lightbox="../media/explore/model-publisher-cycle.png":::
-Models that are available for deployment as serverless APIs with pay-as-you-go billing are offered by the model provider but hosted in Microsoft-managed Azure infrastructure and accessed via API. Model providers define the license terms and set the price for use of their models, while Azure Machine Learning service manages the hosting infrastructure, makes the inference APIs available, and acts as the data processor for prompts submitted and content output by models deployed via MaaS. Learn more about data processing for MaaS at the [data privacy](concept-data-privacy.md) article.
-
-### Pay for model usage in MaaS
+### Billing
The discovery, subscription, and consumption experience for models deployed via MaaS is in the Azure AI Studio and Azure Machine Learning studio. Users accept license terms for use of the models, and pricing information for consumption is provided during deployment. Models from third party providers are billed through Azure Marketplace, in accordance with the [Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms); models from Microsoft are billed using Azure meters as First Party Consumption Services. As described in the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage), First Party Consumption Services are purchased using Azure meters but aren't subject to Azure service terms; use of these models is subject to the license terms provided.
-### Deploy models for inference through MaaS
-
-Deploying a model through MaaS allows users to get access to ready to use inference APIs without the need to configure infrastructure or provision GPUs, saving engineering time and resources. These APIs can be integrated with several LLM tools and usage is billed as described in the previous section.
-
-### Fine-tune models through MaaS with Pay-as-you-go
+### Fine-tune models
-For models that are available through MaaS and support fine-tuning, users can take advantage of hosted fine-tuning with pay-as-you-go billing to tailor the models using data they provide. For more information, see the [fine-tuning overview](../concepts/fine-tuning-overview.md).
+Certain models also support serverless fine-tuning, where users can take advantage of hosted fine-tuning with pay-as-you-go billing to tailor the models using data they provide. For more information, see the [fine-tuning overview](../concepts/fine-tuning-overview.md).
### RAG with models deployed as serverless APIs
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
Title: Dapr extension for Azure Kubernetes Service (AKS) overview
+ Title: Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications. Last updated 04/22/2024
-# Dapr
+# Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
[Distributed Application Runtime (Dapr)][dapr-docs] offers APIs that help you write and implement simple, portable, resilient, and secured microservices. Dapr APIs run as a sidecar process in tandem with your applications and abstract away common complexities you may encounter when building distributed applications, such as: - Service discovery
Dapr is incrementally adoptable. You can use any of the API building blocks as n
## Capabilities and features
-[Using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md) eliminates the overhead of:
+[Using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster][dapr-create-extension] eliminates the overhead of:
- Downloading Dapr tooling - Manually installing and managing the Dapr runtime on your AKS cluster
Microsoft provides best-effort support for [the latest version of Dapr and two p
- 1.12.x - 1.11.x
-You can run Azure CLI commands to retreive a list of available versions in [a cluster](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-cluster) or [a location](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-location).
+You can run Azure CLI commands to retrieve a list of available versions in [a cluster](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-cluster) or [a location](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-location).
To view a list of the stable Dapr versions available to your managed AKS cluster, run the following command:
If you install Dapr through the AKS extension, our recommendation is to continue
## Next Steps
-After learning about Dapr and some of the challenges it solves, try [Deploying an application with the Dapr cluster extension][dapr-quickstart].
+> [!div class="nextstepaction"]
+> [Walk through the Dapr extension quickstart to demo how it works][dapr-quickstart]
+ <!-- Links Internal --> [csi-secrets-store]: ./csi-secrets-store-driver.md
After learning about Dapr and some of the challenges it solves, try [Deploying a
[dapr-migration]: ./dapr-migration.md [aks-msi]: ./use-managed-identity.md [dapr-configuration-options]: ./dapr-settings.md
+[dapr-create-extension]: ./dapr.md
<!-- Links External --> [dapr-docs]: https://docs.dapr.io/
After learning about Dapr and some of the challenges it solves, try [Deploying a
[dapr-subscriptions]: https://docs.dapr.io/developing-applications/building-blocks/pubsub/subscription-methods/#declarative-subscriptions [dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/ [dapr-observability]: https://docs.dapr.io/operations/observability/
-[dapr-alpha-beta]: https://docs.dapr.io/operations/support/alpha-beta-apis/
+[dapr-alpha-beta]: https://docs.dapr.io/operations/support/alpha-beta-apis/
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
The Dapr extension requires the following outbound URLs on `https://:443` to fun
## Next Steps
-Once you successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].
+- [Walk through the tutorial for deploying Dapr Workflow via the extension][dapr-workflow]
+- [Determine if you need to migrate from Dapr open source to the Dapr extension][dapr-migration].
+ <!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
Once you successfully provisioned Dapr in your AKS cluster, try deploying a [sam
[dapr-migration]: ./dapr-migration.md [dapr-settings]: ./dapr-settings.md [aks-azurelinux]: ./cluster-configuration.md#azure-linux-container-host-for-aks-
+[dapr-workflow]: ./dapr-workflow.md
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
aks Dapr Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-workflow.md
Notice that the workflow status is marked as completed.
## Next steps
-[Learn how to add configuration settings to the Dapr extension on your AKS cluster][dapr-config].
+- [Configure the Dapr extension on your AKS cluster][dapr-config].
+- [Determine if you need to migrate from Dapr open source to the Dapr extension][dapr-migration].
<!-- Links Internal --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
Notice that the workflow status is marked as completed.
[cluster]: ./tutorial-kubernetes-deploy-cluster.md [k8s-sp]: ./dapr.md#register-the-kubernetesconfiguration-resource-provider [dapr-config]: ./dapr-settings.md
+[dapr-migration]: ./dapr-migration.md
[az-cloud-shell]: ./learn/quick-kubernetes-deploy-powershell.md#azure-cloud-shell [kubectl]: ./tutorial-kubernetes-deploy-cluster.md#connect-to-cluster-using-kubectl
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Title: Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
+ Title: Install the Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
description: Install and configure Dapr on your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes clusters using the Dapr cluster extension. Previously updated : 06/06/2024 Last updated : 07/16/2024
-# Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
+# Install the Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
[Dapr](./dapr-overview.md) simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. With Dapr's sidecar architecture, you can keep your code platform agnostic while tackling challenges around building microservices, like: - Calling other services reliably and securely
Or simply remove the Bicep template.
## Next Steps -- Learn more about [extra settings and preferences you can set on the Dapr extension][dapr-settings].-- Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].-- Try out [Dapr Workflow on your Dapr extension for AKS][dapr-workflow]
+> [!div class="nextstepaction"]
+> [Configure the Dapr extension for your unique scenario][dapr-settings]
<!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
This article addresses frequent questions about Azure Kubernetes Service (AKS).
-## Which Azure regions currently provide AKS?
+## Support
-For a complete list of available regions, see [AKS regions and availability][aks-regions].
+### Does AKS offer a service-level agreement?
-## Can I spread an AKS cluster across regions?
+AKS provides SLA guarantees in the [Standard pricing tier with the Uptime SLA feature][pricing-tiers].
-No. AKS clusters are regional resources and can't span regions. See [best practices for business continuity and disaster recovery][bcdr-bestpractices] for guidance on how to create an architecture that includes multiple regions.
+The Free pricing tier doesn't have an associated Service Level *Agreement*, but has a Service Level *Objective* of 99.5%. Transient connectivity issues might occur during upgrades, when underlay nodes are unhealthy, during platform maintenance, or when an application overwhelms the API Server with requests. For mission-critical and production workloads, or if your workload doesn't tolerate API Server restarts, we recommend using the Standard tier, which includes Uptime SLA.
-## Can I spread an AKS cluster across availability zones?
+### What is platform support, and what does it include?
-Yes. You can deploy an AKS cluster across one or more [availability zones][availability-zones] in [regions that support them][az-regions].
+Platform support is a reduced support plan for unsupported "N-3" version clusters. Platform support only includes Azure infrastructure support. Platform support doesn't include anything related to the following:
-## Can I limit who has access to the Kubernetes API server?
+- Kubernetes functionality and components
+- Cluster or node pool creation
+- Hotfixes
+- Bug fixes
+- Security patches
+- Retired components
-Yes. There are two options for limiting access to the API server:
+For more information on restrictions, see the [platform support policy][supported-kubernetes-versions].
-- Use [API Server Authorized IP Ranges][api-server-authorized-ip-ranges] if you want to maintain a public endpoint for the API server but restrict access to a set of trusted IP ranges.-- Use a [private cluster][private-clusters] if you want to limit the API server to *only* be accessible from within your virtual network.
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open source project that only supports a sliding window of *three* minor versions. AKS can only guarantee [full support](./supported-kubernetes-versions.md#kubernetes-version-support-policy) while those versions are being serviced upstream. Because no more patches are produced upstream for older versions, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on Kubernetes upstream.
-## Can I have different VM sizes in a single cluster?
+### Does AKS automatically upgrade my unsupported clusters?
-Yes, you can use different virtual machine sizes in your AKS cluster by creating [multiple node pools][multi-node-pools].
+AKS initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default.
-## Are security updates applied to AKS agent nodes?
+For example, Kubernetes v1.25 upgrades to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. See [auto-upgrade][auto-upgrade-cluster] for details on automatic upgrade channels.
-AKS patches CVEs that have a "vendor fix" every week. CVEs without a fix are waiting on a "vendor fix" before it can be remediated. The AKS images are automatically updated inside of 30 days. We recommend you apply an updated Node Image on a regular cadence to ensure that latest patched images and OS patches are all applied and current. You can do this using one of the following methods:
+### Can I run Windows Server containers on AKS?
-- Manually, through the Azure portal or the Azure CLI.-- By upgrading your AKS cluster. The cluster upgrades [cordon and drain nodes][cordon-drain] automatically and then bring a new node online with the latest Ubuntu image and a new patch version or a minor Kubernetes version. For more information, see [Upgrade an AKS cluster][aks-upgrade].-- By using [node image upgrade](node-image-upgrade.md).
+Yes, Windows Server containers are available on AKS. To run Windows Server containers in AKS, you create a node pool that runs Windows Server as the guest OS. Windows Server containers can use only Windows Server 2019. To get started, see [Create an AKS cluster with a Windows Server node pool](./learn/quick-windows-container-deploy-cli.md).
-## What's the size limit on a container image in AKS?
+Windows Server support for node pool includes some limitations that are part of the upstream Windows Server in Kubernetes project. For more information on these limitations, see [Windows Server containers in AKS limitations][aks-windows-limitations].
-AKS doesn't set a limit on the container image size. However, it's important to understand that the larger the image, the higher the memory demand. A larger size could potentially exceed resource limits or the overall available memory of worker nodes. By default, memory for VM size Standard_DS2_v2 for an AKS cluster is set to 7 GiB.
+### Can I apply Azure reservation discounts to my AKS agent nodes?
-When a container image is excessively large, as in the Terabyte (TBs) range, kubelet might not be able to pull it from your container registry to a node due to lack of disk space.
+AKS agent nodes are billed as standard Azure virtual machines. If you purchased [Azure reservations][reservation-discounts] for the VM size that you're using in AKS, those discounts are automatically applied.
-### Windows Server nodes
+## Operations
-For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the cluster and the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
+### Can I move/migrate my cluster between Azure tenants?
-### Are there security threats targeting AKS that I should be aware of?
+Moving your AKS cluster between tenants is currently unsupported.
-Microsoft provides guidance for other actions you can take to secure your workloads through services like [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). The following security threat is related to AKS and Kubernetes that you should be aware of:
+### Can I move/migrate my cluster between subscriptions?
-- [New large-scale campaign targets Kubeflow](https://techcommunity.microsoft.com/t5/azure-security-center/new-large-scale-campaign-targets-kubeflow/ba-p/2425750) (June 8, 2021).
+Movement of clusters between subscriptions is currently unsupported.
-## How does the managed Control Plane communicate with my Nodes?
+### Can I move my AKS clusters from the current Azure subscription to another?
-AKS uses a secure tunnel communication to allow the api-server and individual node kubelets to communicate even on separate virtual networks. The tunnel is secured through mTLS encryption. The current main tunnel that is used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Verify all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
+Moving your AKS cluster and its associated resources between Azure subscriptions isn't supported.
-## Can my pods use the API server FQDN instead of the cluster IP?
+### Can I move my AKS cluster or AKS infrastructure resources to other resource groups or rename them?
-Yes, you can add the annotation `kubernetes.azure.com/set-kube-service-host-fqdn` to pods to set the `KUBERNETES_SERVICE_HOST` variable to the domain name of the API server instead of the in-cluster service IP. This is useful in cases where your cluster egress is done via a layer 7 firewall, such as when using Azure Firewall with Application Rules.
+Moving or renaming your AKS cluster and its associated resources isn't supported.
+
+### Can I restore my cluster after deleting it?
+
+No, you cannot restore your cluster after deleting it. When you delete your cluster, the node resource group and all its resources are also deleted. An example of the second resource group is *MC_myResourceGroup_myAKSCluster_eastus*.
+
+If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you want to protect against accidental deletes, you can lock the AKS managed resource group hosting your cluster resources using [Node resource group lockdown][node-resource-group-lockdown].
+
+### Can I scale my AKS cluster to zero?
+
+You can completely [stop a running AKS cluster](start-stop-cluster.md), saving on the respective compute costs. Additionally, you may also choose to [scale or autoscale all or specific `User` node pools](scale-cluster.md#scale-user-node-pools-to-0) to 0, maintaining only the necessary cluster configuration.
+
+You can't directly scale [system node pools](use-system-pools.md) to zero.
+
+### Can I use the Virtual Machine Scale Set APIs to scale manually?
+
+No, scale operations by using the Virtual Machine Scale Set APIs aren't supported. Use the AKS APIs (`az aks scale`).
+
+### Can I use Virtual Machine Scale Sets to manually scale to zero nodes?
+
+No, scale operations by using the Virtual Machine Scale Set APIs aren't supported. You can use the AKS API to scale to zero nonsystem node pools or [stop your cluster](start-stop-cluster.md) instead.
-## Why are two resource groups created with AKS?
+### Can I stop or de-allocate all my VMs?
+
+While AKS has resilience mechanisms to withstand such a config and recover from it, it isn't a supported configuration. [Stop your cluster](start-stop-cluster.md) instead.
+
+### Why are two resource groups created with AKS?
AKS builds upon many Azure infrastructure resources, including Virtual Machine Scale Sets, virtual networks, and managed disks. These integrations enable you to apply many of the core capabilities of the Azure platform within the managed Kubernetes environment provided by AKS. For example, most Azure virtual machine types can be used directly with AKS and Azure Reservations can be used to receive discounts on those resources automatically.
To enable this architecture, each AKS deployment spans two resource groups:
> [!NOTE] > Modifying any resource under the node resource group in the AKS cluster is an unsupported action and will cause cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](cluster-configuration.md#fully-managed-resource-group-preview) managed by the AKS cluster.
-## Can I provide my own name for the AKS node resource group?
+### Can I provide my own name for the AKS node resource group?
Yes. By default, AKS names the node resource group *MC_resourcegroupname_clustername_location*, but you can also provide your own name.
As you work with the node resource group, keep in mind that you can't:
- Specify names for the managed resources within the node resource group. - Modify or delete Azure-created tags of managed resources within the node resource group. See additional information in the [next section](#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group).
-## Can I modify tags and other properties of the AKS resources in the node resource group?
+### Can I modify tags and other properties of the AKS resources in the node resource group?
You might get unexpected scaling and upgrading errors if you modify or delete Azure-created tags and other resource properties in the node resource group. AKS allows you to create and modify custom tags created by end users, and you can add those tags when [creating a node pool](manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool). You might want to create or modify custom tags, for example, to assign a business unit or cost center. Another option is to create Azure Policies with a scope on the managed resource group.
Azure-created tags are created for their respective Azure Services and should al
> [!NOTE] > In the past, the tag name "Owner" was reserved for AKS to manage the public IP assigned to the front-end IP of the load balancer. Now, services use the `aks-managed` prefix. For legacy resources, don't use Azure policies to apply the "Owner" tag name. Otherwise, deployment and update operations on your AKS cluster will break. This does not apply to newly created resources.
-## What Kubernetes admission controllers does AKS support? Can admission controllers be added or removed?
-
-AKS supports the following [admission controllers][admission-controllers]:
--- *NamespaceLifecycle*-- *LimitRanger*-- *ServiceAccount*-- *DefaultIngressClass*-- *DefaultStorageClass*-- *DefaultTolerationSeconds*-- *MutatingAdmissionWebhook*-- *ValidatingAdmissionWebhook*-- *ResourceQuota*-- *PodNodeSelector*-- *PodTolerationRestriction*-- *ExtendedResourceToleration*-
-Currently, you can't modify the list of admission controllers in AKS.
-
-## Can I use admission controller webhooks on AKS?
+## Quotas, limits, and region availability
-Yes, you can use admission controller webhooks on AKS. It's recommended you exclude internal AKS namespaces, which are marked with the **control-plane label.** For example:
-
-```output
-namespaceSelector:
- matchExpressions:
- - key: control-plane
- operator: DoesNotExist
-```
+### Which Azure regions currently provide AKS?
-AKS firewalls the API server egress so your admission controller webhooks need to be accessible from within the cluster.
-
-## Can admission controller webhooks impact kube-system and internal AKS namespaces?
-
-To protect the stability of the system and prevent custom admission controllers from impacting internal services in the kube-system, namespace AKS has an **Admissions Enforcer**, which automatically excludes kube-system and AKS internal namespaces. This service ensures the custom admission controllers don't affect the services running in kube-system.
-
-If you have a critical use case for deploying something on kube-system (not recommended) in support of your custom admission webhook, you may add the following label or annotation so that Admissions Enforcer ignores it.
-
-Label: ```"admissions.enforcer/disabled": "true"``` or Annotation: ```"admissions.enforcer/disabled": true```
-
-## Is Azure Key Vault integrated with AKS?
-
-[Azure Key Vault Provider for Secrets Store CSI Driver][aks-keyvault-provider] provides native integration of Azure Key Vault into AKS.
-
-## Can I run Windows Server containers on AKS?
-
-Yes, Windows Server containers are available on AKS. To run Windows Server containers in AKS, you create a node pool that runs Windows Server as the guest OS. Windows Server containers can use only Windows Server 2019. To get started, see [Create an AKS cluster with a Windows Server node pool](./learn/quick-windows-container-deploy-cli.md).
-
-Windows Server support for node pool includes some limitations that are part of the upstream Windows Server in Kubernetes project. For more information on these limitations, see [Windows Server containers in AKS limitations][aks-windows-limitations].
-
-## Does AKS offer a service-level agreement?
-
-AKS provides SLA guarantees in the [Standard pricing tier with the Uptime SLA feature][pricing-tiers].
-
-The Free pricing tier doesn't have an associated Service Level *Agreement*, but has a Service Level *Objective* of 99.5%. Transient connectivity issues are observed if there's an upgrade, unhealthy underlay nodes, platform maintenance, an application overwhelms the API Server with requests, etc. For mission-critical and production workloads, or if your workload doesn't tolerate API Server restarts, we recommend using the Standard tier, which includes Uptime SLA.
-
-## Can I apply Azure reservation discounts to my AKS agent nodes?
-
-AKS agent nodes are billed as standard Azure virtual machines. If you purchased [Azure reservations][reservation-discounts] for the VM size that you're using in AKS, those discounts are automatically applied.
-
-## Can I move/migrate my cluster between Azure tenants?
-
-Moving your AKS cluster between tenants is currently unsupported.
-
-## Can I move/migrate my cluster between subscriptions?
-
-Movement of clusters between subscriptions is currently unsupported.
-
-## Can I move my AKS clusters from the current Azure subscription to another?
-
-Moving your AKS cluster and its associated resources between Azure subscriptions isn't supported.
-
-## Can I move my AKS cluster or AKS infrastructure resources to other resource groups or rename them?
-
-Moving or renaming your AKS cluster and its associated resources isn't supported.
-
-## Why is my cluster delete taking so long?
+For a complete list of available regions, see [AKS regions and availability][aks-regions].
-Most clusters are deleted upon user request. In some cases, especially cases where you bring your own Resource Group or perform cross-RG tasks, deletion can take more time or even fail. If you have an issue with deletes, double-check that you don't have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on.
+### Can I spread an AKS cluster across regions?
-## Why is my cluster create/update taking so long?
+No. AKS clusters are regional resources and can't span regions. See [best practices for business continuity and disaster recovery][bcdr-bestpractices] for guidance on how to create an architecture that includes multiple regions.
-If you have issues with create and update cluster operations, make sure you don't have any assigned policies or service constraints that may block your AKS cluster from managing resources like VMs, load balancers, tags, etc.
+### Can I spread an AKS cluster across availability zones?
-## Can I restore my cluster after deleting it?
+Yes. You can deploy an AKS cluster across one or more [availability zones][availability-zones] in [regions that support them][az-regions].
-No, you cannot restore your cluster after deleting it. When you delete your cluster, the node resource group and all its resources are also deleted. An example of the second resource group is *MC_myResourceGroup_myAKSCluster_eastus*.
+### Can I have different VM sizes in a single cluster?
-If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you want to protect against accidental deletes, you can lock the AKS managed resource group hosting your cluster resources using [Node resource group lockdown][node-resource-group-lockdown].
+Yes, you can use different virtual machine sizes in your AKS cluster by creating [multiple node pools][multi-node-pools].
-## What is platform support, and what does it include?
+### What's the size limit on a container image in AKS?
-Platform support is a reduced support plan for unsupported "N-3" version clusters. Platform support only includes Azure infrastructure support. Platform support doesn't include anything related to the following:
+AKS doesn't set a limit on the container image size. However, it's important to understand that the larger the image, the higher the memory demand. A larger size could potentially exceed resource limits or the overall available memory of worker nodes. By default, memory for VM size Standard_DS2_v2 for an AKS cluster is set to 7 GiB.
-- Kubernetes functionality and components-- Cluster or node pool creation-- Hotfixes-- Bug fixes-- Security patches-- Retired components
+When a container image is excessively large, as in the terabyte (TB) range, kubelet might not be able to pull it from your container registry to a node due to lack of disk space.
-For more information on restrictions, see the [platform support policy][supported-kubernetes-versions].
+#### Windows Server nodes
-AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of *three* minor versions. AKS can only guarantee [full support](./supported-kubernetes-versions.md#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on kubernetes upstream.
+For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the cluster and the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
-## Does AKS automatically upgrade my unsupported clusters?
+### Are AKS images required to run as root?
-AKS initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default.
+The following images have functional requirements to "Run as Root" and exceptions must be filed for any policies:
-For example, kubernetes v1.25 upgrades to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. See [auto-upgrade][auto-upgrade-cluster] for details on automatic upgrade channels.
+- *mcr.microsoft.com/oss/kubernetes/coredns*
+- *mcr.microsoft.com/azuremonitor/containerinsights/ciprod*
+- *mcr.microsoft.com/oss/calico/node*
+- *mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi*
-## If I have pod / deployments in state 'NodeLost' or 'Unknown' can I still upgrade my cluster?
+## Security, access, and identity
-You can, but we don't recommend it. You should perform updates when the state of the cluster is known and healthy.
+### Can I limit who has access to the Kubernetes API server?
-## If I have a cluster with one or more nodes in an Unhealthy state or shut down, can I perform an upgrade?
+Yes. There are two options for limiting access to the API server:
-No, delete/remove any nodes in a failed state or otherwise from the cluster before upgrading.
+- Use [API Server Authorized IP Ranges][api-server-authorized-ip-ranges] if you want to maintain a public endpoint for the API server but restrict access to a set of trusted IP ranges.
+- Use a [private cluster][private-clusters] if you want to limit the API server to *only* be accessible from within your virtual network.
-## I ran a cluster delete, but see the error `[Errno 11001] getaddrinfo failed`
+### Are security updates applied to AKS agent nodes?
-Most commonly, this error arises if you have one or more Network Security Groups (NSGs) still in use that are associated with the cluster. Remove them and attempt the delete again.
+AKS patches CVEs that have a "vendor fix" every week. CVEs without a fix are waiting on a "vendor fix" before they can be remediated. The AKS images are automatically updated inside of 30 days. We recommend you apply an updated Node Image on a regular cadence to ensure that latest patched images and OS patches are all applied and current. You can do this using one of the following methods:
-## I ran an upgrade, but now my pods are in crash loops, and readiness probes fail?
+- Manually, through the Azure portal or the Azure CLI.
+- By upgrading your AKS cluster. The cluster upgrades [cordon and drain nodes][cordon-drain] automatically and then bring a new node online with the latest Ubuntu image and a new patch version or a minor Kubernetes version. For more information, see [Upgrade an AKS cluster][aks-upgrade].
+- By using [node image upgrade](node-image-upgrade.md).
-Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
+### Are there security threats targeting AKS that I should be aware of?
-## My cluster was working, but suddenly can't provision LoadBalancers, mount PVCs, etc.?
+Microsoft provides guidance for other actions you can take to secure your workloads through services like [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). The following is a security threat related to AKS and Kubernetes that you should be aware of:
-Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
+- [New large-scale campaign targets Kubeflow](https://techcommunity.microsoft.com/t5/azure-security-center/new-large-scale-campaign-targets-kubeflow/ba-p/2425750) (June 8, 2021).
-## Can I scale my AKS cluster to zero?
+### Does AKS store any customer data outside of the cluster's region?
-You can completely [stop a running AKS cluster](start-stop-cluster.md), saving on the respective compute costs. Additionally, you may also choose to [scale or autoscale all or specific `User` node pools](scale-cluster.md#scale-user-node-pools-to-0) to 0, maintaining only the necessary cluster configuration.
+No, all data is stored in the cluster's region.
-You can't directly scale [system node pools](use-system-pools.md) to zero.
+### How to avoid permission ownership setting slow issues when the volume has numerous files
-## Can I use the Virtual Machine Scale Set APIs to scale manually?
+Traditionally, if your pod runs as a nonroot user (which it should), you must specify an `fsGroup` inside the pod's security context so the volume can be readable and writable by the pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
-No, scale operations by using the Virtual Machine Scale Set APIs aren't supported. Use the AKS APIs (`az aks scale`).
+A side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions noted below). This scenario happens even if group ownership of the volume already matches the requested `fsGroup`. It can be expensive for larger volumes with lots of small files, which can cause pod startup to take a long time. This scenario was a known problem before v1.20, and the workaround is setting the pod to run as root:
-## Can I use Virtual Machine Scale Sets to manually scale to zero nodes?
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: security-context-demo
+spec:
+ securityContext:
+ runAsUser: 0
+ fsGroup: 0
+```
-No, scale operations by using the Virtual Machine Scale Set APIs aren't supported. You can use the AKS API to scale to zero nonsystem node pools or [stop your cluster](start-stop-cluster.md) instead.
+The issue has been resolved with Kubernetes version 1.20. For more information, see [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/).
-## Can I stop or de-allocate all my VMs?
+## Networking
-While AKS has resilience mechanisms to withstand such a config and recover from it, it isn't a supported configuration. [Stop your cluster](start-stop-cluster.md) instead.
+### How does the managed Control Plane communicate with my Nodes?
-## Can I use custom VM extensions?
+AKS uses a secure tunnel communication to allow the api-server and individual node kubelets to communicate even on separate virtual networks. The tunnel is secured through mTLS encryption. The current main tunnel that is used by AKS is [Konnectivity, previously known as apiserver-network-proxy](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/). Verify all network rules follow the [Azure required network rules and FQDNs](limit-egress-traffic.md).
-No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. To install custom components, use the Kubernetes APIs and mechanisms. For example, use DaemonSets to install required components.
+### Can my pods use the API server FQDN instead of the cluster IP?
-## Does AKS store any customer data outside of the cluster's region?
+Yes, you can add the annotation `kubernetes.azure.com/set-kube-service-host-fqdn` to pods to set the `KUBERNETES_SERVICE_HOST` variable to the domain name of the API server instead of the in-cluster service IP. This is useful in cases where your cluster egress is done via a layer 7 firewall, such as when using Azure Firewall with Application Rules.
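For illustration, the annotation goes in the pod metadata. A minimal sketch, assuming the annotation value `"true"`; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo                                           # hypothetical name
  annotations:
    kubernetes.azure.com/set-kube-service-host-fqdn: "true"
spec:
  containers:
  - name: app
    image: busybox                                          # illustrative image only
    command: ["sleep", "3600"]
```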
-No, all data is stored in the cluster's region.
+### Can I configure NSGs with AKS?
-## Are AKS images required to run as root?
+AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. AKS only modifies the NSG settings of the network interfaces. Whether you're using Azure CNI or kubenet, you must ensure that the security rules in the NSGs allow traffic between the node and pod CIDR ranges. For more information, see [Network security groups](concepts-network.md#network-security-groups).
-The following images have functional requirements to "Run as Root" and exceptions must be filed for any policies:
+### How does Time synchronization work in AKS?
-- *mcr.microsoft.com/oss/kubernetes/coredns*-- *mcr.microsoft.com/azuremonitor/containerinsights/ciprod*-- *mcr.microsoft.com/oss/calico/node*-- *mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi*
+AKS nodes run the "chrony" service, which pulls time from the localhost. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod.
-## What is Azure CNI Transparent Mode vs. Bridge Mode?
+### What is Azure CNI Transparent Mode vs. Bridge Mode?
Starting with version 1.2.0, Azure CNI sets Transparent mode as default for single tenancy Linux CNI deployments. Transparent mode is replacing bridge mode. In the following [Bridge mode](#bridge-mode) and [Transparent mode](#transparent-mode) sections, we discuss more about the differences between both modes and the benefits and limitations for Transparent mode in Azure CNI.
-### Bridge mode
+#### Bridge mode
Azure CNI Bridge mode creates an L2 bridge named "azure0" in a "just in time" fashion. All the host side pod `veth` pair interfaces are connected to this bridge. Pod-Pod intra VM communication and the remaining traffic go through this bridge. The bridge is a layer 2 virtual device that on its own can't receive or transmit anything unless you bind one or more real devices to it. For this reason, eth0 of the Linux VM has to be converted into a subordinate to "azure0" bridge, which creates a complex network topology within the Linux VM. As a symptom, CNI had to handle other networking functions, such as DNS server updates.
default via 10.240.0.1 dev azure0 proto dhcp src 10.240.0.4 metric 100
root@k8s-agentpool1-20465682-1:/# ```
-### Transparent mode
+#### Transparent mode
+Transparent mode takes a straightforward approach to setting up Linux networking. In this mode, Azure CNI doesn't change any properties of the eth0 interface in the Linux VM. Leaving the Linux networking properties unchanged helps reduce the complex corner-case issues that clusters can face with Bridge mode. In Transparent mode, Azure CNI creates host-side pod `veth` pair interfaces and adds them to the host network. Intra-VM Pod-to-Pod communication happens through IP routes added by the CNI. Essentially, Pod-to-Pod communication is over layer 3, and L3 routing rules route pod traffic.
The following example shows an ip route setup of Transparent mode. Each Pod's in
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown ```
-### Benefits of Transparent mode
+#### Benefits of Transparent mode
- Provides mitigation for `conntrack` DNS parallel race condition and avoidance of 5-sec DNS latency issues without the need to set up node local DNS (you may still use node local DNS for performance reasons). - Eliminates the initial 5-sec DNS latency CNI bridge mode introduces today due to "just in time" bridge setup.
The following example shows an ip route setup of Transparent mode. Each Pod's in
- Provides better handling of UDP traffic and mitigation for UDP flood storms when ARP times out. In Bridge mode, when the bridge doesn't know the MAC address of the destination pod in intra-VM Pod-to-Pod communication, it results, by design, in a storm of packets to all ports. This issue is resolved in Transparent mode, as there are no L2 devices in the path. See more [here](https://github.com/Azure/azure-container-networking/issues/704). - Transparent mode performs better than Bridge mode for intra-VM Pod-to-Pod communication in terms of throughput and latency.
-## How to avoid permission ownership setting slow issues when the volume has numerous files?
+## Add-ons, extensions, and integrations
-Traditionally if your pod is running as a nonroot user (which you should), you must specify a `fsGroup` inside the pod's security context so the volume can be readable and writable by the Pod. This requirement is covered in more detail in [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
+### Can I use custom VM extensions?
-A side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions noted below). This scenario happens even if group ownership of the volume already matches the requested `fsGroup`. It can be expensive for larger volumes with lots of small files, which can cause pod startup to take a long time. This scenario has been a known problem before v1.20, and the workaround is setting the Pod run as root:
+No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. To install custom components, use the Kubernetes APIs and mechanisms. For example, use DaemonSets to install required components.
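As a sketch of that approach, a DaemonSet runs one copy of a component on every node. The resource name and image below are placeholders, not a specific recommended agent:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                               # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example.azurecr.io/node-agent:1.0 # hypothetical image
```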
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: security-context-demo
-spec:
- securityContext:
- runAsUser: 0
- fsGroup: 0
+### What Kubernetes admission controllers does AKS support? Can admission controllers be added or removed?
+
+AKS supports the following [admission controllers][admission-controllers]:
+
+- *NamespaceLifecycle*
+- *LimitRanger*
+- *ServiceAccount*
+- *DefaultIngressClass*
+- *DefaultStorageClass*
+- *DefaultTolerationSeconds*
+- *MutatingAdmissionWebhook*
+- *ValidatingAdmissionWebhook*
+- *ResourceQuota*
+- *PodNodeSelector*
+- *PodTolerationRestriction*
+- *ExtendedResourceToleration*
+
+Currently, you can't modify the list of admission controllers in AKS.
+
+### Can I use admission controller webhooks on AKS?
+
+Yes, you can use admission controller webhooks on AKS. It's recommended that you exclude internal AKS namespaces, which are marked with the **control-plane** label. For example:
+
+```output
+namespaceSelector:
+ matchExpressions:
+ - key: control-plane
+ operator: DoesNotExist
```
-The issue has been resolved with Kubernetes version 1.20. For more information, see [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/).
+AKS firewalls API server egress, so your admission controller webhooks must be accessible from within the cluster.
-## Can I use FIPS cryptographic libraries with deployments on AKS?
+### Can admission controller webhooks impact kube-system and internal AKS namespaces?
-FIPS-enabled nodes are now supported on Linux-based node pools. For more information, see [Add a FIPS-enabled node pool](create-node-pools.md#fips-enabled-node-pools).
+To protect the stability of the system and prevent custom admission controllers from impacting internal services in the kube-system namespace, AKS has an **Admissions Enforcer**, which automatically excludes kube-system and AKS internal namespaces. This service ensures that custom admission controllers don't affect the services running in kube-system.
-## Can I configure NSGs with AKS?
+If you have a critical use case for deploying something on kube-system (not recommended) in support of your custom admission webhook, you may add the following label or annotation so that Admissions Enforcer ignores it.
-AKS doesn't apply Network Security Groups (NSGs) to its subnet and doesn't modify any of the NSGs associated with that subnet. AKS only modifies the network interfaces NSGs settings. If you're using CNI, you also must ensure the security rules in the NSGs allow traffic between the node and pod CIDR ranges. If you're using kubenet, you must also ensure the security rules in the NSGs allow traffic between the node and pod CIDR. For more information, see [Network security groups](concepts-network.md#network-security-groups).
+Label: ```"admissions.enforcer/disabled": "true"``` or Annotation: ```"admissions.enforcer/disabled": true```
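For example, the label could be applied to a webhook configuration as in the following sketch; the configuration name is hypothetical, and the webhook definitions are omitted for brevity:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook          # hypothetical name
  labels:
    admissions.enforcer/disabled: "true"    # tells Admissions Enforcer to ignore this webhook
webhooks: []                                # webhook definitions omitted for brevity
```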
-## How does Time synchronization work in AKS?
+### Is Azure Key Vault integrated with AKS?
-AKS nodes run the "chrony" service, which pulls time from the localhost. Containers running on pods get the time from the AKS nodes. Applications launched inside a container use time from the container of the pod.
+[Azure Key Vault Provider for Secrets Store CSI Driver][aks-keyvault-provider] provides native integration of Azure Key Vault into AKS.
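As a rough sketch of how that integration is typically consumed, a `SecretProviderClass` describes which Key Vault objects to mount. This is not a complete configuration; the resource name, Key Vault name, tenant ID, and secret name below are placeholders:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-demo                  # hypothetical name
spec:
  provider: azure
  parameters:
    keyvaultName: example-kv           # hypothetical Key Vault name
    tenantId: "<tenant-id>"            # placeholder tenant ID
    objects: |
      array:
        - |
          objectName: example-secret
          objectType: secret
```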
+
+### Can I use FIPS cryptographic libraries with deployments on AKS?
+
+FIPS-enabled nodes are now supported on Linux-based node pools. For more information, see [Add a FIPS-enabled node pool](create-node-pools.md#fips-enabled-node-pools).
-## How are AKS addons updated?
+### How are AKS addons updated?
Any patch, including a security patch, is automatically applied to the AKS cluster. Anything bigger than a patch, like major or minor version changes (which can have breaking changes to your deployed objects), is updated when you update your cluster if a new release is available. You can find when a new release is available by visiting the [AKS release notes](https://github.com/Azure/AKS/releases).
-## What is the purpose of the AKS Linux Extension I see installed on my Linux Virtual Machine Scale Sets instances?
+### What is the purpose of the AKS Linux Extension I see installed on my Linux Virtual Machine Scale Sets instances?
The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools:
These tools help provide observability around many node health related problems,
The extension **doesn't require additional outbound access** to any URLs, IP addresses, or ports beyond the [documented AKS egress requirements](./limit-egress-traffic.md). It doesn't require any special permissions granted in Azure. It uses kubeconfig to connect to the API server to send the monitoring data collected.
+## Troubleshooting cluster issues
+
+### Why is my cluster delete taking so long?
+
+Most clusters are deleted upon user request. In some cases, especially when you bring your own resource group or perform cross-resource-group tasks, deletion can take more time or even fail. If you have an issue with deletes, double-check that you don't have locks on the resource group, that any resources outside of the resource group are disassociated from it, and so on.
+
+### Why is my cluster create/update taking so long?
+
+If you have issues with create and update cluster operations, make sure you don't have any assigned policies or service constraints that may block your AKS cluster from managing resources like VMs, load balancers, tags, etc.
+
+### If I have pods or deployments in a 'NodeLost' or 'Unknown' state, can I still upgrade my cluster?
+
+You can, but we don't recommend it. You should perform updates when the state of the cluster is known and healthy.
+
+### If I have a cluster with one or more nodes in an Unhealthy state or shut down, can I perform an upgrade?
+
+No. Before upgrading, remove any nodes that are in a failed state or otherwise shut down from the cluster.
+
+### I ran a cluster delete, but see the error `[Errno 11001] getaddrinfo failed`
+
+Most commonly, this error arises if you have one or more Network Security Groups (NSGs) still in use that are associated with the cluster. Remove them and attempt the delete again.
+
+### I ran an upgrade, but now my pods are in crash loops, and readiness probes fail
+
+Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
+
+### My cluster was working, but suddenly can't provision LoadBalancers, mount PVCs, etc.
+
+Confirm your service principal hasn't expired. See: [AKS service principal](./kubernetes-service-principal.md) and [AKS update credentials](./update-credentials.md).
+ <!-- LINKS - internal --> [aks-upgrade]: ./upgrade-cluster.md
The extension **doesn't require additional outbound access** to any URLs, IP add
[aks-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service [cordon-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ [admission-controllers]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/-
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) o
Previously updated : 12/27/2023 Last updated : 07/16/2024
Now that both the Node.js and Python applications are deployed, you watch messag
## Next steps > [!div class="nextstepaction"]
-> [Learn more about other cluster extensions][cluster-extensions].
+> [Learn how to create the Dapr extension][dapr-create-extension]
<!-- LINKS --> <!-- INTERNAL --> [azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [cluster-extensions]: ./cluster-extensions.md
-[dapr-overview]: ./dapr.md
+[dapr-overview]: ./dapr-overview.md
[az-group-delete]: /cli/azure/group#az-group-delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[dapr-create-extension]: ./dapr.md
<!-- EXTERNAL --> [hello-world-gh]: https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
To better optimize your costs during these periods, you can turn off, or stop, y
> [!CAUTION] > Stopping your cluster deallocates the control plane and releases the capacity. In regions experiencing capacity constraints, customers may be unable to start a stopped cluster. We do not recommend stopping mission critical workloads for this reason.
+> [!NOTE]
+> AKS start operations restore all objects from etcd, with the exception of standalone pods, with their original names and ages. For example, a pod's age continues to be calculated from its original creation time, and this count keeps increasing over time, regardless of whether the cluster was in a stopped state.
++ ## Before you begin This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
api-center Discover Shadow Apis Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-shadow-apis-dev-proxy.md
Title: Tutorial - Discover shadow APIs using Dev Proxy
-description: In this tutorial, you learn how to discover shadow APIs in your apps using Dev Proxy and onboard them to API Center.
+ Title: Discover shadow APIs using Dev Proxy
+description: Learn how to discover shadow APIs in your apps using Dev Proxy and onboard them to API Center.
-+ Last updated 07/15/2024
-# Tutorial - Discover shadow APIs using Dev Proxy
+# Discover shadow APIs using Dev Proxy
Using Azure API Center, you catalog the APIs used in your organization. This allows you to tell which APIs you use, where each API is in its lifecycle, and who to contact if there are issues. In short, having an up-to-date catalog of APIs helps you improve your governance, compliance, and security posture.
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
Create and use an API center for the following:
* **Real-world API representation** - Add real-world information about each API including versions and definitions such as OpenAPI definitions. List API deployments and associate them with runtime environments, for example, representing Azure API Management or other API management solutions.
-* **API governance** - Organize and filter APIs and related resources using built-in and custom metadata, to help with API governance and discovery by API consumers. Set up linting and analysis to enforce API definition quality.
+* **API governance** - Organize and filter APIs and related resources using built-in and custom metadata, to help with API governance and discovery by API consumers. Set up [linting and analysis](enable-api-analysis-linting.md) to enforce API definition quality. Integrate with tools such as Dev Proxy to ensure that apps don't use unregistered [shadow APIs](discover-shadow-apis-dev-proxy.md) or APIs that don't meet organizational standards.
* **API discovery and reuse** - Enable developers and API program managers to discover APIs via the Azure portal, an API Center portal, and developer tools including a [Visual Studio Code extension](use-vscode-extension.md)ΓÇï.
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
ai-usage: ai-assisted+ <!-- NOTES:
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
For logs stored in the App Service file system, the easiest way is to download t
- Linux/custom containers: `https://<app-name>.scm.azurewebsites.net/api/logs/docker/zip` - Windows apps: `https://<app-name>.scm.azurewebsites.net/api/dump`
-For Linux/custom containers, the ZIP file contains console output logs for both the docker host and the docker container. For a scaled-out app, the ZIP file contains one set of logs for each instance. In the App Service file system, these log files are the contents of the */home/LogFiles* directory.
+For Linux/custom containers, the ZIP file contains console output logs for both the docker host and the docker container. For a scaled-out app, the ZIP file contains one set of logs for each instance. In the App Service file system, these log files are the contents of the */home/LogFiles* directory. Deployment logs are stored in */site/deployments/*.
For Windows apps, the ZIP file contains the contents of the *D:\Home\LogFiles* directory in the App Service file system. It has the following structure:
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
ms.devlang: csharp
zone_pivot_groups: app-service-portal-azd+ # Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
app-service Tutorial Java Tomcat Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md
Last updated 05/08/2024 zone_pivot_groups: app-service-portal-azd+ # Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Define access policies to use the user-assigned managed identity with your Key V
2. Select the Key Vault that contains your certificate. 3. If you're using the permission model **Vault access policy**: Select **Access Policies**, select **+ Add Access Policy**, select **Get** for **Secret permissions**, and choose your user-assigned managed identity for **Select principal**. Then select **Save**.
- If you're using **Azure role-based access control** follow the article [Assign a managed identity access to a resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) and assign the user-assigned managed identity the **Key Vault Secrets User** role to the Azure Key Vault.
+ If you're using **Azure role-based access control** follow the article [Assign a managed identity access to a resource](/entra/identity/managed-identities-azure-resources/how-to-assign-access-azure-resource) and assign the user-assigned managed identity the **Key Vault Secrets User** role to the Azure Key Vault.
### Verify Firewall Permissions to Key Vault
azure-app-configuration Feature Management Dotnet Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-dotnet-reference.md
Allocation logic is similar to the [Microsoft.Targeting](#microsofttargeting) fe
### Overriding Enabled State with a Variant
-You can use variants to override the enabled state of a feature flag. This gives variants an opportunity to extend the evaluation of a feature flag. If a caller is checking whether a flag that has variants is enabled, the feature manager will check if the variant assigned to the current user is set up to override the result. This is done using the optional variant property `status_override`. By default, this property is set to `None`, which means the variant doesn't affect whether the flag is considered enabled or disabled. Setting `status_override` to `Enabled` allows the variant, when chosen, to override a flag to be enabled. Setting `status_override` to `Disabled` provides the opposite functionality, therefore disabling the flag when the variant is chosen. A feature with a `Status` of `Disabled` can't be overridden.
+You can use variants to override the enabled state of a feature flag. This gives variants an opportunity to extend the evaluation of a feature flag. If a caller is checking whether a flag that has variants is enabled, the feature manager will check if the variant assigned to the current user is set up to override the result. This is done using the optional variant property `status_override`. By default, this property is set to `None`, which means the variant doesn't affect whether the flag is considered enabled or disabled. Setting `status_override` to `Enabled` allows the variant, when chosen, to override a flag to be enabled. Setting `status_override` to `Disabled` provides the opposite functionality, therefore disabling the flag when the variant is chosen. A feature with an `enabled` state of `false` can't be overridden.
If you're using a feature flag with binary variants, the `status_override` property can be very helpful. It allows you to continue using APIs like `IsEnabledAsync` and `FeatureGateAttribute` in your application, all while benefiting from the new features that come with variants, such as percentile allocation and seed.
azure-functions Deployment Zip Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/deployment-zip-push.md
To speed up development, you might find it easier to deploy your function app pr
For more information, see the [.zip deployment reference](https://github.com/projectkudu/kudu/wiki/Deploying-from-a-zip-file).
-## Deployment .zip file requirements
-
-The .zip file that you use for push deployment must contain all of the files needed to run your function.
- >[!IMPORTANT] > When you use .zip deployment, any files from an existing deployment that aren't found in the .zip file are deleted from your function app.
+## Deployment .zip file requirements
-A function app includes all of the files and folders in the `wwwroot` directory. A .zip file deployment includes the contents of the `wwwroot` directory, but not the directory itself. When deploying a C# class library project, you must include the compiled library files and dependencies in a `bin` subfolder in your .zip package.
-When you are developing on a local computer, you can manually create a .zip file of the function app project folder using built-in .zip compression functionality or third-party tools.
+A zip deployment extracts the zip archive's files and folders into the `wwwroot` directory. If you include the parent directory when creating the archive, the system won't find the files it expects to see in `wwwroot`.
## <a name="cli"></a>Deploy by using Azure CLI
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
You can create your function app and other required resources in Azure using one
+ [Deployment templates](./functions-infrastructure-as-code.md): You can use ARM templates and Bicep files to automate the deployment of the required resources to Azure. Make sure your template includes any [required settings](#deployment-requirements). + [Azure portal](./functions-create-function-app-portal.md): You can create the required resources in the [Azure portal](https://portal.azure.com).
-### Publish code project
+### Publish your application
After creating your function app and other required resources in Azure, you can deploy the code project to Azure using one of these methods:
After creating your function app and other required resources in Azure, you can
For more information, see [Deployment technologies in Azure Functions](functions-deployment-technologies.md).
+#### Deployment payload
+
+Many of the deployment methods make use of a zip archive. If you are creating the zip archive yourself, it must follow the structure outlined in this section. If it does not, your app may experience errors at startup.
+
+The deployment payload should match the output of a `dotnet publish` command, though without the enclosing parent folder. The zip archive should be made from the following files:
+
+- `.azurefunctions/`
+- `extensions.json`
+- `functions.metadata`
+- `host.json`
+- `worker.config.json`
+- Your project executable (a console app)
+- Other supporting files and directories peer to that executable
+
+These files are generated by the build process, and they are not meant to be edited directly.
+
+When preparing a zip archive for deployment, you should only compress the contents of the output directory, not the enclosing directory itself. When the archive is extracted into the current working directory, the files listed above need to be immediately visible.
+ ### Deployment requirements There are a few requirements for running .NET functions in the isolated worker model in Azure, depending on the operating system:
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` values
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
+### Creating the zip archive
++ ## Use WEBSITE_RUN_FROM_PACKAGE = 1 This section provides information about how to run your function app from a local package file.
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps is a collection of geospatial services and SDKs that use fresh mapping data to provide geographic context to web and mobile applications. Azure Maps provides: * REST APIs to render vector and raster maps in multiple styles and satellite imagery.
-* Creator services to create and render maps based on private indoor map data.
* Search services to locate addresses, places, and points of interest around the world. * Various routing options; such as point-to-point, multipoint, multipoint optimization, isochrone, electric vehicle, commercial vehicle, traffic influenced, and matrix routing. * Traffic flow view and incidents view, for applications that require real-time traffic information.
Verify that the location of your current IP address is in a supported country/re
## Next steps
-Learn about indoor maps:
-
-[What is Azure Maps Creator?]
- Try a sample app that showcases Azure Maps: [Quickstart: Create a web app]
Stay up to date on Azure Maps:
[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md [How to use the Get Map Attribution API]: how-to-show-attribution.md [Quickstart: Create a web app]: quick-demo-map-app.md
-[What is Azure Maps Creator?]: about-creator.md
[v1]: /rest/api/maps/data?view=rest-maps-1.0&preserve-view=true [v2]: /rest/api/maps/data [How to create data registry]: how-to-create-data-registries.md
azure-maps About Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-creator.md
- Title: Overview for Microsoft Azure Maps Creator-
-description: Learn about services and capabilities in Microsoft Azure Maps Creator and how to use them in your applications.
-- Previously updated : 08/03/2023-----
-# What is Azure Maps Creator?
-
-Azure Maps Creator is a first party geospatial platform that enables you to create and render maps, based on indoor map data, on top of the outdoor map in your web and mobile applications.
-
-## Services in Azure Maps Creator
-
-Creator is a platform for building indoor mapping solutions for all your needs. As an extension of Azure Maps, Creator allows blending of both indoor and outdoor maps for a seamless visual experience. Creator supports generating indoor maps from CAD drawings (DWG) or GeoJSON and enables custom styling of the map. You can also provide directions within your indoor map using wayfinding.
--
-### Conversion
-
-An [onboarding tool] is provided to prepare your facility's DWGs by identifying the data to use and to positioning your facility on the map. The conversion service then converts the geometry and data from your DWG files into a digital indoor map.
-
-The first step in creating your indoor map is to upload a drawing package into your Azure Maps account. A drawing package contains one or more CAD (computer-aided design) drawings of your facility along with a manifest describing the drawings. The drawings define the elements of the facility while the manifest tells the Azure Maps [Conversion] service how to read the facility drawing files and metadata. For more
-information about manifest properties, see [Manifest file requirements] and for more information on creating and uploading a drawing package, see the [Drawing package guide].
-
-### Dataset
-
-A collection of the indoor map [features] of a facility. Update your facility dataset through a visual editor and query for features in real time using the [Features API]. For more information, see [Work with datasets using the QGIS plugin].
-
-### Rendering
-
-[Tilesets], created from your data, are used to render maps on mobile devices or in the browser.
-
-### Styling
-
-[Custom styling] enables you to customize your indoor maps to meet your needs. You can customize your facilityΓÇÖs look and feel to reflect your brand colors or emphasize different rooms or specific areas of interest. Everything is configurable from the color of a feature, an icon that renders, or the zoom level when a feature should appear, resize or disappear. You can define how your data should be styled in the [visual style editor]. For more information, see [Create custom styles for indoor maps].
-
-### Wayfinding
-
-A [Routeset] is automatically created for your facility. [Wayfinding] uses that routeset to provide your customers with the shortest path between two points using the [Wayfinding service].
-
-### SDK
-
-Use the Azure Maps Web SDK to develop applications that provide a customized indoor map experience. For more information, see [Use the Azure Maps Indoor Maps module].
-
-## The indoor maps workflow
-
-This section provides a high-level overview of the indoor map creation workflow.
-
-1. **Create**. You first must create a drawing package containing one or more CAD
- (computer-aided design) drawings of your facility along with a [manifest]
- describing the drawings. You can use the [Azure Maps Creator onboarding tool] to
- create new and edit existing [manifest files].
-
-1. **Upload**. Upload your drawing packages into your Azure Storage
- account. For more information, see [How to create data registry].
-
-1. **Convert**. Once the drawing package is uploaded into your Azure Storage account,
- use the [Conversion] service to validate the data in the uploaded drawing
- package and convert it into map data.
-
-1. **Dataset**. Create a [dataset] from the map data. A dataset is collection
- of indoor map [features] that are stored in your Azure Maps account.
- For more information, see [Work with datasets using the QGIS plugin].
-
-1. **Tileset**. Converting your data into a [tileset] allows
- you to add it to an Azure Maps map and apply custom styling.
-
-1. **Styles**. Styles drive the visual appearance of spatial features on the map.
- When a new tileset is created, default styles are automatically associated with the
- features it contains. These default styles can be modified to suit your needs
- using the [visual style editor]. For more information, see
- [Create custom styles for indoor maps].
-
-1. **Wayfinding**. Provide your customers with the shortest path between two points
- within a facility. For more information, see [Wayfinding].
-
-## Azure Maps Creator documentation
-
-### ![Concept articles](./media/creator-indoor-maps/about-creator/Concepts.png) Concepts
--- [Indoor map concepts]-
-### ![Creator tutorial](./media/creator-indoor-maps/about-creator/tutorials.png) Tutorials
--- [Use Azure Maps Creator to create indoor maps]-
-### ![How-to articles](./media/creator-indoor-maps/about-creator/how-to-guides.png) How-to guides
--- [Manage Creator]-- [Query datasets with WFS API]-- [Custom styling for indoor maps]-- [Indoor maps wayfinding service]-- [Edit indoor maps using the QGIS plugin]-- [Create dataset using GeoJson package]-
-### ![Reference articles](./media/creator-indoor-maps/about-creator/reference.png) Reference
--- [Drawing package requirements]-- [Facility Ontology]-- [Drawing error visualizer]-- [Azure Maps Creator REST API]-
-[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
-[Azure Maps Creator REST API]: /rest/api/maps-creator
-[Conversion]: /rest/api/maps-creator/conversion
-[Create custom styles for indoor maps]: how-to-create-custom-styles.md
-[Create dataset using GeoJson package]: how-to-dataset-geojson.md
-[Custom styling for indoor maps]: how-to-create-custom-styles.md
-[custom styling]: creator-indoor-maps.md#custom-styling-preview
-[dataset]: creator-indoor-maps.md#datasets
-[Drawing error visualizer]: drawing-error-visualizer.md
-[Drawing package guide]: drawing-package-guide.md?pivots=drawing-package-v2
-[Drawing package requirements]: drawing-requirements.md
-[Edit indoor maps using the QGIS plugin]: creator-qgis-plugin.md
-[Facility Ontology]: creator-facility-ontology.md
-[Features API]: /rest/api/maps-creator/features?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
-[features]: glossary.md#feature
-[How to create data registry]: how-to-create-data-registries.md
-[Indoor map concepts]: creator-indoor-maps.md
-[Indoor maps wayfinding service]: how-to-creator-wayfinding.md
-[Manage Creator]: how-to-manage-creator.md
-[Manifest file requirements]: drawing-requirements.md#manifest-file-requirements-1
-[manifest files]: drawing-requirements.md#manifest-file-1
-[manifest]: drawing-requirements.md#manifest-file-requirements
-[onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
-[Query datasets with WFS API]: how-to-creator-wfs.md
-[Routeset]: /rest/api/maps-creator/routeset/create?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
-[tileset]: creator-indoor-maps.md#tilesets
-[Tilesets]: creator-indoor-maps.md#tilesets
-[Use Azure Maps Creator to create indoor maps]: tutorial-creator-indoor-maps.md
-[Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
-[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
-[Wayfinding service]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
-[Wayfinding]: creator-indoor-maps.md#wayfinding-preview
-[Work with datasets using the QGIS plugin]: creator-qgis-plugin.md
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
The `category` class feature defines category names. For example: "room.conferen
Learn more about Creator for indoor maps by reading:
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
- > [!div class="nextstepaction"] > [Creator for indoor maps]
Learn more about Creator for indoor maps by reading:
<! learn.microsoft.com links > [Create a dataset using a GeoJson package]: how-to-dataset-geojson.md [Creator for indoor maps]: creator-indoor-maps.md
-[What is Azure Maps Creator?]: about-creator.md
+ <! External Links > [Azure Maps services]: https://aka.ms/AzureMaps [feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
The following table describes the mapping between geography and supported Azure
| Europe| West Europe, North Europe | eu.atlas.microsoft.com | |United States | West US 2, East US 2 | us.atlas.microsoft.com |
-## Next steps
-
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
- [Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies
-[What is Azure Maps Creator?]: about-creator.md
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
4. Save the new **tilesetId** for the next step. 5. To enable the visualization of the updated campus dataset, update the tileset identifier in your application. If the old tileset is no longer used, you can delete it.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Tutorial: Creating a Creator indoor map]
- <!-- Internal Links -> [Convert a drawing package]: #convert-a-drawing-package [Custom styling service]: #custom-styling-preview
The following example shows how to update a dataset, create a new tileset, and d
[Manage Azure Maps Creator]: how-to-manage-creator.md [structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure [style picker control]: choose-map-style.md#add-the-style-picker-control
-[Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md
[Tutorial: Implement IoT spatial analytics by using Azure Maps]: tutorial-iot-hub-maps.md [verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
This article demonstrates how to create an indoor map using the Azure Maps Creat
## Prerequisites -- A basic understanding of Creator. For an overview, see [What is Azure Maps Creator?]
+- A basic understanding of Creator. For an overview, see [Creator for indoor maps].
- A drawing package. For more information, see [Drawing package requirements]. > [!NOTE]
Integrate the indoor map into your applications using the Web SDK.
> [!div class="nextstepaction"] > [Use the Azure Maps Indoor Maps module]
-[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
+[Azure Maps Creator onboarding tool]: creator-onboarding-tool.md
[Conversion service]: /rest/api/maps-creator/conversion [Convert a drawing package]: creator-indoor-maps.md#convert-a-drawing-package [dataset]: creator-indoor-maps.md#datasets
Integrate the indoor map into your applications using the Web SDK.
[The Map Configuration ID]: #the-map-configuration-id [tileset]: creator-indoor-maps.md#tilesets [Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
-[What is Azure Maps Creator?]: about-creator.md
+[Creator for indoor maps]: creator-indoor-maps.md
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
The [Azure Maps QGIS plugin] is used to view and edit [datasets] in [QGIS]. It e
## Prerequisites -- Understanding of [Creator concepts].-- An Azure Maps Creator [dataset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful.
+- Understanding of [Creator concepts]
+- A [dataset]
- A basic working knowledge of [QGIS] ## Get started
If you have question related to Azure Maps, see [MICROSOFT Q&A]. Be sure and tag
[MICROSOFT Q&A]: /answers/questions/ask [QGIS]: https://qgis.org/en/site/ [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
This tutorial uses the [Postman] application, but you can choose a different API
> [!IMPORTANT] > Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is how you reference the drawing package you uploaded into your Azure storage account from your source code and HTTP requests.
-2. Now that the drawing package is uploaded, use `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package].
+2. Now that the drawing package is uploaded, use `udid` for the uploaded package to convert the package into map data.
>[!NOTE] >If your conversion process succeeds, you will not receive a link to the Error Visualizer tool.
The _ConversionWarningsAndErrors.json_ contains a list of your drawing package e
Learn more by reading:
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
- > [!div class="nextstepaction"] > [Creator for indoor maps] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Conversion API]: /rest/api/maps-creator/conversion
-[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
[Creator for indoor maps]: creator-indoor-maps.md [Creator resource]: how-to-manage-creator.md [Drawing package requirements]: drawing-requirements.md
Learn more by reading:
[How to create data registry]: how-to-create-data-registries.md [Postman]: https://www.postman.com/ [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[What is Azure Maps Creator?]: about-creator.md
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
The following image shows the Drawing Units window within Autodesk's AutoCAD® s
### Alignment
-Each floor of a facility is provided as an individual DWG file. As a result, it's possible that the floors aren't perfectly aligned when stacked on top of each other. Azure Maps Conversion service requires that all drawings be aligned with the physical space. To verify alignment, use a reference point that can span across floors, such as an elevator or column that spans multiple floors. you can view all the floors by opening a new drawing, and then use the `XATTACH` command to load all floor drawings. If you need to fix any alignment issues, you can use the reference points and the `MOVE` command to realign the floors that require it.
-
+Each floor of a facility is provided as an individual DWG file. As a result, it's possible that the floors aren't perfectly aligned when stacked on top of each other. Azure Maps Conversion service requires that all drawings be aligned with the physical space. To verify alignment, use a reference point that can span across floors, such as an elevator or column that spans multiple floors. You can view all the floors by opening a new drawing and then using the `XATTACH` command to load all floor drawings. If you need to fix any alignment issues, you can use the reference points and the `MOVE` command to realign the floors that require it.
### Layers Ensure that each layer of a drawing contains entities of one feature class. If a layer contains entities for walls, then it can't have other features such as units or doors. However, a feature class can be split up over multiple layers. For example, you can have three layers in the drawing that contain wall entities.
You should now have all the DWG drawings prepared to meet Azure Maps Conversion
## Next steps
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
- > [!div class="nextstepaction"] > [Creator for indoor maps]
-> [!div class="nextstepaction"]
-> [Tutorial: Creating a Creator indoor map]
- :::zone-end :::zone pivot="drawing-package-v2"
When finished, select the **Create + Download** button to download a copy of the
## Next steps
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
- > [!div class="nextstepaction"] > [Creator for indoor maps] > [!div class="nextstepaction"] > [Create indoor map with the onboarding tool]- :::zone-end <! Drawing Package v1 links> [sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0 [Manifest file requirements]: drawing-requirements.md#manifest-file-requirements-1 [Drawing Package Requirements]: drawing-requirements.md
-[Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md
- <! Drawing Package v2 links> [Conversion service]: https://aka.ms/creator-conversion [sample drawing package v2]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0
-[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
+[Azure Maps Creator onboarding tool]: creator-onboarding-tool.md
[manifest files]: drawing-requirements.md#manifest-file-1 [wayfinding]: creator-indoor-maps.md#wayfinding-preview [facility level]: drawing-requirements.md#facility-level
-[Create indoor map with the onboarding tool]: creator-onboarding-tool.md
-[What is Azure Maps Creator?]: about-creator.md
[Creator for indoor maps]: creator-indoor-maps.md
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
For a guide on how to prepare your drawing package, see the drawing package guid
> [!div class="nextstepaction"] > [Drawing Package Guide]
-Learn more by reading:
-
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps]
-
-[What is Azure Maps Creator?]: about-creator.md
-[Creator for indoor maps]: creator-indoor-maps.md
- <! Drawing Package v1 links> [Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0 [Drawing Package Guide]: drawing-package-guide.md
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
When you create an indoor map using Azure Maps Creator, default styles are appli
## Prerequisites - Understanding of [Creator concepts].-- An Azure Maps Creator [tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful.
+- An Azure Maps Creator [tileset].
## Create custom styles using Creators visual editor
Now when you select that unit in the map, the pop-up menu has the new layer ID,
[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview&preserve-view=true [tileset]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true [unitProperties]: drawing-requirements.md#unitproperties
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
[Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
The Azure Maps Creator [wayfinding service] allows you to navigate from place to
## Prerequisites - Understanding of [Creator concepts].-- An Azure Maps Creator [dataset] and [tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful.
+- An Azure Maps Creator [dataset] and [tileset].
>[!IMPORTANT] > > - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services]. > - In the URL examples in this article you will need to: > - Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-> - Replace `{datasetId`} with your `datasetId`. For more information, see the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
+> - Replace `{datasetId}` with your `datasetId`.
## Create a routeset
The wayfinding service calculates the path through specific intervening points.
[Get the facility ID]: #get-the-facility-id <! learn.microsoft.com links > [Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
-[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
[Creator concepts]: creator-indoor-maps.md [dataset]: creator-indoor-maps.md#datasets [tileset]: creator-indoor-maps.md#tilesets
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
[wayfinding service]: creator-indoor-maps.md#wayfinding-preview [wayfinding]: creator-indoor-maps.md#wayfinding-preview <! REST API Links >
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
This article describes how to query Azure Maps Creator [datasets] using [Web Fea
## Prerequisites
-* Successful completion of [Tutorial: Use Creator to create indoor maps].
-* The `datasetId` obtained in [Check dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
+* A [dataset]
-This article uses the same sample indoor map as used in the Tutorial: Use Creator to create indoor maps.
>[!IMPORTANT] > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services]. > * In the URL examples in this article you will need to replace: > * `{Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
## Query for feature collections
After the response returns, copy the feature `id` for one of the `unit` features
} ```
-[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
+[dataset]: creator-indoor-maps.md#datasets
[datasets]: /rest/api/maps-creator/dataset [WFS API]: /rest/api/maps-creator/wfs [Web Feature Service (WFS)]: /rest/api/maps-creator/wfs
-[Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
-[Check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
[Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services [WFS Describe Collections API]: /rest/api/maps-creator/wfs/get-collection-definition
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0], which can then be used to create a [dataset].
-> [!NOTE]
-> This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps].
- ## Prerequisites - An [Azure Maps account]
To check the status of the dataset creation process and retrieve the `datasetId`
> `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2023-03-01-preview`
-See [Next steps] for links to articles to help you complete your indoor map.
- ## Add data to an existing dataset Data can be added to an existing dataset by providing the `datasetId` parameter to the [Dataset Create API] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
If your original dataset was created from a GeoJSON source and you wish to add a
https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId} ```
-| Identifier | Description |
-|--|-|
-| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a drawing package]. |
+| Identifier | Description |
+|--||
+| conversionId | The ID returned when converting your drawing package. |
| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. | ## Geojson zip package requirements
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
- Openings can't intersect with other openings on the same level. - Every `opening` must be associated with at least one `verticalPenetration` or `unit`.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a tileset]
- [Access to Creator services]: how-to-manage-creator.md#access-to-creator-services [area]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
-[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
[Create a dataset]: #create-a-dataset
-[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset
[Creator for indoor maps]: creator-indoor-maps.md [Creator resource]: how-to-manage-creator.md [Dataset Create API]: /rest/api/maps-creator/dataset/create?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[How to create data registry]: how-to-create-data-registries.md [level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level [line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement
-[Next steps]: #next-steps
[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening [point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement [RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Introduction to Creator services for indoor mapping:
Learn how to use the Creator services to render indoor maps in your application:
-> [!div class="nextstepaction"]
-> [Azure Maps Creator tutorial]
- > [!div class="nextstepaction"] > [Use the Indoor Maps module] [Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control [Microsoft Entra authentication]: azure-maps-authentication.md#microsoft-entra-authentication
-[Azure Maps Creator tutorial]: tutorial-creator-indoor-maps.md
[Azure Maps pricing]: https://aka.ms/CreatorPricing [Azure portal]: https://portal.azure.com [Data conversion]: creator-indoor-maps.md#convert-a-drawing-package
azure-maps How To Use Indoor Module Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md
The Azure Maps iOS SDK allows you to render indoor maps created in Azure Maps Cr
1. Complete the steps in the [Quickstart: Create an iOS app]. Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`. 1. A [Creator resource]
-1. Get a `tilesetId` by completing the [Tutorial: Use Creator to create indoor maps]. The tileset ID is used to render indoor maps with the Azure Maps iOS SDK.
+1. A `tilesetId`. The tileset ID is used to render indoor maps with the Azure Maps iOS SDK.
## Instantiate the indoor manager
The following screenshot shows the above code displaying an indoor map.
[Quickstart: Create an iOS app]: quick-ios-app.md [Creator resource]: how-to-manage-creator.md
-[Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
[Creator for indoor maps]: creator-indoor-maps.md [Drawing package requirements]: drawing-requirements.md
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
When you create an indoor map using Azure Maps Creator, default styles are appli
- [Subscription key] - A map configuration alias or ID. For more information, see [map configuration API].
-> [!TIP]
-> If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful.
- The map configuration `alias` (or `mapConfigurationId`) is required to render indoor maps with custom styles via the Azure Maps Indoor Maps module. ## Embed the Indoor Maps module
Learn more about how to add more data to your map:
[style-loader]: https://webpack.js.org/loaders/style-loader [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Tileset List API]: /rest/api/maps-creator/tileset/list
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor [Webpack]: https://webpack.js.org
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Learn more about migrating from Bing Maps to Azure Maps.
[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md [Contour layer code samples]: https://samples.azuremaps.com/?search=contour [Create a data source]: create-data-source-web-sdk.md
-[Creator]: tutorial-creator-indoor-maps.md
+[Creator]: creator-indoor-maps.md
[Display an infobox]: #display-an-infobox [Drawing tools module code samples]: https://samples.azuremaps.com#drawing-tools-module [free account]: https://azure.microsoft.com/free/
azure-maps Migrate Get Imagery Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-imagery-metadata.md
This article explains how to migrate the Bing Maps [Get Imagery Metadata] API to the Azure Maps [Get Map Tile] API.
-The Azure Maps Get Map Tile API provides map tiles in vector or raster formats to be used in the [Azure Maps Web SDK] or 3rd party map controls. Some example tiles that can be requested are Azure Maps road, satellite/aerial, weather radar or indoor map tiles (generated using [Azure Maps Creator]).
+The Azure Maps Get Map Tile API provides map tiles in vector or raster formats to be used in the [Azure Maps Web SDK] or 3rd party map controls. Some example tiles that can be requested are Azure Maps road, satellite/aerial, or weather radar.
## Prerequisites
The following table lists the fields that can appear in the HTTP response when r
| Bing Maps response field | Azure Maps response field | Description | |--||-| | imageHeight (Json)<BR>ImageWidth (XML)  | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string) and offers 256 x 256 and 512 x 512 pixel tile sizes.  |
-| imageUrl (Json)<BR>ImageUrl (XML)       | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string), as oppsed to an image URL. |
-| imageUrlSubdomains (Json)<BR>ImageUrlSubdomains (XML)  | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string), as oppsed to an image URL. |
+| imageUrl (Json)<BR>ImageUrl (XML)       | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string), as opposed to an image URL. |
+| imageUrlSubdomains (Json)<BR>ImageUrlSubdomains (XML)  | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string), as opposed to an image URL. |
| imageWidth (Json)<BR>ImageWidth (XML)   | Not supported  | Azure Maps Get Map Tile API provides the map tile image directly in the HTML response (binary image string) and offers 256 x 256 and 512 x 512 pixel tile sizes.  | | vintageEnd (Json)<BR>VintageEnd (XML)    | Not supported  | Azure Maps Get Map Tile API provides map tile vintage information in the response header (Data-Capture-Date-Range<SUP>**1**</SUP>), rather than in the response body. | | vintageStart (Json)<BR>VintageStart (XML)| Not supported  | Azure Maps Get Map Tile API provides map tile vintage information in the response header (Data-Capture-Date-Range<SUP>**1**</SUP>), rather than in the response body. |
For more Azure Maps Render APIs see:
[Authentication with Azure Maps]: azure-maps-authentication.md [Azure Account]: https://azure.microsoft.com/ [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure Maps Creator]: about-creator.md
[Azure Maps Product Terms]: https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure [Azure Maps service geographic scope]: geographic-scope.md [Azure Maps Supported Languages]: supported-languages.md
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
- Title: 'Tutorial: Use Microsoft Azure Maps Creator to create indoor maps'-
-description: Learn how to use Microsoft Azure Maps Creator to create indoor maps.
-- Previously updated : 01/28/2022-----
-# Tutorial: Use Azure Maps Creator to create indoor maps
-
-This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. This tutorial demonstrates how to:
-
-> [!div class="checklist"]
->
-> * Upload your drawing package for indoor maps.
-> * Convert your drawing package into map data.
-> * Create a dataset from your map data.
-> * Create a tileset from the data in your dataset.
-> * Get the default map configuration ID from your tileset.
-
-You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJSON package (preview)].
-
-## Prerequisites
-
-* An [Azure Maps account]
-* A [subscription key]
-* A [Creator resource]
-* An [Azure storage account]
-* The [sample drawing package] downloaded
-
-This tutorial uses the [Postman] application, but you can use a different API development environment.
-
->[!IMPORTANT]
->
-> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services].
-> * In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-
-## Upload a drawing package
-
-Follow the steps outlined in the [How to create data registry] article to upload the GeoJSON package into your Azure storage account then register it in your Azure Maps account.
-
-> [!IMPORTANT]
-> Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is how you reference the GeoJSON package you uploaded into your Azure storage account from your source code and HTTP requests.
-
-## Convert a drawing package
-
-Now that the drawing package is uploaded, you use the `udid` value for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article.
-
-To convert a drawing package:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **POST Convert Drawing Package**.
-
-4. Select the **POST** HTTP method.
-
-5. Enter the following URL to the [Conversion service]. Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. Replace `udid` with the `udid` value of the uploaded package.
-
- ```http
- https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2023-03-01-preview&udid={udid}&inputType=DWG&dwgPackageVersion=2.0
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Operation-Location** key. It contains the status URL that you use to check the status of the conversion.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="Screenshot of Postman that shows the URL value of the operation location key in the response header.":::
-
-### Check the status of the drawing package conversion
-
-After the conversion operation finishes, it returns a `conversionId` value. You can access the `conversionId` value by checking the status of the drawing package's conversion process. You can then use the `conversionId` value to access the converted data.
-
-To check the status of the conversion process and retrieve the `conversionId` value:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **GET Conversion Status**.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the status URL that you copied in the [Convert a drawing package](#convert-a-drawing-package) section. The request should look like the following URL:
-
- ```http
- https://us.atlas.microsoft.com/conversions/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Resource-Location** key, which is the resource location URL. The resource location URL contains the unique identifier `conversionId`, which other APIs use to access the converted map data.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="Screenshot of Postman that highlights the conversion ID value that appears in the Resource-Location key in the response header.":::
-
-The sample drawing package should be converted without errors or warnings. But if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing Error Visualizer]. You can use the Drawing Error Visualizer to inspect the details of errors and warnings. To get recommendations for resolving conversion errors and warnings, see [Drawing conversion errors and warnings].
-
-The following JSON fragment displays a sample conversion warning:
-
-```json
-{
- "operationId": "{operationId}",
- "created": "2021-05-19T18:24:28.7922905+00:00",
- "status": "Succeeded",
- "warning": {
- "code": "dwgConversionProblem",
- "details": [
- {
- "code": "warning",
- "details": [
- {
- "code": "manifestWarning",
- "message": "Ignoring unexpected JSON property: unitProperties[0].nonWheelchairAccessible with value False"
- }
- ]
- }
- ]
- },
- "properties": {
- "diagnosticPackageLocation": "https://atlas.microsoft.com/mapData/ce61c3c1-faa8-75b7-349f-d863f6523748?api-version=1.0"
- }
-}
-```
-
-## Create a dataset
-
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API]. The Dataset Create API takes the `conversionId` value for the converted drawing package and returns a `datasetId` value for the created dataset.
-
-To create a dataset:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **POST Dataset Create**.
-
-4. Select the **POST** HTTP method.
-
-5. Enter the following URL to the [Dataset service]. Replace `{conversionId}` with the `conversionId` value that you obtained in [Check the status of the drawing package conversion](#check-the-status-of-the-drawing-package-conversion).
-
- ```http
- https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Operation-Location** key. It contains the status URL that you use to check the status of the dataset.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="Screenshot of Postman that shows the value of the Operation-Location key for a dataset in the response header.":::
-
-### Check the dataset creation status
-
-To check the status of the dataset creation process and retrieve the `datasetId` value:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **GET Dataset Status**.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the status URL that you copied in the [Create a dataset](#create-a-dataset) section. The request should look like the following URL:
-
- ```http
- https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the resource location URL. The resource location URL contains the unique identifier (`datasetId`) of the dataset.
-
-8. Save the `datasetId` value, because you'll use it in the next tutorial.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/dataset-id.png" alt-text="Screenshot of Postman that shows the dataset ID value of the Resource-Location key in the response header.":::
-
-## Create a tileset
-
-A tileset is a set of vector tiles that render on the map. Tilesets are created from existing datasets. However, a tileset is independent from the dataset that it comes from. If the dataset is deleted, the tileset continues to exist.
-
-To create a tileset:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **POST Tileset Create**.
-
-4. Select the **POST** HTTP method.
-
-5. Enter the following URL to the [Tileset service]. Replace `{datasetId}` with the `datasetId` value that you obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section.
-
- ```http
- https://us.atlas.microsoft.com/tilesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Operation-Location** key. It contains the status URL, which you use to check the status of the tileset.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="Screenshot of Postman that shows the status URL, which is the value of the Operation-Location key in the response header.":::
-
-### Check the status of tileset creation
-
-To check the status of the tileset creation process and retrieve the `tilesetId` value:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **GET Tileset Status**.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the status URL that you copied in the [Create a tileset](#create-a-tileset) section. The request should look like the following URL:
-
- ```http
- https://us.atlas.microsoft.com/tilesets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the resource location URL. The resource location URL contains the unique identifier (`tilesetId`) of the dataset.
-
- :::image type="content" source="./media/tutorial-creator-indoor-maps/tileset-id.png" alt-text="Screenshot of Postman that shows the tileset ID, which is part of the value of the resource location URL in the response header.":::
-
-## Get the map configuration (preview)
-
-After you create a tileset, you can get the `mapConfigurationId` value by using the [tileset get] HTTP request:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. For **Request name**, enter a name for the request, such as **GET mapConfigurationId from Tileset**.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the following URL to the [Tileset service]. Pass in the tileset ID that you obtained in the previous step.
-
- ```http
- https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. The tileset JSON appears in the body of the response. Scroll down to see the `mapConfigurationId` value:
-
- ```json
- "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396"
- ```
-
-For more information, see [Map configuration] in the article about indoor map concepts.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps]
-
-> [!div class="nextstepaction"]
-> [Use the Azure Maps Indoor Maps module with custom styles](how-to-use-indoor-module.md)
-
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Creator resource]: how-to-manage-creator.md
-[Sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip
-[Postman]: https://www.postman.com
-[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
-[Create a dataset using a GeoJSON package (Preview)]: how-to-dataset-geojson.md
-[How to create data registry]: how-to-create-data-registries.md
-[Conversion API]: /rest/api/maps-creator/conversion
-[Conversion service]: /rest/api/maps-creator/conversion/convert
-[Creator Long-Running Operation]: creator-long-running-operation-v2.md
-[Drawing error visualizer]: drawing-error-visualizer.md
-[Drawing conversion errors and warnings]: drawing-conversion-error-codes.md
-[Dataset Create API]: /rest/api/maps-creator/dataset/create
-[Dataset service]: /rest/api/maps-creator/dataset
-[Tileset service]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
-[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
-[Map configuration]: creator-indoor-maps.md#map-configuration
-[What is Azure Maps Creator?]: about-creator.md
-[Creator for indoor maps]: creator-indoor-maps.md
azure-monitor Azure Monitor Agent Custom Text Log Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-custom-text-log-migration.md
Last updated 05/09/2023
# Migrate from MMA custom text log to AMA DCR based custom text logs This article describes the steps to migrate a [MMA Custom text log](data-sources-custom-logs.md) table so you can use it as a destination for a new [AMA custom text logs](data-collection-log-text.md) DCR. When you follow the steps, you won't lose any data. If you're creating a new AMA custom text log table, then this article doesn't pertain to you.
-
+
+> Note: Once logs are migrated, MMA will no longer be able to write to the destination table. This is a known issue for the migration of production systems that we're actively working on.
+>
+ ## Background MMA custom text logs must be configured to support new features in order for AMA custom text log DCRs to write to it. The following actions are taken: - The table is reconfigured to enable all DCR-based custom logs features.
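One way this reconfiguration can be performed is by calling the *migrate* action on the existing custom log table through the Log Analytics Tables API. The following is a hedged sketch rather than the definitive procedure; the API version and path are assumptions, and you substitute your own subscription, resource group, workspace, and table names:

```http
POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}_CL/migrate?api-version=2021-12-01-preview
```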
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
metric_relabel_configs:
regex: '.+' ```
+> [!NOTE]
+>
+> If you wish to add labels to all the jobs in your custom configuration, explicitly add labels using `relabel_configs` for each job, as shown in the following example. Global external labels aren't supported via ConfigMap-based Prometheus configuration.
+> ```yaml
+> relabel_configs:
+> - source_labels: [__address__]
+> target_label: example_label
+> replacement: 'example_value'
+> ```
+>
+
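For context, the following is a minimal sketch of where such a relabel block sits within a job in the custom scrape configuration; the job name and target are placeholders:

```yaml
scrape_configs:
  - job_name: example-job
    scrape_interval: 30s
    static_configs:
      - targets: ['my-service.my-namespace.svc.cluster.local:8080']
    relabel_configs:
      - source_labels: [__address__]
        target_label: example_label
        replacement: 'example_value'
```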
azure-monitor Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md
Code Optimizations, an AI-based service in Azure Application Insights, works in
Make informed decisions and optimize your code using real-time performance data and insights gathered from your production environment.
+[You can review your Code Optimizations in the Azure portal.](https://aka.ms/codeoptimizations)
+ ## Demo video > [!VIDEO https://www.youtube-nocookie.com/embed/eu1P_vLTZO0]
Application Insights Profiler and Code Optimizations work together to provide a
[The Profiler](../profiler/profiler-overview.md) focuses on tracing specific requests, down to the millisecond. It provides an excellent "big picture" view of issues within your application and general best practices to address them. ### Code Optimizations
-Code Optimizations analyzes the profiling data collected by the Application Insights Profiler. As the Profiler uploads data to Application Insights, our machine learning model analyzes some of the data to find where the application's code can be optimized. Code Optimizations:
+[Code Optimizations](https://aka.ms/codeoptimizations) analyzes the profiling data collected by the Application Insights Profiler. As the Profiler uploads data to Application Insights, our machine learning model analyzes some of the data to find where the application's code can be optimized. Code Optimizations:
- Displays aggregated data gathered over time. - Connects data with the methods and functions in your application code. - Narrows down the culprit by finding bottlenecks within the code.
-## Cost
+## Cost and overhead
-While Code Optimizations incurs no extra costs.
+Code Optimizations recommendations are generated automatically after [Application Insights Profiler is enabled](../profiler/profiler-overview.md#sampling-rate-and-overhead). The service incurs no extra cost as it analyzes performance issues and generates performance recommendations. Some features (such as code-level fix suggestions) require [GitHub Copilot](https://docs.github.com/copilot/about-github-copilot/what-is-github-copilot) and/or [Copilot for Azure](../../copilot/overview.md).
## Supported regions
azure-monitor Set Up Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/set-up-code-optimizations.md
Setting up Code Optimizations to identify and analyze CPU and memory bottlenecks
- Connect your web app to Application Insights. - Enable the Profiler on your web app.
+[You can review your Code Optimizations in the Azure portal.](https://aka.ms/codeoptimizations)
+ ## Demo video > [!VIDEO https://www.youtube-nocookie.com/embed/vbi9YQgIgC8]
azure-monitor View Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/view-code-optimizations.md
# View Code Optimizations results (Preview)
-Now that you set up and configured Code Optimizations on your app, access and view any insights you received via the Azure portal. You can access Code Optimizations through the **Performance** blade from the left navigation pane and select **Code Optimizations (preview)** from the top menu.
+Now that you've set up and configured Code Optimizations on your app, [access and view any insights you received directly via the Azure portal](https://aka.ms/codeoptimizations).
+
+You can also access Code Optimizations through any of your Application Insights resources: open the **Performance** pane and select the **Code Optimizations (preview)** button from the top menu.
:::image type="content" source="./media/code-optimizations/code-optimizations-performance-blade.png" alt-text="Screenshot of Code Optimizations located in the Performance blade.":::
You can also view a graph depicting a specific performance issue's impact and th
## Next steps > [!div class="nextstepaction"]
-> [Troubleshoot Code Optimizations](./code-optimizations-troubleshoot.md)
+> [Review Code Optimizations in Azure portal](https://aka.ms/codeoptimizations)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
The data size used for the daily cap is the size after customer-defined data tra
Data collection resumes at the reset time which is a different hour of the day for each workspace. This reset hour can't be configured. You can optionally create an alert rule to send an alert when this event is created. > [!NOTE]
-> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected, particularly if the workspace is receiving high volumes of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
-
+> The daily cap can't stop data collection at precisely the specified cap level and some excess data is expected. The data collection beyond the daily cap can be particularly large if the workspace is receiving high rates of data. If data is collected above the cap, it's still billed. See [View the effect of the Daily Cap](#view-the-effect-of-the-daily-cap) for a query that is helpful in studying the daily cap behavior.
## When to use a daily cap Daily caps are typically used by organizations that are particularly cost conscious. They shouldn't be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
description: Overview of Microsoft services and functionalities that contribute
Previously updated : 02/05/2024 Last updated : 07/15/2024 # Azure Monitor overview
azure-monitor Monitor Linux Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/monitor-linux-machines.md
+
+ms.assetid:
+ Title: Monitor Linux machines
+description: This article describes how to monitor Linux machines.
+++ Last updated : 06/10/2024+++++
+# Monitor Linux machines
+
+Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
+
+>[!NOTE]
+>- Linux monitoring is supported only via Managed Gateways.
+>- Azure and Arc-enabled Linux machines aren't supported.
+
+With SCOM Managed Instance, you can monitor Linux workloads that are on-premises and behind a gateway server. At this stage, we don't support monitoring Linux VMs hosted in Azure. For more information, see [How to monitor on-premises Linux VMs](/system-center/scom/manage-deploy-crossplat-agent-console).
azure-monitor Monitor Off Azure Vm With Scom Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/monitor-off-azure-vm-with-scom-managed-instance.md
ms.assetid: Title: Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance
+ Title: Monitor Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance
description: This article describes how to monitor Azure and Off-Azure virtual machines with SCOM Managed Instance. Previously updated : 05/22/2024 Last updated : 07/17/2024 -
-# Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance
+# Monitor Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance
Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Opera
In Azure Monitor SCOM Managed Instance, an agent is a service that is installed on a computer that looks for configuration data and proactively collects information for analysis and reporting, measures the health state of monitored objects like an SQL database or logical disk, and executes tasks on demand by an operator or in response to a condition. It allows SCOM Managed Instance to monitor Windows operating systems and the components installed on them, such as a website or an Active Directory domain controller.
+## Support for Azure and Off-Azure workloads
+
+One of the most important monitoring scenarios is monitoring on-premises (off-Azure) workloads, which unlocks SCOM Managed Instance as a true **hybrid monitoring solution**.
+
+The following are the supported monitoring scenarios:
+
+|Type of endpoint|Trust|Experience|
+||||
+|Line of sight on-premises agent|Trusted|OpsConsole|
+|Line of sight on-premises agent|Untrusted|Managed Gateway and OpsConsole|
+|No Line of sight on-premises agent|Trusted/Untrusted|Managed Gateway and OpsConsole|
+
+SCOM Managed Instance users will be able to:
+
+- Set up and manage Gateways seamlessly from SCOM Managed Instance portal on Arc-enabled servers.
+- Set high availability at Gateway plane for agent failover as described in [Designing for High Availability and Disaster Recovery](/system-center/scom/plan-hadr-design).
+ ## Supported scenarios The following are the supported monitoring scenarios: -- Azure and Arc-enabled VMs-- On-premises agents that have Line of sight connectivity to Azure-- On-premises agents with no Line of sight connectivity (must use managed Gateway) to Azure
+- On-premises virtual machines with no Line of sight connectivity (must use managed Gateway) to Azure
+- On-premises virtual machines that have Line of sight connectivity to Azure
## Prerequisites Following are the prerequisites required on desired monitoring endpoints:
-1. Ensure to Allowlist the following Azure URL on the desired monitoring endpoints:
- `*.workloadnexus.azure.com`
-2. Confirm the Line of sight between SCOM Managed Instance and desired monitoring endpoints by running the following command. Obtain LB DNS (Load balancer DNS) information by navigating to SCOM Managed Instance > **Overview** > **Properties** > **Load balancer** > **DNS Name**.
+1. Confirm the Line of sight between SCOM Managed Instance and desired monitoring endpoints by running the following command. Obtain LB DNS (Load balancer DNS) information by navigating to SCOM Managed Instance > **Overview** > **Properties** > **Load balancer** > **DNS Name**.
``` Test-NetConnection -ComputerName <Load balancer DNS> -Port 5723 ```
-3. Ensure to install [.NET Framework 4.7.2](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2) or higher on desired monitoring endpoints.
-4. Ensure TLS 1.2 or higher is enabled.
+2. Install [.NET Framework 4.7.2](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2) or higher on the desired monitoring endpoints.
+3. Ensure TLS 1.2 or higher is enabled.
+
+To troubleshoot connectivity problems, see [Troubleshoot issues with Azure Monitor SCOM Managed Instance](troubleshoot-scom-managed-instance.md).
+
+## Install SCOM Managed Instance Gateway
+
+Managed Gateway can be installed on Arc-enabled servers, enabling it to relay monitoring data from air-gapped and network-isolated servers to SCOM Managed Instance.
+
+To install SCOM Managed Instance gateway, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
+2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
+3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
+4. On the desired SCOM managed instance **Overview** page, under **Manage**, select **Managed Gateway**.
+5. On the **Managed Gateways** page, select **New Managed Gateway**.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/new-managed-gateway-inline.png" alt-text="Screenshot that shows new managed gateway." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/new-managed-gateway-expanded.png":::
+
+ The **Add a Managed Gateway** page opens, listing all the Azure Arc-enabled virtual machines.
+
+ >[!NOTE]
+ >SCOM Managed Instance Managed Gateway can be configured on Arc-enabled machines only.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/add-managed-gateway-inline.png" alt-text="Screenshot that shows add a managed gateway option." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/add-managed-gateway-expanded.png":::
+
+6. Select the desired virtual machine and then select **Add**.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/add-inline.png" alt-text="Screenshot that shows Add managed gateway." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/add-expanded.png":::
+
+7. On the **Add Monitored Resources** window, review the selections and select **Add**.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/install-gateway-inline.png" alt-text="Screenshot that shows Install managed gateway page." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/install-gateway-expanded.png":::
+
+### Delete a Gateway
+
+To delete a Gateway, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
+2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
+3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
+4. On the desired SCOM managed instance **Overview** page, under **Manage**, select **Managed Gateways**.
+5. On the **Managed Gateways** page, select Ellipsis button **(…)**, which is next to your desired gateway, and select **Delete**.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/delete-gateway-inline.png" alt-text="Screenshot that shows delete gateway option." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/delete-gateway-expanded.png":::
+
+6. On the **Delete SCOM MI Gateway** page, check **Are you sure that you want to delete Managed Gateway?** and then select **Delete**.
+
+ :::image type="content" source="media/monitor-off-azure-vm-with-scom-managed-instance/delete-managed-gateway-inline.png" alt-text="Screenshot that shows delete managed gateway option." lightbox="media/monitor-off-azure-vm-with-scom-managed-instance/delete-managed-gateway-expanded.png":::
+
+## Managed Gateway configuration
+
+### Configure monitoring of servers via SCOM Managed Instance Gateway
+
+To configure monitoring of air-gapped and network isolated servers through Managed Gateway, follow the steps mentioned in [Install an agent on a computer running Windows by using the Discovery Wizard](/system-center/scom/manage-deploy-windows-agent-console#install-an-agent-on-a-computer-running-windows-by-using-the-discovery-wizard) section. Download and install agent from [here](https://go.microsoft.com/fwlink/?linkid=2251996).
-To Troubleshooting connectivity problems, see [Troubleshoot issues with Azure Monitor SCOM Managed Instance](/system-center/scom/troubleshoot-scom-managed-instance?view=sc-om-2022&preserve-view=true).
+>[!NOTE]
+>Operations Manager Console is required for this action. For more information, see [Connect the Azure Monitor SCOM Managed Instance to Ops console](connect-managed-instance-ops-console.md).
## Install agent for Windows virtual machine
Follow these steps to deploy the SCOM Managed Instance agent with the Agent Setu
6. On the **Destination Folder** page, leave the installation folder set to the default, or select **Change** and type a path, and select **Next**.
-7. On the **Agent Setup Options** page, you can choose whether you want to **connect the agent to Operations Manager**.
+7. On the **Agent Setup Options** page, you can choose whether you want to **connect the agent to Operations Manager**.
8. On the **Management Group Configuration** page, do the following:
Follow these steps to deploy the SCOM Managed Instance agent with the Agent Setu
11. When the **Completing the Microsoft Monitoring Agent Setup Wizard** page appears, select **Finish**.
-## Install Managed Gateway
+## Configure monitoring of on-premises servers
-To install Managed Gateway, [download the Gateway software](https://go.microsoft.com/fwlink/?linkid=2251997) and follow [these steps](/system-center/scom/deploy-install-gateway-server?view=sc-om-2022&tabs=InstallGatewayServer&preserve-view=true).
-
-## Monitor Linux machine
+To configure monitoring of on-premises servers that have direct connectivity (VPN/ER) with Azure, follow the steps mentioned in [Install an agent on a computer running Windows by using the Discovery Wizard](/system-center/scom/manage-deploy-windows-agent-console#install-an-agent-on-a-computer-running-windows-by-using-the-discovery-wizard) section.
-With SCOM Managed Instance, you can monitor Linux workloads that are on-premises and behind a gateway server. At this stage, we don't support monitoring Linux VMs hosted in Azure. For more information, see [How to monitor on-premises Linux VMs](/system-center/scom/manage-deploy-crossplat-agent-console).
+>[!NOTE]
+>Operations Manager Console is required for this action. For more information, see [Connect the Azure Monitor SCOM Managed Instance to Ops console](connect-managed-instance-ops-console.md).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
To check the setting, open your *web.config* file and find the system.web sectio
> Modifying the `httpRuntime targetFramework` value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Re-targeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes). > [!NOTE] > If the `targetFramework` is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you're using your own virtual machine, you may need to enable TLS 1.2 in the OS.+
+## Snapshot Debugger overhead scenarios
+
+The Snapshot Debugger is designed for use in production environments. The default settings include rate limits to minimize the impact on your applications.
+
+However, you may experience small CPU, memory, and I/O overhead associated with the Snapshot Debugger, like in the following scenarios.
+
+**When an exception is thrown in your application:**
+
+- Creating a signature for the problem type and deciding whether to create a snapshot adds a very small CPU and memory overhead.
+- If de-optimization is enabled, there is an overhead for re-JITting the method that threw the exception. This will be incurred the next time that method executes. Depending on the size of the method, this could be between 1ms and 100ms of CPU time.
+
+**If the exception handler decides to create a snapshot:**
+
+- Creating the process snapshot takes about half a second (P50=0.3s, P90=1.2s, P95=1.9s), during which time the thread that threw the exception is paused. Other threads are not blocked.
+
+- Converting the process snapshot to a minidump and uploading it to Application Insights takes several minutes.
+ - Convert: P50=63s, P90=187s, P95=275s.
+ - Upload: P50=31s, P90=75s, P95=98s.
+
+ This is done in Snapshot Uploader, which runs in a separate process. The Snapshot Uploader process runs at below normal CPU priority and uses low priority I/O.
+
+ The minidump is first written to disk, and the amount of disk space used is roughly the same as the working set of the original process. Writing the minidump can induce page faults as memory is read.
+
+ The minidump is compressed during upload, which consumes both CPU and memory in the Snapshot Uploader process. The CPU, memory, and disk overhead for this is proportional to the size of the process snapshot. Snapshot Uploader processes snapshots serially.
+
+**When `TrackException` is called:**
+
+The Snapshot Debugger checks if the exception is new or if a snapshot has been created for it. This adds a very small CPU overhead.
+ ## Preview Versions of .NET Core If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
While the Snapshot Debugger process continues to run and serve traffic to users
If you enabled the Snapshot Debugger but you aren't seeing snapshots, see the [Troubleshooting guide](snapshot-debugger-troubleshoot.md).
+## Overhead
+
+The Snapshot Debugger is designed for use in production environments. The default settings include rate limits to minimize the impact on your applications.
+
+However, you may experience small CPU, memory, and I/O overhead associated with the Snapshot Debugger, such as:
+- When an exception is thrown in your application
+- If the exception handler decides to create a snapshot
+- When `TrackException` is called
+
+There is **no additional cost** for storing data captured by Snapshot Debugger.
+
+[See example scenarios in which you may experience Snapshot Debugger overhead.](./snapshot-debugger-troubleshoot.md#snapshot-debugger-overhead-scenarios)
+ ## Limitations This section discusses limitations for the Snapshot Debugger.
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
Azure NetApp Files provides metrics on allocated storage, actual storage usage, volume IOPS, and latency. By analyzing these metrics, you can gain a better understanding on the usage pattern and volume performance of your NetApp accounts.
+## Definitions
+
+Understanding the terminology related to performance and capacity in Azure NetApp Files is essential to understanding the metrics available:
+
+- **Capacity pool**: A capacity pool is how capacity is billed in Azure NetApp Files. Capacity pools contain volumes.
+- **Volume quota**: The amount of capacity provisioned to an Azure NetApp Files volume. Volume quota is directly tied to automatic Quality of Service (QoS), which impacts the volume performance. For more information, see [QoS types for capacity pools](azure-netapp-files-understand-storage-hierarchy.md#qos_types).
+- **Throughput**: The amount of data transmitted across the wire (read/write/other) between Azure NetApp Files and the client. Throughput in Azure NetApp Files is measured in bytes per second.
+- **Latency**: Latency is the time a storage operation takes to complete within the storage service, measured from when the operation arrives until the response is ready to be sent back to the client. Latency in Azure NetApp Files is measured in milliseconds (ms).
+
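As an illustration of how these terms relate: a workload driving 1,000 read operations per second (IOPS) at an average I/O size of 64 KiB produces roughly 62.5 MiB per second of read throughput (1,000 × 64 KiB).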
+## About storage performance operation metrics
+
+An operation in Azure NetApp Files is defined as _something_ that happens during a client/server conversation. For instance, when a client requests a file to be read from Azure NetApp Files, read and other operations are sent and received between the client and server.
+
+When monitoring the Azure NetApp Files volume, read and write operations are self-explanatory. The metrics also include **Other IOPS**, which counts any operation that isn't a read or write, such as the metadata operations that accompany most reads and writes.
+
+The following types of metadata operations are included in the **Other IOPS** metric:
+
+**NFSv3**
+
+NFSv3 metadata calls included in **Other IOPS** as covered in [RFC-1813](https://www.rfc-editor.org/rfc/rfc1813):
+
+- Procedure 0: NULL - Do nothing
+- Procedure 1: GETATTR - Get file attributes
+- Procedure 2: SETATTR - Set file attributes
+- Procedure 3: LOOKUP - Lookup filename
+- Procedure 4: ACCESS - Check Access Permission
+- Procedure 5: READLINK - Read from symbolic link
+- Procedure 8: CREATE - Create a file
+- Procedure 9: MKDIR - Create a directory
+- Procedure 10: SYMLINK - Create a symbolic link
+- Procedure 11: MKNOD - Create a special device
+- Procedure 12: REMOVE - Remove a File
+- Procedure 13: RMDIR - Remove a Directory
+- Procedure 14: RENAME - Rename a File or Directory
+- Procedure 15: LINK - Create Link to an object
+- Procedure 16: READDIR - Read From Directory
+- Procedure 17: READDIRPLUS - Extended read from directory
+- Procedure 18: FSSTAT - Get dynamic file system information
+- Procedure 19: FSINFO - Get static file system Information
+- Procedure 20: PATHCONF - Retrieve POSIX information
+- Procedure 21: COMMIT - Commit cached data on a server to stable storage
+
+**NFSv4.1**
+
+NFSv4.1 metadata calls included in **Other IOPS** as covered in [RFC-7530](https://www.rfc-editor.org/rfc/rfc7530):
+
+- Procedure 0: NULL - Do nothing
+- Procedure 1: COMPOUND - Combining multiple NFS operations into a single request
+- Operation 3: ACCESS - Check access rights
+- Operation 4: CLOSE - Close file
+- Operation 5: COMMIT - Commit cached data
+- Operation 6: CREATE - Create a nonregular file object
+- Operation 7: DELEGPURGE - Purge delegations awaiting recovery
+- Operation 8: DELEGRETURN - Return delegation
+- Operation 9: GETATTR - Get attributes
+- Operation 10: GETFH - Get current filehandle
+- Operation 11: LINK - Create link to a file
+- Operation 12: LOCK - Create lock
+- Operation 13: LOCKT - Test for Lock
+- Operation 14: LOCKU - Unlock file
+- Operation 15: LOOKUP - Look Up filename
+- Operation 16: LOOKUPP - Look Up parent directory
+- Operation 17: NVERIFY - Verify difference in attributes
+- Operation 18: OPEN - Open a regular file
+- Operation 19: OPENATTR - Open named attribute directory
+- Operation 20: OPEN_CONFIRM - Confirm open
+- Operation 21: OPEN_DOWNGRADE - Reduce open file access
+- Operation 22: PUTFH - Set current filehandle
+- Operation 23: PUTPUBFH - Set public filehandle
+- Operation 24: PUTROOTFH - Set root filehandle
+- Operation 26: READDIR - Read directory
+- Operation 27: READLINK - Read symbolic link
+- Operation 28: REMOVE - Remove file system object
+- Operation 29: RENAME - Rename directory entry
+- Operation 30: RENEW - Renew a lease
+- Operation 32: SAVEFH - Save current filehandle
+- Operation 33: SECINFO - Obtain available security
+- Operation 34: SETATTR - Set attributes
+- Operation 35: SETCLIENTID - Negotiate client ID
+- Operation 36: SETCLIENTID_CONFIRM - Confirm client ID
+- Operation 37: VERIFY - Verify same attributes
+- Operation 39: RELEASE_LOCKOWNER - Release lock-owner state
+
+**SMB (includes SMB2 and SMB3.x)**
+
+SMB commands included in **Other IOPS** with opcode value:
+
+| SMB command | Opcode value |
+| - | - |
+| SMB2 NEGOTIATE | 0x0000 |
+| SMB2 SESSION_SETUP | 0x0001 |
+| SMB2 LOGOFF | 0x0002 |
+| SMB2 TREE_CONNECT | 0x0003 |
+| SMB2 TREE_DISCONNECT | 0x0004 |
+| SMB2 CREATE | 0x0005 |
+| SMB2 CLOSE | 0x0006 |
+| SMB2 FLUSH | 0x0007 |
+| SMB2 LOCK | 0x000A |
+| SMB2 IOCTL | 0x000B |
+| SMB2 CANCEL | 0x000C |
+| SMB2 ECHO | 0x000D |
+| SMB2 QUERY_DIRECTORY | 0x000E |
+| SMB2 CHANGE_NOTIFY | 0x000F |
+| SMB2 QUERY_INFO | 0x0010 |
+| SMB2 SET_INFO | 0x0011 |
+| SMB2 OPLOCK_BREAK | 0x0012 |
+ ## Ways to access metrics Azure NetApp Files metrics are natively integrated into Azure monitor. From within the Azure portal, you can find metrics for Azure NetApp Files capacity pools and volumes from two locations:
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
> Volume latency for *Average Read Latency* and *Average Write Latency* is measured within the storage service and does not include network latency. - *Average Read Latency*
- The average time for reads from the volume in milliseconds.
+ The average roundtrip time (RTT) for reads from the volume in milliseconds.
- *Average Write Latency*
- The average time for writes from the volume in milliseconds.
+ The average roundtrip time (RTT) for writes from the volume in milliseconds.
- *Read IOPS*
- The number of reads to the volume per second.
+ The number of read operations to the volume per second.
- *Write IOPS*
- The number of writes to the volume per second.
+ The number of write operations to the volume per second.
+- *Other IOPS*
+ The number of [other operations](#about-storage-performance-operation-metrics) to the volume per second.
+- *Total IOPS*
+ A sum of the write, read, and other operations to the volume per second.
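  For example, a volume serving 2,000 read IOPS, 1,000 write IOPS, and 500 other IOPS reports 3,500 total IOPS.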
## <a name="replication"></a>Volume replication metrics
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
The condition of the replication relationship. A healthy state is denoted by `1`. An unhealthy state is denoted by `0`. - *Is volume replication transferring*
- Whether the status of the volume replication is ΓÇÿtransferringΓÇÖ.
+ Whether the status of the volume replication is transferring.
- *Volume replication lag time* <br> Lag time is the actual amount of time the replication lags behind the source. It indicates the age of the replicated data in the destination volume relative to the source volume.
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
* *Other throughput* Other throughput (that isn't read or write) in bytes per second.
+* *Total throughput*
+ Sum of all throughput (read, write, and other) in bytes per second.
+ ## Volume backup metrics * *Is Volume Backup Enabled*
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## July 2024
-* [Azure NetApp Files backup](backup-introduction.md) is now available in Azure [US Gov regions](backup-introduction.md#supported-regions).
+* [Azure NetApp Files backup](backup-introduction.md) is now available in Azure [US Gov regions](backup-introduction.md#supported-regions).
++
+* [Metrics enhancement:](azure-netapp-files-metrics.md) New performance metrics for volumes
+
+ New counters have been added to Azure NetApp Files performance metrics to increase visibility into your volumes' workloads:
+
+ - Other IOPS: any operations other than read or write.
+ - Total IOPS: a summation of all IOPS (read, write, and other).
+ - Other throughput: throughput generated by any operations other than read or write.
+ - Total throughput: a summation of all throughput (read, write, and other).
## June 2024
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
The following [private-link resources](../../../private-link/private-endpoint-ov
All other private-link resources don't support move.
+> [!NOTE]
+> A private endpoint should be in succeeded state prior to attempting to move the resource.
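As a hedged sketch, you can verify the provisioning state with the Azure CLI before starting the move; the resource names below are placeholders:

```azurecli
az network private-endpoint show \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --query provisioningState
```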
++ ## Next steps For commands to move resources, see [Move resources to new resource group or subscription](../move-resource-group-and-subscription.md).
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
Title: Architecture of BareMetal Infrastructure for NC2
+ Title: Architecture of BareMetal Infrastructure for NC2 on Azure
-description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2.
+description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2 on Azure.
Previously updated : 05/21/2024 Last updated : 7/17/2024
-# Architecture of BareMetal Infrastructure for Nutanix
+# Nutanix Cloud Clusters (NC2) on Azure architectural concepts
-In this article, we look at the architectural options for BareMetal Infrastructure for Nutanix and the features each option supports.
+NC2 provides Nutanix-based private clouds in Azure. The private cloud hardware and software deployments are fully integrated and automated in Azure. Deploy and manage the private cloud through the Azure portal, CLI, or PowerShell.
+
+A private cloud includes clusters with:
+
+- Dedicated bare-metal server hosts provisioned with Nutanix AHV hypervisor
+- Nutanix Prism Central for managing Nutanix Prism Element, Nutanix AHV and Nutanix AOS.
+- Nutanix Flow software-defined networking for Nutanix AHV workload VMs
+- Nutanix AOS software-defined storage for Nutanix AHV workload VMs
+- Nutanix Move for workload mobility
+- Resources in the Azure underlay (required for connectivity and to operate the private cloud)
+
+Private clouds are installed and managed within an Azure subscription. The number of private clouds within a subscription is scalable.
+
+The following diagram describes the architectural components of NC2 on Azure.
++
+Each NC2 on Azure architectural component has the following function:
+
+- Azure Subscription: Used to provide controlled access, budget, and quota management for the NC2 on Azure service.
+- Azure Region: Physical locations around the world where we group data centers into Availability Zones (AZs) and then group AZs into regions.
+- Azure Resource Group: Container used to place Azure services and resources into logical groups.
+- NC2 on Azure: Uses Nutanix software, including Prism Central, Prism Element, Nutanix Flow software-defined networking, Nutanix Acropolis Operating System (AOS) software-defined storage, and Azure bare-metal Acropolis Hypervisor (AHV) hosts to provide compute, networking, and storage resources.
+- Nutanix Move: Provides migration services.
+- Nutanix Disaster Recovery: Provides disaster recovery automation and storage replication services.
+- Nutanix Files: Provides filer services.
+- Nutanix Self Service: Provides application lifecycle management and cloud orchestration.
+- Nutanix Cost Governance: Provides multi-cloud optimization to reduce cost & enhance cloud security.
+- Azure Virtual Network (VNet): Private network used to connect AHV hosts, Azure services and resources together.
+- Azure Route Server: Enables network appliances to exchange dynamic route information with Azure networks.
+- Azure Virtual Network Gateway: Cross premises gateway for connecting Azure services and resources to other private networks using IPSec VPN, ExpressRoute, and VNet to VNet.
+- Azure ExpressRoute: Provides high-speed private connections between Azure data centers and on-premises or colocation infrastructure.
+- Azure Virtual WAN (vWAN): Aggregates networking, security, and routing functions together into a single unified Wide Area Network (WAN).
## Deployment example
Connecting from cloud to on-premises is supported by two traditional products: E
One example deployment is to have a VPN gateway in the Hub virtual network. This virtual network is peered with both the PC virtual network and Cluster Management virtual network, providing connectivity across the network and to your on-premises site.
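A hedged sketch of the peering portion of that layout with the Azure CLI follows; the virtual network and resource group names are placeholders, and gateway transit lets the peered networks reach on-premises through the hub's VPN gateway:

```azurecli
# Peer the hub virtual network (which hosts the VPN gateway) to the Prism Central virtual network
az network vnet peering create \
  --name hub-to-pc \
  --resource-group myResourceGroup \
  --vnet-name hubVNet \
  --remote-vnet pcVNet \
  --allow-vnet-access \
  --allow-gateway-transit

# Peer the Prism Central virtual network back to the hub and use the hub's gateway
az network vnet peering create \
  --name pc-to-hub \
  --resource-group myResourceGroup \
  --vnet-name pcVNet \
  --remote-vnet hubVNet \
  --allow-vnet-access \
  --use-remote-gateways
```

A matching pair of peerings would be created for the Cluster Management virtual network.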
+## Supported topologies
+
+The following table describes the network topologies supported by each network features configuration of NC2 on Azure.
+
+|Topology |Supported |
+| :- |::|
+|Connectivity to BareMetal Infrastructure (BMI) in a local VNet| Yes |
+|Connectivity to BMI in a peered VNet (Same region)|Yes |
+|Connectivity to BMI in a peered VNet\* (Cross region or global peering) with VWAN\*|Yes |
+|Connectivity to BMI in a peered VNet\* (Cross region or global peering)\* without VWAN| No|
+|On-premises connectivity to Delegated Subnet via Global and Local ExpressRoute |Yes|
+|ExpressRoute (ER) FastPath |No |
+|Connectivity from on-premises to BMI in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
+|On-premises connectivity to Delegated Subnet via VPN GW| Yes |
+|Connectivity from on-premises to BMI in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
+|Connectivity over Active/Passive VPN gateways| Yes |
+|Connectivity over Active/Active VPN gateways| No |
+|Connectivity over Active/Active Zone Redundant gateways| No |
+|Transit connectivity via vWAN for Spoke Delegated VNETS| Yes |
+|On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No|
+|On-premises connectivity via Secured Hub (Azure Firewall NVA) | No|
+|Connectivity from UVMs on NC2 nodes to Azure resources|Yes|
+
+\* You can overcome this limitation by setting up a Site-to-Site VPN.
+
+## Constraints
+
+The following table describes what's supported for each network features configuration:
+
+|Features |Basic network features |
+| :- | -: |
+|Delegated subnet per VNet |1|
+|[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|
+|VWAN enables traffic inspection via NVA (Virtual WAN Hub routing intent)|Yes|
+|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets without VWAN| No|
+|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in the same VNet on Azure-delegated subnets|No|
+|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in a different spoke VNet connected to vWAN|Yes|
+|Load balancers for NC2 on Azure traffic|No|
+|Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
+ ## Next steps Learn more:
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
- Title: Solution design--
-description: Learn about topologies and constraints for NC2 on Azure.
--- Previously updated : 05/21/2024--
-# Solution design
-
-This article identifies topologies and constraints for NC2 on Azure.
-
-## Supported topologies
-
-The following table describes the network topologies supported by each network features configuration of NC2 on Azure.
-
-|Topology |Supported |
-| :- |::|
-|Connectivity to BareMetal Infrastructure (BMI) in a local VNet| Yes |
-|Connectivity to BMI in a peered VNet (Same region)|Yes |
-|Connectivity to BMI in a peered VNet\* (Cross region or global peering) with VWAN\*|Yes |
-|Connectivity to BM in a peered VNet* (Cross region or global peering)* without VWAN| No|
-|On-premises connectivity to Delegated Subnet via Global and Local Expressroute |Yes|
-|ExpressRoute (ER) FastPath |No |
-|Connectivity from on-premises to BMI in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
-|On-premises connectivity to Delegated Subnet via VPN GW| Yes |
-|Connectivity from on-premises to BMI in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
-|Connectivity over Active/Passive VPN gateways| Yes |
-|Connectivity over Active/Active VPN gateways| No |
-|Connectivity over Active/Active Zone Redundant gateways| No |
-|Transit connectivity via vWAN for Spoke Delegated VNETS| Yes |
-|On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No|
-|On-premises connectivity via Secured HUB(Az Firewall NVA) | No|
-|Connectivity from UVMs on NC2 nodes to Azure resources|Yes|
-
-\* You can overcome this limitation by setting Site-to-Site VPN.
-
-## Constraints
-
-The following table describes what's supported for each network features configuration:
-
-|Features |Basic network features |
-| :- | -: |
-|Delegated subnet per VNet |1|
-|[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|
-|VWAN enables traffic inspection via NVA (Virtual WAN Hub routing intent)|Yes|
-[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets without VWAN| No|
-|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in the same Vnet on Azure-delegated subnets|No|
-|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in a different spoke Vnet connected to vWAN|Yes|
-|Load balancers for NC2 on Azure traffic|No|
-|Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
-
-## Next steps
-
-Learn more:
-
-> [!div class="nextstepaction"]
-> [Architecture](architecture.md)
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
description: Learn about use cases and supported scenarios for NC2 on Azure, inc
Previously updated : 05/21/2024 Last updated : 7/17/2024 # Use cases and supported scenarios
Move applications to the cloud and modernize your infrastructure.
Applications move with no changes, allowing for flexible operations and minimum downtime. > [!div class="nextstepaction"]
-> [Solution design](solution-design.md)
+> [Architecture](architecture.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Agent-based faults are injected into **Azure Virtual Machines** or **Virtual Mac
| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Disconnect](#network-disconnect) | Network disruption | | Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Latency](#network-latency) | Network performance degradation | | Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Packet Loss](#network-packet-loss) | Network reliability issues |
+| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Isolation](#network-isolation) | Network disruption |
| Windows | [DNS Failure](#dns-failure) | DNS resolution issues | | Windows | [Network Disconnect (Via Firewall)](#network-disconnect-via-firewall) | Network disruption | | Windows, Linux | [Physical Memory Pressure](#physical-memory-pressure) | Memory capacity loss, resource pressure |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
* When running on Linux, this fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters). * This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
+### Network Isolation
+
+| Property | Value |
+|-|-|
+| Capability name | NetworkIsolation-1.0 |
+| Target type | Microsoft-Agent |
+| Supported OS types | Windows, Linux (outbound traffic only) |
+| Description | Fully isolate the virtual machine from network connections by dropping all IP-based inbound (on Windows) and outbound (on Windows and Linux) packets for the specified duration. At the end of the duration, network connections will be re-enabled. Because the agent depends on network traffic, this action cannot be cancelled and will run to the specified duration. |
+| Prerequisites | **Windows:** The agent must run as administrator, which happens by default if installed as a VM extension. |
+| | **Linux:** The `tc` (Traffic Control) package is used for network faults. If it isn't already installed, the agent automatically attempts to install it from the default package manager. |
+| Urn | urn:csci:microsoft:agent:networkIsolation/1.0 |
+| Fault type | Continuous. |
+| Parameters (key, value) | |
+| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode, optional otherwise. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
+
+#### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:agent:networkIsolation/1.0",
+ "parameters": [],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
+#### Limitations
+
+* Because the agent depends on network traffic, **this action cannot be cancelled** and will run to the specified duration. Use with caution.
+* The agent-based network faults currently only support IPv4 addresses.
+* When running on Windows, the network packet loss fault currently only works with TCP or UDP packets.
+* When running on Linux, this fault only affects **outbound** traffic, not inbound traffic. The fault affects **both inbound and outbound** traffic on Windows environments.
+* This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
++ ### DNS Failure | Property | Value |
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
FROM OPENROWSET('CosmosDB',
                HTAP) WITH (_id VARCHAR(1000)) as HTAP ```
+##### Working with MongoDB `id` field
+
+The `id` property in MongoDB containers is automatically overridden with the Base64 representation of the `_id` property in the analytical store. The `id` field is intended for internal use by MongoDB applications. Currently, the only workaround is to rename the `id` property to something other than `id`, as in the sketch that follows.
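If your application writes its own `id` field, one way to apply that workaround is to rename the field with an update operation. This is an illustrative sketch only: the connection string, database, collection, and new field name (`orderId`) are placeholder assumptions, and it assumes the `$rename` update operator is available on your account's server version.

```python
from pymongo import MongoClient

# Placeholder connection string and names; substitute your own values.
client = MongoClient("<azure-cosmos-db-for-mongodb-connection-string>")
collection = client["storeDatabase"]["orderCollection"]

# Rename the application-level "id" field so it no longer collides with the
# Base64-encoded "_id" representation surfaced in the analytical store.
result = collection.update_many(
    {"id": {"$exists": True}},
    {"$rename": {"id": "orderId"}},
)
print(f"Renamed 'id' on {result.modified_count} documents")
```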
++ #### Full fidelity schema for API for NoSQL or Gremlin accounts It's possible to use full fidelity Schema for API for NoSQL accounts, instead of the default option, by setting the schema type when enabling Synapse Link on an Azure Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
Title: Options to migrate data from MongoDB-
-description: Review various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore.
+
+description: Review various options to migrate your data from other MongoDB sources to vCore-based Azure Cosmos DB for MongoDB.
Last updated 11/17/2023
-# CustomerIntent: As a MongoDB user, I want to understand the various options available to migrate my data to Azure Cosmos DB for MongoDB vCore, so that I can make an informed decision about which option is best for my use case.
+# CustomerIntent: As a MongoDB user, I want to understand the various options available to migrate my data to vCore-based Azure Cosmos DB for MongoDB, so that I can make an informed decision about which option is best for my use case.
-# What are the options to migrate data from MongoDB to Azure Cosmos DB for MongoDB vCore?
+# What are the options to migrate data from MongoDB to vCore-based Azure Cosmos DB for MongoDB?
-This document describes the various options to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore offering.
+This document describes the various options to lift and shift your MongoDB workloads to vCore-based Azure Cosmos DB for MongoDB offering.
+
+Migrations can be done in two ways:
+
+- Offline Migration: A snapshot-based bulk copy from source to target. New data added, updated, or deleted on the source after the snapshot isn't copied to the target. The application downtime required depends on the time taken for the bulk copy activity to complete.
+
+- Online Migration: Apart from the bulk data copy activity done in the offline migration, a change stream monitors all additions/updates/deletes. After the bulk data copy is completed, the data in the change stream is copied to the target to ensure that all updates made during the migration process are also transferred to the target. The application downtime required is minimal.
## Azure Data Studio (Online)
-The [The MongoDB migration extension for Azure Data Studio](/azure-data-studio/extensions/database-migration-for-mongo-extension) is the preferred tool in migrating your MongoDB workloads to the API for MongoDB vCore.
+The [MongoDB migration extension for Azure Data Studio](/azure-data-studio/extensions/database-migration-for-mongo-extension) is the preferred tool in migrating your MongoDB workloads to the vCore-based Azure Cosmos DB for MongoDB.
The migration process has two phases:
Use the graphical user interface to manage the entire migration process from st
## Native MongoDB tools (Offline)
-You can use the native MongoDB tools such as *mongodump/mongorestore*, *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to Azure Cosmos DB for MongoDB vCore offering.
+You can use the native MongoDB tools such as *mongodump/mongorestore*, *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to vCore-based Azure Cosmos DB for MongoDB offering.
| Scenario | MongoDB native tool | | | |
You can use the native MongoDB tools such as *mongodump/mongorestore*, *mongoexp
- *mongoexport/mongoimport* is the best pair of migration tools for migrating a subset of your MongoDB database. - *mongoexport* exports your existing data to a human-readable JSON or CSV file. *mongoexport* takes an argument specifying the subset of your existing data to export.
- - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB for MongoDB vCore in this case.).
- - JSON and CSV aren't a compact format; you could incur excess network charges as *mongoimport* sends data to Azure Cosmos DB for MongoDB vCore.
-- *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB for MongoDB vCore.
+ - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (vCore-based Azure Cosmos DB for MongoDB in this case).
+ - JSON and CSV aren't compact formats; you could incur excess network charges as *mongoimport* sends data to vCore-based Azure Cosmos DB for MongoDB.
+- *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into vCore-based Azure Cosmos DB for MongoDB.
- *mongodump* exports your existing data as a BSON file.
- - *mongorestore* imports your BSON file dump into Azure Cosmos DB for MongoDB vCore.
+ - *mongorestore* imports your BSON file dump into vCore-based Azure Cosmos DB for MongoDB.
> [!NOTE] > The MongoDB native tools can move data only as fast as the host hardware allows. ## Data migration using Azure Databricks (Offline/Online)
-Migrating using Azure Databricks offers full control of the migration rate and data transformation. This method can also support large datasets that are in TBs in size.
+Migrating using Azure Databricks offers full control of the migration rate and data transformation. This method can also support large datasets that are terabytes in size. The Spark migration utility operates as a job within Databricks.
++
+This tool supports the following MongoDB sources:
+- MongoDB VM
+- MongoDB Atlas
+- AWS DocumentDB
+- Azure Cosmos DB MongoDB RU (Offline only)
+
+[Sign up for Azure Cosmos DB for MongoDB Spark Migration](https://forms.office.com/r/cLSRNugFSp) to gain access to the Spark Migration Tool GitHub repository. The repository offers detailed, step-by-step instructions for migrating your workloads from various Mongo sources to vCore-based Azure Cosmos DB for MongoDB.
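The migration utility in that repository is the supported path. Purely as an illustration of what a Spark-based bulk copy looks like, the sketch below reads a source MongoDB collection and writes it to a vCore-based Azure Cosmos DB for MongoDB collection. It assumes the open-source MongoDB Spark connector (v10 or later) is installed on the Databricks cluster; the connection strings, database, and collection names are placeholders, and it doesn't handle change streams, retries, or throughput tuning the way the migration utility does.

```python
# Run as a Databricks notebook or job; `spark` is the session Databricks provides.
source_uri = "<source-mongodb-connection-string>"   # placeholder
target_uri = "<vcore-mongodb-connection-string>"    # placeholder

# Bulk-read the source collection.
df = (
    spark.read.format("mongodb")
    .option("connection.uri", source_uri)
    .option("database", "salesdb")
    .option("collection", "orders")
    .load()
)

# Bulk-write to the vCore-based Azure Cosmos DB for MongoDB target.
(
    df.write.format("mongodb")
    .option("connection.uri", target_uri)
    .option("database", "salesdb")
    .option("collection", "orders")
    .mode("append")
    .save()
)
```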
-- [Azure Databricks](https://azure.microsoft.com/services/databricks/) is a platform as a service (PaaS) offering for [Apache Spark](https://spark.apache.org/). You can use Azure Databricks to do an offline/online migration of databases from MongoDB to Azure Cosmos DB for MongoDB.-- Here's how you can [migrate data to Azure Cosmos DB for MongoDB vCore offline using Azure Databricks](../migrate-databricks.md#provision-an-azure-databricks-cluster) ## Related content -- Migrate data to Azure Cosmos DB for MongoDB vCore [using native MongoDB tools](how-to-migrate-native-tools.md).-- Migrate data to Azure Cosmos DB for MongoDB vCore [using Azure Databricks](../migrate-databricks.md).
+- Migrate data to vCore-based Azure Cosmos DB for MongoDB using [native MongoDB tools](how-to-migrate-native-tools.md).
+- Migrate data to vCore-based Azure Cosmos DB for MongoDB using the [MongoDB migration extension for Azure Data Studio](/azure-data-studio/extensions/database-migration-for-mongo-extension).
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-get-started.md
The preceding code imports modules that you'll use in the rest of the article.
To connect to the API for NoSQL of Azure Cosmos DB, create an instance of the [CosmosClient](/python/api/azure-cosmos/azure.cosmos.cosmosclient) class. This class is the starting point to perform all operations against databases. There are three ways to connect to an API for NoSQL account using the **CosmosClient** class:
+- [Connect with Microsoft Entra ID (recommended)](#connect-using-the-microsoft-identity-platform-recommended)
- [Connect with an API for NoSQL endpoint and read/write key](#connect-with-an-endpoint-and-key) - [Connect with an API for NoSQL connection string](#connect-with-a-connection-string)-- [Connect with Microsoft Entra ID](#connect-using-the-microsoft-identity-platform) ### Connect with an endpoint and key
Create a new instance of the **CosmosClient** class with the ``COSMOS_CONNECTION
:::code language="python" source="~/cosmos-db-nosql-python-samples/003-how-to/app_connection_string.py" id="connection_string":::
-### Connect using the Microsoft identity platform
+### Connect using the Microsoft identity platform (recommended)
To connect to your API for NoSQL account using the Microsoft identity platform and Microsoft Entra ID, use a security principal. The exact type of principal will depend on where you host your application code. The table below serves as a quick reference guide.
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
The client library is available through the Node Package Manager, as the `@azure
This sample creates a new instance of the `CosmosClient` type and authenticates using a `DefaultAzureCredential` instance. ### Get a database Use `client.database` to retrieve the existing database named *`cosmicworks`*. ### Get a container Retrieve the existing *`products`* container using `database.container`. ### Create an item Build a new object with all of the members you want to serialize into JSON. In this example, the type has a unique identifier, and fields for category, name, quantity, price, and sale. Create an item in the container using `container.items.upsert`. This method "upserts" the item effectively replacing the item if it already exists. ### Read an item Perform a point read operation by using both the unique identifier (`id`) and partition key fields. Use `container.item` to get a pointer to an item and `item.read` to efficiently retrieve the specific item. ### Query items
SELECT * FROM products p WHERE p.category = @category
Fetch all of the results of the query using `query.fetchAll`. Loop through the results of the query. ## Related content
cost-management-billing Cost Analysis Common Uses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-common-uses.md
description: This article explains how you can get results for common cost analysis tasks in Cost Management. Previously updated : 03/21/2024 Last updated : 07/17/2024 -+ # Common cost analysis uses
In the Azure portal, navigate to cost analysis for your scope. For example: **Co
In the default view, the top chart has the Actual/Amortized cost and forecast cost sections. The solid color of the chart shows your Actual/Amortized cost. The shaded color shows the forecast cost.
+For more information about forecasting costs, see [Forecasting costs in Cost Analysis](quick-acm-cost-analysis.md#forecasting-costs-in-cost-analysis).
+ :::image type="content" border="true" source="./media/cost-analysis-common-uses/enrollment-forecast.png" lightbox="./media/cost-analysis-common-uses/enrollment-forecast.png" alt-text="Screenshot showing Forecast cost in cost analysis."::: ## View forecast costs grouped by service
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
# Quickstart: Start using Cost analysis
-Before you can control and optimize your costs, you first need to understand where they originated – from the underlying resources used to support your cloud projects to the environments they're deployed in and the owners who manage them. Full visibility backed by a thorough tagging strategy is critical to accurately understand your spending patterns and enforce cost control mechanisms.
+Before you can control and optimize your costs, you first need to understand where they originated – from the underlying resources used to support your cloud projects to the environments they get deployed in and the owners who manage them. Full visibility backed by a thorough tagging strategy is critical to accurately understand your spending patterns and enforce cost control mechanisms.
In this quickstart, you use Cost analysis to explore and get quick answers about your costs. You can see a summary of your cost over time to identify trends and break costs down to understand how you're being charged for the services you use. For advanced reporting, use Power BI or export raw cost details.
As you explore the different views, notice that Cost analysis remembers which vi
:::image type="content" source="./media/quick-acm-cost-analysis/pin-to-recent.png" alt-text="Screenshot showing the Pin to recent option." lightbox="./media/quick-acm-cost-analysis/pin-to-recent.png" :::
-Views in the **Recommended** list may vary based on what users most commonly use across Azure.
+Views in the **Recommended** list might vary based on what users most commonly use across Azure.
## Analyze costs with smart views
If you don't have a budget, select the **create** link in the **Budget** KPI and
:::image type="content" source="./media/quick-acm-cost-analysis/create-budget.png" alt-text="Screenshot showing the Create budget - advanced setting link." lightbox="./media/quick-acm-cost-analysis/create-budget.png" :::
-Depending on the view and scope you're using, you may also see cost insights below the KPIs. Cost insights show important datapoints about your cost – from discovering top cost contributors to identifying anomalies based on usage patterns. Select the **See insights** link to review and provide feedback on all insights. Here's an insights example.
+Depending on the view and scope you're using, you might also see cost insights below the KPIs. Cost insights show important datapoints about your cost – from discovering top cost contributors to identifying anomalies based on usage patterns. Select the **See insights** link to review and provide feedback on all insights. Here's an insights example.
:::image type="content" source="./media/quick-acm-cost-analysis/see-insights.png" alt-text="Screenshot showing insights." lightbox="./media/quick-acm-cost-analysis/see-insights.png" :::
Lastly, use the table to identify and review your top cost contributors and dril
This view is where you spend most of your time in Cost analysis. To explore further: 1. Expand rows to take a quick peek and see how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products.
-2. Select the name to drill down and see the next level details in a full view. From there, you can drill down again and again, to get down to the finest level of detail, based on what you're interested in. Examples include selecting a subscription, then a resource group, and then a resource to view the specific product meters for that resource.
-3. Select the shortcut menu (⋯) to see related costs. Examples include filtering the list of resource groups to a subscription or filtering resources to a specific location or tag.
+2. Select the name to drill down and see the next level details in a full view. From there, you can drill down again and again, to get down to the finest level of detail, based on what you want to investigate. Examples include selecting a subscription, then a resource group, and then a resource to view the specific product meters for that resource.
+3. To see related costs, select the shortcut menu (⋯). Examples include filtering the list of resource groups to a subscription or filtering resources to a specific location or tag.
4. Select the shortcut menu (⋯) to open the management screen for that resource, resource group, or subscription. From this screen, you can stop or delete resources to avoid future charges. 5. Open other smart views to get different perspectives on your costs. 6. Open a customizable view and apply other filters or group the data to explore further.
-> [!NOTE]
-> If you want to visualize and monitor daily trends within the period, enable the [chart preview feature](enable-preview-features-cost-management-labs.md#chartsfeature) in Cost Management Labs, available from the **Try preview** command.
+>[!NOTE]
+>If you want to visualize and monitor daily trends within the period, enable the [chart preview feature](enable-preview-features-cost-management-labs.md#chartsfeature) in Cost Management Labs, available from the **Try preview** command.
## Analyze costs with customizable views
Here's an example of the Accumulated Costs customizable view.
:::image type="content" source="./media/quick-acm-cost-analysis/accumulated-costs-view.png" alt-text="Screenshot showing the Accumulated costs customizable view." lightbox="./media/quick-acm-cost-analysis/accumulated-costs-view.png" :::
-After you customize your view to meet your needs, you may want to save and share it with others. To share views with others:
+After you customize your view to meet your needs, you might want to save and share it with others. To share views with others:
1. Save the view on a subscription, resource group, management group, or billing account. 2. Share a URL with view configuration details, which they can use on any scope they have access to.
Forecast costs are available from both smart and custom views. In either case, t
Your forecast is a projection of your estimated costs for the selected period. Your forecast changes depending on what data is available for the period, how long of a period you select, and what filters you apply. If you notice an unexpected spike or drop in your forecast, expand the date range, and use grouping to identify large increases or decreases in historical cost. You can filter them out to normalize the forecast. A few key considerations: 
-1. Forecasting employs a 'time series linear regression' model, which adjusts to factors such as reserved instance purchases that temporarily affect forecasted costs. Following such purchases, the forecasted costs typically stabilize in alignment with usage trends within a few days. You have the option to filter out these temporary spikes to obtain a more normalized forecasted cost.
+1. Forecasting employs a *time series linear regression* model, which adjusts to factors such as reserved instance purchases that temporarily affect forecasted costs. Following such purchases, the forecasted costs typically stabilize in alignment with usage trends within a few days. You can filter out these temporary spikes to obtain a more normalized forecasted cost.
-1. For accurate long-term forecasting, it's essential to have sufficient historical data. New subscriptions or contracts with limited historical data may result in less accurate forecasts. At least 90 days of historical data are recommended for a more precise annual forecast.
+1. For accurate long-term forecasting, it's essential to have sufficient historical data. New subscriptions or contracts with limited historical data might result in less accurate forecasts. At least 90 days of historical data are recommended for a more precise annual forecast.
1. When you select a budget in a custom view, you can also see if or when your forecast would exceed your budget.
+Here's a table to help you understand how the forecast duration and lookback period are calculated based on the forecast period:
+
+| Forecast Duration | Lookback Period |
+|--|--|
+| Up to 28 days | 28 days |
+| Above 28 days | Same as Forecast Duration |
+| Above 90 days | 90 days |
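Read one way, the table amounts to a simple rule. The following sketch only restates that rule in code for clarity; the function name and day boundaries mirror the table above and aren't part of any Cost Management API.

```python
from datetime import timedelta

def lookback_period(forecast_duration: timedelta) -> timedelta:
    """Lookback window used for forecasting, per the table above."""
    if forecast_duration <= timedelta(days=28):
        return timedelta(days=28)
    if forecast_duration <= timedelta(days=90):
        return forecast_duration
    return timedelta(days=90)

print(lookback_period(timedelta(days=14)))   # 28 days
print(lookback_period(timedelta(days=45)))   # 45 days
print(lookback_period(timedelta(days=365)))  # 90 days
```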
+ ## More information For more information about using features in costs analysis, see the following articles:
If you need advanced reporting outside of cost analysis, like grouping by multip
- Usage data from exports or APIs - See [Choose a cost details solution](../automate/usage-details-best-practices.md) to help you determine if exports from the Azure portal or if cost details from APIs are right for you.
-Be sure to [configure subscription anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) and set up a [budget](tutorial-acm-create-budgets.md) to help drive accountability and cost control.
+To help drive accountability and cost control, [configure subscription anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) and set up a [budget](tutorial-acm-create-budgets.md).
## Next steps
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
Title: Tutorial - Improved exports experience - Preview
description: This tutorial helps you create automatic exports for your actual and amortized costs in the Cost and Usage Specification standard (FOCUS) format. Previously updated : 06/17/2024 Last updated : 07/17/2024
For a comprehensive reference of all available datasets, including the schema fo
- Cost and usage details (actual) - Select this option to export standard usage and purchase charges. - Cost and usage details (amortized) - Select this option to export amortized costs for purchases like Azure reservations and Azure savings plan for compute.-- Cost and usage details (FOCUS) - Select this option to export cost and usage details using the open-source FinOps Open Cost and Usage Specification ([FOCUS](https://focus.finops.org/)) format. It combines actual and amortized costs. This format reduces data processing time and storage and compute charges for exports. The management group scope isn't supported for Cost and usage details (FOCUS) exports.
+- Cost and usage details (FOCUS) - Select this option to export cost and usage details using the open-source FinOps Open Cost and Usage Specification ([FOCUS](https://focus.finops.org/)) format. It combines actual and amortized costs.
+ - This format reduces data processing time and storage and compute charges for exports.
+ - The management group scope isn't supported for Cost and usage details (FOCUS) exports.
+ - You can use the FOCUS-formatted export as the input for a Microsoft Fabric workspace for FinOps. For more information, see [Create a Fabric workspace for FinOps](/cloud-computing/finops/fabric/create-fabric-workspace-finops).
- Cost and usage details (usage only) - Select this option to export standard usage charges without purchase information. Although you can't use this option when creating new exports, existing exports using this option are still supported. - Price sheet – Select this option to download your organization's Azure pricing. - Reservation details – Select this option to export the current list of all available reservations.
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
If the original purchase was made as an overage, the original invoice on which t
The original invoice is canceled, and a new invoice is created. The money is refunded to the credit card that was used for the original purchase. If you changed your card, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-## Cancel, exchange, and refund policies
-
-You can't cancel, exchange, or refund a savings plan.
- ## Need help? Contact us. If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides Azure savings plan for compute expert support requests in English.
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
The Snowflake connector offers new functionalities and is compatible with most f
| Script parameters are not supported in Script activity currently. As an alternative, utilize dynamic expressions for script parameters. For more information, see [Expressions and functions in Azure Data Factory and Azure Synapse Analytics](control-flow-expression-language-functions.md). | Support script parameters in Script activity. | | Support BigDecimal in Lookup activity. The NUMBER type, as defined in Snowflake, will be displayed as a string in Lookup activity. | BigDecimal is not supported in Lookup activity. |
+To determine the version of the Snowflake connector used in your existing Snowflake linked service, check the ```type``` property. The legacy version is identified by ```"type": "Snowflake"```, while the latest V2 version is identified by ```"type": "SnowflakeV2"```.
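For example, if you keep linked service definitions in source control or export them from the authoring UI, a small check like the following sketch can tell the two versions apart. The file name is a placeholder assumption; the `properties.type` path is the standard linked service JSON shape.

```python
import json

# Placeholder file name: a linked service definition exported from the
# authoring UI or kept in Git integration.
with open("SnowflakeLinkedService.json") as f:
    definition = json.load(f)

connector_type = definition["properties"]["type"]
if connector_type == "SnowflakeV2":
    print("Uses the latest (V2) Snowflake connector")
elif connector_type == "Snowflake":
    print("Uses the legacy Snowflake connector")
else:
    print(f"Not a Snowflake linked service: {connector_type}")
```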
+
+The V2 version offers several enhancements over the legacy version, including:
+
+- Autoscaling: Automatically adjusts resources based on traffic load.
+- Multi-Availability Zone Operation: Provides resilience by operating across multiple availability zones.
+- Static IP Support: Enhances security by allowing the use of static IP addresses.
+ ## Related content For a list of data stores supported as sources and sinks by Copy activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
You can refer to the troubleshooting pages for each connector to see problems sp
The errors below are general to the copy activity and could occur with any connector.
-#### Error code: JreNotFound
+#### Error code: 20000
- **Message**: `Java Runtime Environment cannot be found on the Self-hosted Integration Runtime machine. It is required for parsing or writing to Parquet/ORC files. Make sure Java Runtime Environment has been installed on the Self-hosted Integration Runtime machine.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Check your integration runtime environment, see [Use Self-hosted Integration Runtime](./format-parquet.md#using-self-hosted-integration-runtime).
-#### Error code: WildcardPathSinkNotSupported
+#### Error code: 20002
+
+- **Message**: `An error occurred when invoking Java Native Interface.`
+
+- **Cause**: If the error message contains "Cannot create JVM: JNI return code [-6][JNI call failed: Invalid arguments.]", the possible cause is that JVM can't be created because some illegal (global) arguments are set.
+
+- **Recommendation**: Log in to the machine that hosts *each node* of your self-hosted integration runtime. Check to ensure that the system variable is set correctly, as follows: `_JAVA_OPTIONS "-Xms256m -Xmx16g"` (on a machine with more than 8 GB of memory). Restart all the integration runtime nodes, and then rerun the pipeline.
+
+#### Error code: 20020
- **Message**: `Wildcard in path is not supported in sink dataset. Fix the path: '%setting;'.`
The errors below are general to the copy activity and could occur with any conne
3. Save the file, and then restart the Self-hosted IR machine.
-#### Error code: JniException
--- **Message**: `An error occurred when invoking Java Native Interface.`--- **Cause**: If the error message contains "Cannot create JVM: JNI return code [-6][JNI call failed: Invalid arguments.]", the possible cause is that JVM can't be created because some illegal (global) arguments are set.--- **Recommendation**: Log in to the machine that hosts *each node* of your self-hosted integration runtime. Check to ensure that the system variable is set correctly, as follows: `_JAVA_OPTIONS "-Xms256m -Xmx16g" with memory bigger than 8G`. Restart all the integration runtime nodes, and then rerun the pipeline.-
-#### Error code: GetOAuth2AccessTokenErrorResponse
+#### Error code: 20150
- **Message**: `Failed to get access token from your token endpoint. Error returned from your authorization server: %errorResponse;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Correct all OAuth2 client credential flow settings of your authorization server.
-#### Error code: FailedToGetOAuth2AccessToken
+#### Error code: 20151
- **Message**: `Failed to get access token from your token endpoint. Error message: %errorMessage;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Correct all OAuth2 client credential flow settings of your authorization server.
-#### Error code: OAuth2AccessTokenTypeNotSupported
+#### Error code: 20152
- **Message**: `The toke type '%tokenType;' from your authorization server is not supported, supported types: '%tokenTypes;'.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Use an authorization server that can return tokens with supported token types.
-#### Error code: OAuth2ClientIdColonNotAllowed
+#### Error code: 20153
- **Message**: `The character colon(:) is not allowed in clientId for OAuth2ClientCredential authentication.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Use a valid client ID.
-#### Error code: ManagedIdentityCredentialObjectNotSupported
+#### Error code: 20523
- **Message**: `Managed identity credential is not supported in this version ('%version;') of Self Hosted Integration Runtime.` - **Recommendation**: Check the supported version and upgrade the integration runtime to a higher version.
-#### Error code: QueryMissingFormatSettingsInDataset
+#### Error code: 20551
- **Message**: `The format settings are missing in dataset %dataSetName;.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Deselect the "Binary copy" in the dataset, and set correct format settings.
-#### Error code: QueryUnsupportedCommandBehavior
+#### Error code: 20552
- **Message**: `The command behavior "%behavior;" is not supported.` - **Recommendation**: Don't add the command behavior as a parameter for preview or GetSchema API request URL.
-#### Error code: DataConsistencyFailedToGetSourceFileMetadata
+#### Error code: 20701
- **Message**: `Failed to retrieve source file ('%name;') metadata to validate data consistency.` - **Cause**: There is a transient issue on the sink data store, or retrieving metadata from the sink data store is not allowed.
-#### Error code: DataConsistencyFailedToGetSinkFileMetadata
+#### Error code: 20703
- **Message**: `Failed to retrieve sink file ('%name;') metadata to validate data consistency.` - **Cause**: There is a transient issue on the sink data store, or retrieving metadata from the sink data store is not allowed.
-#### Error code: DataConsistencyValidationNotSupportedForNonDirectBinaryCopy
+#### Error code: 20704
- **Message**: `Data consistency validation is not supported in current copy activity settings.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Remove the 'validateDataConsistency' property in the copy activity payload.
-#### Error code: DataConsistencyValidationNotSupportedForLowVersionSelfHostedIntegrationRuntime
+#### Error code: 20705
- **Message**: `'validateDataConsistency' is not supported in this version ('%version;') of Self Hosted Integration Runtime.` - **Recommendation**: Check the supported integration runtime version and upgrade it to a higher version, or remove the 'validateDataConsistency' property from copy activities.
-#### Error code: SkipMissingFileNotSupportedForNonDirectBinaryCopy
+#### Error code: 20741
- **Message**: `Skip missing file is not supported in current copy activity settings, it's only supported with direct binary copy with folder.` - **Recommendation**: Remove 'fileMissing' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipInconsistencyDataNotSupportedForNonDirectBinaryCopy
+#### Error code: 20742
- **Message**: `Skip inconsistency is not supported in current copy activity settings, it's only supported with direct binary copy when validateDataConsistency is true.` - **Recommendation**: Remove 'dataInconsistency' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipForbiddenFileNotSupportedForNonDirectBinaryCopy
+#### Error code: 20743
- **Message**: `Skip forbidden file is not supported in current copy activity settings, it's only supported with direct binary copy with folder.` - **Recommendation**: Remove 'fileForbidden' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipForbiddenFileNotSupportedForThisConnector
+#### Error code: 20744
- **Message**: `Skip forbidden file is not supported for this connector: ('%connectorName;').` - **Recommendation**: Remove 'fileForbidden' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipInvalidFileNameNotSupportedForNonDirectBinaryCopy
+#### Error code: 20745
- **Message**: `Skip invalid file name is not supported in current copy activity settings, it's only supported with direct binary copy with folder.` - **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipInvalidFileNameNotSupportedForSource
+#### Error code: 20746
- **Message**: `Skip invalid file name is not supported for '%connectorName;' source.` - **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipInvalidFileNameNotSupportedForSink
+#### Error code: 20747
- **Message**: `Skip invalid file name is not supported for '%connectorName;' sink.` - **Recommendation**: Remove 'invalidFileName' of the skipErrorFile setting in the copy activity payload.
-#### Error code: SkipAllErrorFileNotSupportedForNonBinaryCopy
+#### Error code: 20748
- **Message**: `Skip all error file is not supported in current copy activity settings, it's only supported with binary copy with folder.` - **Recommendation**: Remove 'allErrorFile' in the skipErrorFile setting in the copy activity payload.
-#### Error code: DeleteFilesAfterCompletionNotSupportedForNonDirectBinaryCopy
+#### Error code: 20771
- **Message**: `'deleteFilesAfterCompletion' is not support in current copy activity settings, it's only supported with direct binary copy.` - **Recommendation**: Remove the 'deleteFilesAfterCompletion' setting or use direct binary copy.
-#### Error code: DeleteFilesAfterCompletionNotSupportedForThisConnector
+#### Error code: 20772
- **Message**: `'deleteFilesAfterCompletion' is not supported for this connector: ('%connectorName;').` - **Recommendation**: Remove the 'deleteFilesAfterCompletion' setting in the copy activity payload.
-#### Error code: FailedToDownloadCustomPlugins
+#### Error code: 27002
- **Message**: `Failed to download custom plugins.`
The errors below are general to the copy activity and could occur with any conne
## General connector errors
-#### Error code: UserErrorOdbcInvalidQueryString
+#### Error code: 9611
- **Message**: `The following ODBC Query is not valid: '%'.`
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Verify that your query is valid and can return data. If you want to execute non-query scripts and your data store is supported, consider using a stored procedure that returns a dummy result to execute your non-query scripts.
-#### Error code: FailToResolveParametersInExploratoryController
+#### Error code: 11775
-- **Message**: `The parameters and expression cannot be resolved for schema operations. …The template function 'linkedService' is not defined or not valid.`
+- **Message**: `Failed to connect to your instance of Azure Database for PostgreSQL flexible server.`
-- **Cause**: The service has limitation to support the linked service which references another linked service with parameters for test connection or preview data. For example, passing a parameter from a Key Vault to a linked service may occur the issue. 
+- **Cause**: The user or password provided is incorrect, the encryption method selected isn't compatible with the configuration of the server, or the network connectivity method configured for your instance doesn't allow connections from the integration runtime selected.
-- **Recommendation**: Remove the parameters in the referred linked service to eliminate the error. Otherwise, run the pipeline without testing connection or previewing data. 
+- **Recommendation**: Confirm that the user provided exists in your instance of PostgreSQL and that the password corresponds to the one currently assigned to that user. Make sure that the encryption method selected is accepted by your instance of PostgreSQL, based on its current configuration. If the network connectivity method of your instance is configured for Private access (VNet integration), use a Self-Hosted Integration Runtime (IR) to connect to it. If it's configured for Public access (allowed IP addresses), the recommended option is to use an Azure IR with managed virtual network and deploy a managed private endpoint to connect to your instance. A less recommended alternative for Public access is to create firewall rules in your instance that allow traffic from the IP addresses used by the Azure IR.
## Related content
data-factory Connector Troubleshoot Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-postgresql.md
This article provides suggestions to troubleshoot common problems with the Azure Database for PostgreSQL connector in Azure Data Factory and Azure Synapse.
-## Error code: AzurePostgreSqlNpgsqlDataTypeNotSupported
--- **Message**: `The data type of the chosen Partition Column, '%partitionColumn;', is '%dataType;' and this data type is not supported for partitioning.`--- **Recommendation**: Pick a partition column with int, bigint, smallint, serial, bigserial, smallserial, timestamp with or without time zone, time without time zone or date data type.-
-## Error code: AzurePostgreSqlNpgsqlPartitionColumnNameNotProvided
+## Error code: 23704
- **Message**: `Partition column name must be specified.` - **Cause**: No partition column name is provided, and it couldn't be decided automatically.
+## Error code: 23705
+
+- **Message**: `The data type of the chosen Partition Column, '%partitionColumn;', is '%dataType;' and this data type is not supported for partitioning.`
+
+- **Recommendation**: Pick a partition column with int, bigint, smallint, serial, bigserial, smallserial, timestamp with or without time zone, time without time zone or date data type.
+ ## Related content For more troubleshooting help, try these resources:
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Refer to the [JSON sample](#json) to add ` parameters` section to define paramet
} } ```
+## Related content
+
+[Store credentials in Azure Key Vault](store-credentials-in-key-vault.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account description: Defend your AWS resources with Microsoft Defender for Cloud, a guide to set up and configure Defender for Cloud to protect your workloads in AWS. Previously updated : 07/01/2024 Last updated : 07/17/2024 # Connect AWS accounts to Microsoft Defender for Cloud
To complete the procedures in this article, you need:
- Access to an AWS account. -- **Subscription owner** permission for the relevant Azure subscription, and **Administrator** permission on the AWS account.
+- Contributor level permission for the relevant Azure subscription.
+
+- A Microsoft Entra ID account that has an Application Administrator or Cloud Application Administrator directory role for your tenant (or equivalent administrator rights to create app registrations).
> [!NOTE] > The AWS connector is not available on the national government clouds (Azure Government, Microsoft Azure operated by 21Vianet).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project description: Defend your GCP resources by using Microsoft Defender for Cloud. Protect your workloads and enhance your cloud security with our comprehensive solution. Previously updated : 07/01/2024 Last updated : 07/17/2024 # Connect your GCP project to Microsoft Defender for Cloud
To complete the procedures in this article, you need:
- Access to a GCP project. -- **Subscription owner** permission on the relevant Azure subscription, and **Owner** permission on the GCP organization or project.
+- Contributor level permission for the relevant Azure subscription.
+
+- A Microsoft Entra ID account that has an Application Administrator or Cloud Application Administrator directory role for your tenant (or equivalent administrator rights to create app registrations).
You can learn more about Defender for Cloud pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
firewall Firewall Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-best-practices.md
To maximize the [performance](firewall-performance.md) of your Azure Firewall an
- **Exceeding rule limitations**
- If you exceed limitations, such as using over 20,000 unique source/destination combinations in rules, it can affect firewall traffic processing and cause latency. Even though this is a soft limit, if you surpass this value it can affect overall firewall performance. For more information, see the [documented limits](../nat-gateway/tutorial-hub-spoke-nat-firewall.md).
+ If you exceed limitations, such as using over 20,000 unique source/destination combinations in rules, it can affect firewall traffic processing and cause latency. Even though this is a soft limit, if you surpass this value it can affect overall firewall performance. For more information, see the [documented limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
- **High traffic throughput**
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
The following table provides a comparison between Azure Front Door and Azure CDN
| Geo-filtering | &check; | &check; | &check; | &check; | &check; | &check; | | Token authentication | | | | | | &check; | | DDOS protection | &check; | &check; | &check; | &check; | &check; | &check; |
-| DDOS protection | &check; | &check; | &check; | &check; | &check; | &check; |
| Domain Fronting Block | &check; | &check; | &check; | &check; | &check; | &check; | | **Analytics and reporting** | | | | | | | | Monitoring Metrics | &check; (more metrics than Classic) | &check; (more metrics than Classic) | &check; | &check; | &check; | &check; |
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Origin support for direct private endpoint connectivity is currently limited to:
* Web App * Internal load balancers, or any services that expose internal load balancers such as Azure Kubernetes Service, Azure Container Apps or Azure Red Hat OpenShift * Storage Static Website
-* Application Gateway
+* Application Gateway (in preview; don't use it for production workloads)
> [!NOTE] > * This feature isn't supported with Azure App Service Slots or Functions.
hdinsight Hdinsight Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-autoscale-clusters.md
Autoscale continuously monitors the cluster and collects the following metrics:
|Used Memory per Node|The load on a worker node. A worker node on which 10 GB of memory is used, is considered under more load than a worker with 2 GB of used memory.| |Number of Application Masters per Node|The number of Application Master (AM) containers running on a worker node. A worker node that is hosting two AM containers, is considered more important than a worker node that is hosting zero AM containers.|
-The above metrics are checked every 60 seconds. Autoscale makes scale-up and scale-down decisions based on these metrics.
+The above metrics are checked every 60 seconds. Autoscale makes scale-up and scale-down decisions based on these metrics.
+
+For a complete list of cluster metrics, see [Supported metrics for Microsoft.HDInsight/clusters](monitor-hdinsight-reference.md#supported-metrics-for-microsofthdinsightclusters).
### Load-based scale conditions
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
HDInsight support cluster auditing with Azure Monitor logs, by importing the fol
* `log_gateway_audit_CL` - this table provides audit logs from cluster gateway nodes that show successful and failed sign-in attempts. * `log_auth_CL` - this table provides SSH logs with successful and failed sign-in attempts. * `log_ambari_audit_CL` - this table provides audit logs from Ambari.
-* `log_ranger_audti_CL` - this table provides audit logs from Apache Ranger on ESP clusters.
+* `ranger_audit_logs_CL` - this table provides audit logs from Apache Ranger on ESP clusters.
+
+For the log table mappings from the classic Azure Monitor integration to the new one, see [Log table mapping](monitor-hdinsight-reference.md#log-table-mapping).
+ #### [Classic Azure Monitor experience](#tab/previous)
HDInsight support cluster auditing with Azure Monitor logs, by importing the fol
* `log_gateway_audit_CL` - this table provides audit logs from cluster gateway nodes that show successful and failed sign-in attempts. * `log_auth_CL` - this table provides SSH logs with successful and failed sign-in attempts. * `log_ambari_audit_CL` - this table provides audit logs from Ambari.
-* `log_ranger_audti_CL` - this table provides audit logs from Apache Ranger on ESP clusters.
+* `ranger_audit_logs_CL` - this table provides audit logs from Apache Ranger on ESP clusters.
+
+For the log table mappings from the classic Azure Monitor integration to the new one, see [Log table mapping](monitor-hdinsight-reference.md#log-table-mapping).
hdinsight Hdinsight Rotate Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-rotate-storage-keys.md
Use [Script Action](hdinsight-hadoop-customize-cluster-linux.md#script-action-to
The preceding script directly updates the access key on the cluster side only and doesn't renew a copy on the HDInsight Resource provider side. Therefore, the script action hosted in the storage account will fail after the access key is rotated. Workaround:
-Use [SAS URIs](hdinsight-storage-sharedaccesssignature-permissions.md) for script actions or make the scripts publicly accessible.
+Use an external storage account via [SAS URIs](hdinsight-storage-sharedaccesssignature-permissions.md) for script actions, or make the scripts publicly accessible.
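As an illustration of the SAS workaround, the following sketch builds a read-only, time-limited SAS URL for a script hosted in blob storage. The account, container, and blob names are placeholders, and storage authentication flags (an account key or `--as-user --auth-mode login`) are omitted for brevity:

```azurecli
# Sketch: build a read-only SAS URL for a script action hosted in blob storage.
# Uses GNU date syntax; account, container, and blob names are placeholders.
end=$(date -u -d "+7 days" '+%Y-%m-%dT%H:%MZ')

sas=$(az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name scripts \
  --name my-script-action.sh \
  --permissions r \
  --expiry "$end" \
  --https-only \
  --output tsv)

url=$(az storage blob url \
  --account-name mystorageaccount \
  --container-name scripts \
  --name my-script-action.sh \
  --output tsv)

echo "${url}?${sas}"   # Use this URL as the script action URI.
```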
## Next steps
hdinsight Apache Kafka Log Analytics Operations Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-log-analytics-operations-management.md
The steps to enable Azure Monitor logs for HDInsight are the same for all HDInsi
| summarize AggregatedValue = avg(kafka_BrokerTopicMetrics_BytesOutPerSec_Count_value_d) by bin(TimeGenerated, 1h) ```
- You can also enter `*` to search all types logged. Currently the following logs are available for queries:
-
- | Log type | Description |
- | - | - |
- | log\_kafkaserver\_CL | Kafka broker server.log |
- | log\_kafkacontroller\_CL | Kafka broker controller.log |
- | metrics\_kafka\_CL | Kafka JMX metrics |
+ You can also enter `*` to search all types logged. For a list of logs that are available for queries, see [Kafka workload](../monitor-hdinsight-reference.md#kafka-workload).
:::image type="content" source="./media/apache-kafka-log-analytics-operations-management/apache-kafka-cpu-usage.png" alt-text="Apache kafka log analytics cpu usage." border="true":::
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
To enable the new Azure Monitor integration, follow the steps outlined in the [A
Since the new table format is different from the previous one, your queries need to be reworked so you can use our new tables. Once you enable the new Azure Monitor integration, you can browse the tables and schemas to identify the fields that are used in your old queries.
-We provide a [mapping table](#appendix-table-mapping) between the old table to the new table to help you quickly find the new fields you need to use to migrate your dashboards and queries.
+We provide a [mapping table](monitor-hdinsight-reference.md#log-table-mapping) between the old tables and the new tables to help you quickly find the new fields you need to use to migrate your dashboards and queries.
**Default queries**: We created default queries that show how to use the new tables for common situations. The default queries also show what information is available in each table. You can access the default queries by following the instructions in the [Default queries to use with new tables](#default-queries-to-use-with-new-tables) section in this article.
We provide a [mapping table](#appendix-table-mapping) between the old table to t
If you have built multiple dashboards to monitor your HDInsight clusters, you need to adjust the query behind the table once you enable the new Azure Monitor integration. The table name or the field name might change in the new integration, but all the information you have in old integration is included.
-Refer to the [mapping table](#appendix-table-mapping) between the old table/schema to the new table/schema to update the query behind the dashboards.
+Refer to the [mapping table](monitor-hdinsight-reference.md#log-table-mapping) between the old table/schema and the new table/schema to update the query behind the dashboards.
#### Out-of-box dashboards
hdinsight Monitor Hdinsight Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/monitor-hdinsight-reference.md
See [Monitor HDInsight](monitor-hdinsight.md) for details on the data you can co
### Supported metrics for Microsoft.HDInsight/clusters The following table lists the metrics available for the Microsoft.HDInsight/clusters resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)]
hdinsight Monitor Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/monitor-hdinsight.md
The following table describes a couple of alert rules for HDInsight. These alert
| Metric| Pending CPU | Whenever the maximum pending CPU is greater or less than dynamic threshold| | Activity log| Delete cluster | Whenever the Activity Log has an event with Category='Administrative', Signal name='Delete Cluster (HDInsight Cluster)'|
+For an example that shows how to create an alert, see [Azure Monitor alerts](cluster-availability-monitor-logs.md#azure-monitor-alerts).
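As a rough sketch of creating one of these alerts with the Azure CLI, the following uses a static threshold in place of the dynamic one; the metric name `PendingCPU`, the threshold, and the resource names are assumptions to adjust for your environment:

```azurecli
# Sketch: alert when the maximum pending CPU on an HDInsight cluster exceeds a static threshold.
# Cluster name, resource group, metric name, threshold, and action group ID are placeholders.
CLUSTER_ID=$(az hdinsight show --name mycluster --resource-group myrg --query id --output tsv)

az monitor metrics alert create \
  --name "hdi-pending-cpu" \
  --resource-group myrg \
  --scopes "$CLUSTER_ID" \
  --condition "max PendingCPU > 10" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action "/subscriptions/<sub>/resourceGroups/myrg/providers/microsoft.insights/actionGroups/myActionGroup"
```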
+ [!INCLUDE [horz-monitor-advisor-recommendations](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-advisor-recommendations.md)] ## Related content
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
If the script action shows a failure status in the script action history:
## Table names
-### Spark cluster
-
-The following table names are for different log types (sources) inside Spark tables.
-
-| Source number | Table name | Log types | Description |
-| | | | |
-| 1. | HDInsightAmbariCluster Alerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
-| 3. | HDInsightHadoopAnd YarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightSecurityLogs | AmbariAuditLog, AuthLog | This table contains records from the Ambari audit and authentication logs. |
-| 5. | HDInsightSparkLogs | **Head node**: JupyterLog, LivyLog, SparkThriftDriverLog **Worker node**: SparkExecutorLog, SparkDriverLog | This table contains all logs related to Spark and its related components: Livy and Jupyter. |
-| 6. | HDInsightHadoopAnd YarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-| 7. | HDInsightOozieLogs | Oozie | This table contains all logs generated from the Oozie framework. |
-
-### Interactive Query cluster
-
-The following table names are for different log types (sources) inside Interactive Query tables.
-
-| Source number | Table name | Log types | Description |
-| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-| 5. | HDInsightHiveAndLLAPLogs | **Head node**: InteractiveHiveHSILog, InteractiveHiveMetastoreLog, ZeppelinLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
-| 6. | HDInsightHiveAndLLAPmetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
-| 7. | HDInsightHiveTezAppStats | No log types |
-| 8. | HDInsightSecurityLogs | **Head node**: AmbariAuditLog, AuthLog **ZooKeeper node, worker node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
-
-### HBase cluster
-
-The following table names are for different log types (sources) inside HBase tables.
-
-| Source number | Table name | Log types | Description |
-| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No other log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
-| 2. | HDInsightAmbariSystem Metrics | No other log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightSecurityLogs | **Head node**: AmbariAuditLog, AuthLog **Worker node**: AuthLog **ZooKeeper node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
-| 5. | HDInsightHBaseLogs | **Head node**: HDFSGarbageCollectorLog, HDFSNameNodeLog **Worker node**: PhoenixServerLog, HBaseRegionServerLog, HBaseRestServerLog **ZooKeeper node**: HBaseMasterLog | This table contains logs from HBase and its related components: Phoenix and HDFS. |
-| 6. | HDInsightHBaseMetrics | No log types | This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast from the old tables, each row contains one metric. |
-| 7. | HDInsightHadoopAndYarn Metrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-
-### Hadoop cluster
-
-The following table names are for different log types (sources) inside Hadoop tables.
-
-| Source number | Table name | Log types | Description |
-| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-| 5. | HDInsightHiveAndLLAPLogs | **Head node**: HiveMetastoreLog, HiveServer2Log, WebHcatLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
-| 6. | HDInsight Hive And LLAP Metrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
-| 7. | HDInsight Security Logs | **Head node**: AmbariAuditLog, AuthLog **ZooKeeper node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
+For a complete listing of table names for different log types (sources), see [Azure Monitor Logs tables](monitor-hdinsight-reference.md#azure-monitor-logs-tables).
## Parameter syntax
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
description: Use Visual Studio to develop a custom IoT Edge module and deploy to
Previously updated : 07/13/2023 Last updated : 07/17/2024 zone_pivot_groups: iotedge-dev
Typically, you want to test and debug each module before running it within an en
Received message: 1, Body: [hello world] ```
- > [!TIP]
- > You can also use [PostMan](https://www.getpostman.com/) or other API tools to send messages instead of `curl`.
- 1. Press **Ctrl + F5** or select the stop button to stop debugging. ### Build and debug multiple modules
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support | | - | -- | - | -- | -- |
-| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) |
-| Red Hat Enterprise Linux 9 | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
-| Red Hat Enterprise Linux 8 | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | | [May 2029](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
-| Ubuntu Server 22.04 | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) | [June 2027](https://wiki.ubuntu.com/Releases) |
-| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | [April 2025](https://wiki.ubuntu.com/Releases) |
-| Windows 10/11 | ![Windows 10/11 + AMD64](./medi#prerequisites) for supported Windows OS versions. |
-| Windows Server 2019/2022 | ![Windows Server 2019/2022 + AMD64](./medi#prerequisites) for supported Windows OS versions. |
+| [Debian 11](https://www.debian.org/releases/bullseye/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) |
+| [Red Hat Enterprise Linux 9](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
+| [Red Hat Enterprise Linux 8](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | | [May 2029](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
+| [Ubuntu Server 22.04](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 20.04](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | [April 2025](https://wiki.ubuntu.com/Releases) |
+| [Windows 10/11](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows 10/11 + AMD64](./media/support/green-check.png) | | | See [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
+| [Windows Server 2019/2022](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | | See [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
> [!NOTE] > When a *Tier 1* operating system reaches its end of standard support date, it's removed from the *Tier 1* supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
The systems listed in the following table are considered compatible with Azure I
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) | | | [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) | [June 2024](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
-| [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
-| [Ubuntu 22.04 <sup>2</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 22.04 <sup>2</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Core <sup>3</sup>](https://snapcraft.io/azure-iot-edge) | | ![Ubuntu Core + AMD64](./media/support/green-check.png) | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) | | [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | | | [Yocto (Kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support | | - | -- | - | -- | -- |
-| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) |
+| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) | |
-| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
-| [Ubuntu 22.04 <sup>1</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 22.04 <sup>1</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Core <sup>2</sup>](https://snapcraft.io/azure-iot-edge) | | ![Ubuntu Core + AMD64](./media/support/green-check.png) | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) | | [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | | | [Yocto (Kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
machine-learning How To Manage Compute Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-manage-compute-session.md
One flow binds to one compute session. You can start a compute session on a flow
||| |Azure Machine Learning workspace|Contributor| |Azure Storage|Contributor (control plane) + Storage Blob Data Contributor + Storage File Data Privileged Contributor (data plane, consume flow draft in fileshare and data in blob)|
- |Azure Key Vault (when using [access policies permission model](../../key-vault/general/assign-access-policy.md))|Contributor + any access policy permissions besides **purge** operations, this is `default mode` for linked Azure Key Vault.|
+ |Azure Key Vault (when using [access policies permission model](../../key-vault/general/assign-access-policy.md))|Contributor + any access policy permissions besides **purge** operations. This is the **default mode** for the linked Azure Key Vault.|
|Azure Key Vault (when using [RBAC permission model](../../key-vault/general/rbac-guide.md))|Contributor (control plane) + Key Vault Administrator (data plane)| |Azure Container Registry|Contributor| |Azure Application Insights|Contributor|
+
+ > [!NOTE]
+ > The job submitter needs `assign` permission on the user-assigned managed identity. You can grant the `Managed Identity Operator` role, because every time a serverless compute session is created, the user-assigned managed identity is assigned to the compute.
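A minimal sketch of that role assignment; the submitter's object ID and the managed identity resource ID are placeholders:

```azurecli
# Sketch: grant the job submitter the Managed Identity Operator role on the user-assigned managed identity.
az role assignment create \
  --assignee "<submitter-object-id>" \
  --role "Managed Identity Operator" \
  --scope "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
```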
- If you choose compute instance as compute type, you can only set idle shutdown time. - As it's running on an existing compute instance the VM size is fixed and can't change in session side.
Learn full end to end code first example: [Integrate prompt flow with LLM-based
> [!NOTE]
- > The idle shutdown is one hour if you are using CLI/SDK to submit a flow run. You can go to compute page to release compute
+ > The idle shutdown time is one hour if you use the CLI/SDK to submit a flow run. You can go to the compute page to release the compute.
### Reference files outside of the flow folder Sometimes, you might want to reference a `requirements.txt` file that is outside of the flow folder. For example, you might have a complex project that includes multiple flows, and they share the same `requirements.txt` file. To do this, you can add the field `additional_includes` to the `flow.dag.yaml` file. The value of this field is a list of relative file/folder paths to the flow folder. For example, if `requirements.txt` is in the parent folder of the flow folder, you can add `../requirements.txt` to the `additional_includes` field.
machine-learning Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-api.md
__Response__
> [!TIP]
-> You can inspect the property `details.loc` to understand the location of the offending parameter and `details.input` to see the value that was passed in the request.
+> You can inspect the property `detail.loc` to understand the location of the offending parameter and `detail.input` to see the value that was passed in the request.
## Content safety
operator-nexus Concepts Nexus Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-networking.md
+
+ Title: Azure Operator Nexus - Networking concepts
+description: Get an overview of networking in Azure Operator Nexus.
++++ Last updated : 06/13/2024+++
+# Networking in Azure Operator Nexus Kubernetes
+
+An Azure Operator Nexus, or simply Operator Nexus, instance comprises compute
+and networking hardware installed at the customer premises. Multiple layers of
+physical and virtual devices provide network connectivity and routing services
+to the workloads running on this compute hardware. This document provides a
+detailed description of each of these networking layers.
+
+## Topology
+
+Here we describe the topology of hardware in an Operator Nexus instance.
++
+Customers own and manage Provider edge (PE) routers. These routers represent
+the edge of the customer's backbone network.
+
+Operator Nexus manages the customer edge (CE) routers. These routers are part
+of the Operator Nexus instance and are included in near-edge hardware
+[bill of materials][bom] (BOM). There are two CE routers in each multi-rack
+Operator Nexus instance. Each CE router has an uplink to each of the PE
+routers. The CE routers are the only Operator Nexus devices that are physically
+connected to the customer's network.
+
+Each rack of compute servers in a multi-rack Azure Operator Nexus instance has
+two top-of-rack (TOR) switches. Each TOR has an uplink to each of the CE
+routers. Each TOR is connected to each bare metal compute server in the rack
+and is configured as a simple [layer 2 switch][layer2-switch].
+
+[bom]: ./reference-operator-nexus-fabric-skus.md
+[layer2-switch]: https://en.wikipedia.org/wiki/Multilayer_switch#Layer-2_switching
+
+## Bare metal
+
+Tenant workloads running on this compute infrastructure are typically virtual
+or containerized network functions. Virtual network functions (VNFs) run as
+virtual machines (VMs) on the compute server hardware. Containerized network
+functions (CNFs) run inside containers. These containers run on VMs that
+themselves run on the compute server hardware.
+
+Network functions that provide end-user data plane services require high
+performance network interfaces that offer advanced features and high I/O rates.
++
+In near-edge multi-rack Operator Nexus instances, each bare metal compute
+server is a dual-socket machine with [Non-Uniform Memory Access][numa] (NUMA)
+architecture.
+
+A bare metal compute server in a near-edge multi-rack Azure Operator Nexus
+instance contains one dual-port network interface card (pNIC) for each NUMA
+cell. These pNICs support [Single-Root I/O Virtualization][sriov] (SR-IOV) and
+other high-performance features. Each NUMA cell is memory- and CPU-aligned with
+one pNIC.
+
+All network interfaces assigned to tenant workloads are host passthrough
+devices and use SR-IOV virtual functions (VFs) allocated from the pNIC aligned
+to the NUMA cell housing the workload VM's CPU and memory resources. This
+arrangement ensures optimal performance of the networking stack inside the VMs
+and containers that are assigned those VFs.
+
+Compute racks are deployed with a pair of Top-of-Rack (TOR) switches. Each pNIC
+on each bare metal compute server is connected to both of those TORs.
+[Multi-chassis link aggregation group][mlag] (MLAG) provides high availability
+and [link aggregation control protocol][lacp] (LACP) provides increased
+aggregate throughput for the link.
+
+Each bare metal compute server has a storage network interface that is provided
+by a bond that aggregates two *host-local* virtual functions (VFs) (as opposed
+to VM-local VFs) connected to *both* pNICs. These two VFs are aggregated in an
+active-backup bond to ensure that if one of the pNICs fails, network storage
+connectivity remains available.
+
+[sriov]: https://en.wikipedia.org/wiki/Single-root_input/output_virtualization
+[mlag]: https://en.wikipedia.org/wiki/Multi-chassis_link_aggregation_group
+[lacp]: https://www.cisco.com/c/en/us/td/docs/ios/12_2sb/feature/guide/gigeth.html
+[numa]: https://en.wikipedia.org/wiki/Non-uniform_memory_access
+
+## Logical network resources
+
+When interacting with the Operator Nexus Network Cloud API and Managed Network
+Fabric APIs, users create and modify a set of logical resources.
+
+Logical resources in the Managed Network Fabric API correspond to the networks
+and access control configuration on the underlying networking hardware (the
+TORs and CEs). Notably, `ManagedNetworkFabric.L2IsolationDomain` and
+`ManagedNetworkFabric.L3IsolationDomain` resources contain low-level switch and
+network configuration. A `ManagedNetworkFabric.L2IsolationDomain` represents a
+[virtual local area network][vlan] identifier (VLAN). A
+`ManagedNetworkFabric.L3IsolationDomain` represents a
+[virtual routing and forwarding][vrf] configuration (VRF) on the CE routers.
+Read about the [concept of an Isolation Domain][isd].
+
+Logical resources in the Network Cloud API correspond to compute
+infrastructure. There are resources for physical racks and bare metal hardware.
+Likewise, there are resources for Kubernetes clusters and virtual machines that
+run on that hardware and the logical networks that connect them.
+
+`NetworkCloud.L2Network`, `NetworkCloud.L3Network`, and
+`NetworkCloud.TrunkedNetwork` all represent workload networks, meaning traffic
+on these networks is meant for tenant workloads.
+
+A `NetworkCloud.L2Network` represents a layer-2 network and contains little
+more than a link to a `ManagedNetworkFabric.L2IsolationDomain`. This
+L2IsolationDomain contains a VLAN identifier and a maximum transmission unit
+(MTU) setting.
+
+A `NetworkCloud.L3Network` represents a layer-3 network and contains a VLAN
+identifier, information about IP address assignment for endpoints on the
+network and a link to a `ManagedNetworkFabric.L3IsolationDomain`.
+
+> [!NOTE]
+> Why does a `NetworkCloud.L3Network` resource contain a VLAN identifier?
+> Aren't VLANs a layer-2 concept?
+>
+> Yes, yes they are! The reason is that the `NetworkCloud.L3Network` must be
+> able to refer to a
+> specific [`ManagedNetworkFabric.InternalNetwork`][internal-net].
+> `ManagedNetworkFabric.InternalNetwork`s are created within a specific
+> `ManagedNetworkFabric.L3IsolationDomain` and are given a VLAN identifier.
+> Therefore, in order to reference a specific
+> `ManagedNetworkFabric.InternalNetwork`, the `NetworkCloud.L3Network` must
+> contain both an L3IsolationDomain identifier and a VLAN identifier.
++
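To make the relationship concrete, here's a hedged sketch of creating an L3 network with the `az networkcloud` CLI extension. All names, IDs, and prefixes are placeholders, and the parameter names are indicative only; verify them with `az networkcloud l3network create --help`:

```azurecli
# Sketch: create an L3 network that references an L3 isolation domain and a VLAN.
# All names, resource IDs, and address prefixes are placeholders.
az networkcloud l3network create \
  --name "my-l3-network" \
  --resource-group "myResourceGroup" \
  --extended-location name="<custom-location-resource-id>" type="CustomLocation" \
  --l3-isolation-domain-id "<l3-isolation-domain-resource-id>" \
  --vlan 1001 \
  --ip-allocation-type "DualStack" \
  --ipv4-connected-prefix "10.100.0.0/24" \
  --ipv6-connected-prefix "fd00:100::/64"
```
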
+Logical *network resources* in the Network Cloud API such as
+`NetworkCloud.L3Network` *reference* logical resources in the Managed Network
+Fabric API and in doing so provide a logical connection between the physical
+compute infrastructure and the physical network infrastructure.
+
+When creating a Nexus Virtual Machine, you may specify zero or more L2, L3, and
+Trunked Networks in the Nexus Virtual Machine's
+[`NetworkAttachments`][vm-netattach]. When creating a Nexus Kubernetes Cluster,
+you may specify zero or more L2, L3, and Trunked Networks in the Nexus
+Kubernetes Cluster's
+[`NetworkConfiguration.AttachedNetworkConfiguration`][attachednetconf] field.
+AgentPools are collections of similar Kubernetes worker nodes within a Nexus
+Kubernetes Cluster. You can configure each Agent Pool's attached L2, L3, and
+Trunked Networks in the AgentPool's
+[`AttachedNetworkConfiguration`][attachednetconf] field.
+
+You can share networks across standalone Nexus Virtual Machines and Nexus
+Kubernetes Clusters. This composability allows you to stitch together CNFs and
+VNFs working in concert across the same logical networks.
++
+The diagram shows an example of a Nexus Kubernetes cluster with two agent pools
+and a standalone Nexus Virtual Machine connected to different workload
+networks. Agent Pool "AP1" has no extra network configuration and therefore it
+inherits the KubernetesCluster's network information. Also note that all
+Kubernetes Nodes and all standalone Nexus Virtual Machines are configured to
+connect to the same Cloud Services Network. Finally, Agent Pool "AP2" and the
+stand-alone VM are configured to connect to a "Shared L3 Network".
+
+### The CloudServicesNetwork
+
+Nexus Virtual Machines and Nexus Kubernetes Clusters always reference something
+called the "Cloud Services Network" (CSN). The CSN is a special network used
+for traffic between on-premises workloads and a set of external or Azure-hosted
+endpoints.
+
+Traffic on the CloudServicesNetwork is routed through a proxy, where egress
+traffic is controlled via the use of an allowlist. Users can tune this
+allowlist [using the Network Cloud API][csnapi].
+
+[csnapi]: ./quickstarts-tenant-workload-prerequisites.md#create-a-cloud-services-network
+
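A hedged sketch of supplying one extra allowlist entry when creating the Cloud Services Network follows; the category value, endpoint, and parameter shape are assumptions to verify with `az networkcloud cloudservicesnetwork create --help`:

```azurecli
# Sketch: create a Cloud Services Network with one extra egress endpoint on its allowlist.
# The category, domain name, and all resource names are placeholders.
az networkcloud cloudservicesnetwork create \
  --name "my-cloud-services-network" \
  --resource-group "myResourceGroup" \
  --extended-location name="<custom-location-resource-id>" type="CustomLocation" \
  --additional-egress-endpoints '[{
      "category": "my-endpoint-category",
      "endpoints": [{ "domainName": "myregistry.azurecr.io", "port": 443 }]
    }]'
```
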
+<!--
+TODO(jaypipes): Expand and explain this more. There's no good information about
+CSN in our public docs.
+
+TODO(jaypipes): A diagram showing CSN traffic flow and proxy.
+-->
+
+### The CNI Network
+
+When creating a Nexus Kubernetes Cluster, you provide the resource identifier
+of a `NetworkCloud.L3Network` in the `NetworkConfiguration.CniNetworkId` field.
+
+This "CNI network", sometimes referred to as "DefaultCNI Network", specifies
+the layer-3 network that provides IP addresses for Kubernetes Nodes in the
+Nexus Kubernetes cluster.
++
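As a heavily abbreviated sketch, this is roughly where the identifier is supplied during cluster creation; the `--network-configuration` shorthand keys and all IDs are assumptions, and most required parameters are omitted, so check `az networkcloud kubernetescluster create --help` for the full shape:

```azurecli
# Abbreviated sketch: supply the CNI network and Cloud Services Network when creating the cluster.
# Most required parameters (VM sizes, node counts, SSH keys, Kubernetes version, and so on) are omitted.
az networkcloud kubernetescluster create \
  --name "myNexusK8sCluster" \
  --resource-group "myResourceGroup" \
  --extended-location name="<custom-location-resource-id>" type="CustomLocation" \
  --network-configuration \
      cloud-services-network-id="<cloud-services-network-resource-id>" \
      cni-network-id="<l3-network-resource-id>"
```
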
+The diagram shows the relationships between some of the Network Cloud, Managed
+Network Fabric, and Kubernetes logical resources. In the diagram, a
+`NetworkCloud.L3Network` is a logical resource in the Network Cloud API that
+represents a layer 3 network. The `NetworkCloud.KubernetesCluster` resource has
+a field `networkConfiguration.cniNetworkId` that contains a reference to the
+`NetworkCloud.L3Network` resource.
+
+The `NetworkCloud.L3Network` resource is associated with a single
+`ManagedNetworkFabric.InternalNetwork` resource via its `l3IsolationDomainId`
+and `vlanId` fields. The `ManagedNetworkFabric.L3IsolationDomain` resource
+contains one or more `ManagedNetworkFabric.InternalNetwork` resources, keyed by
+`vlanId`. When the user creates the `NetworkCloud.KubernetesCluster` resource,
+one or more `NetworkCloud.AgentPool` resources are created.
+
+Each of these `NetworkCloud.AgentPool` resources comprises one or more virtual
+machines. A Kubernetes `Node` resource represents each of those virtual
+machines. These Kubernetes `Node` resources must get an IP address and the
+Container Networking Interface (CNI) plugins on the virtual machines grab an IP
+address from the pool of IP addresses associated with the
+`NetworkCloud.L3Network`. The `NetworkCloud.KubernetesCluster` resource
+references the `NetworkCloud.L3Network` via its `cniNetworkId` field. The
+routing and access rules for those node-level IP addresses are contained in the
+`ManagedNetworkFabric.L3IsolationDomain`. The `NetworkCloud.L3Network` refers
+to the `ManagedNetworkFabric.L3IsolationDomain` via its `l3IsolationDomainId`
+field.
+
+[netfabric]: ./concepts-network-fabric.md
+[vlan]: https://en.wikipedia.org/wiki/VLAN
+[vrf]: https://en.wikipedia.org/wiki/Virtual_routing_and_forwarding
+[isd]: ./howto-configure-isolation-domain.md
+[internal-net]: ./howto-configure-isolation-domain.md#create-internal-network
+[vm-netattach]: https://learn.microsoft.com/rest/api/networkcloud/virtual-machines/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#networkattachment
+[attachednetconf]: https://learn.microsoft.com/rest/api/networkcloud/kubernetes-clusters/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#attachednetworkconfiguration
+
+## Operator Nexus Kubernetes networking
+
+There are three logical layers of networking in Kubernetes:
+
+* Node networking layer
+* Pod networking layer
+* Service networking layer
+
+The *Node networking layer* provides connectivity between the Kubernetes
+control plane and the kubelet worker node agent.
+
+The *Pod networking layer* provides connectivity between containers (Pods)
+running inside the Nexus Kubernetes cluster and connectivity between a Pod and
+one or more tenant-defined networks.
+
+The *Service networking layer* provides load balancing and ingress
+functionality for sets of related Pods.
+
+### Node networking
+
+Operator Nexus Kubernetes clusters house one or more containerized network
+functions (CNFs) that run on virtual machines (VMs). A Kubernetes *Node*
+represents a single VM. Kubernetes Nodes may be either *Control Plane* Nodes or
+*Worker* Nodes. Control Plane Nodes contain management components for the
+Kubernetes Cluster. Worker Nodes house tenant workloads.
+
+Groups of Kubernetes Worker Nodes are called *Agent Pools*. Agent Pools are an
+Operator Nexus construct, *not* a Kubernetes construct.
++
+Each bare metal compute server in an Operator Nexus instance has a
+[switchdev][switchdev] that is affined to a single NUMA cell on the bare metal
+server. The switchdev houses a set of SR-IOV VF representor ports that provide
+connectivity to a set of bridge devices that are used to house routing tables
+for different networks.
+
+In addition to the `defaultcni` interface, Operator Nexus establishes a
+`cloudservices` network interface on every Node. The `cloudservices` network
+interface is responsible for routing traffic destined for external (to the
+customer's premises) endpoints. The `cloudservices` network interface
+corresponds to the `NetworkCloud.CloudServicesNetwork` API resource that the
+user defines before creating a Nexus Kubernetes cluster. The IP address
+assigned to the `cloudservices` network interface is a
+[link-local address][lladdr], ensuring that external network traffic always
+traverses this specific interface.
+
+In addition to the `defaultcni` and `cloudservices` network interfaces,
+Operator Nexus creates one or more network interfaces on each Kubernetes Node
+that correspond to `NetworkCloud.L2Network`, `NetworkCloud.L3Network`, and
+`NetworkCloud.TrunkedNetwork` associations with the Nexus Kubernetes cluster
+or AgentPool.
+
+Only Agent Pool VMs have these extra network interfaces. Control Plane VMs
+only have the `defaultcni` and `cloudservices` network interfaces.
+
+#### Node IP Address Management (IPAM)
++
+Nodes in an Agent Pool receive an IP address from a pool of IP addresses
+associated with the `NetworkCloud.L3Network` resource referred to in the
+`NetworkCloud.KubernetesCluster` resource's `networkConfiguration.cniNetworkId`
+field. This `defaultcni` network is the default gateway for all Pods that run
+on that Node and serves as the default network for east-west Pod to Pod
+communication within the Nexus Kubernetes cluster.
+
+[lladdr]: https://en.wikipedia.org/wiki/Link-local_address
+
+### Pod networking
+
+Kubernetes Pods are collections of one or more container images that run in a
+[Linux namespace][linux-ns]. This Linux namespace isolates the container's
+processes and resources from other containers and processes on the host. For
+Nexus Kubernetes clusters, this "host" is a VM that is represented as a
+Kubernetes Worker Node.
+
+Before creating an Operator Nexus Kubernetes Cluster, users first create a set
+of resources that represent the virtual networks from which tenant workloads
+are assigned addresses. These virtual networks are then referenced in the
+`cniNetworkId`, `cloudServicesNetworkId`, `agentPoolL2Networks`,
+`agentPoolL3Networks`, and `agentPoolTrunkedNetworks` fields when creating the
+Operator Nexus Kubernetes Cluster.
+
+Pods can run on any compute server in any rack in an Operator Nexus instance.
+By default all Pods in a Nexus Kubernetes cluster can communicate with each
+other over what is known as the [*pod network*][podnetwork]. Several
+[Container Networking Interface][cni] (CNI) plugins that are installed in each
+Nexus Kubernetes Worker Node manage the Pod networking.
+
+#### Extra Networks
+
+When creating a Pod in a Nexus Kubernetes Cluster, you declare any extra
+networks that the Pod should attach to by [specifying][specify-net-anno] a
+`k8s.v1.cni.cncf.io/networks` annotation. The annotation's value is a
+comma-delimited list of network names. These network names correspond to names
+of any Trunked, L3 or L2 Networks associated with the Nexus Kubernetes Cluster
+or Agent Pool.
+
+Operator Nexus configures the Agent Pool VM with
+[NetworkAttachmentDefinition][nad] (NAD) files that contain network
+configuration for a single extra network.
+
+For each Trunked Network listed in the Pod's associated networks, the Pod gets
+a single network interface. The workload is responsible for sending raw tagged
+traffic through this interface or constructing tagged interfaces on top of the
+network interface.
+
+For each L2 Network listed in the Pod's associated networks, the Pod gets a
+single network interface. The workload is responsible for its own static MAC
+addressing.
+
+#### Pod IP Address Management
++
+When you create a Nexus Kubernetes cluster, you specify the IP address ranges
+for the pod network in the `podCidrs` field. When Pods launch, the CNI plugin
+establishes an `eth0@ifXX` interface in the Pod and assigns an IP address from
+a range of IP addresses in that `podCidrs` field.
+
+For L3 Networks, if the network has been configured to use Nexus IPAM, the
+Pod's network interface associated with the L3 Network receives an IP address
+from the IP address range (CIDR) configured for that network. If the L3 Network
+isn't configured to use Nexus IPAM, the workload is responsible for statically
+assigning an IP address to the Pod's network interface.
+
+#### Routing
+
+Inside each Pod, the `eth0` interface's traffic traverses a
+[virtual ethernet device][veth] (veth) that connects to a
+[switchdev][switchdev] on the host (the VM) that houses the `defaultcni`,
+`cloudservices`, and other Node-level interfaces.
+
+The `eth0` interface inside a Pod has a simple route table that effectively
+uses the worker node VM's route table for any of the following traffic.
++
+* Pod to pod traffic: Traffic destined for an IP in the `podCidrs` address
+ ranges flows to the switchdev on the host VM and over the Node-level
+ `defaultcni` interface where it is routed to the appropriate destination agent
+ pool VM's IP address.
+* L3 OSDevice network traffic: Traffic destined for an IP in an associated L3
+ Network with the `OSDevice` plugin type flows to the switchdev on the host VM
+ and over the Node-level interface associated with that L3 Network.
+* All other traffic passes to the default gateway in the Pod, which routes to the
+ Node-level `cloudservices` interface. Egress rules configured on the
+ CloudServicesNetwork associated with the Nexus Kubernetes cluster then
+ determine how the traffic should be routed.
++
+Additional network interfaces inside a Pod will use the Pod's route table to
+route traffic to additional L3 Networks that use the `SRIOV` and `DPDK` plugin
+types.
+
+[linux-ns]: https://en.wikipedia.org/wiki/Linux_namespaces
+[podnetwork]: https://kubernetes.io/docs/concepts/cluster-administration/networking/
+[cni]: https://www.cni.dev/
+[veth]: https://www.man7.org/linux/man-pages/man4/veth.4.html
+[switchdev]: https://www.kernel.org/doc/html/latest/networking/switchdev.html
+[specify-net-anno]: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation
+[nad]: https://github.com/k8snetworkplumbingwg/multi-net-spec/blob/master/v1.3/%5Bv1.3%5D%20Kubernetes%20Network%20Custom%20Resource%20Definition%20De-facto%20Standard.pdf
+
+<!--
+### Service network configuration
+
+TODO(jaypipes)
+
+## Default Routing and BGP Configuration
+
+CNFs are typically a collection of Kubernetes Pods that are connected to one or
+more virtual networks. Those virtual networks are routed across the physical
+network infrastructure via [Border Gateway Protocol][bgp] (BGP).
+
+TODO(jaypipes)
+
+[bgp]: https://en.wikipedia.org/wiki/Border_Gateway_Protocol
+
+-->
operator-nexus Concepts Nexus Workload Network Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-workload-network-types.md
+
+ Title: "Azure Operator Nexus: Nexus Workload Network"
+description: Introduction to Workload networks core concepts.
++++ Last updated : 04/25/2024+++
+# Nexus workload Network Overview
+
+This article describes the core concepts of Nexus workload networks and introduces the options for configuring them with critical properties.
+A Nexus workload network enables applications to connect with the on-premises network and with other services over the Azure public cloud. It supports operator use cases with
+standard industry technologies that are reliable, predictable, and familiar to operators and network equipment providers.
+
+Nexus offers several top-level API resources that categorically represent different types of networks with different input expectations.
+These network types represent logical attachments and also carry Layer 3 information. Essentially, they encapsulate how the customer
+wants those networks to be exposed within their cluster.
+
+## Nexus workload network types
+
+ * L3Network: The Nexus workload L3Network resource can be shared and reused across standalone virtual machines and Nexus Kubernetes clusters. Its primary purpose is to define a network that supports Layer 3 properties, which are coordinated between the virtualized workloads and the integrated Nexus Managed Fabric L3IsolationDomain. Additionally, it provides DualStack allocation capabilities (both IPv4 and IPv6) and directly references Azure Managed Network Fabric resources representing the VRF and VLAN associated with this network
+
+ * L2Network: The Nexus workload L2Network resource can be shared and reused across standalone virtual machines and Nexus AKS clusters. Its primary purpose is to grant direct access to a specific Nexus Managed Fabric L2IsolationDomain, enabling isolated network attachment within the Nexus Cluster. Customers utilize L2Network resources when they want the fabric to carry a VLAN across workloads without participating in Layer 3 on that network.
+
+ * TrunkedNetwork: The Nexus workload TrunkedNetwork resource allows association with multiple IsolationDomains, enabling customers to create a custom VLAN trunk range that workloads can access. The TrunkedNetwork defines the allowable VLAN set that workloads can directly tag traffic on. Tagged traffic for VLANs not specified in the TrunkedNetwork resource will be dropped. This custom VLAN trunk range can span across the same IsolationDomain or multiple L3IsolationDomains and/or L2IsolationDomains.
+
+## Nexus workload network plugins
+
+A network plugin configures how applications use the underlying networks when attaching networks to application VMs or Pods.
+The following table lists the plugin types supported for the different network types.
+
+| Plugin Name | Available Network Types |
+|||
+|SRIOV|L2Network, L3Network, TrunkedNetwork|
+|DPDK|L2Network, L3Network, TrunkedNetwork|
+|MACVLAN|L2Network, L3Network, TrunkedNetwork|
+|IPVLAN|L3Network, TrunkedNetwork|
+|OSDevice|L2Network, L3Network, TrunkedNetwork|
+
+ * SRIOV: The SRIOV plugin generates a network attachment definition named after the corresponding network resource. This interface is integrated into a sriov-dp-config resource,
+which is linked to by the network attachment definition. If a network is connected to the cluster multiple times, all interfaces will be available for scheduling via the network
+attachment definition. No IP assignment is made to this type of interface within the node operating system.
+
+ * DPDK: Configured specifically for DPDK workloads, the DPDK plugin type creates a network attachment definition that mirrors the associated network resource. This interface is
+placed within a sriov-dp-config resource, which the network attachment definition references. Multiple connections of the same network to the cluster make all interfaces schedulable
+through the network attachment definition. Depending on the hardware of the platform, the interface might be linked to a specific driver to support DPDK processing. Like SRIOV, this
+interface doesn't receive an IP assignment within the node operating system.
+
+ * OSDevice: The OSDevice plugin type is tailored for direct use within the node operating system, rather than Kubernetes. It acquires a network configuration that is visible and
+functional within the node's operating system network namespace. This plugin is suitable for instances where direct communication over this network from the node's OS is required.
+
+ * IPVLAN: The IPVLAN plugin type creates a network attachment definition named according to the associated network resource. This interface allows for the efficient
+routing of traffic in environments where network isolation is required without the need for multiple physical network interfaces. It operates by assigning multiple IP addresses to a
+single network interface, each behaving as if it is on a separate physical device. Despite the separation at the IP layer, this type doesn't handle separate MAC addresses, and it doesn't provide IP assignments within the node operating system.
+
+ * MACVLAN: The MACVLAN plugin type generates a network attachment definition reflective of the linked network resource. This interface type creates multiple virtual network interfaces,
+each with a unique MAC address over a single physical network interface. It's useful in scenarios where applications running in containers need to appear as physically
+separate on the network for security or compliance reasons. Each interface behaves as if it's directly connected to the physical network, which allows for IP assignments within the
+node operating system.
+
+## Nexus Network IPAM
+
+Nexus Kubernetes offers IP Address Management (IPAM) solutions in various forms. For standalone virtual machines (VNF workloads) or Nexus Kubernetes nodes (CNF workloads) connected to a Nexus network that supports Layer 3,
+an IPAM system that spans multiple clusters is employed. This system ensures that the network interfaces inside the VM operating systems receive unique IP addresses across both VMs and Nexus Kubernetes
+nodes. Additionally, when these networks are used for containerized workloads, the Network Attachment Definitions (NADs) automatically generated by the Nexus Kubernetes cluster incorporate this IPAM feature.
+This same cross-cluster IPAM capability guarantees that containers connected to the same networks receive unique IP addresses as well.
+
+## Nexus Relay
+
+Nexus Kubernetes utilizes the [Arc](../azure-arc/overview.md) [Azure Relay](../azure-relay/relay-what-is-it.md) functionality by integrating the Nexus Kubernetes Hybrid Relay infrastructure in each region where the Nexus Cluster service operates.
+This setup uses dedicated Nexus relay infrastructure within Nexus-owned subscriptions, ensuring that Nexus Kubernetes cluster Arc connectivity doesn't rely on shared public relay networks.
+
+Each Nexus kubernetes cluster and node instance is equipped with its own relay, and customers can manage Network ACL rules through the Nexus Cluster Azure Resource Manager APIs. These rules determine which networks can access both the az connectedk8s proxy and az ssh for their Nexus Arc resources within that specific on-premises Nexus Cluster. This feature enhances operator security by adhering to security protocols established after previous Arc/Relay security incidents, requiring remote Arc connectivity to have customer-defined network filters or ACLs.
+
operator-nexus Howto Azure Operator Nexus Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-azure-operator-nexus-prerequisites.md
# Operator Nexus Azure resources prerequisites
-To get started with Operator Nexus, you need to create a Network Fabric Controller (NFC) and then a Cluster Manager (CM)
-in your target Azure region.
+To get started with Operator Nexus, you need to create a Network Fabric Controller (NFC) and then a Cluster Manager (CM) in your target Azure region.
Each NFC is associated with a CM in the same Azure region and your subscription. You need to complete the prerequisites before you can deploy the first Operator Nexus NFC and CM pair. In subsequent deployments of Operator Nexus, you'll only need to create the NFC and CM after reaching the [quota](./reference-limits-and-quotas.md#network-fabric) of supported Operator Nexus instances.
-## Resource Provider Registration
--- Permit access to the necessary Azure Resource Providers for the Azure Subscription for Operator Nexus resources:
- - az provider register --namespace Microsoft.NetworkCloud
- - az provider register --namespace Microsoft.ManagedNetworkFabric
- - az provider register --namespace Microsoft.Compute
- - az provider register --namespace Microsoft.ContainerService
- - az provider register --namespace Microsoft.ExtendedLocation
- - az provider register --namespace Microsoft.HybridCompute
- - az provider register --namespace Microsoft.HybridConnectivity
- - az provider register --namespace Microsoft.HybridContainerService
- - az provider register --namespace Microsoft.HybridNetwork
- - az provider register --namespace Microsoft.Insights
- - az provider register --namespace Microsoft.Keyvault
- - az provider register --namespace Microsoft.Kubernetes
- - az provider register --namespace Microsoft.KubernetesConfiguration
- - az provider register --namespace Microsoft.ManagedIdentity
- - az provider register --namespace Microsoft.Network
- - az provider register --namespace Microsoft.OperationalInsights
- - az provider register --namespace Microsoft.OperationsManagement
- - az provider register --namespace Microsoft.ResourceConnector
- - az provider register --namespace Microsoft.Resources
- - az provider register --namespace Microsoft.Storage
+## Install CLI Extensions and sign-in to your Azure subscription
+
+Install latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+### Azure subscription sign-in
+
+```azurecli
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+ az account show
+```
+
+>[!NOTE]
+>Your account must have permissions to read/write/publish in the subscription
+
+## Resource Provider registration
+
+Ensure that the Azure subscription used for Operator Nexus resources has access to the necessary Azure resource providers. Register the following providers:
+
+```azurecli
+az provider register --namespace Microsoft.Compute
+az provider register --namespace Microsoft.ContainerService
+az provider register --namespace Microsoft.ExtendedLocation
+az provider register --namespace Microsoft.HybridCompute
+az provider register --namespace Microsoft.HybridConnectivity
+az provider register --namespace Microsoft.HybridContainerService
+az provider register --namespace Microsoft.HybridNetwork
+az provider register --namespace Microsoft.Insights
+az provider register --namespace Microsoft.Keyvault
+az provider register --namespace Microsoft.Kubernetes
+az provider register --namespace Microsoft.KubernetesConfiguration
+az provider register --namespace Microsoft.ManagedIdentity
+az provider register --namespace Microsoft.ManagedNetworkFabric
+az provider register --namespace Microsoft.Network
+az provider register --namespace Microsoft.NetworkCloud
+az provider register --namespace Microsoft.OperationalInsights
+az provider register --namespace Microsoft.OperationsManagement
+az provider register --namespace Microsoft.ResourceConnector
+az provider register --namespace Microsoft.Resources
+az provider register --namespace Microsoft.Storage
+```
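
Provider registration is asynchronous, so it can be worth confirming that a namespace has finished registering before you continue. This is a minimal sketch for a single namespace, using the standard `az provider show` command (repeat it for the other namespaces as needed):

```azurecli
# Shows "Registered" once the namespace registration has completed
az provider show --namespace Microsoft.NetworkCloud --query registrationState --output tsv
```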
+
+## EncryptionAtHost feature registration
+You must enable the [EncryptionAtHost](/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli) feature for your subscription. Use the following steps:
+
+### Register the EncryptionAtHost feature
+
+Run the following command to register the feature for your subscription:
+
+```azurecli
+az feature register --namespace Microsoft.Compute --name EncryptionAtHost
+```
+
+### Verify the registration state
+
+Confirm that the registration state is **Registered** by using the following command before you try out the feature. Registration might take a few minutes.
+
+```azurecli
+az feature show --namespace Microsoft.Compute --name EncryptionAtHost
+```
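
If you only want the state field rather than the full JSON payload, the same command can be narrowed with the global `--query` argument; this is a minimal sketch:

```azurecli
# Prints just the registration state, for example "Registered" or "Registering"
az feature show --namespace Microsoft.Compute --name EncryptionAtHost --query properties.state --output tsv
```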
+### Register the resource provider
+
+```azurecli
+az provider register --namespace Microsoft.Compute
+```
+
+Ensure that the registration state is **Registered**.
## Dependent Azure resources setup
In subsequent deployments of Operator Nexus, you'll only need to create the NFC
- Azure Storage supports blobs and files accessible from anywhere in the world over HTTP or HTTPS - this storage isn't for user/consumer data.
-## Install CLI Extensions and sign-in to your Azure subscription
-
-Install latest version of the
-[necessary CLI extensions](./howto-install-cli-extensions.md).
-
-### Azure subscription sign-in
-
-```azurecli
- az login
- az account set --subscription $SUBSCRIPTION_ID
- az account show
-```
-
->[!NOTE]
->Your account must have permissions to read/write/publish in the subscription
- ## Create steps - Step 1: [Create Network Fabric Controller](./howto-configure-network-fabric-controller.md)
postgresql Best Practices Migration Service Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/best-practices-migration-service-postgresql.md
Title: Best practices to migrate into Flexible Server
-description: Best practices for a seamless migration into Azure Database for PostgreSQL, including premigration validation, target server configuration, migration timeline, and migration speed benchmarking.
+description: Best practices for migration into Azure Database for PostgreSQL, including premigration validation, target server configuration, migration timeline, and migration speed benchmarking.
This article explains common pitfalls encountered and best practices to ensure a
## Premigration validation
-As a first step in the migration, run the premigration validation before you perform a migration. You can use the **Validate** and **Validate and Migrate** options on the migration setup page. Premigration validation conducts thorough checks against a predefined rule set. The goal is to identify potential problems and provide actionable insights for remedial actions. Keep running premigration validation until it results in a **Succeeded** state. Select [premigration validations](concepts-premigration-migration-service.md) to know more.
+As a first step in the migration, run the premigration validation before you perform a migration. You can use the **Validate** and **Validate and Migrate** options on the migration **Setup** page. Premigration validation conducts thorough checks against a predefined rule set. The goal is to identify potential problems and provide actionable insights for remedial actions. Keep running premigration validation until it results in a **Succeeded** state. To learn more, see [Premigration validations](concepts-premigration-migration-service.md).
-## Target Flexible server configuration
+## Target Flexible Server configuration
-During the initial base copy of data, multiple insert statements are executed on the target, which generates WALs (Write Ahead Logs). Until these WALs are archived, the logs consume storage at the target and the storage required by the database.
+During the initial base copy of data, multiple insert statements are executed on the target, which generates write-ahead logs (WALs). Until these WALs are archived, the logs consume storage at the target and the storage required by the database.
-To calculate the number, sign in to the source instance and execute this command for all the Database(s) to be migrated:
+To calculate the number, sign in to the source instance and run this command for all the databases to be migrated:
`SELECT pg_size_pretty( pg_database_size('dbname') );`
-It's advisable to allocate sufficient storage on the Flexible server, equivalent to 1.25 times or 25% more storage than what is being used per the output to the command above. [Storage Autogrow](../../flexible-server/how-to-auto-grow-storage-portal.md) can also be used.
+We recommend that you allocate sufficient storage on the flexible server, equivalent to 1.25 times or 25% more storage than what's being used per the output to the preceding command. You can also use [Storage Autogrow](../../flexible-server/how-to-auto-grow-storage-portal.md).
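
If the source instance hosts several databases, you can sum their sizes in a single query instead of running the command per database. This is a minimal sketch against the standard `pg_database` catalog:

```sql
-- Approximate total size of all user databases on the source instance
SELECT pg_size_pretty(SUM(pg_database_size(datname)))
FROM pg_database
WHERE datistemplate = false;
```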
> [!IMPORTANT]
-> Storage size can't be reduced in manual configuration or Storage Autogrow. Each step in the Storage configuration spectrum doubles in size, so estimating the required storage beforehand is prudent.
+> Storage size can't be reduced in manual configuration or Storage Autogrow. Each step in the storage configuration spectrum doubles in size, so estimating the required storage beforehand is prudent.
-The quickstart to [Create an Azure Database for PostgreSQL flexible server using the portal](../../flexible-server/quickstart-create-server-portal.md) is an excellent place to begin. [Compute and storage options in Azure Database for PostgreSQL - Flexible Server](../../flexible-server/concepts-compute-storage.md) also gives detailed information about each server configuration.
+The quickstart to [create an Azure Database for PostgreSQL - Flexible Server instance by using the portal](../../flexible-server/quickstart-create-server-portal.md) is an excellent place to begin. For more information about each server configuration, see [Compute and storage options in Azure Database for PostgreSQL - Flexible Server](../../flexible-server/concepts-compute-storage.md).
## Migration timeline
-Each migration has a maximum lifetime of seven days (168 hours) once it starts and will time out after seven days. You can complete your migration and application cutover once the data validation and all checks are complete to avoid the migration from timing out. In Online migrations, after the initial base copy is complete, the cutover window lasts three days (72 hours) before timing out. In offline migrations, the applications should stop writing to the Database to prevent data loss. Similarly, for Online migration, keep traffic low throughout the migration.
+Each migration has a maximum lifetime of seven days (168 hours) after it starts, and it times out after seven days. You can complete your migration and application cutover after the data validation and all checks are complete to avoid the migration from timing out. In online migrations, after the initial base copy is complete, the cutover window lasts three days (72 hours) before timing out. In offline migrations, the applications should stop writing to the database to prevent data loss. Similarly, for online migration, keep traffic low throughout the migration.
-Most non-prod servers (dev, UAT, test, staging) are migrated using offline migrations. Since these servers have less data than the production servers, the migration completes fast. For production server migration, you need to know the time it would take to complete the migration to plan for it in advance.
+Most nonproduction servers (dev, UAT, test, and staging) are migrated by using offline migrations. Because these servers have less data than the production servers, the migration is fast. For production server migration, you need to know the time it would take to complete the migration to plan for it in advance.
-The time taken for a migration to complete depends on several factors. It includes the number of databases, size, number of tables inside each database, number of indexes, and data distribution across tables. It also depends on the SKU of the target server and the IOPS available on the source instance and target server. Given the many factors that can affect the migration time, it's hard to estimate the total time for the migration to complete. The best approach would be to perform a test migration with your workload.
+The time taken for a migration to complete depends on several factors. It includes the number of databases, size, number of tables inside each database, number of indexes, and data distribution across tables. It also depends on the SKU of the target server and the IOPS available on the source instance and target server. With so many factors that can affect the migration time, it's hard to estimate the total time for a migration to complete. The best approach is to perform a test migration with your workload.
-The following phases are considered for calculating the total downtime to perform production server migration.
+The following phases are considered for calculating the total downtime to perform production server migration:
-- **Migration of PITR** - The best way to get a good estimate on the time taken to migrate your production database server would be to take a point-in time restore of your production server and run the offline migration on this newly restored server.
+- **Migration of PITR**: The best way to get a good estimate of the time taken to migrate your production database server is to take a point-in-time restore (PITR) of your production server and run the offline migration on this newly restored server.
+- **Migration of buffer**: After you finish the preceding step, you can plan for actual production migration during a time period when application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, consider doing another test by using the PITR server. But for most servers, the size increase shouldn't be significant enough.
+- **Data validation**: After the migration is finished for the production server, you need to verify if the data in the flexible server is an exact copy of the source instance. You can use open-source or third-party tools or you can do the validation manually. Prepare the validation steps you want to do before the actual migration. Validation can include the following checks (a sample row-count query appears at the end of this section):
-- **Migration of Buffer** - After completing the above step, you can plan for actual production migration during a time period when the application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, you can consider doing another test using the PITR server. But for most servers the size increase shouldn't be significant enough.
+ - Row count match for all the tables involved in the migration.
+ - Matching counts for all the database objects (tables, sequences, extensions, procedures, and indexes).
+ - Comparing maximum or minimum IDs of key application-related columns.
-- **Data Validation** - Once the migration is completed for the production server, you need to verify if the data in the flexible server is an exact copy of the source instance. Customers can use open-source/third-party tools or can do the validation manually. Prepare the validation steps you would like to do before the actual migration. Validation can include:
-
-- Row count match for all the tables involved in the migration.
+ > [!NOTE]
+ > The size of the databases isn't the right metric for validation. The source instance might have bloat or dead tuples, which can bump up the size of the source instance. It's normal to have size differences between source instances and target servers. An issue in the first three validation steps indicates a problem with the migration.
-- Matching counts for all the database objects (tables, sequences, extensions, procedures, indexes)
+- **Migration of server settings**: Any custom server parameters, firewall rules (if applicable), tags, and alerts must be manually copied from the source instance to the target.
+- **Changing connection strings**: The application should change its connection strings to a flexible server after successful validation. This activity is coordinated with the application team to change all the references of connection strings pointing to the source instance. In the flexible server, the user parameter can be used in the **user=username** format in the connection string.
-- Comparing max or min IDs of key application-related columns
+For example: `psql -h myflexserver.postgres.database.azure.com -U user1 -d db1`
- > [!NOTE]
- > The size of databases needs to be the right metric for validation. The source instance might have bloats/dead tuples, which can bump up the size of the source instance. It's completely normal to have size differences between source instances and target servers. If there's an issue in the first three steps of validation, it indicates a problem with the migration.
--- **Migration of server settings** - Any custom server parameters, firewall rules (if applicable), tags, and alerts must be manually copied from the source instance to the target.--- **Changing connection strings** - The application should change its connection strings to a flexible server after successful validation. This activity is coordinated with the application team to change all the references of connection strings pointing to the source instance. In the flexible server, the user parameter can be used in the **user=username** format in the connection string.-
-For example: psql -h **myflexserver**.postgres.database.azure.com -u user1 -d db1
-
-While a migration often runs without a hitch, it's good practice to plan for contingencies if more time is required for debugging or if a migration needs to be restarted.
+Although a migration often runs without any problems, it's good practice to plan for contingencies if more time is required for debugging or if a migration needs to be restarted.
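
For the row-count check in the data validation step, one approach is to compare the planner's live-tuple estimates on the source and target, and then follow up with exact `count(*)` queries for any tables that differ. This is a minimal sketch using the standard `pg_stat_user_tables` view:

```sql
-- Approximate live row counts per table; run on both source and target and compare
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY schemaname, relname;
```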
## Migration speed benchmarking
-The following table shows the time it takes to perform migrations for databases of various sizes using the migration service. The migration was performed using a flexible server with the SKU ΓÇô **Standard_D4ds_v4(4 cores, 16GB Memory, 128 GB disk, and 500 iops)**
+The following table shows the time it takes to perform migrations for databases of various sizes by using the migration service. The migration was performed by using a flexible server with the SKU Standard_D4ds_v4 (4 cores, 16-GB memory, 128-GB disk, and 500 IOPS).
| Database size | Approximate time taken (HH:MM) | | : | : |
The following table shows the time it takes to perform migrations for databases
| 500 GB | 04:00 | | 1,000 GB | 07:00 |
-> [!NOTE]
-> The above numbers give you an approximation of the time taken to complete the migration. We strongly recommend running a test migration with your workload to get a precise value for migrating your server.
+The preceding numbers give you an approximation of the time taken to complete the migration. We strongly recommend running a test migration with your workload to get a precise value for migrating your server.
> [!IMPORTANT]
-> Pick a higher SKU for your flexible server to perform faster migrations. Azure Database for PostgreSQL Flexible server supports near zero downtime Compute & IOPS scaling so the SKU can be updated with minimal downtime. You can always change the SKU to match the application needs post-migration.
+> Choose a higher SKU for your flexible server to perform faster migrations. Azure Database for PostgreSQL - Flexible Server supports near-zero downtime compute and IOPS scaling, so the SKU can be updated with minimal downtime. You can always change the SKU to match the application needs post-migration.
-### Improve migration speed - parallel migration of tables
+### Improve migration speed: Parallel migration of tables
-A powerful SKU is recommended for the target, as the PostgreSQL migration service runs out of a container on the Flexible server. A powerful SKU enables more tables to be migrated in parallel. You can scale the SKU back to your preferred configuration after the migration. This section contains steps to improve the migration speed in case the data distribution among the tables needs to be more balanced and/or a more powerful SKU doesn't significantly impact the migration speed.
+We recommend a powerful SKU for the target because the PostgreSQL migration service runs out of a container on the flexible server. A powerful SKU enables more tables to be migrated in parallel. You can scale the SKU back to your preferred configuration after the migration. This section contains steps to improve the migration speed if the data distribution among the tables needs to be more balanced or a more powerful SKU doesn't significantly affect the migration speed.
-If the data distribution on the source is highly skewed, with most of the data present in one table, the allocated compute for migration needs to be fully utilized, and it creates a bottleneck. So, we split large tables into smaller chunks, which are then migrated in parallel. This feature applies to tables with more than 10000000 (10 m) tuples. Splitting the table into smaller chunks is possible if one of the following conditions is satisfied.
+If the data distribution on the source is highly skewed, with most of the data present in one table, the allocated compute for migration needs to be fully utilized, which creates a bottleneck. So, split large tables into smaller chunks, which are then migrated in parallel. This feature applies to tables with more than 10,000,000 (10 m) tuples. Splitting the table into smaller chunks is possible if one of the following conditions is satisfied:
-1. The table must have a column with a simple (not composite) primary key or unique index of type int or significant int.
+- The table must have a column with a simple (not composite) primary key or unique index of type `int` or `big int`.
- > [!NOTE]
- > In the case of approaches #2 or #3, the user must carefully evaluate the implications of adding a unique index column to the source schema. Only after confirmation that adding a unique index column will not affect the application should the user go ahead with the changes.
+ > [!NOTE]
+ > In the case of the second or third approaches, you must carefully evaluate the implications of adding a unique index column to the source schema. Only after confirmation that adding a unique index column won't affect the application should you go ahead with the changes.
-1. If the table doesn't have a simple primary key or unique index of type int or significant int but has a column that meets the data type criteria, the column can be converted into a unique index using the command below. This command doesn't require a lock on the table.
+- If the table doesn't have a simple primary key or unique index of type `int` or `big int` but has a column that meets the data type criteria, the column can be converted into a unique index by using the following command. This command doesn't require a lock on the table.
  ```sql
  create unique index concurrently partkey_idx on <table name> (column name);
  ```
-1. If the table doesn't have a simple int/big int primary key or unique index or any column that meets the data type criteria, you can add such a column using [ALTER](https://www.postgresql.org/docs/current/sql-altertable.html) and drop it post-migration. Running the ALTER command requires a lock on the table.
+- If the table doesn't have a simple `int`/`big int` primary key or unique index, or any column that meets the data type criteria, you can add such a column by using [ALTER](https://www.postgresql.org/docs/current/sql-altertable.html) and drop it post-migration. Running the `ALTER` command requires a lock on the table.
  ```sql
  alter table <table name> add column <column name> bigserial unique;
  ```
-If any of the above conditions are satisfied, the table is migrated in multiple partitions in parallel, which should provide a marked increase in the migration speed.
+If any of the preceding conditions are satisfied, the table is migrated in multiple partitions in parallel, which should provide an increase in the migration speed.
#### How it works -- The migration service looks up the maximum and minimum integer value of the table's Primary key/Unique index that must be split up and migrated in parallel.-- If the difference between the minimum and maximum value is more than 10000000 (10 m), then the table is split into multiple parts, and each part is migrated in parallel.
+- The migration service looks up the maximum and minimum integer value of the table's primary key/unique index that must be split up and migrated in parallel.
+- If the difference between the minimum and maximum value is more than 10,000,000 (10 m), the table is split into multiple parts and each part is migrated in parallel.
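
To check whether a specific table would qualify for this split, you can inspect the spread of its key column yourself. This is a minimal sketch, where `your_table` and `id` are placeholders for your table and its integer primary key or unique index column:

```sql
-- If the spread exceeds 10,000,000, the table can be split into parallel chunks
SELECT MIN(id) AS min_id,
       MAX(id) AS max_id,
       MAX(id) - MIN(id) AS spread
FROM your_table;
```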
In summary, the PostgreSQL migration service migrates a table in parallel threads and reduces the migration time if: - The table has a column with a simple primary key or unique index of type int or big int.-
+- The table has at least 10,000,000 (10 m) rows so that the difference between the minimum and maximum value of the primary key is more than 10,000,000 (10 m).
- The SKU used has idle cores, which can be used for migrating the table in parallel. ## Vacuum bloat in the PostgreSQL database
-Over time, as data is added, updated, and deleted, PostgreSQL might accumulate dead rows and wasted storage space. This bloat can lead to increased storage requirements and decreased query performance. Vacuuming is a crucial maintenance task that helps reclaim this wasted space and ensures the database operates efficiently. Vacuuming addresses issues such as dead rows and table bloat, ensuring efficient use of storage. More importantly, it helps ensure a quicker migration as the migration time taken is a function of the Database size.
+Over time, as data is added, updated, and deleted, PostgreSQL might accumulate dead rows and wasted storage space. This bloat can lead to increased storage requirements and decreased query performance. Vacuuming is a crucial maintenance task that helps reclaim this wasted space and ensures the database operates efficiently. Vacuuming addresses issues such as dead rows and table bloat to ensure efficient use of storage. It also helps to ensure quicker migration because the migration time is a function of the database size.
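
One way to gauge how much bloat has accumulated before you vacuum is to look at the dead-tuple counters in the standard `pg_stat_user_tables` view; this is a minimal sketch:

```sql
-- Tables with the most dead rows are usually the best vacuum candidates
SELECT schemaname, relname, n_dead_tup, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```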
-PostgreSQL provides the VACUUM command to reclaim storage occupied by dead rows. The `ANALYZE` option also gathers statistics, further optimizing query planning. For tables with heavy write activity, the `VACUUM` process can be more aggressive using `VACUUM FULL`, but it requires more time to execute.
+PostgreSQL provides the `VACUUM` command to reclaim storage occupied by dead rows. The `ANALYZE` option also gathers statistics to further optimize query planning. For tables with heavy write activity, the `VACUUM` process can be more aggressive by using `VACUUM FULL`, but it requires more time to run.
-- Standard Vacuum
+- Standard vacuum
-```sql
-VACUUM your_table;
-```
+ ```sql
+ VACUUM your_table;
+ ```
-- Vacuum with Analyze
+- Vacuum with analyze
-```sql
-VACUUM ANALYZE your_table;
-```
+ ```sql
+ VACUUM ANALYZE your_table;
+ ```
-- Aggressive Vacuum for Heavy Write Tables
+- Aggressive vacuum for heavy write tables
-```sql
-VACUUM FULL your_table;
-```
+ ```sql
+ VACUUM FULL your_table;
+ ```
-In this example, replace your_table with the actual table name. The `VACUUM` command without **FULL** reclaims space efficiently, while `VACUUM ANALYZE` optimizes query planning. The `VACUUM FULL` option should be used judiciously due to its heavier performance impact.
+In this example, replace `your_table` with the actual table name. The `VACUUM` command without `FULL` reclaims space efficiently, whereas `VACUUM ANALYZE` optimizes query planning. The `VACUUM FULL` option should be used judiciously because of its heavier performance impact.
-Some Databases store large objects, such as images or documents, that can contribute to database bloat over time. The `VACUUMLO` command is designed for large objects in PostgreSQL.
+Some databases store large objects, such as images or documents, that can contribute to database bloat over time. The `vacuumlo` client utility removes orphaned large objects in PostgreSQL. It's run from the command line rather than from a SQL session.
-- Vacuum Large Objects
+- Vacuum large objects
-```sql
-VACUUMLO;
-```
+ ```bash
+ # Remove large objects that are no longer referenced by any table
+ vacuumlo your_database
+ ```
Regularly incorporating these vacuuming strategies ensures a well-maintained PostgreSQL database. ## Special consideration
-There are special conditions that typically refer to unique circumstances, configurations, or prerequisites that learners need to be aware of before proceeding with a tutorial or module. These conditions could include specific software versions, hardware requirements, or additional tools that are necessary for successful completion of the learning content.
+Special considerations typically refer to unique circumstances, configurations, or prerequisites that you need to be aware of before you proceed with a migration. These conditions could include specific software versions, hardware requirements, or other tools that are necessary for a successful migration.
### Online migration
-Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html) and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply. In addition, it's recommended to have a primary key in all the tables of a database undergoing Online migration. If primary key is absent, the deficiency will result in only insert operations being reflected during migration, excluding updates or deletes. Add a temporary primary key to the relevant tables before proceeding with the online migration.
+Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html), and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply. We also recommend that you have a primary key in all the tables of a database that's undergoing online migration. If a primary key is absent, the deficiency results in only `insert` operations being reflected during migration, excluding updates or deletes. Add a temporary primary key to the relevant tables before you proceed with the online migration.
> [!NOTE]
-> In the case of Online migration of tables without a primary key, only Insert operations are replayed on the target. This can potentially introduce inconsistency in the Database if records that are updated or deleted on the source do not reflect on the target.
+> In the case of online migration of tables without a primary key, only `insert` operations are replayed on the target. This can potentially introduce inconsistency in the database if records that are updated or deleted on the source don't reflect on the target.
-An alternative is to use the `ALTER TABLE` command where the action is [REPLICA IDENTIY](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) with the `FULL` option. The `FULL` option records the old values of all columns in the row so that even in the absence of a Primary key, all CRUD operations are reflected on the target during the Online migration. If none of these options work, perform an offline migration as an alternative.
+An alternative is to use the `ALTER TABLE` command where the action is [REPLICA IDENTITY](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) with the `FULL` option. The `FULL` option records the old values of all columns in the row so that even in the absence of a primary key, all CRUD operations are reflected on the target during the online migration. If none of these options work, perform an offline migration as an alternative.
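
For example, a table that lacks a primary key could be given full replica identity before the online migration starts. This is a minimal sketch, where `your_table` is a placeholder:

```sql
-- Record old values of all columns so that updates and deletes replicate without a primary key
ALTER TABLE your_table REPLICA IDENTITY FULL;
```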
### Database with postgres_fdw extension
-The [postgres_fdw module](https://www.postgresql.org/docs/current/postgres-fdw.html) provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers. If your database uses this extension, the following steps must be performed to ensure a successful migration.
+The [postgres_fdw module](https://www.postgresql.org/docs/current/postgres-fdw.html) provides the foreign data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers. If your database uses this extension, the following steps must be performed to ensure a successful migration.
-1. Temporarily remove (unlink) Foreign data wrapper on the source instance.
-1. Perform data migration of rest using the Migration service.
-1. Restore the Foreign data wrapper roles, user, and Links to the target after migration.
+1. Temporarily remove (unlink) the foreign data wrapper on the source instance.
+1. Perform data migration of the rest by using the migration service.
+1. Restore the foreign data wrapper roles, user, and links to the target after migration.
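
Before you unlink the wrapper, it can help to take an inventory of the foreign servers and user mappings you'll need to restore after the migration. This is a minimal sketch against the standard catalogs (the query isn't part of the documented procedure):

```sql
-- Foreign servers defined on the source instance
SELECT srvname, srvowner::regrole AS owner
FROM pg_foreign_server;

-- User mappings that reference those servers
SELECT srvname, usename
FROM pg_user_mappings;
```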
### Database with postGIS extension
-The Postgis extension has breaking changes/compact issues between different versions. If you migrate to a flexible server, the application should be checked against the newer postGIS version to ensure that the application isn't impacted or that the necessary changes must be made. The [postGIS news](https://postgis.net/news/) and [release notes](https://postgis.net/docs/release_notes.html#idm45191) are a good starting point to understand the breaking changes across versions.
+The postGIS extension has breaking changes and compatibility issues between different versions. If you migrate to a flexible server, check the application against the newer postGIS version to ensure that the application isn't affected, or make the necessary changes. The [postGIS news](https://postgis.net/news/) and [release notes](https://postgis.net/docs/release_notes.html#idm45191) are a good starting point to understand the breaking changes across versions.
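
If you're not sure which version the source is running, you can ask the extension directly (assuming postGIS is installed on the source database):

```sql
-- Reports the installed postGIS version and its component libraries
SELECT PostGIS_Full_Version();
```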
### Database connection cleanup
-Sometimes, you might encounter this error when starting a migration:
+Sometimes, you might encounter this error when you start a migration:
`CL003:Target database cleanup failed in the pre-migration step. Reason: Unable to kill active connections on the target database created by other users. Please add the pg_signal_backend role to the migration user using the command 'GRANT pg_signal_backend to <migrationuser>' and try a new migration.`
-In this case, you can grant the `migration user` permission to close all active connections to the database or close the connections manually before retrying the migration.
+In this scenario, you can grant the `migration user` permission to close all active connections to the database or close the connections manually before you retry the migration.
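
As a minimal sketch of the two options, where `migrationuser` and `your_database` are placeholders for your own names:

```sql
-- Option 1: allow the migration user to signal sessions owned by other users
GRANT pg_signal_backend TO migrationuser;

-- Option 2: close the remaining connections manually before retrying the migration
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'your_database'
  AND pid <> pg_backend_pid();
```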
## Related content
postgresql Concepts Known Issues Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-known-issues-migration-service.md
Title: "Migration service - known issues and limitations"
-description: Providing the limitations and known issues of the migration service in Azure Database for PostgreSQL.
+ Title: "Migration service - Known issues and limitations"
+description: This article provides the limitations and known issues of the migration service in Azure Database for PostgreSQL.
This article describes the known issues and limitations associated with the migr
Here are common limitations that apply to migration scenarios: -- You can have only one active migration or validation to your Flexible server.--- The migration service only supports users and roles migration when the source is Azure Database for PostgreSQL single server.-
+- You can have only one active migration or validation to your flexible server.
+- The migration service only supports migration for users and roles when the source is Azure Database for PostgreSQL - Single Server.
- The migration service shows the number of tables copied from source to target. You must manually check the data and PostgreSQL objects on the target server post-migration.--- The migration service only migrates user databases, not system databases such as template_0 and template_1.--- The migration service doesn't support moving TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, PG_PARTMAN extensions from source to target.--- You can't move extensions not supported by the Azure Database for PostgreSQL ΓÇô Flexible server. The supported extensions are listed in [Extensions - Azure Database for PostgreSQL](/azure/postgresql/flexible-server/concepts-extensions).--- User-defined collations can't be migrated into Azure Database for PostgreSQL ΓÇô flexible server.--- You can't migrate to an older version. For instance, you can't migrate from PostgreSQL 15 to Azure Database for PostgreSQL version 14.-
+- The migration service migrates only user databases, not system databases, such as template_0 and template_1.
+- The migration service doesn't support moving TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, or PG_PARTMAN extensions from source to target.
+- You can't move extensions not supported by Azure Database for PostgreSQL - Flexible Server. The supported extensions are listed in [Extensions - Azure Database for PostgreSQL](/azure/postgresql/flexible-server/concepts-extensions).
+- User-defined collations can't be migrated into Azure Database for PostgreSQL - Flexible Server.
+- You can't migrate to an older version. For instance, you can't migrate from Azure Database for PostgreSQL version 15 to version 14.
- The migration service only works with preferred or required SSLMODE values.- - The migration service doesn't support superuser privileges and objects.--- Azure Database for PostgreSQL - Flexible Server does not support the creation of custom tablespaces due to superuser privilege restrictions. During migration, data from custom tablespaces in the source PostgreSQL instance is migrated into the default tablespaces of the target Azure Database for PostgreSQL - Flexible Server.
+- Azure Database for PostgreSQL - Flexible Server doesn't support the creation of custom tablespaces because of superuser privilege restrictions. During migration, data from custom tablespaces in the source PostgreSQL instance is migrated into the default tablespaces of the target Azure Database for PostgreSQL - Flexible Server instance.
- The following PostgreSQL objects can't be migrated into the PostgreSQL flexible server target:
+
  - Create casts
  - Creation of FTS parsers and FTS templates
  - Users with superuser roles
  - Create TYPE
- The migration service doesn't support migration at the object level, that is, at the table level or schema level.
+- The migration service is unable to perform migration when the source database is Azure Database for PostgreSQL - Single Server with no public access or is an on-premises/AWS using a private IP, and the target Azure Database for PostgreSQL - Flexible Server instance is accessible only through a private endpoint.
+- Migration to burstable SKUs isn't supported. Databases must first be migrated to a nonburstable SKU and then scaled down if needed.
+- The Migration Runtime Server is designed to operate with the default DNS servers/private DNS zones, for example, `privatelink.postgres.database.azure.com`. Custom DNS names/DNS servers aren't supported by the migration service when you use the Migration Runtime Server feature. When you're configuring private endpoints for both the source and target databases, it's imperative to use the default private DNS zone provided by Azure for the private link service. The use of custom DNS configurations isn't yet supported and might lead to connectivity issues during the migration process.
-- The migration service is unable to perform migration when the source database is Azure Database for PostgreSQL single server with no public access or is an on-premises/AWS using a private IP, and the target Azure Database for PostgreSQL Flexible Server is accessible only through a private endpoint.--- Migration to burstable SKUs isn't supported; databases must first be migrated to a non-burstable SKU and then scaled down if needed.--- The Migration Runtime Server is specifically designed to operate with the default DNS servers/private DNS zones i.e., **privatelink.postgres.database.azure.com**. Custom DNS names/DNS servers are not supported by the migration service when utilizing the migration runtime server feature. When configuring private endpoints for both the source and target databases, it is imperative to use the default private DNS zone provided by Azure for the private link service. The use of custom DNS configurations is not yet supported and may lead to connectivity issues during the migration process.-
-## Limitations migrating from Azure Database for PostgreSQL single server
--- Microsoft Entra ID users present on your source server aren't migrated to the target server. To mitigate this limitation, visit [Manage Microsoft Entra roles](../../flexible-server/how-to-manage-azure-ad-users.md) to manually create all Microsoft Entra users on your target server before triggering a migration. If Microsoft Entra users aren't created on target server, migration fail.--- If the target flexible server uses SCRAM-SHA-256 password encryption method, connection to flexible server using the users/roles on single server fails since the passwords are encrypted using md5 algorithm. To mitigate this limitation, choose the option MD5 for password_encryption server parameter on your flexible server.
+## Limitations migrating from Azure Database for PostgreSQL - Single Server
-- Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html) and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply.
+- Microsoft Entra ID users present on your source server aren't migrated to the target server. To mitigate this limitation, see [Manage Microsoft Entra roles](../../flexible-server/how-to-manage-azure-ad-users.md) to manually create all Microsoft Entra users on your target server before you trigger a migration. If Microsoft Entra users aren't created on the target server, migration fails.
+- If the target flexible server uses the SCRAM-SHA-256 password encryption method, connection to a flexible server using the users/roles on a single server fails because the passwords are encrypted by using the md5 algorithm. To mitigate this limitation, choose the option `MD5` for the `password_encryption` server parameter on your flexible server. A CLI sketch for this change appears after this list.
+- Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html), and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply.
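
This is a minimal sketch of changing the `password_encryption` parameter with the Azure CLI, assuming `<resource-group>` and `<server-name>` are your own values and that `MD5` is an accepted value for the parameter on your server:

```azurecli
# Switch the flexible server back to md5 password hashing so migrated roles can sign in
az postgres flexible-server parameter set \
  --resource-group <resource-group> \
  --server-name <server-name> \
  --name password_encryption \
  --value MD5
```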
## Related content
postgresql Concepts Migration Service Runtime Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-migration-service-runtime-server.md
Title: "Introduction of migration runtime server in Migration service in Azure Database for PostgreSQL"
-description: "Concepts about the migration runtime server in migration service Azure Database for PostgreSQL"
+ Title: "Migration Runtime Server in Azure Database for PostgreSQL"
+description: "This article discusses concepts about Migration Runtime Server with the migration service in Azure Database for PostgreSQL."
-# Migration Runtime Server with the migration service in Azure Database for PostgreSQL Preview
+# Migration Runtime Server with the migration service in Azure Database for PostgreSQL (preview)
-The Migration Runtime Server is a specialized feature within the [migration service in Azure Database for PostgreSQL](concepts-migration-service-postgresql.md), designed to act as an intermediary server during migration. It's a separate Azure Database for PostgreSQL - Flexible Server instance that isn't the target server but is used to facilitate the migration of databases from a source environment that is only accessible via a private network.
+Migration Runtime Server is a specialized feature in the [migration service in Azure Database for PostgreSQL](concepts-migration-service-postgresql.md) that acts as an intermediary server during migration. It's a separate Azure Database for PostgreSQL - Flexible Server instance that isn't the target server. It's used to facilitate the migration of databases from a source environment that's only accessible via a private network.
-The migration runtime server is helpful in scenarios where both the source PostgreSQL instances and the target Azure Database for PostgreSQL Flexible Server are configured to communicate over private endpoints or private IPs, ensuring that the migration occurs within a secure and isolated network space. The Migration Runtime Server handles the data transfer, connecting to the source PostgreSQL instance to retrieve data and then pushing it to the target server.
+Migration Runtime Server is helpful in scenarios where both the source PostgreSQL instances and the target Azure Database for PostgreSQL - Flexible Server instance are configured to communicate over private endpoints or private IPs. This arrangement ensures that the migration occurs within a secure and isolated network space. Migration Runtime Server handles the data transfer. It connects to the source PostgreSQL instance to retrieve data and then push it to the target server.
-The migration runtime server is distinct from the target server and is configured to handle the data transfer process, ensuring a secure and efficient migration path.
+Migration Runtime Server is distinct from the target server and is configured to handle the data transfer process, ensuring a secure and efficient migration path.
## Supported migration scenarios with the Migration Runtime Server
-The migration runtime server is essential for transferring data between different source PostgreSQL instances and the Azure Database for PostgreSQL - Flexible Server. It's necessary in the following scenarios:
+Migration Runtime Server is essential for transferring data between different source PostgreSQL instances and the Azure Database for PostgreSQL - Flexible Server instance. It's necessary in the following scenarios:
-- When the source is an Azure Database for PostgreSQLΓÇöSingle Server configured with a private endpoint and the target is an Azure Database for PostgreSQLΓÇöFlexible Server with a private endpoint.-- For sources such as on-premises databases, Azure VMs, or AWS instances that are only accessible via private networks, and the target Azure Database for PostgreSQL - Flexible Server with a private endpoint.
+- When the source is an Azure Database for PostgreSQL - Single Server configured with a private endpoint and the target is an Azure Database for PostgreSQL - Flexible Server with a private endpoint.
+- For sources such as on-premises databases, Azure virtual machines, or AWS instances that are only accessible via private networks and the target Azure Database for PostgreSQL - Flexible Server instance with a private endpoint.
## How do you use the Migration Runtime Server feature?
-To use the Migration Runtime Server feature within the migration service in Azure Database for PostgreSQL, you can select the appropriate migration option either through the Azure portal during the setup or by specifying the `migrationRuntimeResourceId` in the JSON properties file during the migration create command in the Azure CLI. Here's how to do it in both methods:
+To use the Migration Runtime Server feature within the migration service in Azure Database for PostgreSQL, you have two migration options:
+
+- Use the Azure portal during setup.
+- Specify the `migrationRuntimeResourceId` parameter in the JSON properties file during the migration create command in the Azure CLI.
+
+Here's how to do it in both methods.
### Use the Azure portal -- Sign in to the Azure portal and access the migration service (from the target server) in the Azure Database for PostgreSQL instance.-- Begin a new migration workflow within the service.-- When you reach the "Select runtime server" tab, use the Migration Runtime Server by selecting "Yes."-- Choose your Azure subscription and resource group and the location of the VNet-integrated Azure Database for PostgreSQLΓÇöFlexible server.-- Select the appropriate Azure Database for PostgreSQL Flexible Server to serve as your Migration Runtime Server.
+1. Sign in to the Azure portal and access the migration service (from the target server) in the Azure Database for PostgreSQL instance.
+1. Begin a new migration workflow within the service.
+1. When you reach the **Select runtime server** tab, select **Yes** to use Migration Runtime Server.
+1. Select your Azure subscription and resource group. Select the location of the virtual network-integrated Azure Database for PostgreSQL - Flexible Server instance.
+1. Select the appropriate Azure Database for PostgreSQL - Flexible Server instance to serve as your Migration Runtime Server instance.
+ :::image type="content" source="media/concepts-migration-service-runtime-server/select-runtime-server.png" alt-text="Screenshot that shows selecting Migration Runtime Server.":::
-### Use Azure CLI
+### Use the Azure CLI
-- Open your command-line interface.-- Ensure you have the Azure CLI installed and you're logged into your Azure account using az sign-in.-- The version should be at least 2.62.0 or above to use the migration runtime server option.-- The `az postgres flexible-server migration create` command requires a JSON file path as part of `--properties` parameter, which contains configuration details for the migration. Provide the `migrationRuntimeResourceId` in the JSON properties file.
+1. Open your command-line interface.
+1. Ensure that you have the Azure CLI installed and that you're signed in to your Azure account by using `az login`.
+1. The Azure CLI version must be 2.62.0 or later to use the Migration Runtime Server option.
+1. The `az postgres flexible-server migration create` command requires a JSON file path as part of the `--properties` parameter, which contains configuration details for the migration. Provide the `migrationRuntimeResourceId` parameter in the JSON properties file.
## Migration Runtime Server essentials -- **Minimal Configuration**ΓÇöDespite being created from an Azure Database for PostgreSQL Flexible Server, the migration runtime server solely facilitates migration without the need for HA, backups, version specificity, or advanced storage features.-- **Performance and Sizing**ΓÇöThe migration runtime server must be appropriately scaled to manage the workload, and it's recommended that you select an SKU equivalent to or greater than that of the target server.-- **Networking** Ensure that the migration runtime server is appropriately integrated into the Virtual Network (virtual network) and that network security allows for secure communication with both the source and target servers. For more information visit [Network guide for migration service](how-to-network-setup-migration-service.md).-- **Cleanup Post-Migration**ΓÇöAfter the migration is complete, the migration runtime server should be decommissioned to avoid unnecessary costs. Before deletion, ensure all data has been successfully migrated and that the server is no longer needed.
+- **Minimal configuration**: Despite being created from Azure Database for PostgreSQL - Flexible Server, Migration Runtime Server solely facilitates migration without the need for high availability, backups, version specificity, or advanced storage features.
+- **Performance and sizing**: Migration Runtime Server must be appropriately scaled to manage the workload. We recommend that you select an SKU equivalent to or greater than that of the target server.
+- **Networking**: Ensure that Migration Runtime Server is appropriately integrated into the virtual network and that network security allows for secure communication with both the source and target servers. For more information, see [Network guide for migration service](how-to-network-setup-migration-service.md).
+- **Post-migration cleanup**: After the migration is finished, Migration Runtime Server should be decommissioned to avoid unnecessary costs. Before deletion, ensure that all data was successfully migrated and that the server is no longer needed.
## Related content
postgresql Concepts Premigration Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-premigration-migration-service.md
Title: "Migration service - premigration validations"
-description: premigration validations to identify issues before running migrations
+ Title: "Migration service - Premigration validations"
+description: Learn about premigration validations to identify issues before you run a migration to Azure Database for PostgreSQL.
Premigration validation is a set of rules that involves assessing and verifying
## How do you use the premigration validation feature?
-To use premigration validation when migrating to Azure Database for PostgreSQL - flexible server, you can select the appropriate migration option either through the Azure portal during the setup or by specifying the `--migration-option` parameter in the Azure CLI when creating a migration. Here's how to do it in both methods:
+To use premigration validation when you migrate to Azure Database for PostgreSQL - Flexible Server, you have two migration options:
-### Use the Azure portal
--- Navigate to the migration tab within the Azure Database for PostgreSQL.
+- Use the Azure portal during setup.
+- Specify the `--migration-option` parameter in the Azure CLI when you create a migration.
-- Select the **Create** button
+Here's how to do it in both methods.
-- In the Setup page, choose the migration option that includes validation. This could be labeled as **validate**, **validate and migrate**
+### Use the Azure portal
- :::image type="content" source="media\concepts-premigration-migration-service\premigration-option.png" alt-text="Screenshot of premigration option to start migration." lightbox="media\concepts-premigration-migration-service\premigration-option.png":::
+1. Go to the migration tab in Azure Database for PostgreSQL.
-### Use Azure CLI
+1. Select **Create**.
-- Open your command-line interface.
+1. On the **Setup** page, choose the migration option that includes validation. Select **Validate** or **Validate and Migrate**.
-- Ensure you have the Azure CLI installed and you're logged into your Azure account using az sign-in.
+ :::image type="content" source="media\concepts-premigration-migration-service\premigration-option.png" alt-text="Screenshot that shows the premigration option to start migration." lightbox="media\concepts-premigration-migration-service\premigration-option.png":::
-- The version should be at least 2.56.0 or above to use the migration option.
+### Use the Azure CLI
-Construct your migration task creation command with the Azure CLI.
+1. Open your command-line interface.
-```bash
-az postgres flexible-server migration create --subscription <subscription ID> --resource-group <Resource group Name> --name <Flexible server Name> --migration-name <Unique migration ID> --migration-option ValidateAndMigrate --properties "Path of the JSON File" --migration-mode offline
-```
+1. Ensure that you have the Azure CLI installed and that you're signed in to your Azure account by using `az login`.
+   The Azure CLI version must be 2.56.0 or later to use the migration option.
-Include the `--migration-option` parameter followed by the option validate to perform only the premigration **Validate**, **Migrate**, or **ValidateAndMigrate** to perform validation and then proceed with the migration if the validation is successful.
+1. Construct your migration task creation command with the Azure CLI.
-## Pre-migration validation options
+ ```bash
+ az postgres flexible-server migration create --subscription <subscription ID> --resource-group <Resource group Name> --name <Flexible server Name> --migration-name <Unique migration ID> --migration-option ValidateAndMigrate --properties "Path of the JSON File" --migration-mode offline
+ ```
-You can pick any of the following options.
+1. Include the `--migration-option` parameter set to `Validate` to perform only the premigration validation, `Migrate` to skip validation and start the migration, or `ValidateAndMigrate` to run validation and then proceed with the migration if validation is successful.
-- **Validate** - Use this option to check your server and database readiness for migration to the target. **This option will not start data migration and will not require any server downtime.**
- - Plan your migrations better by performing premigration validations in advance to know the potential issues you might encounter while performing migrations.
+## Premigration validation options
-- **Migrate** - Use this option to kickstart the migration without going through a validation process. Perform validation before triggering a migration to increase the chances of success. Once validation is done, you can use this option to start the migration process.
+You can choose any of the following options:
-- **ValidateandMigrate** - This option performs validations, and migration gets triggered if all checks are in the **succeeded** or **warning** state. Validation failures don't start the migration between source and target servers.
+- **Validate**: Use this option to check your server and database readiness for migration to the target. *This option won't start data migration and won't require any server downtime.*
+ - Plan your migrations better by performing premigration validations in advance to know the potential issues you might encounter while you perform migrations.
+- **Migrate**: Use this option to kickstart the migration without going through a validation process. Perform validation before you trigger a migration to increase the chances of success. After validation is finished, you can use this option to start the migration process.
+- **Validate and Migrate**: This option performs validations, and migration gets triggered if all checks are in the **Succeeded** or **Warning** state. Validation failures don't start the migration between source and target servers.
-We recommend that customers use premigration validations to identify issues before running migrations. This helps you to plan your migrations better and avoid any surprises during the migration process.
+We recommend that you use premigration validations to identify issues before you run migrations. This technique helps you to plan your migrations better and avoid any surprises during the migration process.
1. Choose the **Validate** option and run premigration validation well ahead of your planned migration date. 1. Analyze the output and take remedial actions for any errors.
-1. Rerun Step 1 until the validation is successful.
-
-1. Start the migration using the **Validate and Migrate** option on the planned date and time.
+1. Rerun step 1 until the validation is successful.
-## Validation states
+1. Start the migration by using the **Validate and Migrate** option on the planned date and time.
-The result post running the validated option can be:
+## Validation states
-- **Succeeded** - No issues were found, and you can plan for the migration-- **Failed** - There were errors found during validation, which can cause the migration to fail. Review the list of errors and their suggested workarounds and take corrective measures before planning the migration.-- **Warning** - Warnings are informative messages you must remember while planning the migration.
+After you run the **Validate** option, the validation ends in one of the following states:
+- **Succeeded**: No issues were found and you can plan for the migration.
+- **Failed**: Errors were found during validation, which can cause the migration to fail. Review the list of errors and their suggested workarounds. Take corrective measures before you plan the migration.
+- **Warning**: Warnings are informative messages you must remember while you plan the migration.
## Related content
postgresql Concepts User Roles Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-user-roles-migration-service.md
Title: "Migration service - Migration of users/roles, ownerships, and privileges"
-description: Migration of users/roles, ownerships, and privileges along with schema and data
+description: Learn about the migration of user roles, ownerships, and privileges along with schema and data for the migration service in Azure Database for PostgreSQL.
-# Migration of user roles, ownerships, and privileges for the migrations service in Azure Database for PostgreSQL
+# Migration of user roles, ownerships, and privileges for the migration service in Azure Database for PostgreSQL
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)] > [!IMPORTANT]
-> The migration of user roles, ownerships, and privileges feature is available only for the Azure Database for PostgreSQL Single server as the source. This feature is currently disabled for PostgreSQL version 16 servers.
+> The migration of user roles, ownerships, and privileges feature is available only for the Azure Database for PostgreSQL - Single Server instance as the source. This feature is currently disabled for PostgreSQL version 16 servers.
-The migration service automatically provides the following built-in capabilities for the Azure Database for PostgreSQL single server as the source and data migration.
+The migration service automatically provides the following built-in capabilities for Azure Database for PostgreSQL - Single Server as the source and data migration:
- Migration of user roles on your source server to the target server. - Migration of ownership of all the database objects on your source server to the target server.-- Migration of permissions of database objects on your source server, such as GRANTS/REVOKES, to the target server.
+- Migration of permissions of database objects on your source server, such as `GRANT`/`REVOKE`, to the target server.
+
+## Permission differences between Azure Database for PostgreSQL - Single Server and Flexible Server
-## Permission differences between Azure Database for PostgreSQL Single server and Flexible server
This section explores the differences in permissions granted to the **azure_pg_admin** role across single server and flexible server environments. ### PG catalog permissions
-Unlike user-created schemas, which organize database objects into logical groups, pg_catalog is a system schema. It houses crucial system-level information, such as details about tables, columns, and other internal bookkeeping data. Essentially, it's where PostgreSQL stores important metadata.
-In a single server environment, a user belonging to the azure_pg_admin role is granted select privileges for all pg_catalog tables and views. However, in a flexible server, we restricted privileges for certain tables and views, allowing only the super user to query them.
+Unlike user-created schemas, which organize database objects into logical groups, pg_catalog is a system schema. It houses crucial system-level information, such as details about tables, columns, and other internal bookkeeping data. It's where PostgreSQL stores important metadata.
+
+- In a single server environment, a user belonging to the azure_pg_admin role is granted select privileges for all pg_catalog tables and views.
+- In a flexible server environment, privileges are restricted for certain tables and views so that only superusers are allowed to query them.
-We removed all privileges for non-superusers on the following pg_catalog tables.
-- pg_authid
+We removed all privileges for non-superusers on the following pg_catalog tables:
-- pg_largeobject
+- pg_authid
+
+- pg_largeobject
- pg_statistic -- pg_subscription
+- pg_subscription
+
+- pg_user_mapping
-- pg_user_mapping
+We removed all privileges for non-superusers on the following pg_catalog views:
-We removed all privileges for non-superusers on the following pg_catalog views.
-- pg_config
+- pg_config
-- pg_file_settings
+- pg_file_settings
-- pg_hba_file_rules
+- pg_hba_file_rules
-- pg_replication_origin_status
+- pg_replication_origin_status
-- pg_shadow
+- pg_shadow
-Allowing unrestricted access to these system tables and views could lead to unauthorized modifications, accidental deletions, or even security breaches. By restricting access, we're reducing the risk of unintended changes or data exposure.
+Allowing unrestricted access to these system tables and views could lead to unauthorized modifications, accidental deletions, or even security breaches. Restricted access reduces the risk of unintended changes or data exposure.
### pg_pltemplate deprecation
-Another important consideration is the deprecation of the **pg_pltemplate** system table within the pg_catalog schema by the PostgreSQL community **starting from version 13.** If you're migrating to Flexible Server versions 13 and above and have granted permissions to users on the pg_pltemplate table on your single server, you mist revoke these permissions before initiating a new migration.
+Another important consideration is the deprecation of the **pg_pltemplate** system table within the pg_catalog schema by the PostgreSQL community *starting from version 13*. If you're migrating to Flexible Server versions 13 and above and have granted permissions to users on the pg_pltemplate table on your single server, you must revoke these permissions before you initiate a new migration.
#### What is the impact?-- If your application is designed to directly query the affected tables and views, it encounters issues upon migrating to the flexible server. We strongly advise you to refactor your application to avoid direct queries to these system tables. -- If you have granted or revoked privileges to any users or roles for the affected pg_catalog tables and views, you encounter an error during the migration process. This error will be identified by the following pattern:
+- If your application is designed to directly query the affected tables and views, it encounters issues upon migrating to the flexible server. We strongly advise you to refactor your application to avoid direct queries to these system tables.
+- If you've granted or revoked privileges to any users or roles for the affected pg_catalog tables and views, you encounter an error during the migration process. You can identify this error by the following pattern:
-```sql
-pg_restore error: could not execute query <GRANT/REVOKE> <PRIVILEGES> on <affected TABLE/VIEWS> to <user>.
- ```
+ ```sql
+ pg_restore error: could not execute query <GRANT/REVOKE> <PRIVILEGES> on <affected TABLE/VIEWS> to <user>.
+ ```
#### Workaround
-To resolve this error, it's necessary to undo the privileges granted to users and roles on the affected pg_catalog tables and views. You can accomplish this by taking the following steps.
+To resolve this error, it's necessary to undo the privileges granted to users and roles on the affected pg_catalog tables and views. You can accomplish this task by taking the following steps.
- **Step 1: Identify Privileges**
+**Step 1: Identify privileges**
Execute the following query on your single server by logging in as the admin user:
GROUP BY
```
-**Step 2: Review the Output**
+**Step 2: Review the output**
The output of the query shows the list of privileges granted to roles on the impacted tables and views.
For example:
| SELECT | pg_authid | adminuser1 | | SELECT, UPDATE |pg_shadow | adminuser2 | - **Step 3: Undo the privileges**
-To undo the privileges, run REVOKE statements for each privilege on the relation from the grantee. In this example, you would run:
+To undo the privileges, run `REVOKE` statements for each privilege on the relation from the grantee. In this example, you would run:
```sql REVOKE SELECT ON pg_authid FROM adminuser1;
REVOKE SELECT ON pg_shadow FROM adminuser2;
REVOKE UPDATE ON pg_shadow FROM adminuser2; ```
-**Step 4: Final Verification**
+**Step 4: Final verification**
-Run the query from Step 1 again to ensure that the resulting output set is empty.
+Run the query from step 1 again to ensure that the resulting output set is empty.
> [!NOTE]
-> Make sure you perform the above steps for all the databases included in the migration to avoid any permission-related issues during the migration.
+> Make sure you perform the preceding steps for all the databases included in the migration to avoid any permission-related issues during the migration.
-After completing these steps, you can proceed to initiate a new migration from the single server to the flexible server using the migration service. You shouldn't encounter permission-related issues during this process.
+After you finish these steps, you can proceed to initiate a new migration from the single server to the flexible server by using the migration service. You shouldn't encounter permission-related issues during this process.
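As a rough sketch of applying the workaround across every database in the migration with `psql` (the server, admin user, database names, and grantees are placeholders, not values from this article):

```bash
# Hypothetical example: revoke the reported privileges in each database included in the migration,
# then rerun the identification query to confirm the output is empty.
for db in <database1> <database2>; do
  psql "host=<single-server-name>.postgres.database.azure.com user=<admin-user>@<single-server-name> dbname=$db sslmode=require" \
    -c "REVOKE SELECT ON pg_authid FROM adminuser1;" \
    -c "REVOKE SELECT, UPDATE ON pg_shadow FROM adminuser2;"
done
```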
## Related content+ - [Migration service](concepts-migration-service-postgresql.md) - [Known issues and limitations](concepts-known-issues-migration-service.md) - [Network setup](how-to-network-setup-migration-service.md)
private-link Disable Private Link Service Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md
ms.devlang: azurecli
# Disable network policies for Private Link service source IP
-To choose a source IP address for your Azure Private Link service, the explicit disable setting `privateLinkServiceNetworkPolicies` is required on the subnet. This setting only applies for the specific private IP address you chose as the source IP of the Private Link service. For other resources in the subnet, access is controlled based on the network security group security rules definition.
+When configuring Azure Private Link service, the explicit setting `privateLinkServiceNetworkPolicies` must be disabled on the subnet. This setting only affects the Private Link service. For other resources in the subnet, access is controlled based on the network security group security rules definition.
When you use the portal to create an instance of the Private Link service, this setting is automatically disabled as part of the creation process. Deployments using any Azure client (PowerShell, Azure CLI, or templates) require an extra step to change this property.
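A minimal sketch of that extra step with the Azure CLI, assuming placeholder resource names (the flag name can differ slightly between CLI versions):

```bash
# Disable the Private Link service network policy on the subnet that provides the source IP.
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <virtual-network> \
  --name <subnet> \
  --disable-private-link-service-network-policies true
```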
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/automation.md
After onboarding your Microsoft Sentinel workspace to the unified security opera
| | | | **Automation rules with alert triggers** | In the unified security operations platform, automation rules with alert triggers act only on Microsoft Sentinel alerts. <br><br>For more information, see [Alert create trigger](../automate-incident-handling-with-automation-rules.md#alert-create-trigger). | | **Automation rules with incident triggers** | In both the Azure portal and the unified security operations platform, the **Incident provider** condition property is removed, as all incidents have *Microsoft Defender XDR* as the incident provider (the value in the *ProviderName* field). <br><br>At that point, any existing automation rules run on both Microsoft Sentinel and Microsoft Defender XDR incidents, including those where the **Incident provider** condition is set to only *Microsoft Sentinel* or *Microsoft 365 Defender*. <br><br>However, automation rules that specify a specific analytics rule name will run only on the incidents that were created by the specified analytics rule. This means that you can define the **Analytic rule name** condition property to an analytics rule that exists only in Microsoft Sentinel to limit your rule to run on incidents only in Microsoft Sentinel. <br><br>For more information, see [Incident trigger conditions](../automate-incident-handling-with-automation-rules.md#conditions). |
-| **Changes to existing incident names** | In the unified SOC operations platform, the Defender portal uses a unique engine to correlate incidents and alerts. When onboarding your workspace to the unified SOC operations platform, existing incident names might be changed if the correlation is applied. To ensure that your automation rules always run correctly, we therefore recommend that you avoid using incident titles in your automation rules, and suggest the use of tags instead. |
+| **Changes to existing incident names** | In the unified SOC operations platform, the Defender portal uses a unique engine to correlate incidents and alerts. When onboarding your workspace to the unified SOC operations platform, existing incident names might be changed if the correlation is applied. To ensure that your automation rules always run correctly, we therefore recommend that you avoid using incident titles as condition criteria in your automation rules, and suggest instead to use the name of the analytics rule that created the incident, and tags if more specificity is required. |
| ***Updated by* field** | <li>After onboarding your workspace, the **Updated by** field has a [new set of supported values](../automate-incident-handling-with-automation-rules.md#incident-update-trigger), which no longer include *Microsoft 365 Defender*. In existing automation rules, *Microsoft 365 Defender* is replaced by a value of *Other* after onboarding your workspace. <br><br><li>If multiple changes are made to the same incident in a 5-10 minute period, a single update is sent to Microsoft Sentinel, with only the most recent change. <br><br>For more information, see [Incident update trigger](../automate-incident-handling-with-automation-rules.md#incident-update-trigger). | | **Automation rules that add incident tasks** | If an automation rule adds an incident task, the task is shown only in the Azure portal. | | **Microsoft incident creation rules** | Microsoft incident creation rules aren't supported in the unified security operations platform. <br><br>For more information, see [Microsoft Defender XDR incidents and Microsoft incident creation rules](../microsoft-365-defender-sentinel-integration.md#microsoft-defender-xdr-incidents-and-microsoft-incident-creation-rules). |
sentinel Create Analytics Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-analytics-rules.md
In the Azure portal, stages are represented visually as tabs. In the Defender po
| **Description** | A free-text description for your rule. | | **Severity** | Match the impact the activity triggering the rule might have on the target environment, should the rule be a true positive.<br><br>**Informational**: No impact on your system, but the information might be indicative of future steps planned by a threat actor.<br>**Low**: The immediate impact would be minimal. A threat actor would likely need to conduct multiple steps before achieving an impact on an environment.<br>**Medium**: The threat actor could have some impact on the environment with this activity, but it would be limited in scope or require additional activity.<br> **High**: The activity identified provides the threat actor with wide ranging access to conduct actions on the environment or is triggered by impact on the environment. | | **MITRE ATT&CK** | Choose those threat activities which apply to your rule. Select from among the **MITRE ATT&CK** tactics and techniques presented in the drop-down list. You can make multiple selections.<br><br>For more information on maximizing your coverage of the MITRE ATT&CK threat landscape, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md). |
- | **Status** | If you want the rule to run immediately after you finish creating it, leave the status set to **Enabled**. Otherwise, select **Disabled**, and enable it later from your **Active rules** tab when you need it. Or enable the rule without it running immediately by scheduling the rule's first run at a specific date and time. See [Schedule and scope the query](#schedule-and-scope-the-query). |
+ | **Status** | **Enabled**: The rule runs immediately upon creation, or at the [specific date and time you choose to schedule it (currently in PREVIEW)](#schedule-and-scope-the-query).<br>**Disabled**: The rule is created but doesn't run. Enable it later from your **Active rules** tab when you need it. |
1. Select **Next: Set rule logic**.
For more information, see:
- [Entities in Microsoft Sentinel](entities.md) - [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
-Also, learn from an example of using custom analytics rules when [monitoring Zoom](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) with a [custom connector](create-custom-connector.md).
+Also, learn from an example of using custom analytics rules when [monitoring Zoom](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) with a [custom connector](create-custom-connector.md).
sentinel Scheduled Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/scheduled-rules-overview.md
The MITRE ATT&CK tactics and techniques defined here in the rule apply to any al
For more information on maximizing your coverage of the MITRE ATT&CK threat landscape, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md).
-**Status:** When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, you have two options:
-- Select **Disabled**, and the rule will be added to your **Active rules** tab. You can enable it from there when you need it.
+**Status:** When you create the rule, its **Status** is **Enabled** by default, which means it runs immediately after you finish creating it. If you don't want it to run immediately, you have two options:
+- Select **Disabled**, and the rule is created without running. When you want the rule to run, find it in your **Active rules** tab, and enable it from there.
- Schedule the rule to first run at a specific date and time. This method is currently in PREVIEW. See [Query scheduling](#query-scheduling) later on in this article. ### Rule query
service-bus-messaging Service Bus Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-azure-credentials.md
Next, update your code to use passwordless connections.
1. Identify the code that creates a `ServiceBusClient` object to connect to Azure Service Bus. Update your code to match the following example: ```csharp
- var serviceBusNamespace = $"https://{namespace}.servicebus.windows.net";
+ var serviceBusNamespace = $"{namespace}.servicebus.windows.net";
ServiceBusClient client = new( serviceBusNamespace, new DefaultAzureCredential());
Next, update your code to use passwordless connections.
} serviceBusNamespace := fmt.Sprintf(
- "https://%s.servicebus.windows.net",
+ "%s.servicebus.windows.net",
namespace) client, err := azservicebus.NewClient(serviceBusNamespace, credential, nil)
Next, update your code to use passwordless connections.
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build(); String serviceBusNamespace =
- "https://" + namespace + ".servicebus.windows.net";
+ namespace + ".servicebus.windows.net";
ConnectionFactory factory = new ServiceBusJmsConnectionFactory( credential,
Next, update your code to use passwordless connections.
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build(); String serviceBusNamespace =
- "https://" + namespace + ".servicebus.windows.net";
+ namespace + ".servicebus.windows.net";
ServiceBusReceiverClient receiver = new ServiceBusClientBuilder() .credential(serviceBusNamespace, credential)
Next, update your code to use passwordless connections.
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build(); String serviceBusNamespace =
- "https://" + namespace + ".servicebus.windows.net";
+ namespace + ".servicebus.windows.net";
ServiceBusSenderClient client = new ServiceBusClientBuilder() .credential(serviceBusNamespace, credential)
Next, update your code to use passwordless connections.
```nodejs const credential = new DefaultAzureCredential();
- const serviceBusNamespace = `https://${namespace}.servicebus.windows.net`;
+ const serviceBusNamespace = `${namespace}.servicebus.windows.net`;
const client = new ServiceBusClient( serviceBusNamespace,
Next, update your code to use passwordless connections.
```python credential = DefaultAzureCredential()
- service_bus_namespace = "https://%s.servicebus.windows.net" % namespace
+ service_bus_namespace = "%s.servicebus.windows.net" % namespace
client = ServiceBusClient( fully_qualified_namespace = service_bus_namespace,
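    # Note: DefaultAzureCredential only supplies the token at runtime; the signed-in identity
    # also needs a Service Bus data-plane role on the namespace for these calls to succeed
    # (for example, the built-in "Azure Service Bus Data Owner" role, or the narrower
    # Data Sender / Data Receiver roles).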
service-bus-messaging Service Bus Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-prefetch.md
If an application explicitly abandons a message, the message might again be avai
If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all. If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits.
-The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. At the same time, the lock timeout shouldn't be so long that messages can exceed their maximum time to live when they're accidentally dropped, and so requiring their lock to expire before being redelivered.
+The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. At the same time, the lock duration shouldn't be so long that messages can exceed their maximum time to live while being locked, as this would mean they get removed if they could not be completed when they were prefetched.
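A back-of-the-envelope sketch of that balance, using illustrative numbers rather than recommended settings:

```bash
# (prefetch + 1) * per-message processing time must fit inside the lock duration.
lock_duration_sec=60      # lock duration configured on the queue or subscription
per_message_sec=2         # expected processing time per message
max_prefetch=$(( lock_duration_sec / per_message_sec - 1 ))
echo "Keep the prefetch count at or below ${max_prefetch} for these assumptions."   # 29
```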
[!INCLUDE [service-bus-track-0-and-1-sdk-support-retirement](../../includes/service-bus-track-0-and-1-sdk-support-retirement.md)]
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Previously updated : 05/24/2024 Last updated : 07/17/2024 # Service Fabric managed cluster node types
The cluster begins upgrading automatically. You see the additional nodes when co
You can choose to enable automatic OS image upgrades to the virtual machines running your managed cluster nodes. Although the virtual machine scale set resources are managed on your behalf with Service Fabric managed clusters, it's your choice to enable automatic OS image upgrades for your cluster nodes. As with [classic Service Fabric](service-fabric-best-practices-infrastructure-as-code.md#virtual-machine-os-automatic-upgrade-configuration) clusters, managed cluster nodes aren't upgraded by default, in order to prevent unintended disruptions to your cluster.
+> [!NOTE]
+> Automatic OS image upgrade is supported for both platform and gallery based OS images.
+ To enable automatic OS upgrades: * Use apiVersion `2021-05-01` or later version of *Microsoft.ServiceFabric/managedclusters* and *Microsoft.ServiceFabric/managedclusters/nodetypes* resources
In this walkthrough, you learn how to modify a placement property for a node typ
### Configure placement properties with a template
-To adjust the placement properties for a node type using an ARM Template, adjust the `placementProperties` property with one or more new values and do a cluster deployment for the setting to take effect. The below sample shows three values being set for a node type.
+To adjust the placement properties for a node type using an ARM Template, adjust the `placementProperties` property with one or more new values and do a cluster deployment for the setting to take effect. The following sample shows three values being set for a node type.
* The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later.
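A hedged sketch of that redeployment step with the Azure CLI; the file and resource group names are placeholders:

```bash
# Redeploy the managed cluster template after editing placementProperties on the node type.
az deployment group create \
  --resource-group <resource-group> \
  --template-file <managed-cluster-template.json> \
  --parameters @<managed-cluster-parameters.json>
```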
site-recovery Delete Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-appliance.md
If all the appliance components are in a critical state and there's no connectiv
Before you delete the Azure Site Recovery replication appliance, ensure that you *disable replication of all servers* using the Azure Site Recovery replication appliance. To do this, go to Azure portal, select the Recovery Services vault > *Replicated items* blade. Select the servers you want to stop replicating, select **Stop replication**, and confirm the action.
-### Delete an unhealthy appliance
+## Verify account permissions
+
+If you just created your free Azure account, you're the administrator of your subscription and you have the permissions you need. If you're not the subscription administrator, work with the administrator to assign the permissions you need. To enable replication for a new virtual machine, you must have permission to:
+
+- Create a virtual machine in the selected resource group.
+- Create a virtual machine in the selected virtual network.
+- Write to an Azure storage account.
+- Write to an Azure managed disk.
+
+To complete these tasks, your account should be assigned the Virtual Machine Contributor built-in role. In addition, to manage Site Recovery operations in a vault, your account should be assigned the Site Recovery Contributor built-in role.
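As a sketch of assigning those built-in roles with the Azure CLI (the identity and scopes are placeholders, not values from this article):

```bash
# Grant the roles called out above: Virtual Machine Contributor for enabling replication,
# and Site Recovery Contributor for managing Site Recovery operations in the vault.
az role assignment create --assignee "<user-object-id>" --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<workload-resource-group>"
az role assignment create --assignee "<user-object-id>" --role "Site Recovery Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<vault-resource-group>"
```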
+
+## VMware account permissions
+
+**Task** | **Role/Permissions** | **Details**
+ | |
+**VM discovery** | At least a read-only user<br/><br/> Data Center object –> Propagate to Child Object, role=Read-only | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+**Full replication, failover, failback** | Create a role (Azure_Site_Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object –> Propagate to Child Object, role=Azure_Site_Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots, Create snapshots | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+
+## Delete an unhealthy appliance
You can only delete the Azure Site Recovery replication appliance from the Azure portal if all components are in a critical state and the appliance is no longer accessible.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Previously updated : 03/07/2024 Last updated : 07/15/2024 - engagement-fy23 - linux-related-content
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 74](https://support.microsoft.com/topic/update-rollup-74-for-azure-site-recovery-584e3586-4c55-4cc2-8b1c-63038b6b4464) | 9.62.7096.1 | 9.62.7096.1 | 9.62.7096.1 | 5.24.0614.1 | 2.0.9919.0
[Rollup 73](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) | 9.61.7016.1 | 9.61.7016.1 | 9.61.7016.1 | 5.24.0317.5 | 2.0.9917.0 [Rollup 72](https://support.microsoft.com/topic/update-rollup-72-for-azure-site-recovery-kb5036010-aba602a9-8590-4afe-ac8a-599141ec99a5) | 9.60.6956.1 | NA | 9.60.6956.1 | 5.24.0117.5 | 2.0.9917.0 [Rollup 71](https://support.microsoft.com/topic/update-rollup-71-for-azure-site-recovery-kb5035688-4df258c7-7143-43e7-9aa5-afeef9c26e1a) | 9.59.6930.1 | NA | 9.59.6930.1 | NA | NA
For Site Recovery components, we support N-4 versions, where N is the latest rel
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (July 2024)
+
+### Update Rollup 74
+
+[Update rollup 74](https://support.microsoft.com/topic/update-rollup-74-for-azure-site-recovery-584e3586-4c55-4cc2-8b1c-63038b6b4464) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for Debian 11, SLES 12, SLES 15, and RHEL 9 Linux distros. <br><br/> Added capacity reservation support for Virtual Machine Scale Sets Flex machines protected using Site Recovery.
+**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11, SLES 12, SLES 15, and RHEL 9 Linux distros. <br><br/> Added capacity reservation support for Virtual Machine Scale Sets Flex machines protected using Site Recovery. <br><br/> Added support to enable replication for newly added data disks that are added to a VMware virtual machine, which already has disaster recovery enabled. [Learn more](./vmware-azure-enable-replication-added-disk.md)
++ ## Updates (April 2024) ### Update Rollup 73
-[Update rollup 72](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) provides the following updates:
+[Update rollup 73](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) provides the following updates:
**Update** | **Details** |
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4.
-> [!CAUTION]
+> [!CAUTION]
> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 2.4
+> * Effective July 23, 2024, jobs running on Azure Synapse Runtime for Apache Spark 2.4 will be **disabled**. Migrate to a higher runtime version **immediately**; otherwise, your jobs will stop executing.
+> * **All Spark jobs running on Azure Synapse Runtime for Apache Spark 2.4 will be disabled as of July 23, 2024.**
> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes. > * Post September 29, we will not be addressing any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Utilizing Spark 2.4 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns. > * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it.
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> [!CAUTION] > Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
-* End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023.
-* Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
-* In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. Existing workflows will continue to run but security updates and bug fixes will no longer be available. Metadata will temporarily remain in the Synapse workspace.
-* **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
+> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023.
+> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. Existing workflows will continue to run but security updates and bug fixes will no longer be available. Metadata will temporarily remain in the Synapse workspace.
+> * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
## Component versions
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Title: What's new in the Remote Desktop client for Windows - Azure Virtual Deskt
description: Learn about recent changes to the Remote Desktop client for Windows zone_pivot_groups: azure-virtual-desktop-windows-clients-- Previously updated : 07/10/2024++ Last updated : 07/17/2024 # What's new in the Remote Desktop client for Windows
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
Title: Overview of managed disk encryption options description: Overview of managed disk encryption options Previously updated : 02/20/2024 Last updated : 07/17/2024
# Overview of managed disk encryption options
-There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host.
+There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE), and encryption at host.
- **Azure Disk Storage Server-Side Encryption** (also referred to as encryption-at-rest or Azure Storage encryption) is always enabled and automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters. When configured with a Disk Encryption Set (DES), it supports customer-managed keys as well. It doesn't encrypt temp disks or disk caches. For full details, see [Server-side encryption of Azure Disk Storage](./disk-encryption.md).
There are several types of encryption available for your managed disks, includin
- **Azure Disk Encryption** helps protect and safeguard your data to meet your organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the [DM-Crypt](https://wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](https://wikipedia.org/wiki/BitLocker) feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets, with the option to encrypt with a key encryption key (KEK). For full details, see [Azure Disk Encryption for Linux VMs](./linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](./windows/disk-encryption-overview.md). -- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk. Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#confidential-os-disk-encryption).
+- **Confidential disk encryption** binds disk encryption keys to the virtual machine's TPM and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. Currently only available for the OS disk; [temp disk support is in preview](https://techcommunity.microsoft.com/t5/azure-confidential-computing/confidential-temp-disk-encryption-for-confidential-vms-in-public/ba-p/3971393). Encryption at host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption. For full details, see [DCasv5 and ECasv5 series confidential VMs](../confidential-computing/confidential-vm-overview.md#confidential-os-disk-encryption).
Encryption is part of a layered approach to security and should be used with other recommendations to secure Virtual Machines and their disks. For full details, see [Security recommendations for virtual machines in Azure](security-recommendations.md) and [Restrict import/export access to managed disks](disks-enable-private-links-for-import-export-portal.yml).
Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidenti
| &nbsp; | **Azure Disk Storage Server-Side Encryption** | **Encryption at Host** | **Azure Disk Encryption** | **Confidential disk encryption (For the OS disk only)** | |--|--|--|--|--| | Encryption at rest (OS and data disks) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| Temp disk encryption | &#10060; | &#x2705; Only supported with platform managed key | &#x2705; | &#10060; |
+| Temp disk encryption | &#10060; | &#x2705; Only supported with platform managed key | &#x2705; | &#x2705; [In Preview](https://techcommunity.microsoft.com/t5/azure-confidential-computing/confidential-temp-disk-encryption-for-confidential-vms-in-public/ba-p/3971393)|
| Encryption of caches | &#10060; | &#x2705; | &#x2705; | &#x2705; | | Data flows encrypted between Compute and Storage | &#10060; | &#x2705; | &#x2705; | &#x2705; | | Customer control of keys | &#x2705; When configured with DES | &#x2705; When configured with DES | &#x2705; When configured with KEK | &#x2705; When configured with DES |
Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidenti
> For Confidential disk encryption, Microsoft Defender for Cloud does not currently have a recommendation that is applicable. \* Microsoft Defender for Cloud has the following disk encryption recommendations:
+* [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc4d8e41-e223-45ea-9bf5-eada37891d87) (Only detects Encryption at Host)
* [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) (Only detects Azure Disk Encryption)
-* [\[Preview\]: Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0) (Detects both Azure Disk Encryption and EncryptionAtHost)
-* [\[Preview\]: Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f) (Detects both Azure Disk Encryption and EncryptionAtHost)
+* [Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0) (Detects both Azure Disk Encryption and EncryptionAtHost)
+* [Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f) (Detects both Azure Disk Encryption and EncryptionAtHost)
## Next steps
virtual-machines Dlsv6 Dldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv6-dldsv6-series.md
+
+ Title: Dlsv6 and Dldsv6-series
+description: Specifications for the Dlsv6 and Dldsv6-series VMs
++++ Last updated : 07/16/2024++++
+# Dlsv6 and Dldsv6-series (Preview)
+
+Applies to ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
+
+> [!NOTE]
+> Azure Virtual Machine Series Dlsv6 and Dldsv6 are currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Dlsv6 and Dldsv6-series Virtual Machines run on the Intel® Xeon® Platinum 8473C (Emerald Rapids) processor in a [hyper threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration. This new processor features an all-core turbo clock speed of 3.0 GHz with [Intel® Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel® Advanced-Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel® Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). The Dlsv6 and Dldsv6 VM series provides 2 GiB of RAM per vCPU and is optimized for workloads that require less RAM per vCPU than standard VM sizes. Target workloads include web servers, gaming, video encoding, AI/ML, and batch processing.
+
+These new Intel based VMs have two variants: Dlsv6 without local SSD and Dldsv6 with local SSD.
+
+## Dlsv6-series
+
+Dlsv6-series virtual machines run on the 5<sup>th</sup> Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor, reaching an all-core turbo clock speed of up to 3.0 GHz. These virtual machines offer up to 128 vCPU and 256 GiB of RAM. These VM sizes can reduce cost when running non-memory intensive applications.
+
+Dlsv6-series virtual machines do not have any temporary storage, thus lowering the price of entry. You can attach Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth (Mbps)** |
+||||||||||||||
+| **Standard_D2ls_v6** | 2 | 4 | 0 | 8 | NA | NA | 3750/106 | 40000/1250 | 4167/124 | 44444/1463 | 2 | 12500 |
+| **Standard_D4ls_v6** | 4 | 8 | 0 | 12 | NA | NA | 6400/212 | 40000/1250 | 8333/248 | 52083/1463 | 2 | 12500 |
+| **Standard_D8ls_v6** | 8 | 16 | 0 | 24 | NA | NA | 12800/424 | 40000/1250 | 16667/496 | 52083/1463 | 4 | 12500 |
+| **Standard_D16ls_v6** | 16 | 32 | 0 | 48 | NA | NA | 25600/848 | 40000/1250 | 33333/992 | 52083/1463 | 8 | 12500 |
+| **Standard_D32ls_v6** | 32 | 64 | 0 | 64 | NA | NA | 51200/1696 | 80000/1696 | 66667/1984 | 104167/1984 | 8 | 16000 |
+| **Standard_D48ls_v6** | 48 | 96 | 0 | 64 | NA | NA | 76800/2544 | 80000/2544 | 100000/2976 | 104167/2976 | 8 | 24000 |
+| **Standard_D64ls_v6** | 64 | 128 | 0 | 64 | NA | NA | 102400/3392 | 102400/3392 | 133333/3969 | 133333/3969 | 8 | 30000 |
+| **Standard_D96ls_v6** | 96 | 192 | 0 | 64 | NA | NA | 153600/5088 | 153600/5088 | 200000/5953 | 200000/5953 | 8 | 41000 |
+| **Standard_D128ls_v6** | 128 | 256 | 0 | 64 | NA | NA | 204800/6782 | 204800/6782 | 266667/7935 | 266667/7935 | 8 | 54000 |
+
+## Dldsv6-series
+
+Dldsv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor, reaching an all-core turbo clock speed of up to 3.0 GHz. These virtual machines offer up to 128 vCPU and 256 GiB of RAM as well as fast, local SSD storage up to 4x1760 GiB. These VM sizes can reduce cost when running non-memory intensive applications.
+
+Dldsv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview <br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** |
+||||||||||||||
+| **Standard_D2lds_v6** | 2 | 4 | 1x110 | 8 | 37500/180 | 15000/90 | 3750/106 | 40000/1250 | 4167/124 | 44444/1463 | 2 | 12500 |
+| **Standard_D4lds_v6** | 4 | 8 | 1x220 | 12 | 75000/360 | 30000/180 | 6400/212 | 40000/1250 | 8333/248 | 52083/1463 | 2 | 12500 |
+| **Standard_D8lds_v6** | 8 | 16 | 1x440 | 24 | 150000/720 | 60000/360 | 12800/424 | 40000/1250 | 16667/496 | 52083/1463 | 4 | 12500 |
+| **Standard_D16lds_v6** | 16 | 32 | 2x440 | 48 | 300000/1440 | 120000/720 | 25600/848 | 40000/1250 | 33333/992 | 52083/1463 | 8 | 12500 |
+| **Standard_D32lds_v6** | 32 | 64 | 4x440 | 64 | 600000/2880 | 240000/1440 | 51200/1696 | 80000/1696 | 66667/1984 | 104167/1984 | 8 | 16000 |
+| **Standard_D48lds_v6** | 48 | 96 | 6x440 | 64 | 900000/4320 | 360000/2160 | 76800/2544 | 80000/2544 | 100000/2976 | 104167/2976 | 8 | 24000 |
+| **Standard_D64lds_v6** | 64 | 128 | 4x880 | 64 | 1200000/5760 | 480000/2880 | 102400/3392 | 102400/3392 | 133333/3969 | 133333/3969 | 8 | 30000 |
+| **Standard_D96lds_v6** | 96 | 192 | 6x880 | 64 | 1800000/8640 | 720000/4320 | 153600/5088 | 153600/5088 | 200000/5953 | 200000/5953 | 8 | 41000 |
+| **Standard_D128lds_v6** | 128 | 256 | 4x1760 | 64 | 2400000/11520 | 960000/5760 | 204800/6782 | 204800/6782 | 266667/7935 | 266667/7935 | 8 | 54000 |
+
+## Size table definitions
+
+Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
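+A quick way to verify that conversion:
+
+```bash
+# 1023 GiB expressed in bytes, then in GB (10^9 bytes).
+bytes=$(( 1023 * 1024 * 1024 * 1024 ))
+echo "${bytes} bytes"                                        # 1098437885952
+awk -v b="$bytes" 'BEGIN { printf "%.1f GB\n", b / 1e9 }'    # 1098.4 GB
+```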
+
+Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+
+Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**.
+
+To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance).
+
+**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../virtual-network/virtual-machine-network-throughput.md).
+
+Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](https://learn.microsoft.com/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](https://learn.microsoft.com/azure/virtual-network/virtual-network-bandwidth-testing)
+
virtual-machines Esv6 Edsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/esv6-edsv6-series.md
+
+ Title: Esv6 and Edsv6-series
+description: Specifications for Esv6 and Edsv6-series
++++ Last updated : 07/17/2024+++
+# Esv6-series and Edsv6-series (Preview)
+
+Applies to ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
+
+>[!NOTE]
+>Azure Virtual Machine Series Esv6 and Edsv6 are currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Esv6 and Edsv6-series Virtual Machines (VMs) run on the 5th Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor in a [hyper threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration, providing a better value proposition for most general-purpose workloads. This new processor features an all-core turbo clock speed of 3.0 GHz with [Intel® Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel® Advanced-Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel® Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). The Esv6 and Edsv6-series feature up to 1024 GiB of RAM. These virtual machines are ideal for memory-intensive enterprise applications, relational database servers, and in-memory analytics workloads.
+
+These new Intel based VMs have two variants: Esv6 without local SSD and Edsv6 with local SSD.
+
+## Esv6-series
+
+Esv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor, reaching an all-core turbo clock speed of up to 3.0 GHz. These virtual machines offer up to 128 vCPU and 1024 GiB of RAM. Esv6-series virtual machines are ideal for memory-intensive enterprise applications.
+
+[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBps (RR)** | **Max temp storage throughput: IOPS/MBps (RW)** | **Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth (Mbps)** |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| **Standard_E2s_v6** | 2 | 16 | 0 | 8 | NA | NA | 3750/106 | 40000/1250 | 4167/124 | 44444/1463 | 2 | 12500 |
+| **Standard_E4s_v6** | 4 | 32 | 0 | 12 | NA | NA | 6400/212 | 40000/1250 | 8333/248 | 52083/1463 | 2 | 12500 |
+| **Standard_E8s_v6** | 8 | 64 | 0 | 24 | NA | NA | 12800/424 | 40000/1250 | 16667/496 | 52083/1463 | 4 | 12500 |
+| **Standard_E16s_v6** | 16 | 128 | 0 | 48 | NA | NA | 25600/848 | 40000/1250 | 33333/992 | 52083/1463 | 8 | 12500 |
+| **Standard_E20s_v6** | 20 | 160 | 0 | 48 | NA | NA | 32000/1060 | 64000/1600 | 41667/1240 | 83333/1872 | 8 | 12500 |
+| **Standard_E32s_v6** | 32 | 256 | 0 | 64 | NA | NA | 51200/1696 | 80000/1696 | 66667/1984 | 104167/1984 | 8 | 16000 |
+| **Standard_E48s_v6** | 48 | 384 | 0 | 64 | NA | NA | 76800/2544 | 80000/2544 | 100000/2976 | 104167/2976 | 8 | 24000 |
+| **Standard_E64s_v6** | 64 | 512 | 0 | 64 | NA | NA | 102400/3392 | 102400/3392 | 133333/3969 | 133333/3969 | 8 | 30000 |
+| **Standard_E96s_v6** | 96 | 768 | 0 | 64 | NA | NA | 153600/5088 | 153600/5088 | 200000/5953 | 200000/5953 | 8 | 41000 |
+| **Standard_E128s_v6** | 128 | 1024 | 0 | 64 | NA | NA | 204800/6782 | 204800/6782 | 266667/7935 | 266667/7935 | 8 | 54000 |
+
+## Edsv6-series
+
+Edsv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor, reaching an all-core turbo clock speed of up to 3.0 GHz. These virtual machines offer up to 192 vCPUs, 1832 GiB of RAM, and fast local SSD storage up to 6x1760 GiB. Edsv6-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low-latency, high-speed local storage.
+
+Edsv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk and Premium SSD v2 storage. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
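+
+As a sketch, attaching a Premium SSD data disk to an existing Edsv6-series VM with Azure PowerShell might look like the following; the resource group, VM name, disk name, size, and region are illustrative assumptions.
+
+```powershell
+# Sketch: create a Premium SSD managed disk and attach it to an existing VM.
+$diskConfig = New-AzDiskConfig -Location "eastus" -CreateOption Empty -DiskSizeGB 512 -SkuName Premium_LRS
+$disk = New-AzDisk -ResourceGroupName "rg-edsv6-demo" -DiskName "data-disk-01" -Disk $diskConfig
+
+$vm = Get-AzVM -ResourceGroupName "rg-edsv6-demo" -Name "edsv6-demo"
+$vm = Add-AzVMDataDisk -VM $vm -Name "data-disk-01" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach -Caching ReadOnly
+Update-AzVM -ResourceGroupName "rg-edsv6-demo" -VM $vm
+```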
+
+[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported<br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported<br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBps (RR)** | **Max temp storage throughput: IOPS/MBps (RW)** | **Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth (Mbps)** |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| **Standard_E2ds_v6** | 2 | 16 | 1x110 | 8 | 37500/180 | 15000/90 | 3750/106 | 40000/1250 | 4167/124 | 44444/1463 | 2 | 12500 |
+| **Standard_E4ds_v6** | 4 | 32 | 1x220 | 12 | 75000/360 | 30000/180 | 6400/212 | 40000/1250 | 8333/248 | 52083/1463 | 2 | 12500 |
+| **Standard_E8ds_v6** | 8 | 64 | 1x440 | 24 | 150000/720 | 60000/360 | 12800/424 | 40000/1250 | 16667/496 | 52083/1463 | 4 | 12500 |
+| **Standard_E16ds_v6** | 16 | 128 | 2x440 | 48 | 300000/1440 | 120000/720 | 25600/848 | 40000/1250 | 33333/992 | 52083/1463 | 8 | 12500 |
+| **Standard_E20ds_v6** | 20 | 160 | 2x550 | 48 | 375000/1800 | 150000/900 | 32000/1060 | 64000/1600 | 41667/1240 | 83333/1872 | 8 | 12500 |
+| **Standard_E32ds_v6** | 32 | 256 | 4x440 | 64 | 600000/2880 | 240000/1440 | 51200/1696 | 80000/1696 | 66667/1984 | 104167/1984 | 8 | 16000 |
+| **Standard_E48ds_v6** | 48 | 384 | 6x440 | 64 | 900000/4320 | 360000/2160 | 76800/2544 | 80000/2544 | 100000/2976 | 104167/2976 | 8 | 24000 |
+| **Standard_E64ds_v6** | 64 | 512 | 4x880 | 64 | 1200000/5760 | 480000/2880 | 102400/3392 | 102400/3392 | 133333/3969 | 133333/3969 | 8 | 30000 |
+| **Standard_E96ds_v6** | 96 | 768 | 6x880 | 64 | 1800000/8640 | 720000/4320 | 153600/5088 | 153600/5088 | 200000/5953 | 200000/5953 | 8 | 41000 |
+| **Standard_E128ds_v6** | 128 | 1024 | 4x1760 | 64 | 2400000/11520 | 960000/5760 | 204800/6782 | 204800/6782 | 266667/7935 | 266667/7935 | 8 | 54000 |
+
+## Size table definitions
+
+Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+
+Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+
+Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**.
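+
+For example, with Azure PowerShell you can switch an attached data disk between cached and uncached operation by changing its host cache setting (a sketch; the resource group, VM name, and LUN are assumptions):
+
+```powershell
+# Sketch: ReadOnly/ReadWrite = cached operation, None = uncached operation.
+$vm = Get-AzVM -ResourceGroupName "rg-edsv6-demo" -Name "edsv6-demo"
+$vm = Set-AzVMDataDisk -VM $vm -Lun 0 -Caching ReadOnly   # use -Caching None for uncached
+Update-AzVM -ResourceGroupName "rg-edsv6-demo" -VM $vm
+```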
+
+To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance).
+
+**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](https://learn.microsoft.com/azure/virtual-network/virtual-machine-network-throughput).
+
+Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](https://learn.microsoft.com/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](https://learn.microsoft.com/azure/virtual-network/virtual-network-bandwidth-testing).
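+
+As a starting point for selecting the right VM type, a quick Azure PowerShell query such as the following lists the E-series v6 sizes exposed in a region together with their core, memory, and data disk limits. The region name is an assumption, and the wildcard filter is approximate (it can also match related E-family sizes).
+
+```powershell
+# Sketch: list E-series v6 sizes available in a region with basic capacity limits.
+Get-AzVMSize -Location "eastus" |
+    Where-Object { $_.Name -like "Standard_E*s_v6" } |
+    Sort-Object NumberOfCores |
+    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount
+```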
+
virtual-machines Jboss Eap Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-azure-vm.md
If you're interested in providing feedback or working closely on your migration
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] - Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)-- A Java Development Kit (JDK), version 11. In this guide, we recommend the [Red Hat Build of OpenJDK](https://developers.redhat.com/products/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands.
+- A Java Development Kit (JDK), version 17. In this guide, we recommend the [Red Hat Build of OpenJDK](https://developers.redhat.com/products/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands.
- [Git](https://git-scm.com/downloads). Use `git --version` to test whether `git` works. This tutorial was tested with version 2.34.1. - [Maven](https://maven.apache.org/download.cgi). Use `mvn -version` to test whether `mvn` works. This tutorial was tested with version 3.8.6.
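
A quick way to confirm these prerequisites is to run the version checks from the shell you plan to use. The following is a PowerShell-flavored sketch; in bash, read the variable with `echo $JAVA_HOME` instead.

```powershell
# Sketch: verify the JDK, JAVA_HOME, Git, and Maven prerequisites listed above.
java -version                 # expect a version 17 JDK
Write-Output $env:JAVA_HOME   # should point at the JDK installation
git --version                 # this tutorial was tested with 2.34.1
mvn -version                  # this tutorial was tested with 3.8.6
```
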
virtual-network Accelerated Networking Mana Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-windows.md
PS C:\Users\testVM> Get-NetAdapter
Name        InterfaceDescription                  ifIndex Status MacAddress        LinkSpeed
----        --------------------                  ------- ------ ----------        ---------
-Ethernet 4 Microsoft Hyper-V Network Adapter #2 10 Up 00-00-AA-AA-00-AA 200 Gbps
-Ethernet 5 Microsoft Azure Network Adapter #3 7 Up 11-11-BB-BB-11-BB 200 Gbps
+Ethernet Microsoft Hyper-V Network Adapter 13 Up 00-0D-3A-AA-00-AA 200 Gbps
+Ethernet 3 Microsoft Azure Network Adapter #2 8 Up 00-0D-3A-AA-00-AA 200 Gbps
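+
+# Sketch (not part of the original output): list only the MANA-backed Azure Network Adapter entries.
+PS C:\Users\testVM> Get-NetAdapter | Where-Object { $_.InterfaceDescription -like "Microsoft Azure Network Adapter*" }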
``` #### Device Manager
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
The firewall log is generated only if you have enabled it for each application g
||| |instanceId | Application Gateway instance for which firewall data is being generated. For a multiple-instance application gateway, there is one row per instance. | |clientIp | Originating IP for the request. |
-|clientPort | Originating port for the request. |
|requestUri | URL of the received request. | |ruleSetType | Rule set type. The available value is OWASP. | |ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. |
The firewall log is generated only if you have enabled it for each application g
"properties": { "instanceId": "ApplicationGatewayRole_IN_0", "clientIp": "52.161.109.147",
- "clientPort": "0",
"requestUri": "/", "ruleSetType": "OWASP", "ruleSetVersion": "3.0",