Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Reference Reliability Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md | - Title: Reliability recommendations -description: Full list of available reliability recommendations in Advisor. - Previously updated: 12/11/2023 ++ Title: Reliability recommendations +description: Full list of available reliability recommendations in Advisor. ++ Last updated: 08/29/2024 ++# Reliability recommendations ++Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard. ++1. Sign in to the [**Azure portal**](https://portal.azure.com). ++1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page. ++1. On the **Advisor** dashboard, select the **Reliability** tab. +++## AgFood Platform +<!--77f976ab-59e3-474d-ba04-32a7d41c9cb1_begin--> +#### Upgrade to the latest ADMA DotNet SDK version + +We identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. To ensure uninterrupted access to ADMA, latest features, and performance improvements, switch to the latest SDK version. ++For more information, see [What is Azure Data Manager for Agriculture?](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ) +ID: 77f976ab-59e3-474d-ba04-32a7d41c9cb1 ++<!--77f976ab-59e3-474d-ba04-32a7d41c9cb1_end--> ++<!--1233e513-ac1c-402d-be94-7133dc37cac6_begin--> +#### Upgrade to the latest ADMA Java SDK version + +We identified calls to an ADMA Java SDK version that is scheduled for deprecation. To ensure uninterrupted access to ADMA, latest features, and performance improvements, switch to the latest SDK version. ++For more information, see [What is Azure Data Manager for Agriculture?](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ) +ID: 1233e513-ac1c-402d-be94-7133dc37cac6 ++<!--1233e513-ac1c-402d-be94-7133dc37cac6_end--> + ++<!--c4ec2fa1-19f4-491f-9311-ca023ee32c38_begin--> +#### Upgrade to the latest ADMA Python SDK version + +We identified calls to an ADMA Python SDK version that is scheduled for deprecation. To ensure uninterrupted access to ADMA, latest features, and performance improvements, switch to the latest SDK version. ++For more information, see [What is Azure Data Manager for Agriculture?](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ) +ID: c4ec2fa1-19f4-491f-9311-ca023ee32c38 ++<!--c4ec2fa1-19f4-491f-9311-ca023ee32c38_end--> + ++<!--9e49a43a-dbe2-477d-9d34-a4f209617fdb_begin--> +#### Upgrade to the latest ADMA JavaScript SDK version + +We identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. To ensure uninterrupted access to ADMA, latest features, and performance improvements, switch to the latest SDK version. ++For more information, see [What is Azure Data Manager for Agriculture?](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ) +ID: 9e49a43a-dbe2-477d-9d34-a4f209617fdb ++<!--9e49a43a-dbe2-477d-9d34-a4f209617fdb_end--> + +<!--microsoft_agfoodplatform_end> +## API Management +<!--3dd24a8c-af06-49c3-9a04-fb5721d7a9bb_begin--> +#### Migrate API Management service to stv2 platform + +Support for API Management instances hosted on the stv1 platform will be retired by 31 August 2024. Migrate to the stv2-based platform before then to avoid service disruption.
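Relating to the ADMA SDK upgrade recommendations above: before upgrading, it can help to confirm which SDK version an application currently ships. A minimal Python sketch under stated assumptions (the package name azure-agrifood-farming and the minimum version below are placeholders, not values taken from this article):

```python
# Sketch: verify the installed ADMA (Data Manager for Agriculture) SDK version.
# Assumes the Python package name "azure-agrifood-farming"; the minimum version
# is a placeholder - take the real floor from the ADMA release notes.
from importlib.metadata import PackageNotFoundError, version

PACKAGE = "azure-agrifood-farming"
MINIMUM = (1, 0, 0)  # hypothetical floor; replace with the supported version


def parse(v: str) -> tuple:
    # Keep only the numeric leading parts of the version string.
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())


try:
    installed = version(PACKAGE)
    if parse(installed) < MINIMUM:
        print(f"{PACKAGE} {installed} is older than the supported floor; "
              f"run 'pip install --upgrade {PACKAGE}'.")
    else:
        print(f"{PACKAGE} {installed} looks current.")
except PackageNotFoundError:
    print(f"{PACKAGE} is not installed in this environment.")
```

The same check applies to the DotNet, Java, and JavaScript SDKs through their respective package managers.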
++For More information, see [API Management stv1 platform retirement - Global Azure cloud (August 2024)](/azure/api-management/breaking-changes/stv1-platform-retirement-august-2024) +ID: 3dd24a8c-af06-49c3-9a04-fb5721d7a9bb ++<!--3dd24a8c-af06-49c3-9a04-fb5721d7a9bb_end--> ++<!--8962964c-a6d6-4c3d-918a-2777f7fbdca7_begin--> +#### Hostname certificate rotation failed + +The API Management service failing to refresh the hostname certificate from the Key Vault can lead to the service using a stale certificate and runtime API traffic being blocked. Ensure that the certificate exists in the Key Vault, and the API Management service identity is granted secret read access. ++For More information, see [Configure a custom domain name for your Azure API Management instance](https://aka.ms/apimdocs/customdomain) +ID: 8962964c-a6d6-4c3d-918a-2777f7fbdca7 ++<!--8962964c-a6d6-4c3d-918a-2777f7fbdca7_end--> + ++<!--6124b23c-0d97-4098-9009-79e8c56cbf8c_begin--> +#### The legacy portal was deprecated 3 years ago and retired in October 2023. However, we are seeing active usage of the portal which may cause service disruption soon when we disable it. + +We highly recommend that you migrate to the new developer portal as soon as possible to continue enjoying our services and take advantage of the new features and improvements. ++For More information, see [Migrate to the new developer portal](/previous-versions/azure/api-management/developer-portal-deprecated-migration) +ID: 6124b23c-0d97-4098-9009-79e8c56cbf8c ++<!--6124b23c-0d97-4098-9009-79e8c56cbf8c_end--> + ++<!--53fd1359-ace2-4712-911c-1fc420dd23e8_begin--> +#### Dependency network status check failed + +Azure API Management service dependency not available. Please, check virtual network configuration. ++For More information, see [Deploy your Azure API Management instance to a virtual network - external mode](https://aka.ms/apim-vnet-common-issues) +ID: 53fd1359-ace2-4712-911c-1fc420dd23e8 ++<!--53fd1359-ace2-4712-911c-1fc420dd23e8_end--> + ++<!--b7316772-5c8f-421f-bed0-d86b0f128e25_begin--> +#### SSL/TLS renegotiation blocked + +SSL/TLS renegotiation attempt blocked; secure communication might fail. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, this option might result in a certificate prompt being presented to the client. -# Reliability recommendations +For More information, see [How to secure APIs using client certificate authentication in API Management](/azure/api-management/api-management-howto-mutual-certificates-for-clients) +ID: b7316772-5c8f-421f-bed0-d86b0f128e25 -Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard. +<!--b7316772-5c8f-421f-bed0-d86b0f128e25_end--> + -1. Sign in to the [**Azure portal**](https://portal.azure.com). +<!--2e4d65a3-1e77-4759-bcaa-13009484a97e_begin--> +#### Deploy an Azure API Management instance to multiple Azure regions for increased service availability + +Azure API Management supports multi-region deployment, which enables API publishers to add regional API gateways to an existing API Management instance. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability. -1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page. 
+For More information, see [Deploy an Azure API Management instance to multiple Azure regions](/azure/api-management/api-management-howto-deploy-multi-region) +ID: 2e4d65a3-1e77-4759-bcaa-13009484a97e -1. On the **Advisor** dashboard, select the **Reliability** tab. +<!--2e4d65a3-1e77-4759-bcaa-13009484a97e_end--> + -## AI Services +<!--f4c48f42-74f2-41bf-bf99-14e2f9ea9ac9_begin--> +#### Enable and configure autoscale for API Management instance on production workloads. + +API Management instance in production service tiers can be scaled by adding and removing units. The autoscaling feature can dynamically adjust the units of an API Management instance to accommodate a change in load without manual intervention. -### You're close to exceeding storage quota of 2GB. Create a Standard search service +For More information, see [Automatically scale an Azure API Management instance](https://aka.ms/apimautoscale) +ID: f4c48f42-74f2-41bf-bf99-14e2f9ea9ac9 -You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded. +<!--f4c48f42-74f2-41bf-bf99-14e2f9ea9ac9_end--> + +<!--microsoft_apimanagement_end> +## App Service +<!--1294987d-c97d-41d0-8fd8-cb6eab52d87b_begin--> +#### Scale out your App Service plan to avoid CPU exhaustion + +High CPU utilization can lead to runtime issues with applications. Your application exceeded 90% CPU over the last couple of days. To reduce CPU usage and avoid runtime issues, scale out the application. -Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity). +For More information, see [Best practices for Azure App Service](https://aka.ms/antbc-cpu) +ID: 1294987d-c97d-41d0-8fd8-cb6eab52d87b -### You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service +<!--1294987d-c97d-41d0-8fd8-cb6eab52d87b_end--> -You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded. +<!--a85f5f1c-c01f-4926-84ec-700b7624af8c_begin--> +#### Check your app's service health issues + +We have a recommendation related to your app's service health. Open the Azure Portal, go to the app, click the Diagnose and Solve to see more details. -Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity). +For More information, see [Best practices for Azure App Service](/azure/app-service/app-service-best-practices) +ID: a85f5f1c-c01f-4926-84ec-700b7624af8c -### You're close to exceeding your available storage quota. Add more partitions if you need more storage +<!--a85f5f1c-c01f-4926-84ec-700b7624af8c_end--> + -You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work. +<!--b30897cc-2c2e-4677-a2a1-107ae982ff49_begin--> +#### Fix the backup database settings of your App Service resource + +When an application has an invalid database configuration, its backups fail. For details, see your application's backup history on your app management page. 
-Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity) +For More information, see [Best practices for Azure App Service](https://aka.ms/antbc) +ID: b30897cc-2c2e-4677-a2a1-107ae982ff49 -### Quota Exceeded for this resource +<!--b30897cc-2c2e-4677-a2a1-107ae982ff49_end--> + -We have detected that the quota for your resource has been exceeded. You can wait for it to automatically get replenished soon, or to get unblocked and use the resource again now, you can upgrade it to a paid SKU. +<!--80efd6cb-dcee-491b-83a4-7956e9e058d5_begin--> +#### Fix the backup storage settings of your App Service resource + +When an application has invalid storage settings, its backups fail. For details, see your application's backup history on your app management page. -Learn more about [Cognitive Service - CognitiveServiceQuotaExceeded (Quota Exceeded for this resource)](/azure/cognitive-services/plan-manage-costs#pay-as-you-go). +For More information, see [Best practices for Azure App Service](https://aka.ms/antbc) +ID: 80efd6cb-dcee-491b-83a4-7956e9e058d5 -### Upgrade your application to use the latest API version from Azure OpenAI +<!--80efd6cb-dcee-491b-83a4-7956e9e058d5_end--> + -We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality. +<!--66d3137a-c4da-4c8a-b6b8-e03f5dfba66e_begin--> +#### Scale up your App Service plan SKU to avoid memory problems + +The App Service Plan containing your application exceeded 85% memory allocation. High memory consumption can lead to runtime issues your applications. Find the problem application and scale it up to a higher plan with more memory resources. -Learn more about [Cognitive Service - CogSvcApiVersionOpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference). +For More information, see [Best practices for Azure App Service](https://aka.ms/antbc-memory) +ID: 66d3137a-c4da-4c8a-b6b8-e03f5dfba66e -### Upgrade your application to use the latest API version from Azure OpenAI +<!--66d3137a-c4da-4c8a-b6b8-e03f5dfba66e_end--> + -We have detected that you have an Azure OpenAI resource that is being used with an older API version. Use the latest REST API version to take advantage of the latest features and functionality. +<!--45cfc38d-3ffd-4088-bb15-e4d0e1e160fe_begin--> +#### Scale out your App Service plan + +Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance. -Learn more about [Cognitive Service - API version: OpenAI (Upgrade your application to use the latest API version from Azure OpenAI)](/azure/cognitive-services/openai/reference). +For More information, see [https://aka.ms/appsvcnuminstances](https://aka.ms/appsvcnuminstances) +ID: 45cfc38d-3ffd-4088-bb15-e4d0e1e160fe +<!--45cfc38d-3ffd-4088-bb15-e4d0e1e160fe_end--> + +<!--3e35f804-52cb-4ebf-84d5-d15b3ab85dfc_begin--> +#### Fix application code, a worker process crashed due to an unhandled exception + +A worker process in your application crashed due to an unhandled exception. To identify the root cause, collect memory dumps and call stack information at the time of the crash. 
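One lightweight way to make sure call-stack information is captured for unhandled exceptions is sketched below in Python, using only the standard library; it complements, rather than replaces, the memory dumps and proactive crash monitoring described above:

```python
# Sketch: record call stacks for unhandled exceptions and hard crashes in a
# worker process, so post-mortem analysis has something to start from.
import faulthandler
import logging
import sys

logging.basicConfig(filename="worker-crash.log", level=logging.ERROR)

# Dump Python tracebacks on fatal signals (segfaults, aborts) to stderr.
faulthandler.enable()


def log_unhandled(exc_type, exc, tb):
    logging.error("Unhandled exception in worker", exc_info=(exc_type, exc, tb))
    sys.__excepthook__(exc_type, exc, tb)  # keep the default behavior as well


sys.excepthook = log_unhandled
```

The faulthandler hook also dumps tracebacks on hard faults such as segmentation faults, which otherwise leave no Python-level trace.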
-## Analytics +For More information, see [https://aka.ms/appsvcproactivecrashmonitoring](https://aka.ms/appsvcproactivecrashmonitoring) +ID: 3e35f804-52cb-4ebf-84d5-d15b3ab85dfc -### Your cluster running Ubuntu 16.04 is out of support +<!--3e35f804-52cb-4ebf-84d5-d15b3ab85dfc_end--> + -We detected that your HDInsight cluster still uses Ubuntu 16.04 LTS. End of support for Azure HDInsight clusters on Ubuntu 16.04 LTS began on November 30, 2022. Existing clusters run as is without support from Microsoft. Consider rebuilding your cluster with the latest images. +<!--78c5ab69-858a-43ca-a5ac-4ca6f9cdc30d_begin--> +#### Upgrade your App Service to a Standard plan to avoid request rejects + +When an application is part of a shared App Service plan and meets its quota multiple times, incoming requests might be rejected. Your web application canΓÇÖt accept incoming requests after meeting a quota. To remove the quota, upgrade to a Standard plan. -Learn more about [HDInsight cluster - ubuntu1604HdiClusters (Your cluster running Ubuntu 16.04 is out of support)](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions). +For More information, see [Azure App Service plan overview](https://aka.ms/ant-asp) +ID: 78c5ab69-858a-43ca-a5ac-4ca6f9cdc30d -### Upgrade your HDInsight cluster +<!--78c5ab69-858a-43ca-a5ac-4ca6f9cdc30d_end--> + -We detected your cluster isn't using the latest image. We recommend customers to use the latest versions of HDInsight images as they bring in the best of open source updates, Azure updates and security fixes. HDInsight release happens every 30 to 60 days. Consider moving to the latest release. +<!--59a83512-d885-4f09-8e4f-c796c71c686e_begin--> +#### Move your App Service resource to Standard or higher and use deployment slots + +When an application is deployed multiple times in a week, problems might occur. You deployed your application multiple times last week. To help you reduce deployment impact to your production web application, move your App Service resource to the Standard (or higher) plan, and use deployment slots. -Learn more about [HDInsight cluster - upgradeHDInsightCluster (Upgrade your HDInsight Cluster)](/azure/hdinsight/hdinsight-release-notes). +For More information, see [Set up staging environments in Azure App Service](https://aka.ms/ant-staging) +ID: 59a83512-d885-4f09-8e4f-c796c71c686e -### Your cluster was created one year ago +<!--59a83512-d885-4f09-8e4f-c796c71c686e_end--> + -We detected your cluster was created one year ago. As part of the best practices, we recommend you to use the latest HDInsight images as they bring in the best of open source updates, Azure updates and security fixes. The recommended maximum duration for cluster upgrades is less than six months. +<!--dc3edeee-f0ab-44ae-b612-605a0a739612_begin--> +#### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU. + +The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100GB. Consider upgrading these applications to Standard SKU to avoid throttling. -Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was created one year ago)](/azure/hdinsight/hdinsight-overview-before-you-start#keep-your-clusters-up-to-date). 
+For More information, see [Pricing ΓÇô Static Web Apps ](https://azure.microsoft.com/pricing/details/app-service/static/) +ID: dc3edeee-f0ab-44ae-b612-605a0a739612 -### Your Kafka cluster disks are almost full +<!--dc3edeee-f0ab-44ae-b612-605a0a739612_end--> + -The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every Kafka Topic, back up the files that are older and restart the brokers. +<!--0dc165fd-69bf-468a-aa04-a69377b6feb0_begin--> +#### Use deployment slots for your App Service resource + +When an application is deployed multiple times in a week, problems might occur. You deployed your application multiple times over the last week. To help you manage changes and help reduce deployment impact to your production web application, use deployment slots. -Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk). +For More information, see [Set up staging environments in Azure App Service](https://aka.ms/ant-staging) +ID: 0dc165fd-69bf-468a-aa04-a69377b6feb0 -### Creation of clusters under custom virtual network requires more permission +<!--0dc165fd-69bf-468a-aa04-a69377b6feb0_end--> + -Your clusters with custom virtual network were created without virtual network joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023. +<!--6d732ac5-82e0-4a66-887e-eccee79a2063_begin--> +#### CX Observer Personalized Recommendation + +CX Observer Personalized Recommendation -Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet). + +ID: 6d732ac5-82e0-4a66-887e-eccee79a2063 -### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster +<!--6d732ac5-82e0-4a66-887e-eccee79a2063_end--> + -Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption. +<!--8be322ab-e38b-4391-a5f3-421f2270d825_begin--> +#### Consider changing your application architecture to 64-bit + +Your App Service is configured as 32-bit, and its memory consumption is approaching the limit of 2 GB. If your application supports, consider recompiling your application and changing the App Service configuration to 64-bit instead. -Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka). +For More information, see [Application performance FAQs for Web Apps in Azure](https://aka.ms/appsvc32bit) +ID: 8be322ab-e38b-4391-a5f3-421f2270d825 -### Deprecation of Older Spark Versions in HDInsight Spark cluster +<!--8be322ab-e38b-4391-a5f3-421f2270d825_end--> + +<!--microsoft_web_end> +## App Service Certificates +<!--a2385343-200c-4eba-bbe2-9252d3f1d6ea_begin--> +#### Domain verification required to issue your App Service Certificate + +You have an App Service Certificate that's currently in a Pending Issuance status and requires domain verification. Failure to validate domain ownership will result in an unsuccessful certificate issuance. 
Domain verification isn't automated for App Service Certificates and will require action. If you've recently verified domain ownership and have been issued a certificate, you may disregard this message. -Starting July 1, 2020, you can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. +For More information, see [Add and manage TLS/SSL certificates in Azure App Service](https://aka.ms/ASCDomainVerificationRequired) +ID: a2385343-200c-4eba-bbe2-9252d3f1d6ea -Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark). +<!--a2385343-200c-4eba-bbe2-9252d3f1d6ea_end--> +<!--microsoft_certificateregistration_end> +## Application Gateway +<!--6a2b1e70-bd4c-4163-86de-5243d7ac05ee_begin--> +#### Upgrade your SKU or add more instances + +Deploying two or more medium or large sized instances ensures business continuity (fault tolerance) during outages caused by planned or unplanned maintenance. -### Enable critical updates to be applied to your HDInsight clusters +For More information, see [Multi-region load balancing - Azure Reference Architectures ](https://aka.ms/aa_gatewayrec_learnmore) +ID: 6a2b1e70-bd4c-4163-86de-5243d7ac05ee -HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources such as Load balancer, Network interface and Public IP address, associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update might result in your clusters becoming unhealthy and unusable. +<!--6a2b1e70-bd4c-4163-86de-5243d7ac05ee_end--> -Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). +<!--52a9d0a7-efe1-4512-9716-394abd4e0ab1_begin--> +#### Avoid hostname override to ensure site integrity + +Avoid overriding the hostname when configuring Application Gateway. Having a domain on the frontend of Application Gateway different than the one used to access the backend, can lead to broken cookies or redirect URLs. Make sure the backend is able to deal with the domain difference, or update the Application Gateway configuration so the hostname doesn't need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend. Note that a different frontend domain isn't a problem in all situations, and certain categories of backends like REST APIs, are less sensitive in general. -### Drop and recreate your HDInsight clusters to apply critical updates +For More information, see [Troubleshoot App Service issues in Application Gateway](https://aka.ms/appgw-advisor-usecustomdomain) +ID: 52a9d0a7-efe1-4512-9716-394abd4e0ab1 -The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters. 
+<!--52a9d0a7-efe1-4512-9716-394abd4e0ab1_end--> + -Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). +<!--17454550-1543-4068-bdaf-f3ed7cdd3d86_begin--> +#### Implement ExpressRoute Monitor on Network Performance Monitor + +When ExpressRoute circuit isn't monitored by ExpressRoute Monitor on Network Performance, you miss notifications of loss, latency, and performance of on-premises to Azure resources, and Azure to on-premises resources. For end-to-end monitoring, implement ExpressRoute Monitor on Network Performance. -### Drop and recreate your HDInsight clusters to apply critical updates +For More information, see [Configure Network Performance Monitor for ExpressRoute (deprecated)](/azure/expressroute/how-to-npm) +ID: 17454550-1543-4068-bdaf-f3ed7cdd3d86 -The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we're unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. +<!--17454550-1543-4068-bdaf-f3ed7cdd3d86_end--> + -Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). +<!--70f87e66-9b2d-4bfa-ae38-1d7d74837689_begin--> +#### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency + +When an ExpressRoute gateway only has one ExpressRoute circuit associated to it, resiliency issues might occur. To ensure peering location redundancy and resiliency, connect one or more additional circuits to your gateway. -### Apply critical updates to your HDInsight clusters +For More information, see [Designing for high availability with ExpressRoute](/azure/expressroute/designing-for-high-availability-with-expressroute) +ID: 70f87e66-9b2d-4bfa-ae38-1d7d74837689 -The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying the update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources associated with your clusters. Change your policy assignment before January 21, 2021 05:00 PM UTC when the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters. +<!--70f87e66-9b2d-4bfa-ae38-1d7d74837689_end--> + -Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). 
+<!--6cd70072-c45c-4716-bf7b-b35c18e46e72_begin--> +#### Add at least one more endpoint to the profile, preferably in another Azure region + +Profiles need more than one endpoint to ensure availability if one of the endpoints fails. We also recommend that endpoints be in different regions. -### Action required: Migrate your A8ΓÇôA11 HDInsight cluster before 1 March 2021 +For More information, see [Traffic Manager endpoints](https://aka.ms/AA1o0x4) +ID: 6cd70072-c45c-4716-bf7b-b35c18e46e72 -You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight cluster. The A8-A11 virtual machines (VMs) are retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 are deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more information, see 'Learn More' link or contact us at askhdinsight@microsoft.com +<!--6cd70072-c45c-4716-bf7b-b35c18e46e72_end--> + -Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8ΓÇôA11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/). +<!--0bbe0a49-3c63-49d3-ab4a-aa24198f03f7_begin--> +#### Add an endpoint configured to "All (World)" + +For geographic routing, traffic is routed to endpoints in defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantees service availablity. +For More information, see [Add, disable, enable, delete, or move endpoints](https://aka.ms/Rf7vc5) +ID: 0bbe0a49-3c63-49d3-ab4a-aa24198f03f7 +<!--0bbe0a49-3c63-49d3-ab4a-aa24198f03f7_end--> + -## Compute +<!--0db76759-6d22-4262-93f0-2f989ba2b58e_begin--> +#### Add or move one endpoint to another Azure region + +All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability if all endpoints in one region fail. -### Cloud Services (classic) is retiring. Migrate off before 31 August 2024 +For More information, see [Configure the performance traffic routing method](https://aka.ms/Ldkkdb) +ID: 0db76759-6d22-4262-93f0-2f989ba2b58e -Cloud Services (classic) is retiring. Migrate off before 31 August 2024 to avoid any loss of data or business continuity. +<!--0db76759-6d22-4262-93f0-2f989ba2b58e_end--> + -Learn more about [Resource - Cloud Services Retirement (Cloud Services (classic) is retiring. Migrate off before 31 August 2024)](https://aka.ms/ExternalRetirementEmailMay2022). +<!--e070c4bf-afaf-413e-bc00-e476b89c5f3d_begin--> +#### Move to production gateway SKUs from Basic gateways + +The Basic VPN SKU is for development or testing scenarios. If you're using the VPN gateway for production, move to a production SKU, which offers higher numbers of tunnels, Border Gateway Protocol (BGP), active-active configuration, custom IPsec/IKE policy, and increased stability and availability. 
-### Upgrade the standard disks attached to your premium-capable VM to premium disks +For More information, see [About VPN Gateway configuration settings](https://aka.ms/aa_basicvpngateway_learnmore) +ID: e070c4bf-afaf-413e-bc00-e476b89c5f3d -We have identified that you're using standard disks with your premium-capable virtual machines and we recommend you consider upgrading the standard disks to premium disks. For any single instance virtual machine using premium storage for all operating system disks and data disks, we guarantee virtual machine connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks. +<!--e070c4bf-afaf-413e-bc00-e476b89c5f3d_end--> + -Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore). +<!--c249dc0e-9a17-423e-838a-d72719e8c5dd_begin--> +#### Enable Active-Active gateways for redundancy + +In active-active configuration, both instances of the VPN gateway establish site-to-site (S2S) VPN tunnels to your on-premise VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is automatically switched over to the other active IPsec tunnel. -### Enable virtual machine replication to protect your applications from regional outage +For More information, see [Design highly available gateway connectivity for cross-premises and VNet-to-VNet connections](https://aka.ms/aa_vpnha_learnmore) +ID: c249dc0e-9a17-423e-838a-d72719e8c5dd -Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduce any adverse business effect during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the following list so that in an event of an outage, you can quickly bring up your machines in remote Azure region. -Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms). +<!--c249dc0e-9a17-423e-838a-d72719e8c5dd_end--> + -### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost +<!--1c7fc5ab-f776-4aee-8236-ab478519f68f_begin--> +#### Disable health probes when there is only one origin in an origin group + +If you only have a single origin, Front Door always routes traffic to that origin even if its health probe reports an unhealthy status. The status of the health probe doesn't do anything to change Front Door's behavior. In this scenario, health probes don't provide a benefit. -We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no extra cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale target and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes. 
+For More information, see [Best practices for Front Door](https://aka.ms/afd-disable-health-probes) +ID: 1c7fc5ab-f776-4aee-8236-ab478519f68f -Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost)](https://aka.ms/md_overview). +<!--1c7fc5ab-f776-4aee-8236-ab478519f68f_end--> + -### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery +<!--5185d64e-46fd-4ed2-8633-6d81f5e3ca59_begin--> +#### Use managed TLS certificates + +When Front Door manages your TLS certificates, it reduces your operational costs, and helps you to avoid costly outages caused by forgetting to renew a certificate. Front Door automatically issues and rotates the managed TLS certificates. -Using IP Address based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. We advise using Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags, to allow connectivity to Azure Site Recovery services for the machines. +For More information, see [Best practices for Front Door](https://aka.ms/afd-use-managed-tls) +ID: 5185d64e-46fd-4ed2-8633-6d81f5e3ca59 -Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags). +<!--5185d64e-46fd-4ed2-8633-6d81f5e3ca59_end--> + -### Update your firewall configurations to allow new RHUI 4 IPs +<!--56f0c458-521d-4b8b-a704-c0a099483d19_begin--> +#### Use NAT gateway for outbound connectivity + +Prevent connectivity failures due to source network address translation (SNAT) port exhaustion by using NAT gateway for outbound traffic from your virtual networks. NAT gateway scales dynamically and provides secure connections for traffic headed to the internet. -Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates. +For More information, see [Use Source Network Address Translation (SNAT) for outbound connections](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet) +ID: 56f0c458-521d-4b8b-a704-c0a099483d19 -Learn more about [Virtual machine - Rhui3ToRhui4MigrationV2 (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list). +<!--56f0c458-521d-4b8b-a704-c0a099483d19_end--> + -### Virtual machines in your subscription are running on images that have been scheduled for deprecation +<!--5c488377-be3e-4365-92e8-09d1e8d9038c_begin--> +#### Deploy your Application Gateway across Availability Zones + +Achieve zone redundancy by deploying Application Gateway across Availability Zones. Zone redundancy boosts resilience by enabling Application Gateway to survive various outages, which ensures continuity even if one zone is affected, and enhances overall reliability. -Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer version of the image to prevent disruption to your workloads. 
+For More information, see [Scaling Application Gateway v2 and WAF v2](https://aka.ms/appgw/az) +ID: 5c488377-be3e-4365-92e8-09d1e8d9038c -Learn more about [Virtual machine - VMRunningDeprecatedOfferLevelImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--5c488377-be3e-4365-92e8-09d1e8d9038c_end--> + -### Virtual machines in your subscription are running on images that have been scheduled for deprecation +<!--6cc8be07-8c03-4bd7-ad9b-c2985b261e01_begin--> +#### Update VNet permission of Application Gateway users + +To improve security and provide a more consistent experience across Azure, all users must pass a permission check to create or update an Application Gateway in a Virtual Network. The users or service principals minimum permission required is Microsoft.Network/virtualNetworks/subnets/join/action. -Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer SKU of the image to prevent disruption to your workloads. +For More information, see [Application Gateway infrastructure configuration](https://aka.ms/agsubnetjoin) +ID: 6cc8be07-8c03-4bd7-ad9b-c2985b261e01 -Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--6cc8be07-8c03-4bd7-ad9b-c2985b261e01_end--> + -### Virtual machines in your subscription are running on images that have been scheduled for deprecation +<!--79f543f9-60e6-4ef6-ae42-2095f6149cba_begin--> +#### Use the same domain name on Front Door and your origin + +When you rewrite the Host header, request cookies and URL redirections might break. When you use platforms like Azure App Service, features like session affinity and authentication and authorization might not work correctly. Make sure to validate whether your application is going to work correctly. -Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to newer version of the image to prevent disruption to your workloads. +For More information, see [Best practices for Front Door](https://aka.ms/afd-same-domain-origin) +ID: 79f543f9-60e6-4ef6-ae42-2095f6149cba +<!--79f543f9-60e6-4ef6-ae42-2095f6149cba_end--> + -Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--8d61a7d4-5405-4f43-81e3-8c6239b844a6_begin--> +#### Implement Site Resiliency for ExpressRoute + +To ensure maximum resiliency, Microsoft recommends that you connect to two ExpressRoute circuits in two peering locations. The goal of Maximum Resiliency is to enhance availability and ensure the highest level of resilience for critical workloads. -### Use Availability zones for better resiliency and availability +For More information, see [Design and architect Azure ExpressRoute for resiliency](https://aka.ms/ersiteresiliency) +ID: 8d61a7d4-5405-4f43-81e3-8c6239b844a6 -Availability Zones (AZ) in Azure help protect your applications and data from datacenter failures. Each AZ is made up of one or more datacenters equipped with independent power, cooling, and networking. 
By designing solutions to use zonal VMs, you can isolate your VMs from failure in any other zone. +<!--8d61a7d4-5405-4f43-81e3-8c6239b844a6_end--> + -Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview). +<!--c9af1ef6-55bc-48af-bfe4-2c80490159f8_begin--> +#### Implement Zone Redundant ExpressRoute Gateways + +Implement zone-redundant Virtual Network Gateway in Azure Availability Zones. This brings resiliency, scalability, and higher availability to your Virtual Network Gateways. -### Use Managed Disks to improve data reliability +For More information, see [Create a zone-redundant virtual network gateway in availability zones](/azure/vpn-gateway/create-zone-redundant-vnet-gateway) +ID: c9af1ef6-55bc-48af-bfe4-2c80490159f8 -Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure. +<!--c9af1ef6-55bc-48af-bfe4-2c80490159f8_end--> + -Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore). +<!--c9c9750b-9ddb-436f-b19a-9c725539a0b5_begin--> +#### Ensure autoscaling is used for increased performance and resiliency + +When configuring the Application Gateway, it's recommended to provision autoscaling to scale in and out in response to changes in demand. This helps to minimize the effects of a single failing component. -### Access to mandatory URLs missing for your Azure Virtual Desktop environment +For More information, see [Scaling Application Gateway v2 and WAF v2](/azure/application-gateway/application-gateway-autoscaling-zone-redundant) +ID: c9c9750b-9ddb-436f-b19a-9c725539a0b5 -In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you might also search your Application event log for event 3702. +<!--c9c9750b-9ddb-436f-b19a-9c725539a0b5_end--> + +<!--microsoft_network_end> +## Application Gateway for Containers +<!--db83b3d4-96e5-4cfe-b736-b3280cadd163_begin--> +#### Migrate to supported version of AGC + +The version of Application Gateway for Containers was provisioned with a preview version and is not supported for production. Ensure you provision a new gateway using the latest API version. ++For More information, see [What is Application Gateway for Containers?](https://aka.ms/appgwcontainers/docs) +ID: db83b3d4-96e5-4cfe-b736-b3280cadd163 ++<!--db83b3d4-96e5-4cfe-b736-b3280cadd163_end--> +<!--microsoft_servicenetworking_end> +## Azure AI Search +<!--97b38421-f88c-4db0-b397-b2d81eff6630_begin--> +#### Create a Standard search service (2GB) + +When you exceed your storage quota, indexing operations stop working. You're close to exceeding your storage quota of 2GB. If you need more storage, create a Standard search service or add extra partitions. 
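Before resizing, it can help to confirm how close the service actually is to the limit. A minimal sketch using the azure-search-documents package; the endpoint and admin key are placeholders, and the counter key names mirror the Get Service Statistics REST response, so adjust them if your SDK version returns a different shape:

```python
# Sketch: check how close an Azure AI Search service is to its storage quota
# before adding partitions or moving to a higher tier.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient

client = SearchIndexClient("https://<service-name>.search.windows.net",
                           AzureKeyCredential("<admin-key>"))

stats = client.get_service_statistics()
counters = stats["counters"]["storage_size"]  # usage/quota in bytes (assumed key names)
usage, quota = counters["usage"], counters["quota"]
if quota:
    print(f"Storage used: {usage / quota:.0%} ({usage} of {quota} bytes)")
    if usage / quota > 0.8:
        print("Consider adding partitions or a higher tier before indexing stops.")
```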
++For more information, see [https://aka.ms/azs/search-limits-quotas-capacity](https://aka.ms/azs/search-limits-quotas-capacity) +ID: 97b38421-f88c-4db0-b397-b2d81eff6630 ++<!--97b38421-f88c-4db0-b397-b2d81eff6630_end--> ++<!--8d31f25f-31a9-4267-b817-20ee44f88069_begin--> +#### Create a Standard search service (50MB) + +When you exceed your storage quota, indexing operations stop working. You're close to exceeding your storage quota of 50MB. To maintain operations, create a Basic or Standard search service. ++For more information, see [https://aka.ms/azs/search-limits-quotas-capacity](https://aka.ms/azs/search-limits-quotas-capacity) +ID: 8d31f25f-31a9-4267-b817-20ee44f88069 ++<!--8d31f25f-31a9-4267-b817-20ee44f88069_end--> + ++<!--b3efb46f-6d30-4201-98de-6492c1f8f10d_begin--> +#### Avoid exceeding your available storage quota by adding more partitions + +When you exceed your storage quota, you can still query, but indexing operations stop working. You're close to exceeding your available storage quota. If you need more storage, add extra partitions. ++For more information, see [https://aka.ms/azs/search-limits-quotas-capacity](https://aka.ms/azs/search-limits-quotas-capacity) +ID: b3efb46f-6d30-4201-98de-6492c1f8f10d ++<!--b3efb46f-6d30-4201-98de-6492c1f8f10d_end--> + +<!--microsoft_search_end> +## Azure Arc-enabled Kubernetes +<!--6d55ea5b-6e80-4313-9b80-83d384667eaa_begin--> +#### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes + +For the best Azure Arc-enabled Kubernetes experience, improved stability, and new functionality, upgrade to the latest agent version. ++For more information, see [Upgrade Azure Arc-enabled Kubernetes agents](https://aka.ms/ArcK8sAgentUpgradeDocs) +ID: 6d55ea5b-6e80-4313-9b80-83d384667eaa ++<!--6d55ea5b-6e80-4313-9b80-83d384667eaa_end--> +<!--microsoft_kubernetes_end> +## Azure Arc-enabled Kubernetes Configuration +<!--4bc7a00b-edbb-4963-8800-1b0f8897fecf_begin--> +#### Upgrade Microsoft Flux extension to the newest major version + +The Microsoft Flux extension has a major version release. Plan for a manual upgrade to the latest major version of Microsoft Flux for all Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters within 6 months for continued support and new functionality. ++For more information, see [Available extensions for Azure Arc-enabled Kubernetes clusters](https://aka.ms/fluxreleasenotes) +ID: 4bc7a00b-edbb-4963-8800-1b0f8897fecf ++<!--4bc7a00b-edbb-4963-8800-1b0f8897fecf_end--> ++<!--79cfad72-9b6d-4215-922d-7df77e1ea3bb_begin--> +#### Upcoming Breaking Changes for Microsoft Flux Extension + +The Microsoft Flux extension frequently receives updates for security and stability. The upcoming update, in line with the OSS Flux Project, will modify the HelmRelease and HelmChart APIs by removing deprecated fields. To avoid disruption to your workloads, take the necessary action. ++For more information, see [Available extensions for Azure Arc-enabled Kubernetes clusters](https://aka.ms/fluxreleasenotes) +ID: 79cfad72-9b6d-4215-922d-7df77e1ea3bb ++<!--79cfad72-9b6d-4215-922d-7df77e1ea3bb_end--> + ++<!--c8e3b516-a0d5-4c64-8a7a-71cfd068d5e8_begin--> +#### Upgrade Microsoft Flux extension to a supported version + +The current version of Microsoft Flux on one or more Azure Arc-enabled clusters and Azure Kubernetes clusters is out of support. To get security patches, bug fixes, and Microsoft support, upgrade to a supported version.
++For more information, see [Available extensions for Azure Arc-enabled Kubernetes clusters](https://aka.ms/fluxreleasenotes) +ID: c8e3b516-a0d5-4c64-8a7a-71cfd068d5e8 ++<!--c8e3b516-a0d5-4c64-8a7a-71cfd068d5e8_end--> + +<!--microsoft_kubernetesconfiguration_end> +## Azure Arc-enabled servers +<!--9d5717d2-4708-4e3f-bdda-93b3e6f1715b_begin--> +#### Upgrade to the latest version of the Azure Connected Machine agent + +The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. For the best Azure Arc experience, upgrade your agent to the latest version. ++For more information, see [Managing and maintaining the Connected Machine agent](/azure/azure-arc/servers/manage-agent) +ID: 9d5717d2-4708-4e3f-bdda-93b3e6f1715b ++<!--9d5717d2-4708-4e3f-bdda-93b3e6f1715b_end--> +<!--microsoft_hybridcompute_end> +## Azure Cache for Redis +<!--7c380315-6ad9-4fb2-8930-a8aeb1d6241b_begin--> +#### Increase fragmentation memory reservation + +Fragmentation and memory pressure can cause availability incidents. To help reduce cache failures when running under high memory pressure, increase the memory reserved for fragmentation through the maxfragmentationmemory-reserved setting available in the Advanced Settings options. ++For more information, see [How to configure Azure Cache for Redis](https://aka.ms/redis/recommendations/memory-policies) +ID: 7c380315-6ad9-4fb2-8930-a8aeb1d6241b ++<!--7c380315-6ad9-4fb2-8930-a8aeb1d6241b_end--> ++<!--c9e4a27c-79e6-4e4c-904f-b6612b6cd892_begin--> +#### Configure geo-replication for Cache for Redis instances to increase durability of applications + +Geo-Replication enables disaster recovery for cached data, even in the unlikely event of a widespread regional failure. This can be essential for mission-critical applications. We recommend that you configure passive geo-replication for Premium Azure Cache for Redis instances. ++For more information, see [Configure passive geo-replication for Premium Azure Cache for Redis instances](https://aka.ms/redispremiumgeoreplication) +ID: c9e4a27c-79e6-4e4c-904f-b6612b6cd892 ++<!--c9e4a27c-79e6-4e4c-904f-b6612b6cd892_end--> + +<!--microsoft_cache_end> +## Azure Container Apps +<!--c692e862-953b-49fe-9c51-e5d2792c1cc1_begin--> +#### Re-create your Container Apps environment to avoid DNS issues + +There's a potential networking issue with your Container Apps environments that might cause DNS issues. We recommend that you create a new Container Apps environment, re-create your Container Apps in the new environment, and delete the old Container Apps environment. ++For more information, see [Quickstart: Deploy your first container app using the Azure portal](https://aka.ms/createcontainerapp) +ID: c692e862-953b-49fe-9c51-e5d2792c1cc1 ++<!--c692e862-953b-49fe-9c51-e5d2792c1cc1_end--> ++<!--b9ce2d2e-554b-4391-8ebc-91c570602b04_begin--> +#### Renew custom domain certificate + +The custom domain certificate you uploaded is near expiration. To prevent possible service downtime, renew your certificate and upload the new certificate for your container apps. ++For more information, see [Custom domain names and bring your own certificates in Azure Container Apps](https://aka.ms/containerappcustomdomaincert) +ID: b9ce2d2e-554b-4391-8ebc-91c570602b04 ++<!--b9ce2d2e-554b-4391-8ebc-91c570602b04_end--> + ++<!--fa6c0880-da2e-42fd-9cb3-e1267ec5b5c2_begin--> +#### An issue has been detected that is preventing the renewal of your Managed Certificate.
+ +We detected the managed certificate used by the Container App has failed to auto renew. Follow the documentation link to make sure that the DNS settings of your custom domain are correct. -Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md). +For More information, see [Custom domain names and free managed certificates in Azure Container Apps](https://aka.ms/containerapps/managed-certificates) +ID: fa6c0880-da2e-42fd-9cb3-e1267ec5b5c2 -### Update your firewall configurations to allow new RHUI 4 IPs +<!--fa6c0880-da2e-42fd-9cb3-e1267ec5b5c2_end--> + -Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates. +<!--9be5f344-6fa5-4abc-a1f2-61ae6192a075_begin--> +#### Increase the minimal replica count for your containerized application + +The minimal replica count set for your Azure Container App containerized application might be too low, which can cause resilience, scalability, and load balancing issues. For better availability, consider increasing the minimal replica count. -Learn more about [Virtual machine scale set - Rhui3ToRhui4MigrationVMSS (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list). +For More information, see [Set scaling rules in Azure Container Apps](https://aka.ms/containerappscalingrules) +ID: 9be5f344-6fa5-4abc-a1f2-61ae6192a075 -### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation +<!--9be5f344-6fa5-4abc-a1f2-61ae6192a075_end--> + +<!--microsoft_app_end> +## Azure Cosmos DB +<!--5e4e9f04-9201-4fd9-8af6-a9539d13d8ec_begin--> +#### Configure Azure Cosmos DB containers with a partition key + +When Azure Cosmos DB nonpartitioned collections reach their provisioned storage quota, you lose the ability to add data. Your Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so they can automatically be scaled out by the service. -Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads would no longer scale out. Upgrade to a newer version of the image to prevent disruption to your workload. +For More information, see [Partitioning and horizontal scaling in Azure Cosmos DB](/azure/cosmos-db/partitioning-overview#choose-partitionkey) +ID: 5e4e9f04-9201-4fd9-8af6-a9539d13d8ec -Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedOfferImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--5e4e9f04-9201-4fd9-8af6-a9539d13d8ec_end--> -### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation +<!--bdb595a4-e148-41f9-98e8-68ec92d1932e_begin--> +#### Use static Cosmos DB client instances in your code and cache the names of databases and collections + +A high number of metadata operations on an account can result in rate limiting. Metadata operations have a system-reserved request unit (RU) limit. 
Avoid rate limiting from metadata operations by using static Cosmos DB client instances in your code and caching the names of databases and collections. -Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads would no longer scale out. Upgrade to newer version of the image to prevent disruption to your workload. +For More information, see [Performance tips for Azure Cosmos DB and .NET SDK v2](/azure/cosmos-db/performance-tips) +ID: bdb595a4-e148-41f9-98e8-68ec92d1932e -Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--bdb595a4-e148-41f9-98e8-68ec92d1932e_end--> + -### Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation +<!--44a0a07f-23a2-49df-b8dc-a1b14c7c6a9d_begin--> +#### Check linked Azure Key Vault hosting your encryption key + +When an Azure Cosmos DB account can't access its linked Azure Key Vault hosting the encyrption key, data access and security issues might happen. Your Azure Key Vault's configuration is preventing your Cosmos DB account from contacting the key vault to access your managed encryption keys. If you recently performed a key rotation, ensure that the previous key, or key version, remains enabled and available until Cosmos DB completes the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show any activity from Azure Cosmos DB on that key or key version. -Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Sets workloads would no longer scale out. Upgrade to newer plan of the image to prevent disruption to your workload. +For More information, see [Configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault](/azure/cosmos-db/how-to-setup-cmk) +ID: 44a0a07f-23a2-49df-b8dc-a1b14c7c6a9d -Learn more about [Virtual machine scale set - VMScaleSetRunningDeprecatedPlanImage (Virtual Machine Scale Sets in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). +<!--44a0a07f-23a2-49df-b8dc-a1b14c7c6a9d_end--> + +<!--213974c8-ed9c-459f-9398-7cdaa3c28856_begin--> +#### Configure consistent indexing mode on Azure Cosmos DB containers + +Azure Cosmos containers configured with the Lazy indexing mode update asynchronously, which improves write performance, but can impact query freshness. Your container is configured with the Lazy indexing mode. If query freshness is critical, use Consistent Indexing Mode for immediate index updates. 
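Tying together the partition-key and client-reuse recommendations above, here is a minimal sketch with the azure-cosmos Python package; the account URL, key, database and container names, and the /tenantId partition key path are illustrative assumptions rather than values from this article:

```python
# Sketch: one shared CosmosClient per process plus cached container handles,
# and a container created with an explicit partition key.
from functools import lru_cache

from azure.cosmos import CosmosClient, PartitionKey

# Create the client once and reuse it to avoid repeated metadata operations.
COSMOS = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")


@lru_cache(maxsize=None)
def get_container(database_name: str, container_name: str):
    """Cache database/container proxies so lookups don't repeat metadata reads."""
    return COSMOS.get_database_client(database_name).get_container_client(container_name)


# Partitioned container so the service can scale the collection out horizontally.
db = COSMOS.create_database_if_not_exists("appdb")
db.create_container_if_not_exists(id="orders", partition_key=PartitionKey(path="/tenantId"))

orders = get_container("appdb", "orders")
orders.upsert_item({"id": "1", "tenantId": "contoso", "total": 42})
```

Keeping a single client per process and caching the proxies avoids the repeated metadata lookups that count against the system-reserved RU limit.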
+For More information, see [Manage indexing policies in Azure Cosmos DB](/azure/cosmos-db/how-to-manage-indexing-policy) +ID: 213974c8-ed9c-459f-9398-7cdaa3c28856 -## Containers +<!--213974c8-ed9c-459f-9398-7cdaa3c28856_end--> + -### Increase the minimal replica count for your container app +<!--bc9e5110-a220-4ab9-8bc9-53f92d3eef70_begin--> +#### Hotfix - Upgrade to 2.6.14 version of the Async Java SDK v2 or to Java SDK v4 + +There's a critical bug in version 2.6.13 (and lower) of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. The error happens transparently to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: While this is a critical hotfix for the Async Java SDK v2, we still highly recommend you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4). -We detected the minimal replica count set for your container app might be lower than optimal. Consider increasing the minimal replica count for better availability. +For More information, see [Azure Cosmos DB Async Java SDK for API for NoSQL (legacy): Release notes and resources](/azure/cosmos-db/sql/sql-api-sdk-async-java) +ID: bc9e5110-a220-4ab9-8bc9-53f92d3eef70 -Learn more about [Microsoft App Container App - ContainerAppMinimalReplicaCountTooLow (Increase the minimal replica count for your container app)](https://aka.ms/containerappscalingrules). +<!--bc9e5110-a220-4ab9-8bc9-53f92d3eef70_end--> + -### Renew custom domain certificate +<!--38942ae5-3154-4e0b-98d9-23aa061c334b_begin--> +#### Critical issue - Upgrade to the current recommended version of the Java SDK v4 + +There's a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparently to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Avoid this problem by upgrading to the current recommended version of the Java SDK v4 -We detected the custom domain certificate you uploaded is near expiration. Renew your certificate and upload the new certificate for your container apps. +For More information, see [Azure Cosmos DB Java SDK v4 for API for NoSQL: release notes and resources](/azure/cosmos-db/sql/sql-api-sdk-java-v4) +ID: 38942ae5-3154-4e0b-98d9-23aa061c334b -Learn more about [Microsoft App Container App - ContainerAppCustomDomainCertificateNearExpiration (Renew custom domain certificate)](https://aka.ms/containerappcustomdomaincert). +<!--38942ae5-3154-4e0b-98d9-23aa061c334b_end--> + -### A potential networking issue has been identified with your Container Apps environment that requires it to be re-created to avoid DNS issues +<!--123039b5-0fda-4744-9a17-d6b5d5d122b2_begin--> +#### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account + +Some of your applications are connecting to your upgraded Azure Cosmos DB's API for MongoDB account using the legacy 3.2 endpoint - [accountname].documents.azure.com. Use the new endpoint - [accountname].mongo.cosmos.azure.com (or its equivalent in sovereign, government, or restricted clouds). -A potential networking issue has been identified for your Container Apps environments. 
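+
+As a sketch of the endpoint change described in the 3.6+ recommendation in this section, the PyMongo snippet below targets the newer `<account>.mongo.cosmos.azure.com` host; the account name, key, and options are placeholders matching the connection string format shown in the Azure portal.
+
+```python
+from pymongo import MongoClient
+
+# Placeholder account name and key; copy the real string from the portal.
+uri = (
+    "mongodb://<account>:<primary-key>@<account>.mongo.cosmos.azure.com:10255/"
+    "?ssl=true&replicaSet=globaldb&retrywrites=false"
+    "&maxIdleTimeMS=120000&appName=@<account>@"
+)
+client = MongoClient(uri)
+print(client.server_info()["version"])  # prints the server version if the new endpoint works
+```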
To prevent this potential networking issue, create a new Container Apps environment, re-create your Container Apps in the new environment, and delete the old Container Apps environment
+For More information, see [Azure Cosmos DB for MongoDB (4.0 server version): supported features and syntax](/azure/cosmos-db/mongodb-feature-support-40)
+ID: 123039b5-0fda-4744-9a17-d6b5d5d122b2
-Learn more about [Managed Environment - CreateNewContainerAppsEnvironment (A potential networking issue has been identified with your Container Apps Environment that requires it to be re-created to avoid DNS issues)](https://aka.ms/createcontainerapp).
+<!--123039b5-0fda-4744-9a17-d6b5d5d122b2_end-->
+
-### Domain verification required to renew your App Service Certificate
+<!--0da795d9-26d2-4f02-a019-0ec383363c88_begin-->
+#### Upgrade your Azure Cosmos DB API for MongoDB account to v4.2 to save on query/storage costs and utilize new features
+
+Your Azure Cosmos DB API for MongoDB account is eligible to upgrade to version 4.2. Upgrading to v4.2 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.2.
-You have an App Service certificate that's currently in a Pending Issuance status and requires domain verification. Failure to validate domain ownership results in an unsuccessful certificate issuance. Domain verification isn't automated for App Service certificates and requires your action.
+For More information, see [Upgrade the API version of your Azure Cosmos DB for MongoDB account](/azure/cosmos-db/mongodb-version-upgrade)
+ID: 0da795d9-26d2-4f02-a019-0ec383363c88
-Learn more about [App Service Certificate - ASCDomainVerificationRequired (Domain verification required to renew your App Service Certificate)](https://aka.ms/ASCDomainVerificationRequired).
+<!--0da795d9-26d2-4f02-a019-0ec383363c88_end-->
+
-### Clusters having node pools using unrecommended B-Series
+<!--ec6fe20c-08d6-43da-ac18-84ac83756a88_begin-->
+#### Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account
+
+When an account is throwing a TooManyRequests error with the 16500 error code, enabling Server Side Retry (SSR) can help mitigate the issue.
-Cluster has one or more node pools using an unrecommended burstable VM SKU. With burstable VMs, full vCPU capability 100% is unguaranteed. Make sure B-series VMs aren't used in a Production environment.
+
+ID: ec6fe20c-08d6-43da-ac18-84ac83756a88
-Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having node pools using unrecommended B-Series)](/azure/virtual-machines/sizes-b-series-burstable).
+<!--ec6fe20c-08d6-43da-ac18-84ac83756a88_end-->
+
-### Upgrade to Standard tier for mission-critical and production clusters
+<!--b57f7a29-dcc8-43de-86fa-18d3f9d3764d_begin-->
+#### Add a second region to your production workloads on Azure Cosmos DB
+
+Production workloads on Azure Cosmos DB that run in a single region might have availability issues; this appears to be the case with some of your Cosmos DB accounts. Increase their availability by configuring them to span at least two Azure regions. NOTE: Additional regions incur additional costs.
-This cluster has more than 10 nodes and hasn't enabled the Standard tier. The Kubernetes Control Plane on the Free tier comes with limited resources and isn't intended for production use or any cluster with 10 or more nodes.
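+
+After a second region is added to the account, geographically distributed clients can list their closest regions first. This sketch assumes the azure-cosmos Python SDK's `preferred_locations` keyword; the endpoint, key, and region names are placeholders.
+
+```python
+from azure.cosmos import CosmosClient
+
+# Placeholder endpoint and key; the listed regions must already be added to the account.
+client = CosmosClient(
+    "https://<account>.documents.azure.com:443/",
+    credential="<account-key>",
+    preferred_locations=["East US 2", "Central US"],  # ordered preference for reads
+)
+```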
+For More information, see [High availability (Reliability) in Azure Cosmos DB for NoSQL](/azure/cosmos-db/high-availability) +ID: b57f7a29-dcc8-43de-86fa-18d3f9d3764d -Learn more about [Kubernetes service - UseStandardpricingtier (Upgrade to Standard tier for mission-critical and production clusters)](/azure/aks/uptime-sla). +<!--b57f7a29-dcc8-43de-86fa-18d3f9d3764d_end--> + -### Pod Disruption Budgets Recommended +<!--51a4e6bd-5a95-4a41-8309-40f5640fdb8b_begin--> +#### Upgrade old Azure Cosmos DB SDK to the latest version + +An Azure Cosmos DB account using an old version of the SDK lacks the latest fixes and improvements. Your Azure Cosmos DB account is using an old version of the SDK. For the latest fixes, performance improvements, and new feature capabilities, upgrade to the latest version. -Pod Disruption budgets recommended. Improve service high availability. +For More information, see [Azure Cosmos DB documentation](/azure/cosmos-db/) +ID: 51a4e6bd-5a95-4a41-8309-40f5640fdb8b -Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](/azure/aks/operator-best-practices-scheduler#plan-for-availability-using-pod-disruption-budgets). +<!--51a4e6bd-5a95-4a41-8309-40f5640fdb8b_end--> + -### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes +<!--60a55165-9ccd-4536-81f6-e8dc6246d3d2_begin--> +#### Upgrade outdated Azure Cosmos DB SDK to the latest version + +An Azure Cosmos DB account using an old version of the SDK lacks the latest fixes and improvements. Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. -Upgrade to the latest agent version for the best Azure Arc enabled Kubernetes experience, improved stability and new functionality. +For More information, see [Azure Cosmos DB documentation](/azure/cosmos-db/) +ID: 60a55165-9ccd-4536-81f6-e8dc6246d3d2 -Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs). +<!--60a55165-9ccd-4536-81f6-e8dc6246d3d2_end--> + +<!--5de9f2e6-087e-40da-863a-34b7943beed4_begin--> +#### Enable service managed failover for Cosmos DB account + +Enable service managed failover for Cosmos DB account to ensure high availability of the account. Service managed failover automatically switches the write region to the secondary region in case of a primary region outage. This ensures that the application continues to function without any downtime. +For More information, see [High availability (Reliability) in Azure Cosmos DB for NoSQL](/azure/cosmos-db/high-availability) +ID: 5de9f2e6-087e-40da-863a-34b7943beed4 -## Databases +<!--5de9f2e6-087e-40da-863a-34b7943beed4_end--> + -### Replication - Add a primary key to the table that currently doesn't have one +<!--64fbcac1-f652-4b6f-8170-2f97ffeb5631_begin--> +#### Enable HA for your Production workload + +Many clusters with consistent workloads do not have high availability (HA) enabled. It's recommended to activate HA from the Scale page in the Azure Portal to prevent database downtime in case of unexpected node failures and to qualify for SLA guarantees. -Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. 
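+
+For the SDK upgrade recommendations in this section, a lightweight startup check can surface an outdated azure-cosmos package before it causes problems; the minimum version below is a placeholder, so confirm the currently recommended release in the SDK release notes.
+
+```python
+from importlib.metadata import version
+
+MIN_RECOMMENDED = (4, 5, 0)  # placeholder floor; check the current SDK release notes
+
+installed = version("azure-cosmos")
+print(f"azure-cosmos {installed} is installed")
+
+# Compare only the numeric dot-parts; pre-release suffixes aren't handled here.
+numeric = tuple(int(part) for part in installed.split(".")[:3] if part.isdigit())
+if numeric < MIN_RECOMMENDED:
+    print("Outdated SDK detected; run: pip install --upgrade azure-cosmos")
+```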
To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server. Once the primary keys are added, recreate the replica server. +For More information, see [Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster](https://aka.ms/enableHAformongovcore) +ID: 64fbcac1-f652-4b6f-8170-2f97ffeb5631 -Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently doesn't have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table). +<!--64fbcac1-f652-4b6f-8170-2f97ffeb5631_end--> + -### High Availability - Add primary key to the table that currently doesn't have one +<!--8034b205-167a-4fd5-a133-0c8cb166103c_begin--> +#### Enable zone redundancy for multi-region Cosmos DB accounts + +This recommendation suggests enabling zone redundancy for multi-region Cosmos DB accounts to improve high availability and reduce the risk of data loss in case of a regional outage. -Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key, is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, proceed to disable and then re-enable High Availability to mitigate the problem. +For More information, see [High availability (Reliability) in Azure Cosmos DB for NoSQL](/azure/cosmos-db/high-availability#replica-outages) +ID: 8034b205-167a-4fd5-a133-0c8cb166103c -Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently doesn't have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table). +<!--8034b205-167a-4fd5-a133-0c8cb166103c_end--> + -### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid +<!--92056ca3-8fab-43d1-bebf-f9c377ef20e9_begin--> +#### Add at least one data center in another Azure region + +Your Azure Managed Instance for Apache Cassandra cluster is designated as a production cluster but is currently deployed in a single Azure region. For production clusters, we recommend adding at least one more data center in another Azure region to guard against disaster recovery scenarios. -Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased with the maxfragmentationmemory-reserved setting available in the advanced settings option area. +For More information, see [Best practices for high availability and disaster recovery](/azure/managed-instance-apache-cassandra/resilient-applications) +ID: 92056ca3-8fab-43d1-bebf-f9c377ef20e9 -Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential effect.)](https://aka.ms/redis/recommendations/memory-policies). 
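+
+If you add a second data center to an Azure Managed Instance for Apache Cassandra cluster as recommended in this section, clients should pin their local data center so routine reads stay region-local. This sketch uses the DataStax Python driver; the seed node address, credentials, and data center name are placeholders.
+
+```python
+import ssl
+
+from cassandra.auth import PlainTextAuthProvider
+from cassandra.cluster import EXEC_PROFILE_DEFAULT, Cluster, ExecutionProfile
+from cassandra.policies import DCAwareRoundRobinPolicy
+
+# Route requests to the data center closest to this client by default.
+profile = ExecutionProfile(load_balancing_policy=DCAwareRoundRobinPolicy(local_dc="<local-dc-name>"))
+
+cluster = Cluster(
+    ["<seed-node-address>"],  # placeholder seed node
+    port=9042,
+    auth_provider=PlainTextAuthProvider("<username>", "<password>"),
+    ssl_context=ssl.create_default_context(),
+    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
+)
+session = cluster.connect()
+```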
+<!--92056ca3-8fab-43d1-bebf-f9c377ef20e9_end--> + -### Enable Azure backup for SQL on your virtual machines +<!--a030f8ab-4dd4-4751-822b-f231a0df5f5a_begin--> +#### Avoid being rate limited for Control Plane operation + +We found high number of Control Plane operations on your account through resource provider. Request that exceeds the documented limits at sustained levels over consecutive 5-minute periods may experience request being throttling as well failed or incomplete operation on Azure Cosmos DB resources. -Enable backups for SQL databases on your virtual machines using Azure backup and realize the benefits of zero-infrastructure backup, point-in-time restore, and central management with SQL AG integration. +For More information, see [Azure Cosmos DB service quotas](https://docs.microsoft.com/azure/cosmos-db/concepts-limits#control-plane) +ID: a030f8ab-4dd4-4751-822b-f231a0df5f5a -Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backup for SQL on your virtual machines)](/azure/backup/backup-azure-sql-database). +<!--a030f8ab-4dd4-4751-822b-f231a0df5f5a_end--> + +<!--microsoft_documentdb_end> +## Azure Data Explorer +<!--fa2649e9-e1a5-4d07-9b26-51c080d9a9ba_begin--> +#### Resolve virtual network issues + +Service failed to install or resume due to virtual network (VNet) issues. To resolve this issue, follow the steps in the troubleshooting guide. -### Improve PostgreSQL availability by removing inactive logical replication slots +For More information, see [Troubleshoot access, ingestion, and operation of your Azure Data Explorer cluster in your virtual network](/azure/data-explorer/vnet-deploy-troubleshoot) +ID: fa2649e9-e1a5-4d07-9b26-51c080d9a9ba -Our internal system indicates that your PostgreSQL server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. +<!--fa2649e9-e1a5-4d07-9b26-51c080d9a9ba_end--> -Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding). +<!--f2bcadd1-713b-4acc-9810-4170a5d01dea_begin--> +#### Add subnet delegation for 'Microsoft.Kusto/clusters' + +If a subnet isnΓÇÖt delegated, the associated Azure service wonΓÇÖt be able to operate within it. Your subnet doesnΓÇÖt have the required delegation. Delegate your subnet for 'Microsoft.Kusto/clusters'. -### Improve PostgreSQL availability by removing inactive logical replication slots +For More information, see [What is subnet delegation?](/azure/virtual-network/subnet-delegation-overview) +ID: f2bcadd1-713b-4acc-9810-4170a5d01dea -Our internal system indicates that your PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. 
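+
+For the primary key recommendations in this section, the sketch below lists user tables on an Azure Database for MySQL flexible server that have no primary key, by querying `information_schema`; the server name and credentials are placeholders.
+
+```python
+import mysql.connector
+
+# Placeholder connection details.
+conn = mysql.connector.connect(
+    host="<server-name>.mysql.database.azure.com",
+    user="<admin-user>",
+    password="<password>",
+    ssl_disabled=False,
+)
+
+query = """
+    SELECT t.table_schema, t.table_name
+    FROM information_schema.tables AS t
+    LEFT JOIN information_schema.table_constraints AS c
+      ON c.table_schema = t.table_schema
+     AND c.table_name = t.table_name
+     AND c.constraint_type = 'PRIMARY KEY'
+    WHERE t.table_type = 'BASE TABLE'
+      AND c.constraint_type IS NULL
+      AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
+"""
+
+cursor = conn.cursor()
+cursor.execute(query)
+for schema, table in cursor.fetchall():
+    print(f"Missing primary key: {schema}.{table}")
+```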
Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. +<!--f2bcadd1-713b-4acc-9810-4170a5d01dea_end--> + +<!--microsoft_kusto_end> +## Azure Database for MySQL +<!--cf388b0c-2847-4ba9-8b07-54c6b23f60fb_begin--> +#### High Availability - Add primary key to the table that currently doesn't have one. + +Our internal monitoring system has identified significant replication lag on the High Availability standby server. This lag is primarily caused by the standby server replaying relay logs on a table that lacks a primary key. To address this issue and adhere to best practices, it's recommended to add primary keys to all tables. Once this is done, proceed to disable and then re-enable High Availability to mitigate the problem. -Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding). +For More information, see [Troubleshoot replication latency in Azure Database for MySQL - Flexible Server](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table) +ID: cf388b0c-2847-4ba9-8b07-54c6b23f60fb -### Configure Consistent indexing mode on your Azure Cosmos DB container +<!--cf388b0c-2847-4ba9-8b07-54c6b23f60fb_end--> -We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might affect the freshness of query results. We recommend switching to Consistent mode. +<!--fb41cc05-7ac3-4b0e-a773-a39b5c1ca9e4_begin--> +#### Replication - Add a primary key to the table that currently doesn't have one + +Our internal monitoring observed significant replication lag on your replica server because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica server can effectively synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server and then recreate the replica server. -Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos DB container)](/azure/cosmos-db/how-to-manage-indexing-policy). +For More information, see [Troubleshoot replication latency in Azure Database for MySQL - Flexible Server](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table) +ID: fb41cc05-7ac3-4b0e-a773-a39b5c1ca9e4 -### Upgrade your old Azure Cosmos DB SDK to the latest version +<!--fb41cc05-7ac3-4b0e-a773-a39b5c1ca9e4_end--> + +<!--microsoft_dbformysql_end> +## Azure Database for PostgreSQL +<!--33f26810-57d0-4612-85ff-a83ee9be884a_begin--> +#### Remove inactive logical replication slots (important) + +Inactive logical replication slots can result in degraded server performance and unavailability due to write ahead log (WAL) file retention and buildup of snapshot files. Your Azure Database for PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Either delete the inactive replication slots, or start consuming the changes from these slots, so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. -Your Azure Cosmos DB account is using an old version of the SDK. 
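+
+For the inactive logical replication slot recommendations in this section, the sketch below lists inactive slots via `pg_replication_slots` and shows, commented out, how a slot that is confirmed to be unused can be dropped; the connection details are placeholders.
+
+```python
+import psycopg2
+
+# Placeholder connection details for an Azure Database for PostgreSQL flexible server.
+conn = psycopg2.connect(
+    host="<server-name>.postgres.database.azure.com",
+    dbname="postgres",
+    user="<admin-user>",
+    password="<password>",
+    sslmode="require",
+)
+
+with conn, conn.cursor() as cur:
+    cur.execute(
+        "SELECT slot_name, slot_type, active, restart_lsn "
+        "FROM pg_replication_slots WHERE NOT active"
+    )
+    for slot_name, slot_type, active, restart_lsn in cur.fetchall():
+        print(f"Inactive {slot_type} slot {slot_name}, restart_lsn={restart_lsn}")
+        # Only after confirming the slot is no longer needed:
+        # cur.execute("SELECT pg_drop_replication_slot(%s)", (slot_name,))
+```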
We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities. +For More information, see [Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server](https://aka.ms/azure_postgresql_flexible_server_logical_decoding) +ID: 33f26810-57d0-4612-85ff-a83ee9be884a -Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/). +<!--33f26810-57d0-4612-85ff-a83ee9be884a_end--> -### Upgrade your outdated Azure Cosmos DB SDK to the latest version +<!--6f33a917-418c-4608-b34f-4ff0e7be8637_begin--> +#### Remove inactive logical replication slots + +When an Orcas PostgreSQL flexible server has inactive logical replication slots, degraded server performance and unavailability due to write ahead log (WAL) file retention and buildup of snapshot files might occur. THIS NEEDS IMMEDIATE ATTENTION. Either delete the inactive replication slots, or start consuming the changes from these slots, so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. -Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities. +For More information, see [Logical decoding](https://aka.ms/azure_postgresql_logical_decoding) +ID: 6f33a917-418c-4608-b34f-4ff0e7be8637 -Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/). +<!--6f33a917-418c-4608-b34f-4ff0e7be8637_end--> + -### Configure your Azure Cosmos DB containers with a partition key +<!--5295ed8a-f7a1-48d3-b4a9-e5e472cf1685_begin--> +#### Configure geo redundant backup storage + +Configure GRS to ensure that your database meets its availability and durability targets even in the face of failures or disasters. -Your Azure Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so the service can automatically scale them out. +For More information, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](https://aka.ms/PGGeoBackup) +ID: 5295ed8a-f7a1-48d3-b4a9-e5e472cf1685 -Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](/azure/cosmos-db/partitioning-overview#choose-partitionkey). +<!--5295ed8a-f7a1-48d3-b4a9-e5e472cf1685_end--> + -### Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features +<!--eb241cd1-4bdc-4800-945b-4c9c8eeb6f07_begin--> +#### Define custom maintenance windows to occur during low-peak hours + +When specifying preferences for the maintenance schedule, you can pick a day of the week and a time window. If you don't specify, the system will pick times between 11pm and 7am in your server's region time. Pick a day and time where usage is low. -Your Azure Cosmos DB for MongoDB account is eligible to upgrade to version 4.0. Reduce your storage costs by up to 55% and your query costs by up to 45% by upgrading to the v4.0 new storage format. Numerous other features such as multi-document transactions are also included in v4.0. 
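+
+For the device client SDK recommendations in the IoT Hub section, logging the installed SDK version at startup makes outdated devices easy to spot; the connection string is a placeholder, and the snippet assumes the azure-iot-device Python SDK.
+
+```python
+from importlib.metadata import version
+
+from azure.iot.device import IoTHubDeviceClient, Message
+
+print("azure-iot-device", version("azure-iot-device"))  # report the SDK version in device logs
+
+# Placeholder device connection string from the IoT hub's device registry.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+client.send_message(Message("device heartbeat"))
+client.shutdown()
+```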
+For More information, see [Scheduled maintenance in Azure Database for PostgreSQL - Flexible Server](https://aka.ms/PGCustomMaintenanceWindow) +ID: eb241cd1-4bdc-4800-945b-4c9c8eeb6f07 -Learn more about [Azure Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade). +<!--eb241cd1-4bdc-4800-945b-4c9c8eeb6f07_end--> + +<!--microsoft_dbforpostgresql_end> +## Azure IoT Hub +<!--51b1fad8-4838-426f-9871-107bc089677b_begin--> +#### Upgrade Microsoft Edge device runtime to a supported version for IoT Hub + +When Edge devices use outdated versions, performance degradation might occur. We recommend you upgrade to the latest supported version of the Azure IoT Edge runtime. ++For More information, see [Update IoT Edge](https://aka.ms/IOTEdgeSDKCheck) +ID: 51b1fad8-4838-426f-9871-107bc089677b ++<!--51b1fad8-4838-426f-9871-107bc089677b_end--> ++<!--d448c687-b808-4143-bbdc-02c35478198a_begin--> +#### Upgrade device client SDK to a supported version for IotHub + +When devices use an outdated SDK, performance degradation can occur. Some or all of your devices are using an outdated SDK. We recommend you upgrade to a supported SDK version. -### Add a second region to your production workloads on Azure Cosmos DB +For More information, see [Azure IoT Hub SDKs](https://aka.ms/iothubsdk) +ID: d448c687-b808-4143-bbdc-02c35478198a -Based on their names and configuration, we have detected the Azure Cosmos DB accounts listed as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions. +<!--d448c687-b808-4143-bbdc-02c35478198a_end--> + -> [!NOTE] -> Additional regions incur extra costs. +<!--8d7efd88-c891-46be-9287-0aec2fabd51c_begin--> +#### IoT Hub Potential Device Storm Detected + +This is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected. -Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](/azure/cosmos-db/high-availability). +For More information, see [Understand and resolve Azure IoT Hub errors](https://aka.ms/IotHubDeviceStorm) +ID: 8d7efd88-c891-46be-9287-0aec2fabd51c -### Enable Server Side Retry (SSR) on your Azure Cosmos DB for MongoDB account +<!--8d7efd88-c891-46be-9287-0aec2fabd51c_end--> + -We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you. +<!--d1ff97b9-44cd-4acf-a9d3-3af500bd79d6_begin--> +#### Upgrade Device Update for IoT Hub SDK to a supported version + +When a Device Update for IoT Hub instance uses an outdated version of the SDK, it doesn't get the latest upgrades. For the latest fixes, performance improvements, and new feature capabilities, upgrade to the latest Device Update for IoT Hub SDK version. -Learn more about [Azure Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors). 
+For More information, see [What is Device Update for IoT Hub?](/azure/iot-hub-device-update/understand-device-update) +ID: d1ff97b9-44cd-4acf-a9d3-3af500bd79d6 -### Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features +<!--d1ff97b9-44cd-4acf-a9d3-3af500bd79d6_end--> + -Migrate your database account to a new database account to take advantage of Azure Cosmos DB for MongoDB v4.0. Reduce your storage costs by up to 55% and your query costs by up to 45% by upgrading to the v4.0 new storage format. Numerous other features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data. +<!--e4bda6ac-032c-44e0-9b40-e0522796a6d2_begin--> +#### Add IoT Hub units or increase SKU level + +When an IoT Hub exceeds its daily message quota, operation and cost problems might occur. To ensure smooth operation in the future, add units or increase the SKU level. -Learn more about [Azure Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40). +For More information, see [Understand and resolve Azure IoT Hub errors](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded) +ID: e4bda6ac-032c-44e0-9b40-e0522796a6d2 -### Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key +<!--e4bda6ac-032c-44e0-9b40-e0522796a6d2_end--> + +<!--microsoft_devices_end> +## Azure Kubernetes Service (AKS) +<!--70829b1a-272b-4728-b418-8f1a56432d33_begin--> +#### Enable Autoscaling for your system node pools + +To ensure your system pods are scheduled even during times of high load, enable autoscaling on your system node pool. -It appears that your key vault's configuration is preventing your Azure Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Azure Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore. +For More information, see [Use the cluster autoscaler in Azure Kubernetes Service (AKS)](/azure/aks/cluster-autoscaler?tabs=azure-cli#before-you-begin) +ID: 70829b1a-272b-4728-b418-8f1a56432d33 -Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](/azure/cosmos-db/how-to-setup-cmk). +<!--70829b1a-272b-4728-b418-8f1a56432d33_end--> -### Avoid being rate limited from metadata operations +<!--a9228ae7-4386-41be-b527-acd59fad3c79_begin--> +#### Have at least 2 nodes in your system node pool + +Ensure your system node pools have at least 2 nodes for reliability of your system pods. With a single node, your cluster can fail in the event of a node or hardware failure. -We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. 
A high number of metadata operations can cause rate limiting. Avoid rate limiting by using static Azure Cosmos DB client instances in your code, and caching the names of databases and collections. +For More information, see [Manage system node pools in Azure Kubernetes Service (AKS)](/azure/aks/use-system-pools?tabs=azure-cli#system-and-user-node-pools) +ID: a9228ae7-4386-41be-b527-acd59fad3c79 -Learn more about [Azure Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips). +<!--a9228ae7-4386-41be-b527-acd59fad3c79_end--> + -### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB for MongoDB account +<!--f31832f1-7e87-499d-a52a-120f610aba98_begin--> +#### Create a dedicated system node pool + +A cluster without a dedicated system node pool is less reliable. We recommend you dedicate system node pools to only serve critical system pods, preventing resource starvation between system and competing user pods. Enforce this behavior with the CriticalAddonsOnly=true:NoSchedule taint on the pool. -We observed some of your applications are connecting to your upgraded Azure Cosmos DB for MongoDB account using the legacy 3.2 endpoint `[accountname].documents.azure.com`. Use the new endpoint `[accountname].mongo.cosmos.azure.com` (or its equivalent in sovereign, government, or restricted clouds). +For More information, see [Manage system node pools in Azure Kubernetes Service (AKS)](/azure/aks/use-system-pools?tabs=azure-cli#before-you-begin) +ID: f31832f1-7e87-499d-a52a-120f610aba98 -Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB for MongoDB account)](/azure/cosmos-db/mongodb-feature-support-40). +<!--f31832f1-7e87-499d-a52a-120f610aba98_end--> + ++<!--fac2ad84-1421-4dd3-8477-9d6e605392b4_begin--> +#### Ensure B-series Virtual Machine's (VMs) aren't used in production environments + +When a cluster has one or more node pools using a non-recommended burstable VM SKU, full vCPU capability 100% is unguaranteed. Ensure B-series VM's aren't used in production environments. ++For More information, see [B-series burstable virtual machine sizes](/azure/virtual-machines/sizes-b-series-burstable) +ID: fac2ad84-1421-4dd3-8477-9d6e605392b4 ++<!--fac2ad84-1421-4dd3-8477-9d6e605392b4_end--> + +<!--microsoft_containerservice_end> +## Azure NetApp Files +<!--2e795f35-fce6-48dc-a5ac-6860cb9a0442_begin--> +#### Configure AD DS Site for Azure Netapp Files AD Connector + +If Azure NetApp Files can't reach assigned AD DS site domain controllers, the domain controller discovery process queries all domain controllers. Unreachable domain controllers may be used, causing issues with volume creation, client queries, authentication, and AD connection modifications. -### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated +For More information, see [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](https://aka.ms/anfsitescoping) +ID: 2e795f35-fce6-48dc-a5ac-6860cb9a0442 -There's a critical bug in version 2.6.13 and lower, of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. 
Note: There's a critical hotfix for the Async Java SDK v2, however we still highly recommend you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4). +<!--2e795f35-fce6-48dc-a5ac-6860cb9a0442_end--> ++<!--4e112555-7dc0-4f33-85e7-18398ac41345_begin--> +#### Ensure Roles assigned to Microsoft.NetApp Delegated Subnet has Subnet Read Permissions + +Roles that are required for the management of Azure NetApp Files resources, must have "Microsoft.network/virtualNetworks/subnets/read" permissions on the subnet that is delegated to Microsoft.NetApp If the role, whether Custom or Built-In doesn't have this permission, then Volume Creations will fail ++ +ID: 4e112555-7dc0-4f33-85e7-18398ac41345 ++<!--4e112555-7dc0-4f33-85e7-18398ac41345_end--> + ++<!--8754f0ed-c82a-497e-be31-c9d701c976e1_begin--> +#### Review SAP configuration for timeout values used with Azure NetApp Files + +High availability of SAP while used with Azure NetApp Files relies on setting proper timeout values to prevent disruption to your application. Review the 'Learn more' link to ensure your configuration meets the timeout values as noted in the documentation. ++For More information, see [Use Azure to host and run SAP workload scenarios](/azure/sap/workloads/get-started) +ID: 8754f0ed-c82a-497e-be31-c9d701c976e1 ++<!--8754f0ed-c82a-497e-be31-c9d701c976e1_end--> + ++<!--cda11061-35a8-4ca3-aa03-b242dcdf7319_begin--> +#### Implement disaster recovery strategies for your Azure NetApp Files resources + +To avoid data or functionality loss during a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes. ++For More information, see [Understand data protection and disaster recovery options in Azure NetApp Files](https://aka.ms/anfcrr) +ID: cda11061-35a8-4ca3-aa03-b242dcdf7319 ++<!--cda11061-35a8-4ca3-aa03-b242dcdf7319_end--> + ++<!--e4bebd74-387a-4a74-b757-475d2d1b4e3e_begin--> +#### Azure Netapp Files - Enable Continuous Availability for SMB Volumes + +For Continuous Availability, we recommend enabling Server Message Block (SMB) volume for your Azure Netapp Files. ++For More information, see [Enable Continuous Availability on existing SMB volumes](https://aka.ms/anfdoc-continuous-availability) +ID: e4bebd74-387a-4a74-b757-475d2d1b4e3e ++<!--e4bebd74-387a-4a74-b757-475d2d1b4e3e_end--> + +<!--microsoft_netapp_end> +## Azure Site Recovery +<!--3ebfaf53-4d8c-4e67-a948-017bbbf59de6_begin--> +#### Enable soft delete for your Recovery Services vaults + +Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it's permanently deleted. ++For More information, see [Soft delete for Azure Backup](/azure/backup/backup-azure-security-feature-cloud) +ID: 3ebfaf53-4d8c-4e67-a948-017bbbf59de6 ++<!--3ebfaf53-4d8c-4e67-a948-017bbbf59de6_end--> ++<!--9b1308f1-4c25-4347-a061-7cc5cd6a44ab_begin--> +#### Enable Cross Region Restore for your recovery Services Vault + +Cross Region Restore (CRR) allows you to restore Azure VMs in a secondary region (an Azure paired region), helping with disaster recovery. 
++For More information, see [How to restore Azure VM data in Azure portal](/azure/backup/backup-azure-arm-restore-vms#cross-region-restore) +ID: 9b1308f1-4c25-4347-a061-7cc5cd6a44ab ++<!--9b1308f1-4c25-4347-a061-7cc5cd6a44ab_end--> + +<!--microsoft_recoveryservices_end> +## Azure Spring Apps +<!--39d862c8-445c-40c6-ba59-0e86134df606_begin--> +#### Upgrade Application Configuration Service to Gen 2 + +We notice you are still using Application Configuration Service Gen1 which will be end of support by April 2024. Application Configuration Service Gen2 provides better performance compared to Gen1 and the upgrade from Gen1 to Gen2 is zero downtime so we recommend to upgrade as soon as possible. ++For More information, see [Use Application Configuration Service for Tanzu](https://aka.ms/AsaAcsUpgradeToGen2) +ID: 39d862c8-445c-40c6-ba59-0e86134df606 ++<!--39d862c8-445c-40c6-ba59-0e86134df606_end--> +<!--microsoft_appplatform_end> +## Azure SQL Database +<!--2ea11bcb-dfd0-48dc-96f0-beba578b989a_begin--> +#### Enable cross region disaster recovery for SQL Database + +Enable cross region disaster recovery for Azure SQL Database for business continuity in the event of regional outage. ++For More information, see [Overview of business continuity with Azure SQL Database](https://aka.ms/sqldb_dr_overview) +ID: 2ea11bcb-dfd0-48dc-96f0-beba578b989a ++<!--2ea11bcb-dfd0-48dc-96f0-beba578b989a_end--> ++<!--807e58d0-e385-41ad-987b-4a4b3e3fb563_begin--> +#### Enable zone redundancy for Azure SQL Database to achieve high availability and resiliency. + +To achieve high availability and resiliency, enable zone redundancy for the SQL database or elastic pool to use availability zones and ensure the database or elastic pool is resilient to zonal failures. ++For More information, see [Availability through redundancy - Azure SQL Database](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell#zone-redundant-availability) +ID: 807e58d0-e385-41ad-987b-4a4b3e3fb563 ++<!--807e58d0-e385-41ad-987b-4a4b3e3fb563_end--> + +<!--microsoft_sql_end> +## Azure Stack HCI +<!--09e56b5a-9a00-47a7-82dd-9bd9569eb6ed_begin--> +#### Upgrade to the latest version of AKS enabled by Arc + +Upgrade to the latest version of API/SDK of AKS enabled by Azure Arc for new functionality and improved stability. ++For More information, see [https://azure.github.io/azure-sdk/releases/latest/https://docsupdatetracker.net/index.html](https://azure.github.io/azure-sdk/releases/latest/https://docsupdatetracker.net/index.html) +ID: 09e56b5a-9a00-47a7-82dd-9bd9569eb6ed ++<!--09e56b5a-9a00-47a7-82dd-9bd9569eb6ed_end--> ++<!--2ac72093-309f-41ec-bf9d-55e9fc490563_begin--> +#### Upgrade to the latest version of AKS enabled by Arc + +Upgrade to the latest version of API/SDK of AKS enabled by Azure Arc for new functionality and improved stability. ++For More information, see [https://azure.github.io/azure-sdk/releases/latest/https://docsupdatetracker.net/index.html](https://azure.github.io/azure-sdk/releases/latest/https://docsupdatetracker.net/index.html) +ID: 2ac72093-309f-41ec-bf9d-55e9fc490563 ++<!--2ac72093-309f-41ec-bf9d-55e9fc490563_end--> + +<!--microsoft_azurestackhci_end> +## Classic deployment model storage +<!--fd04ff97-d3b3-470a-9544-dfea3a5708db_begin--> +#### Action required: Migrate classic storage accounts by 8/30/2024. + +Migrate your classic storage accounts to Azure Resource Manager to ensure business continuity. 
Azure Resource Manager will provide all of the same functionality plus a consistent management layer, resource grouping, and access to new features and updates. ++ +ID: fd04ff97-d3b3-470a-9544-dfea3a5708db ++<!--fd04ff97-d3b3-470a-9544-dfea3a5708db_end--> +<!--microsoft_classicstorage_end> +## Classic deployment model virtual machine +<!--13ff4efb-6c84-4684-8838-52c123e3e3a2_begin--> +#### Migrate off Cloud Services (classic) before 31 August 2024 + +Cloud Services (classic) is retiring. To avoid any loss of data or business continuity, migrate off before 31 Aug 2024. ++For More information, see [Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)](https://aka.ms/ExternalRetirementEmailMay2022) +ID: 13ff4efb-6c84-4684-8838-52c123e3e3a2 ++<!--13ff4efb-6c84-4684-8838-52c123e3e3a2_end--> +<!--microsoft_classiccompute_end> +## Cognitive Services +<!--13fed411-54aa-4923-b830-23b51539d79d_begin--> +#### Upgrade your application to use the latest API version from Azure OpenAI + +An Azure OpenAI resource with an older API version lacks the latest features and functionalities. We recommend that you use the latest REST API version. ++For More information, see [Azure OpenAI Service REST API reference](/azure/cognitive-services/openai/reference) +ID: 13fed411-54aa-4923-b830-23b51539d79d ++<!--13fed411-54aa-4923-b830-23b51539d79d_end--> ++<!--3f83aee8-222d-445c-9a46-2af5fe5b4777_begin--> +#### Quota exceeded for this resource, wait or upgrade to unblock + +If the quota for your resource is exceeded your resource becomes blocked. You can wait for the quota to automatically get replenished soon, or, to use the resource again now, upgrade it to a paid SKU. ++For More information, see [Plan and manage costs for Azure AI Studio](/azure/cognitive-services/plan-manage-costs#pay-as-you-go) +ID: 3f83aee8-222d-445c-9a46-2af5fe5b4777 ++<!--3f83aee8-222d-445c-9a46-2af5fe5b4777_end--> + +<!--microsoft_cognitiveservices_end> +## Container Registry +<!--af0cdbce-c610-499b-9bd7-b169cdb1bb2e_begin--> +#### Use Premium tier for critical production workloads + +Premium registries provide the highest amount of included storage, concurrent operations and network bandwidth, enabling high-volume scenarios. The Premium tier also adds features such as geo-replication, availability zone support, content-trust, customer-managed keys and private endpoints. ++For More information, see [Azure Container Registry service tiers](https://aka.ms/AAqwyv6) +ID: af0cdbce-c610-499b-9bd7-b169cdb1bb2e ++<!--af0cdbce-c610-499b-9bd7-b169cdb1bb2e_end--> ++<!--dcfa2602-227e-4b6c-a60d-7b1f6514e690_begin--> +#### Ensure Geo-replication is enabled for resilience + +Geo-replication enables workloads to use a single image, tag and registry name across regions, provides network-close registry access, reduced data transfer costs and regional Registry resilience if a regional outage occurs. This feature is only available in the Premium service tier. ++For More information, see [Geo-replication in Azure Container Registry](https://aka.ms/AAqwx90) +ID: dcfa2602-227e-4b6c-a60d-7b1f6514e690 ++<!--dcfa2602-227e-4b6c-a60d-7b1f6514e690_end--> + +<!--microsoft_containerregistry_end> +## Content Delivery Network +<!--ceecfd41-89b3-4c64-afe6-984c9cc03126_begin--> +#### Azure CDN From Edgio, Managed Certificate Renewal Unsuccessful. Additional Validation Required. + +Azure CDN from Edgio employs CNAME delegation to renew certificates with DigiCert for managed certificate renewals. 
It's essential that Custom Domains resolve to an azureedge.net endpoint for the automatic renewal process with DigiCert to be successful. Ensure your Custom Domain's CNAME and CAA records are configured correctly. Should you require further assistance, please submit a support case to Azure to re-attempt the renewal request. ++ +ID: ceecfd41-89b3-4c64-afe6-984c9cc03126 ++<!--ceecfd41-89b3-4c64-afe6-984c9cc03126_end--> ++<!--4e1c2077-7c73-4ace-b4aa-f11b36c28290_begin--> +#### Renew the expired Azure Front Door customer certificate to avoid service disruption + +When customer certificates for Azure Front Door Standard and Premium profiles expire, you might have service disruptions. To avoid service disruption, renew the certificate before it expires. ++For More information, see [Configure HTTPS on an Azure Front Door custom domain by using the Azure portal](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#use-your-own-certificate) +ID: 4e1c2077-7c73-4ace-b4aa-f11b36c28290 ++<!--4e1c2077-7c73-4ace-b4aa-f11b36c28290_end--> + ++<!--bfe85fd2-ee53-4c35-8781-7790da2107e1_begin--> +#### Re-validate domain ownership for the Azure Front Door managed certificate renewal + +Azure Front Door (AFD) can't automatically renew the managed certificate because the domain isn't CNAME mapped to AFD endpoint. For the managed certificate to be automatically renewed, revalidate domain ownership. ++For More information, see [Configure a custom domain on Azure Front Door by using the Azure portal](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state) +ID: bfe85fd2-ee53-4c35-8781-7790da2107e1 ++<!--bfe85fd2-ee53-4c35-8781-7790da2107e1_end--> + ++<!--2c057605-4707-4d3e-bbb0-a7fe9b6a626b_begin--> +#### Switch Secret version to 'Latest' for the Azure Front Door customer certificate + +Configure the Azure Front Door (AFD) customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, allowing the secret can be automatically rotated. ++For More information, see [Configure HTTPS on an Azure Front Door custom domain by using the Azure portal](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types) +ID: 2c057605-4707-4d3e-bbb0-a7fe9b6a626b ++<!--2c057605-4707-4d3e-bbb0-a7fe9b6a626b_end--> + ++<!--9411bc9f-d181-497c-b519-4154ae04fb00_begin--> +#### Validate domain ownership by adding DNS TXT record to DNS provider + +Validate domain ownership by adding the DNS TXT record to your DNS provider. Validating domain ownership through TXT records enhances security and ensures proper control over your domain. ++For More information, see [Configure a custom domain on Azure Front Door by using the Azure portal](/azure/frontdoor/standard-premium/how-to-add-custom-domain#domain-validation-state) +ID: 9411bc9f-d181-497c-b519-4154ae04fb00 ++<!--9411bc9f-d181-497c-b519-4154ae04fb00_end--> + +<!--microsoft_cdn_end> +## Data Factory +<!--617ee02c-be69-441e-8294-dee5a237efff_begin--> +#### Implement BCDR strategy for cross region redundancy in Azure Data Factory + +Implementing BCDR strategy improves high availability and reduced risk of data loss -Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](/azure/cosmos-db/sql/sql-api-sdk-async-java). 
+For More information, see [BCDR for Azure Data Factory and Azure Synapse Analytics pipelines - Azure Architecture Center ](https://aka.ms/AArn7ln) +ID: 617ee02c-be69-441e-8294-dee5a237efff -### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue +<!--617ee02c-be69-441e-8294-dee5a237efff_end--> -There's a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. +<!--939b97dc-fdca-4324-ba36-6ea7e1ab399b_begin--> +#### Enable auto upgrade on your SHIR + +Auto-upgrade of Self-hosted Integration runtime has been disabled. Know that you aren't getting the latest changes and bug fixes on the Self-Hosted Integration runtime. Review them to enable the SHIR auto upgrade -Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](/azure/cosmos-db/sql/sql-api-sdk-java-v4). +For More information, see [Self-hosted integration runtime auto-update and expire notification](https://aka.ms/shirexpirynotification) +ID: 939b97dc-fdca-4324-ba36-6ea7e1ab399b +<!--939b97dc-fdca-4324-ba36-6ea7e1ab399b_end--> + +<!--microsoft_datafactory_end> +## Fluid Relay +<!--a5e8a0f8-2c84-407a-b3d8-f371d684363b_begin--> +#### Azure Fluid Relay client library should be upgraded + +If the Azure Fluid Relay service is invoked with an old client library, it might cause appplication problems. To ensure your application remains operational, upgrade your Azure Fluid Relay client library to the latest version. Upgrading provides the most up-to-date functionality, and enhancements in performance and stability. +For More information, see [Version compatibility with Fluid Framework releases](/azure/azure-fluid-relay/concepts/version-compatibility) +ID: a5e8a0f8-2c84-407a-b3d8-f371d684363b -## Integration +<!--a5e8a0f8-2c84-407a-b3d8-f371d684363b_end--> +<!--microsoft_fluidrelay_end> +## HDInsight +<!--69740e3e-5b96-4b0e-b9b8-4d7573e3611c_begin--> +#### Apply critical updates by dropping and recreating your HDInsight clusters (certificate rotation round 2) + +The HDInsight service attempted to apply a critical certificate update on your running clusters. However, due to some custom configuration changes, we're unable to apply the updates on all clusters. To prevent those clusters from becoming unhealthy and unusable, drop and recreate your clusters. ++For More information, see [Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters) +ID: 69740e3e-5b96-4b0e-b9b8-4d7573e3611c ++<!--69740e3e-5b96-4b0e-b9b8-4d7573e3611c_end--> ++<!--24acd95e-fc9f-490c-b32d-edc6d747d0bc_begin--> +#### Non-ESP ABFS clusters [Cluster Permissions for Word Readable] + +Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from running Hadoop commands for storage operations. This change is to improve cluster security posture. Customers need to plan for the updates before September 30, 2023. -### Upgrade to the latest FarmBeats API version +For More information, see [Azure HDInsight release notes](https://aka.ms/hdireleasenotes) +ID: 24acd95e-fc9f-490c-b32d-edc6d747d0bc -We have identified calls to a FarmBeats API version that is scheduled for deprecation. 
We recommend switching to the latest FarmBeats API version to ensure uninterrupted access to FarmBeats, latest features, and performance improvements. +<!--24acd95e-fc9f-490c-b32d-edc6d747d0bc_end--> + ++<!--35e3a19f-16e7-4bb1-a7b8-49e02a35af2e_begin--> +#### Restart brokers on your Kafka Cluster Disks + +When data disks used by Kafka brokers in HDInsight clusters are almost full, the Apache Kafka broker process can't start and fails. To mitigate, find the retention time for every topic, back up the files that are older, and restart the brokers. ++For More information, see [Scenario: Brokers are unhealthy or can't restart due to disk space full issue](https://aka.ms/kafka-troubleshoot-full-disk) +ID: 35e3a19f-16e7-4bb1-a7b8-49e02a35af2e ++<!--35e3a19f-16e7-4bb1-a7b8-49e02a35af2e_end--> + ++<!--41a248ef-50d4-4c48-81fb-13196f957210_begin--> +#### Cluster Name length update + +The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. This change will be implemented by September 30th, 2023. ++For More information, see [Azure HDInsight release notes](/azure/hdinsight/hdinsight-release-notes) +ID: 41a248ef-50d4-4c48-81fb-13196f957210 ++<!--41a248ef-50d4-4c48-81fb-13196f957210_end--> + ++<!--8f163c95-0029-4139-952a-42bd0d773b93_begin--> +#### Upgrade your cluster to the the latest HDInsight image + +A cluster created one year ago doesn't have the latest image upgrades. Your cluster was created 1 year ago. As part of the best practices, we recommend you use the latest HDInsight images for the best open source updates, Azure updates, and security fixes. The recommended maximum duration for cluster upgrades is less than six months. ++For More information, see [Consider the below points before starting to create a cluster.](/azure/hdinsight/hdinsight-overview-before-you-start#keep-your-clusters-up-to-date) +ID: 8f163c95-0029-4139-952a-42bd0d773b93 ++<!--8f163c95-0029-4139-952a-42bd0d773b93_end--> + ++<!--97355d8e-59ae-43ff-9214-d4acf728467a_begin--> +#### Upgrade your HDInsight Cluster + +A cluster not using the latest image doesn't have the latest upgrades. Your cluster is not using the latest image. We recommend you use the latest versions of HDInsight images for the best of open source updates, Azure updates, and security fixes. HDInsight releases happen every 30 to 60 days. ++For More information, see [Azure HDInsight release notes](/azure/hdinsight/hdinsight-release-notes) +ID: 97355d8e-59ae-43ff-9214-d4acf728467a ++<!--97355d8e-59ae-43ff-9214-d4acf728467a_end--> + ++<!--b3bf9f14-c83e-4dd3-8f5c-a6be746be173_begin--> +#### Gateway or virtual machine not reachable + +We have detected a Network prob failure, it indicates unreachable gateway or a virtual machine. Verify all cluster hostsΓÇÖ availability. Restart virtual machine to recover. If you need further assistance, don't hesitate to contact Azure support for help. ++ +ID: b3bf9f14-c83e-4dd3-8f5c-a6be746be173 ++<!--b3bf9f14-c83e-4dd3-8f5c-a6be746be173_end--> + ++<!--e4635832-0ab1-48b1-a386-c791197189e6_begin--> +#### VM agent is 9.9.9.9. Upgrade the cluster. + +Our records indicate that one or more of your clusters are using images dated February 2022 or older (image versions 2202xxxxxx or older). +There is a potential reliability issue on HDInsight clusters that use images dated February 2022 or older.Consider rebuilding your clusters with latest image. 
++ +ID: e4635832-0ab1-48b1-a386-c791197189e6 ++<!--e4635832-0ab1-48b1-a386-c791197189e6_end--> + +<!--microsoft_hdinsight_end> +## Media Services +<!--b7c9fd99-a979-40b4-ab48-b1dfab6bb41a_begin--> +#### Increase Media Services quotas or limits + +When a media account hits its quota limits, disruption of service might occur. To avoid any disruption of service, review current usage of assets, content key policies, and stream policies and increase quota limits for the entities that are close to hitting the limit. You can request quota limits be increased by opening a ticket and adding relevant details. TIP: Don't create additional Azure Media accounts in an attempt to obtain higher limits. -Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ). +For More information, see [Azure Media Services quotas and limits](https://aka.ms/ams-quota-recommendation/) +ID: b7c9fd99-a979-40b4-ab48-b1dfab6bb41a ++<!--b7c9fd99-a979-40b4-ab48-b1dfab6bb41a_end--> +<!--microsoft_media_end> +## Service Bus +<!--29765e2c-5286-4039-963f-f8231e56cc3e_begin--> +#### Use Service Bus premium tier for improved resilience + +When running critical applications, the Service Bus premium tier offers better resource isolation at the CPU and memory level, enhancing availability. It also supports Geo-disaster recovery feature enabling easier recovery from regional disasters without having to change application configurations. -### Upgrade to the latest ADMA Java SDK version +For More information, see [Service Bus premium messaging tier](https://aka.ms/asb-premium) +ID: 29765e2c-5286-4039-963f-f8231e56cc3e -We have identified calls to an Azure Data Manager for Agriculture (ADMA) Java SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements. +<!--29765e2c-5286-4039-963f-f8231e56cc3e_end--> -Learn more about [Azure FarmBeats - FarmBeatsJavaSdkVersion (Upgrade to the latest ADMA Java SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ). +<!--68e62f5c-4ed1-4b78-a2a0-4d9a4cebf106_begin--> +#### Use Service Bus autoscaling feature in the premium tier for improved resilience + +When running critical applications, enabling the auto scale feature allows you to have enough capacity to handle the load on your application. Having the right amount of resources running can reduce throttling and provide a better user experience. -### Upgrade to the latest ADMA DotNet SDK version +For More information, see [Automatically update messaging units of an Azure Service Bus namespace](https://aka.ms/asb-autoscale) +ID: 68e62f5c-4ed1-4b78-a2a0-4d9a4cebf106 -We have identified calls to an ADMA DotNet SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements. --Learn more about [Azure FarmBeats - FarmBeatsDotNetSdkVersion (Upgrade to the latest ADMA DotNet SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ). --### Upgrade to the latest ADMA JavaScript SDK version --We have identified calls to an ADMA JavaScript SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements. 
--Learn more about [Azure FarmBeats - FarmBeatsJavaScriptSdkVersion (Upgrade to the latest ADMA JavaScript SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ). --### Upgrade to the latest ADMA Python SDK version --We have identified calls to an ADMA Python SDK version that is scheduled for deprecation. We recommend switching to the latest SDK version to ensure uninterrupted access to ADMA, latest features, and performance improvements. --Learn more about [Azure FarmBeats - FarmBeatsPythonSdkVersion (Upgrade to the latest ADMA Python SDK version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ). --### SSL/TLS renegotiation blocked --SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it's blocked, reading 'context.Request.Certificate' in policy expressions returns 'null.' To support client certificate authentication scenarios, enable 'Negotiate client certificate' on listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client. --Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients). --### Hostname certificate rotation failed --API Management service failed to refresh hostname certificate from Key Vault. Ensure that certificate exists in Key Vault and API Management service identity is granted secret read access. Otherwise, API Management service can't retrieve certificate updates from Key Vault, which might lead to the service using stale certificate and runtime API traffic being blocked as a result. --Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain). ----## Internet of Things --### Upgrade device client SDK to a supported version for IotHub --Some or all of your devices are using outdated SDK and we recommend you upgrade to a supported version of SDK. See the details in the recommendation. --Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk). --### IoT Hub Potential Device Storm Detected --A device storm is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected. --Learn more about [IoT hub - IoTHubDeviceStorm (IoT Hub Potential Device Storm Detected)](https://aka.ms/IotHubDeviceStorm). --### Upgrade Device Update for IoT Hub SDK to a supported version --Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities. --Learn more about [IoT hub - DU_SDK_Advisor_Recommendation (Upgrade Device Update for IoT Hub SDK to a supported version)](/azure/iot-hub-device-update/understand-device-update). --### IoT Hub Quota Exceeded Detected --We have detected that your IoT Hub has exceeded its daily message quota. To prevent your IoT Hub exceeding its daily message quota in the future, add units or increase the SKU level. --Learn more about [IoT hub - IoTHubQuotaExceededAdvisor (IoT Hub Quota Exceeded Detected)](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded). 
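The IoT Hub recommendations above ask you to move devices to a supported device client SDK and to avoid reusing one device identity across devices. A minimal sketch of a device check-in with the current Python device SDK, assuming the `azure-iot-device` package and a per-device connection string in an environment variable (both assumptions, not part of the recommendation text):

```python
import os

from azure.iot.device import IoTHubDeviceClient

# Each physical device must use its own device identity; sharing one device ID
# and its credentials across devices produces the connect/disconnect loop that
# the "device storm" recommendation above describes.
client = IoTHubDeviceClient.create_from_connection_string(
    os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
)
client.connect()
client.send_message("connectivity check from an up-to-date device SDK")
client.shutdown()
```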
--### Upgrade device client SDK to a supported version for IoT Hub --Some or all of your devices are using an outdated SDK, and we recommend that you upgrade to a supported SDK version. See the details in the link given. --Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk). --### Upgrade Microsoft Edge Device Runtime to a supported version for IoT Hub --Some or all of your Microsoft Edge devices are using outdated runtime versions, and we recommend that you upgrade to the latest supported version of the runtime. See the details in the link given. --Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Microsoft Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck). ----## Media --### Increase Media Services quotas or limits to ensure continuity of service --Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies, and Stream Policies for the media account. To avoid any disruption of service, request quota limits to be increased for the entities that are close to hitting the quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create extra Azure Media accounts in an attempt to obtain higher limits. --Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/). ----## Networking --### Check Point virtual machine might lose Network Connectivity --We have identified that your virtual machine might be running a version of the Check Point image that might lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image. --Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point virtual machine might lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard). --### Upgrade to the latest version of the Azure Connected Machine agent --The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience. --Learn more about [Connected Machine agent - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](../azure-arc/servers/manage-agent.md). --### Switch Secret version to 'Latest' for the Azure Front Door customer certificate --We recommend configuring the Azure Front Door (AFD) customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, so that the secret can be automatically rotated. --Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](https://aka.ms/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types). --### Validate domain ownership by adding DNS TXT record to DNS provider. --Validate domain ownership by adding a DNS TXT record to your DNS provider. --Learn more about [Front Door Profile - ValidateDomainOwnership (Validate domain ownership by adding DNS TXT record to DNS provider.)](https://aka.ms/how-to-add-custom-domain#domain-validation-state).
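The domain-validation recommendation above depends on a DNS TXT record that Front Door can resolve publicly. A minimal sketch for confirming the record is visible, assuming the third-party `dnspython` package and placeholder record values copied from the custom domain's validation pane in the portal (typically a record named `_dnsauth.<your-domain>`):

```python
import dns.resolver  # third-party package: dnspython

# Placeholders: use the exact TXT record name and validation token shown
# on the custom domain's validation pane in the Azure portal.
record_name = "_dnsauth.www.contoso.com"
expected_token = "<validation-token-from-portal>"

answers = dns.resolver.resolve(record_name, "TXT")
published = [b"".join(rdata.strings).decode() for rdata in answers]
print("published TXT values:", published)
print("validation token found:", expected_token in published)
```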
--### Revalidate domain ownership for the Azure Front Door managed certificate renewal --Azure Front Door can't automatically renew the managed certificate because the domain isn't CNAME mapped to AFD endpoint. Revalidate domain ownership for the managed certificate to be automatically renewed. --Learn more about [Front Door Profile - RevalidateDomainOwnership (Revalidate domain ownership for the Azure Front Door managed certificate renewal)](https://aka.ms/how-to-add-custom-domain#domain-validation-state). --### Renew the expired Azure Front Door customer certificate to avoid service disruption --Some of the customer certificates for Azure Front Door Standard and Premium profiles expired. Renew the certificate in time to avoid service disruption. --Learn more about [Front Door Profile - RenewExpiredBYOC (Renew the expired Azure Front Door customer certificate to avoid service disruption.)](https://aka.ms/how-to-configure-https-custom-domain#use-your-own-certificate). --### Upgrade your SKU or add more instances to ensure fault tolerance --Deploying two or more medium or large sized instances ensures business continuity during outages caused by planned or unplanned maintenance. --Learn more about [Improve the reliability of your application by using Azure Advisor - Ensure application gateway fault tolerance)](/azure/advisor/advisor-high-availability-recommendations#ensure-application-gateway-fault-tolerance). --### Avoid hostname override to ensure site integrity --Try to avoid overriding the hostname when configuring Application Gateway. Having a domain on the frontend of Application Gateway different than the one used to access the backend, can potentially lead to cookies or redirect URLs being broken. A different frontend domain isn't a problem in all situations, and certain categories of backends like REST APIs, are less sensitive in general. Make sure the backend is able to deal with the domain difference, or update the Application Gateway configuration so the hostname doesn't need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend. --Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain). --### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule --In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide extra protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable them. --Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve). --### Extra protection to mitigate Log4j 2 vulnerability (CVE-2021-44228) --To mitigate the effect of Log4j 2 vulnerability, we recommend these steps: --1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided. -2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU. --Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (More protection to mitigate Log4j 2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve). 
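The "Avoid hostname override" recommendation above is about backend HTTP settings that rewrite the incoming host header. A minimal sketch that lists which backend HTTP settings on an Application Gateway override the host name, assuming the `azure-identity` and `azure-mgmt-network` packages; the subscription, resource group, and gateway names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
appgw = network.application_gateways.get("<resource-group>", "<app-gateway-name>")

# Flag backend HTTP settings that rewrite the host header, either with an
# explicit host name or by picking the backend address as the host name.
for settings in appgw.backend_http_settings_collection or []:
    overrides = bool(settings.host_name) or bool(settings.pick_host_name_from_backend_address)
    print(f"{settings.name}: overrides host name = {overrides}")
```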
--### Update virtual network permission of Application Gateway users --To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission. --Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin). --### Use version-less Key Vault secret identifier to reference the certificates --We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/ --Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion). --### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency --We have detected that your ExpressRoute gateway only has 1 ExpressRoute circuit associated to it. Connect one or more extra circuits to your gateway to ensure peering location redundancy and resiliency --Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](../expressroute/designing-for-high-availability-with-expressroute.md). --### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit --We have detected that ExpressRoute Monitor on Network Performance Monitor isn't currently monitoring your ExpressRoute circuit. ExpressRoute monitor provides end-to-end monitoring capabilities including: loss, latency, and performance from on-premises to Azure and Azure to on-premises --Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](../expressroute/how-to-npm.md). --### Use ExpressRoute Global Reach to improve your design for disaster recovery --You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments if one circuit losing connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros. --Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md). --### Add at least one more endpoint to the profile, preferably in another Azure region --Profiles require more than one endpoint to ensure availability if one of the endpoints fails. We also recommend that endpoints be in different regions. --Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4). --### Add an endpoint configured to "All (World)" --For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there's no predefined failover. 
Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black holing and guarantee service remains available. --Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \""All (World)\"")](https://aka.ms/Rf7vc5). --### Add or move one endpoint to another Azure region --All endpoints associated to this proximity profile are in the same region. Users from other regions might experience long latency when attempting to connect. Adding or moving an endpoint to another region improves overall performance for proximity routing and provide better availability in case all endpoints in one region fail. --Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb). --### Move to production gateway SKUs from Basic gateways --The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability. --Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore). --### Use NAT gateway for outbound connectivity --Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT gateway for outbound traffic from your virtual networks. NAT gateway scales dynamically and provides secure connections for traffic headed to the internet. --Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet). --### Update virtual network permission of Application Gateway users --To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission. --Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin). --### Use version-less Key Vault secret identifier to reference the certificates --We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/ --Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion). --### Enable Active-Active gateways for redundancy --In active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically. --Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore). 
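The version-less Key Vault secret identifier recommendation above can be seen in action with the Key Vault secrets SDK: asking for a secret by name, with no version, always resolves to the newest enabled version, which is what lets Application Gateway pick up a rotated certificate automatically. A minimal sketch, assuming the `azure-identity` and `azure-keyvault-secrets` packages and the placeholder vault and secret names from the example URL above:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name; reuse the names from your own vault.
client = SecretClient(
    vault_url="https://myvault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# No version supplied: the service resolves the latest enabled version, which
# mirrors how a version-less secret identifier behaves in Application Gateway.
secret = client.get_secret("mysecret")
print("resolved version:", secret.properties.version)
print("version-less id:", f"{client.vault_url}/secrets/{secret.name}/")
```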
--<!-- -### Use HEAD health probes --For health probes, itΓÇÖs a good practice to use the HEAD method, which reduces the amount of traffic load on your origins. --Learn more about [Front Door - Use HEAD health probes](https://aka.ms/afd-use-health-probes). >-### Use managed TLS certificates --Front Door management of your TLS certificates reduces your operational costs and helps you to avoid costly outages caused by forgetting to renew a certificate. --Learn more about [Use managed TLS certificates](https://aka.ms/afd-use-managed-tls). --### Disable health probes when there is only one origin in an origin group --If you only have a single origin, Front Door always routes traffic to that origin even if its health probe reports an unhealthy status. The status of the health probe doesn't do anything to change Front Door's behavior. In this scenario, health probes don't provide a benefit and you should disable them to reduce the traffic on your origin. --Learn more about [Health probe best practices](https://aka.ms/afd-disable-health-probes). --### Use the same domain name on Azure Front Door and your origin --We recommend that you preserve the original HTTP host name when you use a reverse proxy in front of a web application. Having a different host name at the reverse proxy than the one that's provided to the back-end application server can lead to cookies or redirect URLs that don't work properly. For example, session state can get lost, authentication can fail, or back-end URLs can inadvertently be exposed to end users. You can avoid these problems by preserving the host name of the initial request so that the application server sees the same domain as the web browser. --Learn more about [Use the same domain name on Azure Front Door and your origin](https://aka.ms/afd-same-domain-origin). --## SAP for Azure --### Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads --The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for ASCS HA setup. --Learn more about [Central Server Instance - ConcurrentFencingHAASCSRH (Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). --### Ensure that stonith is enabled for the Pacemaker cofiguration in ASCS HA setup in SAP workloads --In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration of your SAP workload. --Learn more about [Central Server Instance - StonithEnabledHAASCSRH (Ensure that stonith is enabled for the Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). --### Set the stonith timeout to 144 for the cluster cofiguration in ASCS HA setup in SAP workloads --Set the stonith timeout to 144 for HA cluster as per recommendation for SAP on Azure. --Learn more about [Central Server Instance - StonithTimeOutHAASCS (Set the stonith timeout to 144 for the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
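The ASCS Pacemaker recommendations above (STONITH enabled, a stonith timeout of 144, and concurrent fencing) are cluster-wide properties; the Pacemaker property name is `stonith-enabled`. The sketch below only wraps the RHEL `pcs` calls so the three values are applied together; it assumes it runs on a cluster node with sufficient privileges, and on SLES the same properties are set with `crm configure property`. Treat it as an illustration of the settings, not a replacement for the linked SAP high-availability guides.

```python
import subprocess

# Cluster properties called out above for the ASCS HA setup (RHEL pcs syntax).
properties = {
    "stonith-enabled": "true",
    "stonith-timeout": "144",
    "concurrent-fencing": "true",
}

for name, value in properties.items():
    # Equivalent shell command: pcs property set <name>=<value>
    subprocess.run(["pcs", "property", "set", f"{name}={value}"], check=True)
```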
--### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads --The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance. --Learn more about [Central Server Instance - CorosyncTokenHAASCSRH (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). --### Set the expected votes parameter to 2 in Pacemaker cofiguration in ASCS HA setup in SAP workloads +<!--68e62f5c-4ed1-4b78-a2a0-4d9a4cebf106_end--> + +<!--microsoft_servicebus_end> +## SQL Server on Azure Virtual Machines +<!--77f01e65-e57f-40ee-a0e9-e18c007d4d4c_begin--> +#### Enable Azure backup for SQL on your virtual machines + +For the benefits of zero-infrastructure backup, point-in-time restore, and central management with SQL AG integration, enable backups for SQL databases on your virtual machines using Azure backup. ++For More information, see [About SQL Server Backup in Azure VMs](/azure/backup/backup-azure-sql-database) +ID: 77f01e65-e57f-40ee-a0e9-e18c007d4d4c ++<!--77f01e65-e57f-40ee-a0e9-e18c007d4d4c_end--> +<!--microsoft_sqlvirtualmachine_end> +## Storage +<!--d42d751d-682d-48f0-bc24-bb15b61ac4b8_begin--> +#### Use Managed Disks for storage accounts reaching capacity limit + +When Premium SSD unmanaged disks in storage accounts are about to reach their Premium Storage capacity limit, failures might occur. To avoid failures when this limit is reached, migrate to Managed Disks that don't have an account capacity limit. This migration can be done through the portal in less than 5 minutes. -In a two node HA cluster, set the quorum votes to 2 as per recommendation for SAP on Azure. +For More information, see [Scalability and performance targets for standard storage accounts](https://aka.ms/premium_blob_quota) +ID: d42d751d-682d-48f0-bc24-bb15b61ac4b8 -Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expected votes parameter to 2 in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). +<!--d42d751d-682d-48f0-bc24-bb15b61ac4b8_end--> -### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads +<!--8ef907f4-f8e3-4bf1-962d-27e005a7d82d_begin--> +#### Configure blob backup + +Azure blob backup helps protect data from accidental or malicious deletion. We recommend that you configure blob backup. -The corosync token_retransmits_before_loss_const determines the number of times that tokens can be retransmitted the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup. +For More information, see [Overview of Azure Blob backup](/azure/backup/blob-backup-overview) +ID: 8ef907f4-f8e3-4bf1-962d-27e005a7d82d -Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
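The corosync recommendations in this part of the section (token 30000 and token_retransmits_before_loss_const 10 above, plus the consensus, join, max_messages, expected_votes, and two_node values that follow) all live in `/etc/corosync/corosync.conf`. A minimal, read-only sketch that reports how the current file compares with those values; the file path and the flat `name: value` layout are assumptions based on a default corosync installation:

```python
import re
from pathlib import Path

# Recommended values taken from this section's ASCS HA guidance.
RECOMMENDED = {
    "token": "30000",
    "token_retransmits_before_loss_const": "10",
    "consensus": "36000",
    "join": "60",
    "max_messages": "20",
    "expected_votes": "2",
    "two_node": "1",
}

conf = Path("/etc/corosync/corosync.conf").read_text()

for key, wanted in RECOMMENDED.items():
    match = re.search(rf"^\s*{key}\s*:\s*(\S+)", conf, re.MULTILINE)
    actual = match.group(1) if match else None
    status = "OK" if actual == wanted else f"found {actual!r}, recommended {wanted}"
    print(f"{key}: {status}")
```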
+<!--8ef907f4-f8e3-4bf1-962d-27e005a7d82d_end--> + +<!--microsoft_storage_end> +## Subscriptions +<!--9e91a63f-faaf-46f2-ac7c-ddfcedf13366_begin--> +#### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data + +Keep your information and applications safe with robust, one click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares. -### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads +For More information, see [Azure Backup Documentation - Azure Backup ](/azure/backup/) +ID: 9e91a63f-faaf-46f2-ac7c-ddfcedf13366 -The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance. +<!--9e91a63f-faaf-46f2-ac7c-ddfcedf13366_end--> -Learn more about [Central Server Instance - CorosyncTokenHAASCSSLE (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--242639fd-cd73-4be2-8f55-70478db8d1a5_begin--> +#### Create an Azure Service Health alert + +Azure Service Health alerts keep you informed about issues and advisories in four areas (Service issues, Planned maintenance, Security and Health advisories). These alerts are personalized to notify you about disruptions or potential impacts on your chosen Azure regions and services. -### Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads +For More information, see [Create activity log alerts on service notifications using the Azure portal](https://aka.ms/aa_servicehealthalert_action) +ID: 242639fd-cd73-4be2-8f55-70478db8d1a5 -The corosync max_messages constant specifies the maximum number of messages allowed to be sent by one processor once the token is received. We recommend you set to 20 times the corosync token parameter in Pacemaker cluster configuration. +<!--242639fd-cd73-4be2-8f55-70478db8d1a5_end--> + +<!--microsoft_subscriptions_end> +## Virtual Machines +<!--02cfb5ef-a0c1-4633-9854-031fbda09946_begin--> +#### Improve data reliability by using Managed Disks + +Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure. -Learn more about [Central Server Instance - CorosyncMaxMessagesHAASCSSLE (Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [https://aka.ms/aa_avset_manageddisk_learnmore](https://aka.ms/aa_avset_manageddisk_learnmore) +ID: 02cfb5ef-a0c1-4633-9854-031fbda09946 -### Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads +<!--02cfb5ef-a0c1-4633-9854-031fbda09946_end--> -The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set 1.2 times the corosync token in Pacemaker cluster configuration for ASCS HA setup. 
+<!--ed651749-cd37-4fd5-9897-01b416926745_begin--> +#### Enable virtual machine replication to protect your applications from regional outage + +Virtual machines are resilient to regional outages when replication to another region is enabled. To reduce adverse business impact during an Azure region outage, we recommend enabling replication of all business-critical virtual machines. -Learn more about [Central Server Instance - CorosyncConsensusHAASCSSLE (Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Quickstart: Set up disaster recovery to a secondary Azure region for an Azure VM](https://aka.ms/azure-site-recovery-dr-azure-vms) +ID: ed651749-cd37-4fd5-9897-01b416926745 -### Set the expected votes parameter to 2 in the cluster cofiguration in ASCS HA setup in SAP workloads +<!--ed651749-cd37-4fd5-9897-01b416926745_end--> + -In a two node HA cluster, set the quorum parameter expected_votes to 2 as per recommendation for SAP on Azure. +<!--bcfeb92b-fe93-4cea-adc6-e747055518e9_begin--> +#### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery + +IP address-based allowlisting is a vulnerable way to control outbound connectivity for firewalls, Service Tags are a good alternative. We highly recommend the use of Service Tags, to allow connectivity to Azure Site Recovery services for the machines. -Learn more about [Central Server Instance - ExpectedVotesHAASCSSLE (Set the expected votes parameter to 2 in the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [About networking in Azure VM disaster recovery](https://aka.ms/azure-site-recovery-using-service-tags) +ID: bcfeb92b-fe93-4cea-adc6-e747055518e9 -### Set the two_node parameter to 1 in the cluster cofiguration in ASCS HA setup in SAP workloads +<!--bcfeb92b-fe93-4cea-adc6-e747055518e9_end--> + -In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure. +<!--58d6648d-32e8-4346-827c-4f288dd8ca24_begin--> +#### Upgrade the standard disks attached to your premium-capable VM to premium disks + +Using Standard SSD disks with premium VMs may lead to suboptimal performance and latency issues. We recommend that you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee Virtual Machine Connectivity of at least 99.9%. When choosing to upgrade, there are two factors to consider. The first factor is that upgrading requires a VM reboot and that takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks. -Learn more about [Central Server Instance - TwoNodesParametersHAASCSSLE (Set the two_node parameter to 1 in the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
+For More information, see [Azure managed disk types](https://aka.ms/aa_storagestandardtopremium_learnmore) +ID: 58d6648d-32e8-4346-827c-4f288dd8ca24 -### Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads +<!--58d6648d-32e8-4346-827c-4f288dd8ca24_end--> + -The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 60 in Pacemaker cluster configuration for ASCS HA setup. +<!--57ecb3cd-f2b4-4cad-8b3a-232cca527a0b_begin--> +#### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost + +Azure Managed Disks provide higher resiliency, simplified service management, higher scale target and more choices among several disk types. Your VM is using premium unmanaged disks that can be migrated to managed disks at no additional cost through the portal in less than 5 minutes. -Learn more about [Central Server Instance - CorosyncJoinHAASCSSLE (Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Introduction to Azure managed disks](https://aka.ms/md_overview) +ID: 57ecb3cd-f2b4-4cad-8b3a-232cca527a0b -### Ensure that stonith is enabled for the cluster cofiguration in ASCS HA setup in SAP workloads +<!--57ecb3cd-f2b4-4cad-8b3a-232cca527a0b_end--> + -In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration. +<!--11f04d70-5bb3-4065-b717-1f11b2e050a8_begin--> +#### Upgrade your deprecated Virtual Machine image to a newer image + +Virtual Machines (VMs) in your subscription are running on images scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. To prevent disruption to your workloads, upgrade to a newer image. (VMRunningDeprecatedImage) -Learn more about [Central Server Instance - StonithEnabledHAASCS (Ensure that stonith is enabled for the cluster cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 11f04d70-5bb3-4065-b717-1f11b2e050a8 -### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup +<!--11f04d70-5bb3-4065-b717-1f11b2e050a8_end--> + -Set the stonith-timeout to 900 for reliable function of the Pacemaker for ASCS HA setup. This stonith-timeout setting is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. +<!--937d85a4-11b2-4e13-a6b5-9e15e3d74d7b_begin--> +#### Upgrade to a newer offer of Virtual Machine image + +Virtual Machines (VMs) in your subscription are running on images scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. To prevent disruption to your workloads, upgrade to a newer image. (VMRunningDeprecatedOfferLevelImage) -Learn more about [Central Server Instance - StonithTimeOutHAASCSSLE (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
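Several recommendations in this part of the section ask you to identify virtual machines built from deprecated marketplace images. A minimal sketch that lists each VM's image reference so the publisher, offer, SKU, and version can be compared against the deprecation notices, assuming the `azure-identity` and `azure-mgmt-compute` packages; the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List every VM in the subscription with its marketplace image reference.
for vm in compute.virtual_machines.list_all():
    image = vm.storage_profile.image_reference
    if image and image.publisher:
        print(vm.name, image.publisher, image.offer, image.sku, image.version)
```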
+For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 937d85a4-11b2-4e13-a6b5-9e15e3d74d7b -### Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads +<!--937d85a4-11b2-4e13-a6b5-9e15e3d74d7b_end--> + -The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for ASCS HA setup. +<!--681acf17-11c3-4bdd-8f71-da563c79094c_begin--> +#### Upgrade to a newer SKU of Virtual Machine image + +Virtual Machines (VMs) in your subscription are running on images scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. To prevent disruption to your workloads, upgrade to a newer image. -Learn more about [Central Server Instance - ConcurrentFencingHAASCSSLE (Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 681acf17-11c3-4bdd-8f71-da563c79094c -### Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads +<!--681acf17-11c3-4bdd-8f71-da563c79094c_end--> + -The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for ASCS HA setup. +<!--3b739bd1-c193-4bb6-a953-1362ee3b03b2_begin--> +#### Upgrade your Virtual Machine Scale Set to alternative image version + +VMSS in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Set workloads would no longer scale out. Upgrade to newer version of the image to prevent disruption to your workload. -Learn more about [Central Server Instance - SoftdogConfigHAASCSSLE (Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 3b739bd1-c193-4bb6-a953-1362ee3b03b2 -### Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads +<!--3b739bd1-c193-4bb6-a953-1362ee3b03b2_end--> + -The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for ASCS HA setup. +<!--3d18d7cd-bdec-4c68-9160-16a677d0f86a_begin--> +#### Upgrade your Virtual Machine Scale Set to alternative image offer + +VMSS in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Set workloads would no longer scale out. To prevent disruption to your workload, upgrade to newer offer of the image. -Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
+For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 3d18d7cd-bdec-4c68-9160-16a677d0f86a -### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for ASCS HA setup +<!--3d18d7cd-bdec-4c68-9160-16a677d0f86a_end--> + -The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. +<!--44abb62e-7789-4f2f-8001-fa9624cb3eb3_begin--> +#### Upgrade your Virtual Machine Scale Set to alternative image SKU + +VMSS in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, your Virtual Machine Scale Set workloads would no longer scale out. To prevent disruption to your workload, upgrade to newer SKU of the image. -Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Deprecated Azure Marketplace images - Azure Virtual Machines ](https://aka.ms/DeprecatedImagesFAQ) +ID: 44abb62e-7789-4f2f-8001-fa9624cb3eb3 -### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads +<!--44abb62e-7789-4f2f-8001-fa9624cb3eb3_end--> + -Enable HA ports in the Load balancing rules for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. +<!--53e0a3cb-3569-474a-8d7b-7fd06a8ec227_begin--> +#### Provide access to mandatory URLs missing for your Azure Virtual Desktop environment + +For a session host to deploy and register to Windows Virtual Desktop (WVD) properly, you need a set of URLs in the 'allowed list' in case your VM runs in a restricted environment. For specific URLs missing from your allowed list, search your application event log for event 3702. -Learn more about [Central Server Instance - ASCSHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance). +For More information, see [Required FQDNs and endpoints for Azure Virtual Desktop](/azure/virtual-desktop/safe-url-list) +ID: 53e0a3cb-3569-474a-8d7b-7fd06a8ec227 -### Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads +<!--53e0a3cb-3569-474a-8d7b-7fd06a8ec227_end--> + -Enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. +<!--00e4ac6c-afa3-4578-a021-5f15e18850a2_begin--> +#### Align location of resource and resource group + +To reduce the impact of region outages, co-locate your resources with their resource group in the same region. This way, Azure Resource Manager stores metadata related to all resources within the group in one region. By co-locating, you reduce the chance of being affected by region unavailability. 
-Learn more about [Central Server Instance - ASCSHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance). +For More information, see [What is Azure Resource Manager?](/azure/azure-resource-manager/management/overview#resource-group-location-alignment) +ID: 00e4ac6c-afa3-4578-a021-5f15e18850a2 -### Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads +<!--00e4ac6c-afa3-4578-a021-5f15e18850a2_end--> + -To prevent load balancer timeout, make sure that all Azure Load Balancing Rules have: 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. +<!--066a047a-9ace-45f4-ac50-6325840a6b00_begin--> +#### Use Availability zones for better resiliency and availability + +Availability Zones (AZ) in Azure help protect your applications and data from datacenter failures. Each AZ is made up of one or more datacenters equipped with independent power, cooling, and networking. By designing solutions to use zonal VMs, you can isolate your VMs from failure in any other zone. -Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [What are availability zones?](/azure/reliability/availability-zones-overview) +ID: 066a047a-9ace-45f4-ac50-6325840a6b00 -### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads +<!--066a047a-9ace-45f4-ac50-6325840a6b00_end--> + -Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down. +<!--3b587048-b04b-4f81-aaed-e43793652b0f_begin--> +#### Enable Azure Virtual Machine Scale Set (VMSS) application health monitoring + +Configuring Virtual Machine Scale Set application health monitoring using the Application Health extension or load balancer health probes enables the Azure platform to improve the resiliency of your application by responding to changes in application health. -Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-general-update-november-2021/ba-p/2807619#network-settings-and-tuning-for-sap-on-azure). +For More information, see [Using Application Health extension with Virtual Machine Scale Sets](https://aka.ms/vmss-app-health-monitoring) +ID: 3b587048-b04b-4f81-aaed-e43793652b0f -### Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with Redhat OS +<!--3b587048-b04b-4f81-aaed-e43793652b0f_end--> + -In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration of your SAP workload. +<!--651c7925-17a3-42e5-85cd-73bd095cf27f_begin--> +#### Enable Backups on your Virtual Machines + +Secure your data by enabling backups for your virtual machines. 
-Learn more about [Database Instance - StonithEnabledHARH (Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). +For More information, see [What is the Azure Backup service?](/azure/backup/backup-overview) +ID: 651c7925-17a3-42e5-85cd-73bd095cf27f -### Set the stonith timeout to 144 for the cluster cofiguration in HA enabled SAP workloads +<!--651c7925-17a3-42e5-85cd-73bd095cf27f_end--> + -Set the stonith timeout to 144 for HA cluster as per recommendation for SAP on Azure. +<!--b4d988a9-85e6-4179-b69c-549bdd8a55bb_begin--> +#### Enable automatic repair policy on Azure Virtual Machine Scale Sets (VMSS) + +Enabling automatic instance repairs helps achieve high availability by maintaining a set of healthy instances. If an unhealthy instance is found by the Application Health extension or load balancer health probe, automatic instance repairs attempt to recover the instance by triggering repair actions. -Learn more about [Database Instance - StonithTimeoutHASLE (Set the stonith timeout to 144 for the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Automatic instance repairs for Azure Virtual Machine Scale Sets](https://aka.ms/vmss-automatic-repair) +ID: b4d988a9-85e6-4179-b69c-549bdd8a55bb -### Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with SUSE OS +<!--b4d988a9-85e6-4179-b69c-549bdd8a55bb_end--> + -In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. Ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration. +<!--ce8bb934-ce5c-44b3-a94c-1836fa7a269a_begin--> +#### Configure Virtual Machine Scale Set automated scaling by metrics + +Optimize resource utilization, reduce costs, and enhance application performance with custom autoscale based on a metric. Automatically add Virtual Machine instances based on real-time metrics such as CPU, memory, and disk operations. Ensure high availability while maintaining cost-efficiency. -Learn more about [Database Instance - StonithEnabledHASLE (Enable stonith in the cluster cofiguration in HA enabled SAP workloads for VMs with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Overview of autoscale with Azure Virtual Machine Scale Sets](https://aka.ms/VMSSCustomAutoscaleMetric) +ID: ce8bb934-ce5c-44b3-a94c-1836fa7a269a -### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup +<!--ce8bb934-ce5c-44b3-a94c-1836fa7a269a_end--> + -Set the stonith-timeout to 900 for reliable functioning of the Pacemaker for HANA DB HA setup. This setting is important if you're using the Azure fence agent for fencing with either managed identity or service principal. +<!--d4102c0f-ebe3-4b22-8fe0-e488866a87af_begin--> +#### Use Azure Disks with Zone Redundant Storage (ZRS) for higher resiliency and availability + +Azure Disks with ZRS provide synchronous replication of data across three Availability Zones in a region, making the disk tolerant to zonal failures without disruptions to applications. For higher resiliency and availability, migrate disks from LRS to ZRS. 
-Learn more about [Database Instance - StonithTimeOutSuseHDB (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +For More information, see [Convert a disk from LRS to ZRS](https://aka.ms/migratedisksfromLRStoZRS) +ID: d4102c0f-ebe3-4b22-8fe0-e488866a87af -### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS +<!--d4102c0f-ebe3-4b22-8fe0-e488866a87af_end--> + +<!--microsoft_compute_end> +## Workloads +<!--3ca22452-0f8f-4701-a313-a2d83334e3cc_begin--> +#### Configure an Always On availability group for Multi-purpose SQL servers (MPSQL) + +MPSQL servers with an Always On availability group have better availability. Your MPSQL servers aren't configured as part of an Always On availability group in the shared infrastructure in your Epic system. Always On availability groups improve database availability and resource use. -The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance. +For More information, see [What is an Always On availability group?](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-ver16#Benefits) +ID: 3ca22452-0f8f-4701-a313-a2d83334e3cc -Learn more about [Database Instance - CorosyncTokenHARH (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). +<!--3ca22452-0f8f-4701-a313-a2d83334e3cc_end--> -### Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads +<!--f3d23f88-aee2-4b5a-bfd6-65b22bd70fc0_begin--> +#### Configure Local host cache on Citrix VDI servers to ensure seamless connection brokering operations + +We have observed that your Citrix VDI servers aren't configured Local host Cache. Local Host Cache (LHC) is a feature in Citrix Virtual Apps and Desktops that allows connection brokering operations to continue when an outage occurs.LHC engages when the site database is inaccessible for 90 seconds. -In a two node HA cluster, set the quorum votes to 2 as per recommendation for SAP on Azure. + +ID: f3d23f88-aee2-4b5a-bfd6-65b22bd70fc0 -Learn more about [Database Instance - ExpectedVotesParamtersHARH (Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). +<!--f3d23f88-aee2-4b5a-bfd6-65b22bd70fc0_end--> + -### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS +<!--dfa50c39-104a-418b-873a-c145fe521c9b_begin--> +#### Deploy Hyperspace Web servers as part of a Virtual Machine Scale Set Flex configured for 3 zones + +We have observed that your Hyperspace Web servers in the Virtual Machine Scale Set Flex set up aren't spread across 3 zones in the selected region. For services like Hyperspace Web in Epic systems that require high availability and large scale, it's recommended that servers are deployed as part of Virtual Machine Scale Set Flex and spread across 3 zones. 
With Flexible orchestration, Azure provides a unified experience across the Azure VM ecosystem. -The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance. +For More information, see [Create a Virtual Machine Scale Set that uses Availability Zones](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones?tabs=cli-1%2Cportal-2) +ID: dfa50c39-104a-418b-873a-c145fe521c9b -Learn more about [Database Instance - CorosyncTokenHASLE (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--dfa50c39-104a-418b-873a-c145fe521c9b_end--> + -### Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup +<!--45c2994f-a01d-4024-843e-a2a84dae48b4_begin--> +#### Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads + +To prevent load balancer timeout, make sure that all Azure Load Balancing Rules have 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the setting. -The parameter PREFER_SITE_TAKEOVER in SAP HANA topology defines if the HANA SR resource agent prefers to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: 45c2994f-a01d-4024-843e-a2a84dae48b4 -Learn more about [Database Instance - PreferSiteTakeOverHARH (Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel). +<!--45c2994f-a01d-4024-843e-a2a84dae48b4_end--> + -### Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup +<!--aec9b9fb-145f-4af8-94f3-7fdc69762b72_begin--> +#### Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads + +For port reuse and better high availability, enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the setting. -The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for HANA DB HA setup. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: aec9b9fb-145f-4af8-94f3-7fdc69762b72 -Learn more about [Database Instance - ConcurrentFencingHARH (Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
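The Hyperspace Web recommendation above calls for a Virtual Machine Scale Set with Flexible orchestration spread across three availability zones. A minimal sketch that checks which zones a scale set was created with, assuming the `azure-identity` and `azure-mgmt-compute` packages; the subscription, resource group, and scale set names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
vmss = compute.virtual_machine_scale_sets.get("<resource-group>", "<hyperspace-web-vmss>")

# A zone-spread scale set reports the zones it was created with; an empty
# list means the scale set is regional and not pinned to any zone.
zones = set(vmss.zones or [])
print(f"{vmss.name}: zones = {sorted(zones) or 'none'}")
if zones != {"1", "2", "3"}:
    print("Scale set is not spread across all three availability zones.")
```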
+<!--aec9b9fb-145f-4af8-94f3-7fdc69762b72_end--> + -### Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads +<!--c3811f93-a1a5-4a84-8fba-dd700043cc42_begin--> +#### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads + +For port reuse and better high availability, enable HA ports in the load balancing rules for HA set up of ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the setting. -The parameter PREFER_SITE_TAKEOVER in SAP HANA topology defines if the HANA SR resource agent prefers to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: c3811f93-a1a5-4a84-8fba-dd700043cc42 -Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--c3811f93-a1a5-4a84-8fba-dd700043cc42_end--> + -### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads +<!--27899d14-ac62-41f4-a65d-e6c2a5af101b_begin--> +#### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads + +Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack, causing the load balancer to mark the endpoint as down. -The corosync token_retransmits_before_loss_const setting determines the number of token retransmits that are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup. +For More information, see [https://launchpad.support.sap.com/#/notes/2382421](https://launchpad.support.sap.com/#/notes/2382421) +ID: 27899d14-ac62-41f4-a65d-e6c2a5af101b -Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--27899d14-ac62-41f4-a65d-e6c2a5af101b_end--> + -### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads +<!--1c1deb1c-ae1b-49a7-88d3-201285ad63b6_begin--> +#### Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads + +To prevent load balancer timeout, ensure that the 'Idle timeout (minutes)' parameter of all Azure Load Balancing Rules is set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the recommended settings. -In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
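The load balancer recommendations in this part of the section (HA ports, floating IP, and a 30-minute idle timeout for the ASCS and HANA DB front ends) can also be reviewed from the management plane instead of the portal. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-network` packages; the subscription, resource group, and load balancer names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = network.load_balancers.get("<resource-group>", "<sap-ha-load-balancer>")

# Report the settings called out in these recommendations for every rule:
# HA ports (protocol "All" with frontend port 0), floating IP, idle timeout.
for rule in lb.load_balancing_rules or []:
    ha_ports = rule.protocol == "All" and rule.frontend_port == 0
    print(
        f"{rule.name}: ha_ports={ha_ports}, "
        f"floating_ip={rule.enable_floating_ip}, "
        f"idle_timeout={rule.idle_timeout_in_minutes} min"
    )
```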
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: 1c1deb1c-ae1b-49a7-88d3-201285ad63b6 -Learn more about [Database Instance - TwoNodeParameterSuseHDB (Set the two_node parameter to 1 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--1c1deb1c-ae1b-49a7-88d3-201285ad63b6_end--> + -### Enable the 'concurrent-fencing' parameter in the cluster cofiguration in HA enabled SAP workloads +<!--cca36756-d938-4f3a-aebf-75358c7c0622_begin--> +#### Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads + +For more flexible routing, enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the recommended settings. -The concurrent-fencing parameter when set to true, enables the fencing operations to be performed in parallel. Set this parameter to 'true' in the pacemaker cluster configuration for HANA DB HA setup. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: cca36756-d938-4f3a-aebf-75358c7c0622 -Learn more about [Database Instance - ConcurrentFencingSuseHDB (Enable the 'concurrent-fencing' parameter in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--cca36756-d938-4f3a-aebf-75358c7c0622_end--> + -### Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads +<!--a5ac35c2-a299-4864-bfeb-09d2348bda68_begin--> +#### Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads + +For enhanced scalability, enable HA ports in the Load balancing rules for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add or edit the rule to enable the recommended settings. -The corosync join timeout specifies in milliseconds how long to wait for join messages in the membership protocol. We recommend that you set 60 in Pacemaker cluster configuration for HANA DB HA setup. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability#:~:text=To%20set%20up%20standard%20load%20balancer%2C%20follow%20these%20configuration%20steps) +ID: a5ac35c2-a299-4864-bfeb-09d2348bda68 -Learn more about [Database Instance - CorosyncHDB (Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--a5ac35c2-a299-4864-bfeb-09d2348bda68_end--> + -### Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads +<!--760ba688-69ea-431b-afeb-13683a03f0c2_begin--> +#### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads + +Disable TCP timestamps on VMs placed behind Azure Load Balancer. 
Enabling TCP timestamps causes the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack causing the load balancer to mark the endpoint as down. -The corosync max_messages constant specifies the maximum number of messages allowed to be sent by one processor once the token is received. We recommend that you set 20 times the corosync token parameter in Pacemaker cluster configuration. +For More information, see [Azure Load Balancer health probes](/azure/load-balancer/load-balancer-custom-probe-overview#:~:text=Don%27t%20enable%20TCP,must%20be%20disabled) +ID: 760ba688-69ea-431b-afeb-13683a03f0c2 -Learn more about [Database Instance - CorosyncMaxMessageHDB (Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--760ba688-69ea-431b-afeb-13683a03f0c2_end--> + -### Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads +<!--28a00e1e-d0ad-452f-ad58-95e6c584e594_begin--> +#### Ensure that stonith is enabled for the Pacemaker configuration in ASCS HA setup in SAP workloads + +In a Pacemaker cluster, the implementation of node level fencing is done using a STONITH (Shoot The Other Node in the Head) resource. To help manage failed nodes, ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration. -The corosync parameter 'consensus' specifies in milliseconds how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set 1.2 times the corosync token in Pacemaker cluster configuration for HANA DB HA setup. +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 28a00e1e-d0ad-452f-ad58-95e6c584e594 -Learn more about [Database Instance - CorosyncConsensusHDB (Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--28a00e1e-d0ad-452f-ad58-95e6c584e594_end--> + -### Create the softdog config file in Pacemaker configuration for HA enable HANA DB in SAP workloads +<!--deede7ea-68c5-4fb9-8f08-5e706f88ac67_begin--> +#### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads (RHEL) + +The corosync token setting determines the timeout that is used directly, or as a base, for real token timeout calculation in HA clusters. To allow memory-preserving maintenance, set the corosync token to 30000 for SAP on Azure. -The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for HANA DB HA setup. +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: deede7ea-68c5-4fb9-8f08-5e706f88ac67 -Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enable HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
+<!--deede7ea-68c5-4fb9-8f08-5e706f88ac67_end--> + -### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup +<!--35ef8bba-923e-44f3-8f06-691deb679468_begin--> +#### Set the expected votes parameter to '2' in Pacemaker cofiguration in ASCS HA setup in SAP workloads (RHEL) + +For a two node HA cluster, set the quorum 'expected-votes' parameter to '2' as recommended for SAP on Azure to ensure a proper quorum, resilience, and data consistency. -The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for HANA DB HA setup. The fence_azure-arm instance requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 35ef8bba-923e-44f3-8f06-691deb679468 -Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--35ef8bba-923e-44f3-8f06-691deb679468_end--> + -### Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads +<!--0fffcdb4-87db-44f2-956f-dc9638248659_begin--> +#### Enable the 'concurrent-fencing' parameter in Pacemaker cofiguration in ASCS HA setup in SAP workloads (ConcurrentFencingHAASCSRH) + +Concurrent fencing enables the fencing operations to be performed in parallel, which enhances high availability (HA), prevents split-brain scenarios, and contributes to a robust SAP deployment. Set this parameter to 'true' in the Pacemaker cluster configuration for ASCS HA setup. -The softdog timer is loaded as a kernel module in linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for HANA DB HA setup. +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 0fffcdb4-87db-44f2-956f-dc9638248659 -Learn more about [Database Instance - SoftdogModuleSuseHDB (Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--0fffcdb4-87db-44f2-956f-dc9638248659_end--> + -### Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads +<!--6921340e-baa1-424f-80d5-c07bbac3cf7c_begin--> +#### Ensure that stonith is enabled for the cluster configuration in ASCS HA setup in SAP workloads + +In a Pacemaker cluster, the implementation of node level fencing is done using a STONITH (Shoot The Other Node in the Head) resource. To help manage failed nodes, ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration. -To prevent load balancer timeout, make sure that all Azure Load Balancing Rules have: 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 6921340e-baa1-424f-80d5-c07bbac3cf7c -Learn more about [Database Instance - DBHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--6921340e-baa1-424f-80d5-c07bbac3cf7c_end--> + -### Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads +<!--4eb10096-942e-402d-b4a6-e4e271c87a02_begin--> +#### Set the stonith timeout to 144 for the cluster configuration in ASCS HA setup in SAP workloads + +The ΓÇÿstonith-timeoutΓÇÖ specifies how long the cluster waits for a STONITH action to complete. Setting it to '144' seconds allows more time for fencing actions to complete. We recommend this setting for HA clusters for SAP on Azure. -Enable floating IP in the load balancing rules for the Azure Load Balancer for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 4eb10096-942e-402d-b4a6-e4e271c87a02 -Learn more about [Database Instance - DBHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--4eb10096-942e-402d-b4a6-e4e271c87a02_end--> + -### Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads +<!--9f30eb2b-6a6f-4fa8-89dc-85a395c31233_begin--> +#### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads (SUSE) + +The corosync token setting determines the timeout that is used directly, or as a base, for real token timeout calculation in HA clusters. To allow memory-preserving maintenance, set the corosync token to '30000' for SAP on Azure. -Enable HA ports in the Load balancing rules for HA set up of HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules' and add/edit the rule to enable the recommended settings. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 9f30eb2b-6a6f-4fa8-89dc-85a395c31233 -Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +<!--9f30eb2b-6a6f-4fa8-89dc-85a395c31233_end--> + -### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads +<!--f32b8f89-fb3c-4030-bd4a-0a16247db408_begin--> +#### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads + +The corosync token_retransmits_before_loss_const determines how many token retransmits are attempted before timeout in HA clusters. For stability and reliability, set the 'totem.token_retransmits_before_loss_const' to '10' for ASCS HA setup. -Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. 
Dropped TCP packets cause the load balancer to mark the endpoint as down. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: f32b8f89-fb3c-4030-bd4a-0a16247db408 
-Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview). 
+<!--f32b8f89-fb3c-4030-bd4a-0a16247db408_end--> + 
-### There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup 
+<!--fed84141-4942-49b3-8b0c-73a8b352f754_begin--> +#### Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads + +The 'corosync join' timeout specifies in milliseconds how long to wait for join messages in the membership protocol so when a new node joins the cluster, it has time to synchronize its state with existing nodes. Set to '60' in Pacemaker cluster configuration for ASCS HA setup. 
-The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in the Pacemaker configuration for HANA DB HA setup. The fence_azure_arm is needed if you're using Azure fence agent for fencing with either managed identity or service principal. 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: fed84141-4942-49b3-8b0c-73a8b352f754 
-Learn more about [Database Instance - FenceAzureArmSuseHDB (There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). 
+<!--fed84141-4942-49b3-8b0c-73a8b352f754_end--> + +<!--73227428-640d-4410-aec4-bac229a2b7bd_begin--> +#### Set the 'corosync consensus' in Pacemaker cluster to '36000' for ASCS HA setup in SAP workloads + +The corosync 'consensus' parameter specifies in milliseconds how long to wait for consensus before starting a round of membership in the cluster configuration. Set 'consensus' in the Pacemaker cluster configuration for ASCS HA setup to 1.2 times the corosync token for reliable failover behavior. 
-## Storage 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 73227428-640d-4410-aec4-bac229a2b7bd 
-### Enable soft delete for your Recovery Services vaults 
+<!--73227428-640d-4410-aec4-bac229a2b7bd_end--> + 
-The soft delete option helps you retain your backup data in the Recovery Services vault for an extra duration after deletion. The extra duration gives you an opportunity to retrieve the data before it's permanently deleted. 
+<!--14a889a6-374f-4bd4-8add-f644e3fe277d_begin--> +#### Set the 'corosync max_messages' in Pacemaker cluster to '20' for ASCS HA setup in SAP workloads + +The corosync 'max_messages' constant specifies the maximum number of messages that one processor can send on receipt of the token. Set it to 20 times the corosync token parameter in the Pacemaker cluster configuration to allow efficient communication without overwhelming the network. 
-Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md). 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 14a889a6-374f-4bd4-8add-f644e3fe277d -### Enable Cross Region Restore for your recovery Services Vault +<!--14a889a6-374f-4bd4-8add-f644e3fe277d_end--> + -Enabling cross region restore for your geo-redundant vaults. +<!--89a9ddd9-f9bf-47e4-b5f7-a0a4edfa0cdb_begin--> +#### Set 'expected votes' to '2' in the cluster configuration in ASCS HA setup in SAP workloads (SUSE) + +For a two node HA cluster, set the quorum 'expected_votes' parameter to 2 as recommended for SAP on Azure to ensure a proper quorum, resilience, and data consistency. -Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your Recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore). +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 89a9ddd9-f9bf-47e4-b5f7-a0a4edfa0cdb -### Enable Backups on your virtual machines +<!--89a9ddd9-f9bf-47e4-b5f7-a0a4edfa0cdb_end--> + -Enable backups for your virtual machines and secure your data. +<!--2030a15b-ff0b-47c3-b934-60072ccda75e_begin--> +#### Set the two_node parameter to 1 in the cluster cofiguration in ASCS HA setup in SAP workloads + +For a two node HA cluster, set the quorum parameter 'two_node' to 1 as recommended for SAP on Azure. -Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your virtual machines)](../backup/backup-overview.md). +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 2030a15b-ff0b-47c3-b934-60072ccda75e -### Configure blob backup +<!--2030a15b-ff0b-47c3-b934-60072ccda75e_end--> + -Configure blob backup. +<!--dc19b2c9-0770-4929-8f63-81c07fe7b6f3_begin--> +#### Enable 'concurrent-fencing' in Pacemaker ASCS HA setup in SAP workloads (ConcurrentFencingHAASCSSLE) + +Concurrent fencing enables the fencing operations to be performed in parallel, which enhances HA, prevents split-brain scenarios, and contributes to a robust SAP deployment. Set this parameter to 'true' in the Pacemaker cluster configuration for ASCS HA setup. -Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview). +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: dc19b2c9-0770-4929-8f63-81c07fe7b6f3 -### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data +<!--dc19b2c9-0770-4929-8f63-81c07fe7b6f3_end--> + -Keep your information and applications safe with robust, one select backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares. +<!--cb56170a-0ecb-420a-b2c9-5c4878a0132a_begin--> +#### Ensure the number of 'fence_azure_arm' instances is one in Pacemaker in HA enabled SAP workloads + +If you're using Azure fence agent for fencing with either managed identity or service principal, ensure that there's one instance of fence_azure_arm (an I/O fencing agent for Azure Resource Manager) in the Pacemaker configuration for ASCS HA setup for high availability. 
-Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/). +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: cb56170a-0ecb-420a-b2c9-5c4878a0132a 
-### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2 
+<!--cb56170a-0ecb-420a-b2c9-5c4878a0132a_end--> + 
-As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2. Azure Data Lake Storage Gen2 offers advanced capabilities designed for big data analytics, and is built on top of Azure Blob Storage. 
+<!--05747c68-715f-4c8f-b027-f57a931cc07a_begin--> +#### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup + +For reliable function of the Pacemaker for ASCS HA setup, set the 'stonith-timeout' to 900. This setting is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal. 
-Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 05747c68-715f-4c8f-b027-f57a931cc07a 
-### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2 
+<!--05747c68-715f-4c8f-b027-f57a931cc07a_end--> + 
-As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities designed for big data analytics. Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage. 
+<!--88261a1a-6a32-4fb6-8bbd-fcd60fdfcab6_begin--> +#### Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads + +The softdog timer is loaded as a kernel module in Linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for ASCS HA setup. 
-Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 88261a1a-6a32-4fb6-8bbd-fcd60fdfcab6 
-### Enable Soft Delete to protect your blob data 
+<!--88261a1a-6a32-4fb6-8bbd-fcd60fdfcab6_end--> + 
-After you enable the Soft Delete option, deleted data transitions to a "soft" deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires. 
+<!--3730bc11-c81c-43eb-896a-8fce0bac139d_begin--> +#### Ensure the softdog module is loaded for Pacemaker in ASCS HA setup in SAP workloads + +The softdog timer is loaded as a kernel module in Linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for ASCS HA setup. 
-Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete). 
+For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 3730bc11-c81c-43eb-896a-8fce0bac139d 
-### Use Managed Disks for storage accounts reaching capacity limit 
+<!--3730bc11-c81c-43eb-896a-8fce0bac139d_end--> + 
-We have identified that you're using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that don't have account capacity limit. This migration can be done through the portal in less than 5 minutes. 
+<!--255e9f7b-db3a-4a67-b87e-6fdc36ea070d_begin--> +#### Set PREFER_SITE_TAKEOVER parameter to 'true' in the Pacemaker configuration for HANA DB HA setup + +The PREFER_SITE_TAKEOVER parameter in SAP HANA defines if the HANA system replication (SR) resource agent prefers to takeover the secondary instance instead of restarting the failed primary locally. For reliable function of HANA DB high availability (HA) setup, set PREFER_SITE_TAKEOVER to 'true'. 
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota). 
+For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 255e9f7b-db3a-4a67-b87e-6fdc36ea070d 
-### Use Azure Disks with Zone Redundant Storage for higher resiliency and availability 
+<!--255e9f7b-db3a-4a67-b87e-6fdc36ea070d_end--> + 
-Azure Disks with ZRS provide synchronous replication of data across three Availability Zones in a region, making the disk tolerant to zonal failures without disruptions to applications. Migrate disks from LRS to ZRS for higher resiliency and availability. 
+<!--4594198b-b114-4865-8ed8-be06db945408_begin--> +#### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with Redhat OS + +In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. To help manage failed nodes, ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration of your SAP workload. 
-Learn more about [Changing Disk type of an Azure managed disk](https://aka.ms/migratedisksfromLRStoZRS). 
+For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 4594198b-b114-4865-8ed8-be06db945408 
-### Use Managed Disks to improve data reliability 
+<!--4594198b-b114-4865-8ed8-be06db945408_end--> + 
-Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. 
Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure. +<!--604f3822-6a28-47db-b31c-4b0dbe317625_begin--> +#### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with RHEL OS + +The corosync token setting determines the timeout that is used directly, or as a base, for real token timeout calculation in HA clusters. To allow memory-preserving maintenance, set the corosync token to 30000 for SAP on Azure with Redhat OS. -Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore). +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 604f3822-6a28-47db-b31c-4b0dbe317625 -### Implement disaster recovery strategies for your Azure NetApp Files Resources +<!--604f3822-6a28-47db-b31c-4b0dbe317625_end--> + -To avoid data or functionality loss if there's a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes +<!--937a1997-fc2d-4a3a-a9f6-e858a80921fd_begin--> +#### Set the expected votes parameter to '2' in HA enabled SAP workloads (RHEL) + +For a two node HA cluster, set the quorum votes to '2' as recommended for SAP on Azure to ensure a proper quorum, resilience, and data consistency. -Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr). +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 937a1997-fc2d-4a3a-a9f6-e858a80921fd -### Azure NetApp Files Enable Continuous Availability for SMB Volumes +<!--937a1997-fc2d-4a3a-a9f6-e858a80921fd_end--> + -Recommendation to enable SMB volume for Continuous Availability. +<!--6cc63594-c89f-4535-b878-cdd13659cfc5_begin--> +#### Enable the 'concurrent-fencing' parameter in the Pacemaker cofiguration for HANA DB HA setup + +Concurrent fencing enables the fencing operations to be performed in parallel, which enhances high availability (HA), prevents split-brain scenarios, and contributes to a robust SAP deployment. Set this parameter to 'true' in the Pacemaker cluster configuration for HANA DB HA setup. -Learn more about [Volume - anfcaenablement (Azure NetApp Files Enable Continuous Availability for SMB Volumes)](https://aka.ms/anfdoc-continuous-availability). +For More information, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) +ID: 6cc63594-c89f-4535-b878-cdd13659cfc5 -### Review SAP configuration for timeout values used with Azure NetApp Files +<!--6cc63594-c89f-4535-b878-cdd13659cfc5_end--> + -High availability of SAP while used with Azure NetApp Files relies on setting proper timeout values to prevent disruption to your application. Review the documentation to ensure your configuration meets the timeout values as noted in the documentation. 
+<!--230fddab-0864-4c5e-bb27-037bec7c46c6_begin--> +#### Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster cofiguration in HA enabled SAP workloads + +The PREFER_SITE_TAKEOVER parameter in SAP HANA topology defines if the HANA SR resource agent prefers to takeover the secondary instance instead of restarting the failed primary locally. For reliable function of HANA DB HA setup, set it to 'true'. -Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout values used with Azure NetApp Files)](/azure/sap/workloads/get-started). +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 230fddab-0864-4c5e-bb27-037bec7c46c6 +<!--230fddab-0864-4c5e-bb27-037bec7c46c6_end--> + +<!--210d0895-074c-4cc7-88de-b0a9e00820c6_begin--> +#### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with SUSE OS + +In a Pacemaker cluster, the implementation of node level fencing is done using STONITH (Shoot The Other Node in the Head) resource. To help manage failed nodes, ensure that 'stonith-enable' is set to 'true' in the HA cluster configuration. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 210d0895-074c-4cc7-88de-b0a9e00820c6 -## Web +<!--210d0895-074c-4cc7-88de-b0a9e00820c6_end--> + -### Consider scaling out your App Service Plan to avoid CPU exhaustion +<!--64e5e17e-640e-430f-987a-721f133dbd5c_begin--> +#### Set the stonith timeout to 144 for the cluster configuration in HA enabled SAP workloads + +The ΓÇÿstonith-timeoutΓÇÖ specifies how long the cluster waits for a STONITH action to complete. Setting it to '144' seconds allows more time for fencing actions to complete. We recommend this setting for HA clusters for SAP on Azure. -Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps, to solve this problem, you could scale out your app. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 64e5e17e-640e-430f-987a-721f133dbd5c -Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu). +<!--64e5e17e-640e-430f-987a-721f133dbd5c_end--> + -### Fix the backup database settings of your App Service resource +<!--a563e3ad-b6b5-4ec2-a444-c4e30800b8cf_begin--> +#### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS + +The corosync token setting determines the timeout that is used directly, or as a base, for real token timeout calculation in HA clusters. To allow memory-preserving maintenance, set the corosync token to 30000 for HA enabled HANA DB for VM with SUSE OS. -Your app's backups are consistently failing due to invalid DB configuration, you can find more details in backup history. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: a563e3ad-b6b5-4ec2-a444-c4e30800b8cf -Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc). 
+<!--a563e3ad-b6b5-4ec2-a444-c4e30800b8cf_end--> + -### Consider scaling up your App Service Plan SKU to avoid memory exhaustion +<!--99681175-0124-44de-93ae-edc08f9dc0a8_begin--> +#### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads + +The corosync token_retransmits_before_loss_const determines how many token retransmits are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as recommended for HANA DB HA setup. -The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 99681175-0124-44de-93ae-edc08f9dc0a8 -Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory). +<!--99681175-0124-44de-93ae-edc08f9dc0a8_end--> + -### Scale up your App Service resource to remove the quota limit +<!--b8ac170f-433e-4d9c-8b75-f7070a2a5c92_begin--> +#### Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads + +The 'corosync join' timeout specifies in milliseconds how long to wait for join messages in the membership protocol so when a new node joins the cluster, it has time to synchronize its state with existing nodes. Set to '60' in Pacemaker cluster configuration for HANA DB HA setup. -Your app is part of a shared App Service plan and has met its quota multiple times. Once quota is met, your web app canΓÇÖt accept incoming requests. To remove the quota, upgrade to a Standard plan. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: b8ac170f-433e-4d9c-8b75-f7070a2a5c92 -Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp). +<!--b8ac170f-433e-4d9c-8b75-f7070a2a5c92_end--> + -### Use deployment slots for your App Service resource +<!--63e27ad9-1804-405a-97eb-d784686ffbe3_begin--> +#### Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads + +The corosync 'consensus' parameter specifies in milliseconds how long to wait for consensus before starting a new round of membership in the cluster. For reliable failover behavior, set 'consensus' in the Pacemaker cluster configuration for HANA DB HA setup to 1.2 times the corosync token. -You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment effect to your production web app. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 63e27ad9-1804-405a-97eb-d784686ffbe3 -Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging). 
+<!--63e27ad9-1804-405a-97eb-d784686ffbe3_end--> + -### Fix the backup storage settings of your App Service resource +<!--7ce9ff70-f684-47a2-b26f-781f80b1bccc_begin--> +#### Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads + +The corosync 'max_messages' constant specifies the maximum number of messages that one processor can send on receipt of the token. To allow efficient communication without overwhelming the network, set it to 20 times the corosync token parameter in the Pacemaker cluster configuration. -Your app's backups are consistently failing due to invalid storage settings, you can find more details in backup history. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 7ce9ff70-f684-47a2-b26f-781f80b1bccc -Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc). +<!--7ce9ff70-f684-47a2-b26f-781f80b1bccc_end--> + -### Move your App Service resource to Standard or higher and use deployment slots +<!--37240e75-9493-433a-8671-2e2582584875_begin--> +#### Set the expected votes parameter to 2 in HA enabled SAP workloads (SUSE) + +Set the expected votes parameter to '2' in the cluster configuration in HA enabled SAP workloads to ensure a proper quorum, resilience, and data consistency. -You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment effect to your production web app. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 37240e75-9493-433a-8671-2e2582584875 -Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging). +<!--37240e75-9493-433a-8671-2e2582584875_end--> + -### Consider scaling out your App Service Plan to optimize user experience and availability +<!--41cd63e2-69a4-4a4f-bb69-1d3f832001f9_begin--> +#### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads + +For a two node HA cluster, set the quorum parameter 'two_node' to 1 as recommended for SAP on Azure. -Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 41cd63e2-69a4-4a4f-bb69-1d3f832001f9 -Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances). +<!--41cd63e2-69a4-4a4f-bb69-1d3f832001f9_end--> + -### Application code needs fixing when the worker process crashes due to Unhandled Exception +<!--d763b894-7641-4c5d-9bc3-6f2515a6eb67_begin--> +#### Enable the 'concurrent-fencing' parameter in the cluster configuration in HA enabled SAP workloads + +Concurrent fencing enables the fencing operations to be performed in parallel, which enhances HA, prevents split-brain scenarios, and contributes to a robust SAP deployment. Set this parameter to 'true' in HA enabled SAP workloads. 
-We identified the following thread that resulted in an unhandled exception for your App and the application code must be fixed to prevent effect to application availability. A crash happens when an exception in your code terminates the process. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: d763b894-7641-4c5d-9bc3-6f2515a6eb67 -Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code must be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html). +<!--d763b894-7641-4c5d-9bc3-6f2515a6eb67_end--> + -### Consider changing your App Service configuration to 64-bit +<!--1f4b5e87-69e9-470a-8245-f337fd0d5528_begin--> +#### Ensure there is one instance of fence_azure_arm in the Pacemaker configuration for HANA DB HA setup + +If you're using Azure fence agent for fencing with either managed identity or service principal, ensure that one instance of fence_azure_arm (an I/O fencing agent for Azure Resource Manager) is in the Pacemaker configuration for HANA DB HA setup for high availability. -We identified your application is running in 32-bit and the memory is reaching the 2-GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 1f4b5e87-69e9-470a-8245-f337fd0d5528 -Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue). +<!--1f4b5e87-69e9-470a-8245-f337fd0d5528_end--> + -### Upgrade your Azure Fluid Relay client library +<!--943f7572-1884-4120-808d-ac2a3e70e33a_begin--> +#### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup + +If you're using the Azure fence agent for fencing with either managed identity or service principal, ensure reliable function of the Pacemaker for HANA DB HA setup, by setting the 'stonith-timeout' to 900. -You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library must now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality and enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 943f7572-1884-4120-808d-ac2a3e70e33a -Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework). 
+<!--943f7572-1884-4120-808d-ac2a3e70e33a_end--> + -### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU +<!--63233341-73a2-4180-b57f-6f83395161b9_begin--> +#### Ensure that the softdog config file is in the Pacemaker configuration for HANA DB in SAP workloads + +The softdog timer is loaded as a kernel module in Linux OS. This timer triggers a system reset if it detects that the system is hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for HANA DB HA setup. -The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to Standard SKU to avoid throttling. +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: 63233341-73a2-4180-b57f-6f83395161b9 -Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/). +<!--63233341-73a2-4180-b57f-6f83395161b9_end--> + +<!--b27248cd-67dc-4824-b162-4563adaa6d70_begin--> +#### Ensure the softdog module is loaded in Pacemaker in ASCS HA setup in SAP workloads + +The softdog timer is loaded as a kernel module in Linux OS. This timer triggers a system reset if it detects that the system is hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for HANA DB HA setup. -## Next steps +For More information, see [High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server](/azure/virtual-machines/workloads/sap/sap-hana-high-availability) +ID: b27248cd-67dc-4824-b162-4563adaa6d70 -Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview) +<!--b27248cd-67dc-4824-b162-4563adaa6d70_end--> + +<!--microsoft_workloads_end> +<!--articleBody--> + + + +## Next steps ++Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview) |
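The corosync recommendations in the row above (token, consensus, join, max_messages, token_retransmits_before_loss_const, two_node, expected_votes) all map to a handful of settings in `/etc/corosync/corosync.conf` on the cluster nodes. The excerpt below is a minimal sketch of how those values fit together for a two-node SAP cluster; it isn't a complete corosync.conf, and the other sections of your existing file should be left as they are.

```
totem {
    token: 30000
    token_retransmits_before_loss_const: 10
    join: 60
    consensus: 36000    # 1.2 x token
    max_messages: 20
}

quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}
```

Apply the change on both nodes and restart corosync (or the whole cluster) during a maintenance window so the new timings take effect consistently.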
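Several of the Pacemaker checks above (stonith enabled, concurrent-fencing, stonith-timeout, PREFER_SITE_TAKEOVER) are cluster properties or resource-agent parameters rather than corosync settings. The commands below are a hedged sketch in crmsh syntax as used on SUSE clusters; the 900-second timeout applies when the Azure fence agent (fence_azure_arm) is used, and the RHEL equivalent is noted in a comment.

```bash
# Cluster-wide fencing behavior recommended by the advisor checks above
sudo crm configure property stonith-enabled=true
sudo crm configure property concurrent-fencing=true
sudo crm configure property stonith-timeout=900   # when fencing with the Azure fence agent; 144 otherwise

# On RHEL the equivalent is, for example:
#   sudo pcs property set stonith-enabled=true concurrent-fencing=true

# PREFER_SITE_TAKEOVER=true is set as a parameter on the SAPHana (promotable/master-slave)
# resource definition so the resource agent takes over to the secondary instead of
# restarting the failed primary locally.
```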
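The load-balancer checks above (30-minute idle timeout, floating IP, HA ports) are all properties of the load-balancing rule on the internal Standard Load Balancer that fronts the ASCS or HANA DB nodes. A hedged Azure CLI sketch, with placeholder resource names, might look like this:

```bash
# Placeholders: <resource-group>, <ilb-name>, and <rule-name> are assumptions for this sketch.
az network lb rule update \
  --resource-group <resource-group> \
  --lb-name <ilb-name> \
  --name <rule-name> \
  --idle-timeout 30 \
  --floating-ip true \
  --protocol All --frontend-port 0 --backend-port 0   # protocol All with port 0 enables HA ports
```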
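The TCP-timestamp and softdog checks above are guest-OS settings on the SAP VMs rather than Azure resource settings. A minimal sketch, assuming a systemd-based SLES or RHEL image (the sysctl file name is illustrative):

```bash
# Disable TCP timestamps so load balancer health probes aren't dropped (SAP note 2382421)
echo 'net.ipv4.tcp_timestamps = 0' | sudo tee /etc/sysctl.d/95-sap-lb.conf
sudo sysctl --system

# Create the softdog config file and load the watchdog module used by the cluster
echo softdog | sudo tee /etc/modules-load.d/softdog.conf
sudo modprobe softdog
```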
api-management | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md | The following table lists all the upcoming breaking changes and feature retireme | [API version retirements][api2023] | June 1, 2024 | | [Workspaces preview breaking changes][workspaces2024] | June 14, 2024 | | [stv1 platform retirement - Global Azure][stv12024] | August 31, 2024 |-| [stv1 platform retirement - Azure Government, Azure in China][stv1sov2025] | February 25, 2025 | +| [stv1 platform retirement - Azure Government, Azure in China][stv1sov2025] | February 24, 2025 | | [Git repository retirement][git2025] | March 15, 2025 | | [Direct management API retirement][mgmtapi2025] | March 15, 2025 | | [Workspaces preview breaking changes, part 2][workspaces2025march] | March 31, 2025 | |
api-management | Stv1 Platform Retirement August 2024 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md | The following table summarizes the compute platforms currently used for instance | -| -| -- | - | | `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium | | `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | -| `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | +| `mtv1` | Multitenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | **For continued support and to take advantage of upcoming features, customers must [migrate](../migrate-stv1-to-stv2.md) their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform.** The `stv2` compute platform comes with additional features and improvements such as support for Azure Private Link and other networking features. -New instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform already. Existing instances on the `stv1` compute platform will continue to work normally until the retirement date, but those instances wonΓÇÖt receive the latest features available to the `stv2` platform. Support for `stv1` instances will be retired by 31 August 2024. +New instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform already. Existing instances on the `stv1` compute platform will continue to work normally until the retirement date, but those instances won't receive the latest features available to the `stv2` platform. Support for `stv1` instances will be retired by 31 August 2024. ## Is my service affected by this? If the value of the `platformVersion` property of your service is `stv1`, it's h Support for API Management instances hosted on the `stv1` platform will be retired by 31 August 2024. -> [!WARNING] -> If your instance is currently hosted on the `stv1` platform, you must migrate to the `stv2` platform. Failure to migrate by the retirement date might result in loss of the environments running APIs and all configuration data. - ## What do I need to do? -**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.** +**Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform.** If you have existing instances hosted on the `stv1` platform, follow our **[migration guide](../migrate-stv1-to-stv2.md)** to ensure a successful migration. +## What happens after 31 August 2024? ++**Your `stv1` instance will not be shut down, deactivated, or deleted.** However, the SLA commitment for the instance ends, and any `stv1` instance after the retirement date will be scheduled for automatic migration to the `stv2` platform. ++### End of SLA commitment for `stv1` instances ++As of 1 September 2024, API Management will no longer provide any service level guarantees, and by extension service credits, for performance or availability issues related to the Developer, Basic, Standard, and Premium service instances running on the `stv1` compute platform. 
Also, no new security and compliance investments will be made in the API Management `stv1` platform. ++Through continued use of an instance hosted on the `stv1` platform beyond the retirement date, you acknowledge that Azure does not commit to the SLA of 99.95% for the retired instances. ++### Automatic migration ++Starting 1 September 2024, we'll automatically migrate remaining `stv1` service instances to the `stv2` compute platform. All affected customers will be notified of the upcoming automatic migration a week in advance. Automatic migration might cause downtime for your upstream API consumers. You may still migrate your own instances before automatic migration takes place. + [!INCLUDE [api-management-migration-support](../../../includes/api-management-migration-support.md)] +> [!NOTE] +> Azure support can't extend the timeline for automatic migration or for SLA support of `stv1` instances after the retirement date. + ## Related content * [Migrate from stv1 platform to stv2](../migrate-stv1-to-stv2.md) |
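The retirement guidance in this row hinges on whether an instance reports `stv1` as its compute platform. One quick way to check is to read the `platformVersion` property from the ARM resource; this is a sketch with placeholder names, not part of the migration guide itself.

```bash
# Returns stv1, stv2, or mtv1; <resource-group> and <apim-name> are placeholders.
az resource show \
  --resource-group <resource-group> \
  --name <apim-name> \
  --resource-type "Microsoft.ApiManagement/service" \
  --query "properties.platformVersion" --output tsv
```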
api-management | Stv1 Platform Retirement Sovereign Clouds February 2025 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-sovereign-clouds-february-2025.md | The following table summarizes the compute platforms currently used for instance | -| -| -- | - | | `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium | | `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | -| `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | +| `mtv1` | Multitenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | **For continued support and to take advantage of upcoming features, customers must [migrate](../migrate-stv1-to-stv2.md) their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform.** The `stv2` compute platform comes with additional features and improvements such as support for Azure Private Link and other networking features. If the value of the `platformVersion` property of your service is `stv1`, it's h In Azure Government and Azure operated by 21Vianet, support for API Management instances hosted on the `stv1` platform will be retired by 24 February 2025. +Also, as of 1 September 2024, API Management will no longer back API Management instances running on the `stv1` compute platform with an SLA. + ## What do I need to do? **Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 24 February 2025.** If you have existing instances hosted on the `stv1` platform, follow our **[migration guide](../migrate-stv1-to-stv2.md)** to ensure a successful migration. +## End of SLA commitment for `stv1` instances - 1 September 2024 ++As of 1 September 2024, API Management will no longer provide any service level guarantees, and by extension service credits, for performance or availability issues related to the Developer, Basic, Standard, and Premium service instances running on the `stv1` compute platform. Also, no new security and compliance investments will be made in the API Management `stv1` platform. ++Through continued use of an instance hosted on the `stv1` platform beyond 1 September 2024, you acknowledge that Azure does not commit to the SLA of 99.95%. ++ [!INCLUDE [api-management-migration-support](../../../includes/api-management-migration-support.md)] +> [!NOTE] +> Azure support can't extend the timeline for SLA support of `stv1` instances. ## Related content |
api-management | Migrate Stv1 To Stv2 No Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-no-vnet.md | API Management platform migration from `stv1` to `stv2` involves updating the un ## Migrate the instance to stv2 platform +### Public IP address options You can choose whether the virtual IP address of API Management will change, or whether the original VIP address is preserved. * **New virtual IP address** - If you choose this mode, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address. * **Preserve IP address** - If you preserve the VIP address, API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration. ++### Migration steps + #### [Portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. |
api-management | Migrate Stv1 To Stv2 Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-vnet.md | If you need to migrate a *non-VNnet-injected* API Management hosted on the `stv1 API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer. -* The upgrade process involves creating a new compute in parallel to the old compute, which can take up to 45 minutes. +* The upgrade process involves creating a new compute in parallel to the old compute, which can take up to 45 minutes. Plan longer times for multi-region deployments and in scenarios that involve changing the subnet more than once. * The API Management status in the Azure portal will be **Updating**. * For certain migration options, the VIP address (or addresses, for a multi-region deployment) of the instance will change. If you migrate and keep the same subnet configuration, you can choose to preserve the VIP address or a new public VIP will be generated. + [!INCLUDE [api-management-migration-no-preserve-ip](../../includes/api-management-migration-no-preserve-ip.md)] * For migration scenarios when a new VIP address is generated: * Azure manages the migration. * The gateway DNS still points to the old compute if a custom domain is in use. You can migrate your API Management instance to the `stv2` platform keeping the ### Public IP address options - same-subnet migration + You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated. * **Preserve virtual IP address** - If you preserve the VIP address in a VNet in external mode, API requests can remain responsive during migration (see [Expected downtime](#expected-downtime-and-compute-retention)); for a VNet in internal mode, temporary downtime is expected. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration. You can choose whether the API Management instance's original VIP address is pre With this option, the `stv1` compute is retained for a period by default after migration is complete so that you can validate the migrated instance and confirm the network and DNS configuration. -### Precreated IP address for migration --API Management precreates a public IP address for the migration process. Find the precreated IP address in the JSON output of your API Management instance's properties. Under `customProperties`, the precreated IP address is the value of the `Microsoft.WindowsAzure.ApiManagement.Stv2MigrationPreCreatedIps` property. For a multi-region deployment, the value is a comma-separated list of precreated IP addresses. --Use the precreated IP address (or addresses) to help you manage the migration process: --* When you migrate and preserve the VIP address, the precreated IP address is assigned temporarily to the new `stv2` deployment, before the original IP address is assigned to the `stv2` deployment. If you have firewall rules limiting access to the API Management instance, for example, you can add the precreated IP address to the allowlist to preserve continuity of client access during migration. After migration is complete, you can remove the precreated IP address from your allowlist. 
-* When you migrate and generate a new VIP address, the precreated IP address is assigned to the new `stv2` deployment during migration and persists after migration is complete. Use the precreated IP address to update your network dependencies, such as DNS and firewall rules, to point to the new IP address. ### Expected downtime and compute retention When migrating a VNet-injected instance and keeping the same subnet configuratio |Internal | Preserve VIP | Downtime for approximately 20 minutes during migration while the existing IP address is assigned to the new `stv2` deployment. | No retention | |Internal | New VIP | No downtime | Retained by default for 4 hours to allow you to update network dependencies | - ### Migration script -> [!NOTE] -> If your API Management instance is deployed in multiple regions, the REST API migrates the VNet settings for all locations of your instance using a single call. - [!INCLUDE [api-management-migration-cli-steps](../../includes/api-management-migration-cli-steps.md)] +> [!NOTE] +> If your API Management instance is deployed in multiple regions, the REST API migrates the VNet settings for all locations of your instance using a single call. + ## Option 2: Migrate and change to new subnet Using the Azure portal, you can migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet. |
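The stv1-to-stv2 migration covered in the two rows above is driven through the service's `migrateToStv2` REST action. The following Azure CLI sketch shows one way to call it with `az rest`; the subscription, resource group, and service names are placeholders, and the `api-version` and `mode` values are assumptions to confirm against the linked migration articles before running.

```bash
# Placeholder values; replace with your own.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="contoso-rg"
APIM_NAME="contoso-apim"

# Trigger the stv2 migration through the management REST API.
# The api-version and the "mode" value (PreserveIp vs. NewIP) are assumptions;
# verify both against the migration guidance before running.
az rest --method post \
  --url "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.ApiManagement/service/${APIM_NAME}/migrateToStv2?api-version=2023-03-01-preview" \
  --body '{"mode": "PreserveIp"}'

# Check the platform version after the operation completes.
az apim show --name "${APIM_NAME}" --resource-group "${RESOURCE_GROUP}" --query platformVersion --output tsv
```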
api-management | Workspaces Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md | Manage gateway capacity by manually adding or removing scale units, similar to t Workspace gateways need to be in the same Azure region and subscription as the API Management service. > [!NOTE]-> Starting in August 2024, workspace gateway support will be rolled out in the following regions. These regions are a subset of those where API Management is available. +> These regions are a subset of those where API Management is available. * West US * North Central US |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | The free certificate comes with the following limitations: ### [Apex domain](#tab/apex) - Must have an A record pointing to your web app's IP address.-- Isn't supported on apps that aren't publicly accessible.+- Must be on apps that are publicly accessible. - Isn't supported with root domains that are integrated with Traffic Manager. - Must meet all the above for successful certificate issuances and renewals. ### [Subdomain](#tab/subdomain) - Must have CNAME mapped _directly_ to `<app-name>.azurewebsites.net` or [trafficmanager.net](configure-domain-traffic-manager.md#enable-custom-domain). Mapping to an intermediate CNAME value blocks certificate issuance and renewal.+- Must be on apps that are publicly accessible. - Must meet all the above for successful certificate issuance and renewals. |
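The apex and subdomain requirements above apply when the free managed certificate is created and bound. A minimal Azure CLI sketch follows, assuming the domain is already mapped and the app is publicly reachable; the resource group, app, and hostname are placeholders, and the managed-certificate behavior of `az webapp config ssl create` should be verified against current CLI documentation.

```bash
# Placeholder names; replace with your own app, resource group, and domain.
RESOURCE_GROUP="contoso-rg"
APP_NAME="contoso-app"
HOSTNAME="www.contoso.com"

# The subdomain must already have a CNAME pointing directly to <app-name>.azurewebsites.net.
az webapp config hostname add --resource-group "$RESOURCE_GROUP" --webapp-name "$APP_NAME" --hostname "$HOSTNAME"

# Create the free App Service managed certificate for that hostname and capture its thumbprint.
THUMBPRINT=$(az webapp config ssl create --resource-group "$RESOURCE_GROUP" --name "$APP_NAME" \
  --hostname "$HOSTNAME" --query thumbprint --output tsv)

# Bind the certificate with SNI SSL.
az webapp config ssl bind --resource-group "$RESOURCE_GROUP" --name "$APP_NAME" \
  --certificate-thumbprint "$THUMBPRINT" --ssl-type SNI
```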
application-gateway | Application Gateway Private Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md | Use the following steps to enroll into the public preview for the enhanced Appli 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. In the search box, enter _subscriptions_ and select **Subscriptions**. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Screenshot of Azure portal search."::: 3. Select the link for your subscription's name. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Screenshot of selecting the Azure subscription."::: 4. From the left menu, under **Settings** select **Preview features**. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Screenshot of the Azure preview features menu."::: 5. You see a list of available preview features and your current registration status. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Screenshot of the Azure portal list of preview features."::: 6. From **Preview features** type into the filter box **EnableApplicationGatewayNetworkIsolation**, check the feature, and click **Register**. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Screenshot of the Azure portal filter preview features."::: # [Azure PowerShell](#tab/powershell) To opt out of the public preview for the enhanced Application Gateway network co 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. In the search box, enter _subscriptions_ and select **Subscriptions**. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Screenshot of Azure portal search."::: 3. Select the link for your subscription's name. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Screenshot of selecting Azure subscription."::: 4. From the left menu, under **Settings** select **Preview features**. 
- :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Screenshot of the Azure preview features menu."::: 5. You see a list of available preview features and your current registration status. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Screenshot of the Azure portal list of preview features."::: 6. From **Preview features** type into the filter box **EnableApplicationGatewayNetworkIsolation**, check the feature, and click **Unregister**. - :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features."::: + :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Screenshot of the Azure portal filter preview features."::: # [Azure PowerShell](#tab/powershell) After registration into the public preview, configuration of NSG, Route Table, a After your gateway is provisioned, a resource tag is automatically assigned with the name of **EnhancedNetworkControl** and value of **True**. See the following example: - ![View the EnhancedNetworkControl tag](./media/application-gateway-private-deployment/tags.png) + ![Screenshot of the EnhancedNetworkControl tag.](./media/application-gateway-private-deployment/tags.png) The resource tag is cosmetic, and serves to confirm that the gateway has been provisioned with the capabilities to configure any combination of the private only gateway features. Modification or deletion of the tag or value doesn't change any functional workings of the gateway. Network security groups associated to an Application Gateway subnet no longer re The following configuration is an example of the most restrictive set of inbound rules, denying all traffic but Azure health probes. In addition to the defined rules, explicit rules are defined to allow client traffic to reach the listener of the gateway. - [ ![View the inbound security group rules](./media/application-gateway-private-deployment/inbound-rules.png) ](./media/application-gateway-private-deployment/inbound-rules.png#lightbox) + [ ![Screenshot of the inbound security group rules.](./media/application-gateway-private-deployment/inbound-rules.png) ](./media/application-gateway-private-deployment/inbound-rules.png#lightbox) > [!Note] > Application Gateway will display an alert asking to ensure the **Allow LoadBalanceRule** is specified if a **DenyAll** rule inadvertently restricts access to health probes. First, [create a network security group](../virtual-network/tutorial-filter-netw Three inbound [default rules](../virtual-network/network-security-groups-overview.md#default-security-rules) are already provisioned in the security group. 
See the following example: - [ ![View default security group rules](./media/application-gateway-private-deployment/default-rules.png) ](./media/application-gateway-private-deployment/default-rules.png#lightbox) + [ ![Screenshot of the default security group rules.](./media/application-gateway-private-deployment/default-rules.png) ](./media/application-gateway-private-deployment/default-rules.png#lightbox) Next, create the following four new inbound security rules: To create these rules: Select **Refresh** to review all rules when provisioning is complete. - [ ![View example inbound security group rules](./media/application-gateway-private-deployment/inbound-example.png) ](./media/application-gateway-private-deployment/inbound-example.png#lightbox) + [ ![Screenshot of example inbound security group rules.](./media/application-gateway-private-deployment/inbound-example.png) ](./media/application-gateway-private-deployment/inbound-example.png#lightbox) #### Outbound rules Three default outbound rules with priority 65000, 65001, and 65500 are already p Create the following three new outbound security rules: -- Allow TCP 443 from 10.10.4.0/24 to backend target 20.62.8.49+- Allow TCP 443 from 10.10.4.0/24 to backend target 203.0.113.1 - Allow TCP 80 from source 10.10.4.0/24 to destination 10.13.0.4 - DenyAll traffic rule These rules are assigned a priority of 400, 401, and 4096, respectively. > [!NOTE] > - 10.10.4.0/24 is the Application Gateway subnet address space. > - 10.13.0.4 is a virtual machine in a peered VNet.-> - 20.63.8.49 is a backend target VM. +> - 203.0.113.1 is a backend target VM. To create these rules: - Select **Outbound security rules** To create these rules: | Rule # | Source | Source IP addresses/CIDR ranges | Source port ranges | Destination | Destination IP addresses/CIDR ranges | Service | Dest port ranges | Protocol | Action | Priority | Name | | | | - | | | | - | - | -- | | -- | -- |-| 1 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 20.63.8.49 | HTTPS | 443 | TCP | Allow | 400 | AllowToBackendTarget | +| 1 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 203.0.113.1 | HTTPS | 443 | TCP | Allow | 400 | AllowToBackendTarget | | 2 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 10.13.0.4 | HTTP | 80 | TCP | Allow | 401 | AllowToPeeredVnetVM | | 3 | Any | | * | Any | | Custom | * | Any | Deny | 4096 | DenyAll | Select **Refresh** to review all rules when provisioning is complete. -[ ![View example outbound security group rules](./media/application-gateway-private-deployment/outbound-example.png) ](./media/application-gateway-private-deployment/outbound-example.png#lightbox) #### Associate NSG to the subnet The last step is to [associate the network security group to the subnet](../virtual-network/tutorial-filter-network-traffic.md#associate-network-security-group-to-subnet) that contains your Application Gateway. -![Associate NSG to subnet](./media/application-gateway-private-deployment/nsg-subnet.png) +![Screenshot of associate NSG to subnet.](./media/application-gateway-private-deployment/nsg-subnet.png) Result: -[ ![View the NSG overview](./media/application-gateway-private-deployment/nsg-overview.png) ](./media/application-gateway-private-deployment/nsg-overview.png#lightbox) > [!IMPORTANT] > Be careful when you define **DenyAll** rules, as you might inadvertently deny inbound traffic from clients to which you intend to allow access. 
You might also inadvertently deny outbound traffic to the backend target, causing backend health to fail and produce 5XX responses. In the following example, we create a route table and associate it to the Applic
 - There is a network virtual appliance (a virtual machine) in the hub network
 - A route table with a default route (0.0.0.0/0) to the virtual appliance is associated to Application Gateway subnet
-![Diagram for example route table](./media/application-gateway-private-deployment/route-table-diagram.png)
+![Diagram for example route table.](./media/application-gateway-private-deployment/route-table-diagram.png)
 **Figure 1**: Internet access egress through virtual appliance
 To create a route table and associate it to the Application Gateway subnet:
 1. [Create a route table](../virtual-network/manage-route-table.yml#create-a-route-table):
 - ![View the newly created route table](./media/application-gateway-private-deployment/route-table-create.png)
 + ![Screenshot of the newly created route table.](./media/application-gateway-private-deployment/route-table-create.png)
 2. Select **Routes** and create the next hop rule for 0.0.0.0/0 and configure the destination to be the IP address of your VM:
 - [ ![View of adding default route to network virtual appliance](./media/application-gateway-private-deployment/default-route-nva.png) ](./media/application-gateway-private-deployment/default-route-nva.png#lightbox)
 + [ ![Screenshot of adding default route to network virtual appliance.](./media/application-gateway-private-deployment/default-route-nva.png) ](./media/application-gateway-private-deployment/default-route-nva.png#lightbox)
 3. Select **Subnets** and associate the route table to the Application Gateway subnet:
 - [ ![View of associating the route to the AppGW subnet](./media/application-gateway-private-deployment/associate-route-to-subnet.png) ](./media/application-gateway-private-deployment/associate-route-to-subnet.png#lightbox)
 + [ ![Screenshot of associating the route to the AppGW subnet.](./media/application-gateway-private-deployment/associate-route-to-subnet.png) ](./media/application-gateway-private-deployment/associate-route-to-subnet.png#lightbox)
 4. Validate that traffic is passing through the virtual appliance. |
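The route table walkthrough in the row above can also be scripted. A minimal Azure CLI sketch under the same example topology; the resource group, virtual network, subnet, and NVA address are placeholders.

```bash
# Placeholder names; the 0.0.0.0/0 route mirrors the example topology above.
RESOURCE_GROUP="contoso-rg"
VNET_NAME="contoso-vnet"
APPGW_SUBNET="appgw-subnet"
NVA_IP="10.10.5.4"   # assumed private IP of the network virtual appliance

# Create the route table and a default route that egresses through the NVA.
az network route-table create --resource-group "$RESOURCE_GROUP" --name appgw-rt
az network route-table route create --resource-group "$RESOURCE_GROUP" --route-table-name appgw-rt \
  --name default-via-nva --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address "$NVA_IP"

# Associate the route table with the Application Gateway subnet.
az network vnet subnet update --resource-group "$RESOURCE_GROUP" \
  --vnet-name "$VNET_NAME" --name "$APPGW_SUBNET" --route-table appgw-rt
```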
application-gateway | Create Ssl Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md | Sign in to the [Azure portal](https://portal.azure.com). - **Resource group**: Select **myResourceGroupAG** for the resource group. If it doesn't exist, select **Create new** to create it. - **Application gateway name**: Enter *myAppGateway* for the name of the application gateway. - ![Create new application gateway: Basics](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png) + ![Screenshot of creating a new application gateway basics.](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png) 2. For Azure to communicate between the resources that you create, it needs a virtual network. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers. Sign in to the [Azure portal](https://portal.azure.com). Select **OK** to close the **Create virtual network** window and save the virtual network settings. - ![Create new application gateway: virtual network](./media/application-gateway-create-gateway-portal/application-gateway-create-vnet.png) + ![Screenshot of creating a new application gateway virtual network.](./media/application-gateway-create-gateway-portal/application-gateway-create-vnet.png) 3. On the **Basics** tab, accept the default values for the other settings and then select **Next: Frontends**. Sign in to the [Azure portal](https://portal.azure.com). 2. Choose **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**. - ![Create new application gateway: frontends](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png) + ![Screenshot of creating a new application gateway frontends.](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png) 3. Select **Next: Backends**. ### Backends tab -The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool. +The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multitenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool. 1. On the **Backends** tab, select **Add a backend pool**. The backend pool is used to route requests to the backend servers that serve the 3. In the **Add a backend pool** window, select **Add** to save the backend pool configuration and return to the **Backends** tab. 
- ![Create new application gateway: backends](./media/application-gateway-create-gateway-portal/application-gateway-create-backends.png) + ![Screenshot of create a new application gateway backends.](./media/application-gateway-create-gateway-portal/application-gateway-create-backends.png) 4. On the **Backends** tab, select **Next: Configuration**. On the **Configuration** tab, you'll connect the frontend and backend pool you c Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule. - ![Create new application gateway: listener](./media/create-ssl-portal/application-gateway-create-rule-listener.png) + ![Screenshot of create a new application gateway listener.](./media/create-ssl-portal/application-gateway-create-rule-listener.png) 4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**. On the **Configuration** tab, you'll connect the frontend and backend pool you c 6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab. - ![Create new application gateway: routing rule](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-backends.png) + ![Screenshot of creating a new application gateway routing rule.](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-backends.png) 7. Select **Next: Tags** and then **Next: Review + create**. In this example, you install IIS on the virtual machines only to verify Azure cr 1. Open [Azure PowerShell](../cloud-shell/quickstart-powershell.md). To do so, select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list. - ![Install custom extension](./media/application-gateway-create-gateway-portal/application-gateway-extension.png) + ![Screenshot of installing custom extension.](./media/application-gateway-create-gateway-portal/application-gateway-extension.png) 2. Change the location setting for your environment, and then run the following command to install IIS on the virtual machine: In this example, you install IIS on the virtual machines only to verify Azure cr 6. Repeat to add the network interface for **myVM2**. - ![Add backend servers](./media/application-gateway-create-gateway-portal/application-gateway-backend.png) + ![Screenshot of adding backend servers.](./media/application-gateway-create-gateway-portal/application-gateway-backend.png) 6. Select **Save**. In this example, you install IIS on the virtual machines only to verify Azure cr 1. Select **All resources**, and then select **myAGPublicIPAddress**. - ![Record application gateway public IP address](./media/create-ssl-portal/application-gateway-ag-address.png) + :::image type="content" source="./media/create-ssl-portal/application-gateway-ag-address.png" alt-text="Screenshot of finding the application gateway public IP address."::: -2. In the address bar of your browser, type *https://\<your application gateway ip address\>*. +3. In the address bar of your browser, type *https://\<your application gateway ip address\>*. 
To accept the security warning if you used a self-signed certificate, select **Details** (or **Advanced** on Chrome) and then go on to the webpage: - ![Secure warning](./media/create-ssl-portal/application-gateway-secure.png) + ![Screenshot of a browser security warning.](./media/create-ssl-portal/application-gateway-secure.png) Your secured IIS website is then displayed as in the following example: - ![Test base URL in application gateway](./media/create-ssl-portal/application-gateway-iistest.png) + ![Screenshot of testing the base URL in application gateway.](./media/create-ssl-portal/application-gateway-iistest.png) ## Clean up resources |
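The final test step in the row above can also be done from a shell. A small sketch that looks up the gateway's frontend IP and sends a request; it uses the tutorial's resource names, and `-k` is only there because the tutorial's certificate is self-signed.

```bash
# Resource names from the tutorial above.
RESOURCE_GROUP="myResourceGroupAG"
PIP_NAME="myAGPublicIPAddress"

# Look up the frontend public IP address of the application gateway.
APPGW_IP=$(az network public-ip show --resource-group "$RESOURCE_GROUP" --name "$PIP_NAME" \
  --query ipAddress --output tsv)

# Send a test request; -k skips certificate validation for the self-signed certificate.
curl -k "https://${APPGW_IP}"
```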
automation | Automation Create Alert Triggered Runbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-alert-triggered-runbook.md | Azure Automation provides scripts for common Azure VM management operations like |**Azure VM management operations** | **Details**| | | |-[Stop-Azure-VM-On-Alert](https://github.com/azureautomation/Stop-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given. -[Restart-Azure-VM-On-Alert](https://github.com/azureautomation/Restart-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given. -[Delete-Azure-VM-On-Alert](https://github.com/azureautomation/Delete-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given. -[ScaleDown-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleDown-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given. -[ScaleUp-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleUp-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> Input is alert data with information needed to identify which VM to stop.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given. +|[Stop-Azure-VM-On-Alert](https://github.com/azureautomation/Stop-Azure-VM-On-Alert) | This runbook will stop an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> The target resource of the triggered alert must be the VM to stop. This is passed in an input parameter from the triggered alert payload.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. 
</br></br> Managed Identity should be enabled and contributor access to the automation account should be given.|
+|[Restart-Azure-VM-On-Alert](https://github.com/azureautomation/Restart-Azure-VM-On-Alert) | This runbook will restart an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> The target resource of the triggered alert must be the VM to restart. This is passed in an input parameter from the triggered alert payload.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.|
+|[Delete-Azure-VM-On-Alert](https://github.com/azureautomation/Delete-Azure-VM-On-Alert) | This runbook will delete an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> The target resource of the triggered alert must be the VM to delete. This is passed in an input parameter from the triggered alert payload.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.|
+|[ScaleDown-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleDown-Azure-VM-On-Alert) | This runbook will scale down an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> The target resource of the triggered alert must be the VM to scale down. This is passed in an input parameter from the triggered alert payload.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.|
+|[ScaleUp-Azure-VM-On-Alert](https://github.com/azureautomation/ScaleUp-Azure-VM-On-Alert) | This runbook will scale up an Azure Resource Manager VM in response to an Azure alert trigger. </br></br> The target resource of the triggered alert must be the VM to scale up. This is passed in an input parameter from the triggered alert payload.</br></br> The runbook must be called from an Azure alert via a webhook. </br></br> Latest version of Az module should be added to the automation account. </br></br> Managed Identity should be enabled and contributor access to the automation account should be given.|

## Next steps |
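Each runbook above expects the Automation account's system-assigned managed identity to have Contributor access. A hedged Azure CLI sketch of that role assignment follows; the account and group names are placeholders, it assumes the `automation` CLI extension is installed and the identity is already enabled, and subscription scope is only one possible choice.

```bash
# Placeholder names; replace with your Automation account and resource group.
RESOURCE_GROUP="contoso-rg"
AUTOMATION_ACCOUNT="contoso-automation"
SUBSCRIPTION_ID=$(az account show --query id --output tsv)

# Read the principal ID of the account's system-assigned managed identity
# (assumes the identity is already enabled on the account and the automation extension is installed).
PRINCIPAL_ID=$(az automation account show --resource-group "$RESOURCE_GROUP" \
  --name "$AUTOMATION_ACCOUNT" --query identity.principalId --output tsv)

# Grant Contributor at subscription scope; narrow the scope if that fits your environment better.
az role assignment create --assignee "$PRINCIPAL_ID" --role "Contributor" \
  --scope "/subscriptions/${SUBSCRIPTION_ID}"
```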
azure-app-configuration | Concept Experimentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-experimentation.md | The results help you to conclude the learnings and outcomes into actionable item ## Scenarios for using experimentation -- **Intelligent applications (e.g., AI-based features)**-Accelerate General AI (Gen AI) adoption and optimize AI models and use cases through rapid experimentation. Use Experimentation to iterate quickly on AI models, test different scenarios, and determine effective approaches. -It helps enhance agility in adapting AI solutions to evolving user needs and market trends, and facilitate understanding of the most effective approaches for scaling AI initiatives. +### Release defense -- **CI, CD and continuous experimentation (Gradual feature rollouts and version updates)**-Ensure seamless transitions and maintain or improve key metrics with each version update while managing feature releases. Utilize experimentation to gradually roll out new features to subsets of users using feature flags, monitor performance metrics, and collect feedback for iterative improvements. -It's beneficial to reduce the risk of introducing bugs or performance issues to the entire user base. It enables data-driven decision-making during version rollouts and feature flag management, leading to improved product quality and user satisfaction. +Objective: Ensure smooth transitions and maintain or improve key metrics with each release. -- **User experience optimization (UI A/B testing)**-Optimize business metrics by comparing different UI variations and determining the most effective design. Conduct A/B tests using experimentation to test UI elements, measure user interactions, and analyze performance metrics. -The best return here's improved user experience by implementing UI changes based on empirical evidence. +Approach: Employ experimentation to gradually roll out new features, monitor performance metrics, and collect feedback for iterative improvements. -- **Personalization and targeting experiments**-Deliver personalized content and experiences tailored to user preferences and behaviors. Use experimentation to test personalized content, measure engagement, and iterate on personalization strategies. -Results are increased user engagement, conversion rates, and customer loyalty through relevant and personalized experiences. These results, in turn drive revenue growth and customer retention by targeting audiences with tailored messages and offers. +Benefits: -- **Performance optimization experiments**-Improve application performance and provide an efficient user experience through performance optimization experiments. Conduct experiments to test performance enhancements, measure key metrics, and implement successful optimizations. -Here, experimentation enhances application scalability, reliability, and responsiveness through proactive performance improvements. It optimizes resource utilization and infrastructure costs by implementing efficient optimizations. +* Minimizes the risk of widespread issues by using guardrail metrics to detect and address problems early in the rollout. +* Helps maintain or improve key performance and user satisfaction metrics by making informed decisions based on real-time data. + +### Test hypotheses ++Objective: Validate assumptions and hypotheses to make informed decisions about product features, user behaviors, or business strategies. 
++Approach: Use experimentation to test specific hypotheses by creating different feature versions or scenarios, then analyze user interactions and performance metrics to determine outcomes. ++Benefits: ++* Provides evidence-based insights that reduce uncertainty and guide strategic decision-making. +* Enables faster iteration and innovation by confirming or refuting hypotheses with real user data. +* Enhances product development by focusing efforts on ideas that are proven to work, ultimately leading to more successful and user-aligned features. + +### A/B testing ++Objective: Optimize business metrics by comparing different UI variations and determining the most effective design. ++Approach: Conduct A/B tests using experimentation to test UI elements, measure user interactions, and analyze performance metrics. ++Benefits: +* Improves user experience by implementing UI changes based on empirical evidence. +* Increases conversion rates, engagement levels, and overall effectiveness of digital products or services. + +### For intelligent applications (for example, AI-based features) ++Objective: Accelerate General AI (Gen AI) adoption and optimize AI models and use cases through rapid experimentation. ++Approach: Use experimentation to iterate quickly on AI models, test different scenarios, and determine effective approaches. ++Benefits: ++* Enhances agility in adapting AI solutions to evolving user needs and market trends. +* Facilitates understanding of the most effective approaches for scaling AI initiatives. +* Improves accuracy and performance of AI models based on real-world data and feedback. + +### Personalization and targeting experiments ++Objective: Deliver personalized content and experiences tailored to user preferences and behaviors. ++Approach: Leverage experimentation to test personalized content, measure engagement, and iterate on personalization strategies. ++Benefits: ++* Increases user engagement, conversion rates, and customer loyalty through relevant and personalized experiences. +* Drives revenue growth and customer retention by targeting audiences with tailored messages and offers. + +### Performance optimization experiments ++Objective: Improve application performance and user experience through performance optimization experiments. ++Approach: Conduct experiments to test performance enhancements, measure key metrics, and implement successful optimizations. ++Benefits: ++* Enhances application scalability, reliability, and responsiveness through proactive performance improvements. +* Optimizes resource utilization and infrastructure costs by implementing efficient optimizations. ## Experiment operations |
azure-arc | Quick Start Create A Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md | Title: Create a virtual machine on VMware vCenter using Azure Arc description: In this quickstart, you learn how to create a virtual machine on VMware vCenter using Azure Arc Previously updated : 11/06/2023 Last updated : 08/29/2024 +zone_pivot_groups: vmware-portal-bicep-terraform +This article describes how to provision a VM using vCenter resources from Azure portal. ++## Create a VM in the Azure portal + Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you with permissions on those resources, you'll create a virtual machine. -## Prerequisites +### Prerequisites - An Azure subscription and resource group where you have an Arc VMware VM contributor role.- - A resource pool/cluster/host on which you have Arc Private Cloud Resource User Role.- - A virtual machine template resource on which you have Arc Private Cloud Resource User Role.- - A virtual network resource on which you have Arc Private Cloud Resource User Role. -## How to create a VM in the Azure portal +Follow these steps to create VM in the Azure portal: 1. From your browser, go to the [Azure portal](https://portal.azure.com). Navigate to virtual machines browse view. You'll see a unified browse experience for Azure and Arc virtual machines. Once your administrator has connected a VMware vCenter to Azure, represented VMw 6. Select the **datastore** that you want to use for storage. -7. Select the **Template** based on which the VM you'll create. +7. Select the **Template** based on which you'll create the VM. >[!TIP] >You can override the template defaults for **CPU Cores** and **Memory**. Once your administrator has connected a VMware vCenter to Azure, represented VMw 11. Select **Create** after reviewing all the properties. It should take a few minutes to create the VM. +++This article describes how to provision a VM using vCenter resources using a Bicep template. ++## Create an Arc VMware machine using Bicep template ++The following bicep template can be used to create an Arc VMware machine. [Here](/azure/templates/microsoft.connectedvmwarevsphere/2023-12-01/virtualmachineinstances?pivots=deployment-language-arm-template) is the list of available Azure Resource Manager (ARM), Bicep, and Terraform templates for Arc-enabled VMware resources. To trigger any other Arc operation, convert the corresponding [ARM template to Bicep template](/azure/azure-resource-manager/bicep/decompile#decompile-from-json-to-bicep). 
++```bicep +// Parameters +param vmName string = 'contoso-vm' +param vmAdminPassword string = 'examplepassword!#' +param vCenterId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/vcenters/contoso-vcenter' +param templateId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/VirtualMachineTemplates/contoso-template-win22' +param resourcePoolId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/ResourcePools/contoso-respool' +param datastoreId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/Datastores/contoso-datastore' +param networkId string = '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/VirtualNetworks/contoso-network' +param extendedLocation object = { + type: 'customLocation' + name: '/subscriptions/01234567-0123-0123-0123-0123456789ab/resourceGroups/contoso-rg/providers/Microsoft.ExtendedLocation/customLocations/contoso-customlocation' +} +param ipSettings object = { + allocationMethod: 'static' + gateway: ['172.24.XXX.1'] + ipAddress: '172.24.XXX.105' + subnetMask: '255.255.255.0' + dnsServers: ['172.24.XXX.9'] +} ++resource contosoMachine 'Microsoft.HybridCompute/machines@2023-10-03-preview' = { + name: vmName + location:'westeurope' + kind:'VMware' + properties:{} + tags: { + foo: 'bar' + } +} ++resource vm 'Microsoft.ConnectedVMwarevSphere/virtualMachineInstances@2023-12-01' = { + name: 'default' + scope: contosoMachine + extendedLocation: extendedLocation + properties: { + hardwareProfile: { + memorySizeMB: 4096 + numCPUs: 2 + } + osProfile: { + computerName: vmName + adminPassword: vmAdminPassword + } + placementProfile: { + resourcePoolId: resourcePoolId + datastoreId: datastoreId + } + infrastructureProfile: { + templateId: templateId + vCenterId: vCenterId + } + networkProfile: { + networkInterfaces: [ + { + nicType: 'vmxnet3' + ipSettings: ipSettings + networkId: networkId + name: 'VLAN103NIC' + powerOnBoot: 'enabled' + } + ] + } + } +} ++// Outputs +output vmId string = vm.id ++``` +++This article describes how to provision a VM using vCenter resources using a Terraform template. ++## Create an Arc VMware machine with Terraform ++### Prerequisites ++- **Azure Subscription**: Ensure you have an active Azure subscription. +- **Terraform**: Install Terraform on your machine. +- **Azure CLI**: Install Azure CLI to authenticate and manage resources. ++Follow these steps to create an Arc VMware machine using Terraform. The following two scenarios are covered in this article: ++1. For VMs discovered in vCenter inventory, perform enable in Azure operation and install Arc agents. +2. Create a new Arc VMware VM using templates, Resource pool, Datastore and install Arc agents. ++### Scenario 1 ++For VMs discovered in vCenter inventory, perform enable in Azure operation and install Arc agents. ++#### Step 1: Define variables in a variables.tf file ++Create a file named variables.tf and define all the necessary variables. ++```terraform +variable "subscription_id" { + description = "The subscription ID for the Azure account." + type = string +} + +variable "resource_group_name" { + description = "The name of the resource group." 
+ type = string
+}
+
+variable "location" {
+ description = "The location/region where the resources will be created."
+ type = string
+}
+
+variable "machine_name" {
+ description = "The name of the machine."
+ type = string
+}
+
+variable "inventory_item_id" {
+ description = "The ID of the Inventory Item for the VM."
+ type = string
+}
+
+variable "custom_location_id" {
+ description = "The ID of the custom location."
+ type = string
+}
+
+variable "vm_username" {
+ description = "The admin username for the VM."
+ type = string
+}
+
+variable "vm_password" {
+ description = "The admin password for the VM."
+ type = string
+}
+
+```
+#### Step 2: Create a tfvars file
+
+Create a file named *CreateVMwareVM.tfvars* and provide sample values for the variables.
+
+```terraform
+subscription_id = "your-subscription-id"
+resource_group_name = "your-resource-group"
+location = "eastus"
+machine_name = "test_machine0001"
+inventory_item_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/VCenters/your-vcenter-id/InventoryItems/your-inventory-item-id"
+custom_location_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ExtendedLocation/customLocations/your-custom-location-id"
+vm_username = "Administrator"
+vm_password = " The admin password for the VM "
+
+```
+
+#### Step 3: Modify the configuration to use variables
+
+Create a file named *main.tf* and insert the following code. 
++```terraform +terraform { + required_providers { + azurerm = { + source = "hashicorp/azurerm" + version = ">= 3.0" + } + azapi = { + source = "azure/azapi" + version = ">= 1.0.0" + } + } +} + +# Configure the AzureRM provider with the subscription ID +provider "azurerm" { + features {} + subscription_id = var.subscription_id +} + +# Configure the AzAPI provider with the subscription ID +provider "azapi" { + subscription_id = var.subscription_id +} + +# Retrieve the resource group details +data "azurerm_resource_group" "example" { + name = var.resource_group_name +} + +# Create a VMware machine resource in Azure +resource "azapi_resource" "test_machine0001" { + schema_validation_enabled = false + parent_id = data.azurerm_resource_group.example.id + type = "Microsoft.HybridCompute/machines@2023-06-20-preview" + name = var.machine_name + location = data.azurerm_resource_group.example.location + body = jsonencode({ + kind = "VMware" + identity = { + type = "SystemAssigned" + } + }) +} + +# Create a Virtual Machine instance using the VMware machine and Inventory Item ID +resource "azapi_resource" "test_inventory_vm0001" { + schema_validation_enabled = false + type = "Microsoft.ConnectedVMwarevSphere/VirtualMachineInstances@2023-10-01" + name = "default" + parent_id = azapi_resource.test_machine0001.id + body = jsonencode({ + properties = { + infrastructureProfile = { + inventoryItemId = var.inventory_item_id + } + } + extendedLocation = { + type = "CustomLocation" + name = var.custom_location_id + } + }) + depends_on = [azapi_resource.test_machine0001] +} + +# Install Arc agent on the VM +resource "azapi_resource" "guestAgent" { + type = "Microsoft.ConnectedVMwarevSphere/virtualMachineInstances/guestAgents@2023-10-01" + parent_id = azapi_resource.test_inventory_vm0001.id + name = "default" + body = jsonencode({ + properties = { + credentials = { + username = var.vm_username + password = var.vm_password + } + provisioningAction = "install" + } + }) + schema_validation_enabled = false + ignore_missing_property = false + depends_on = [azapi_resource.test_inventory_vm0001] +} ++``` +#### Step 4: Run Terraform commands ++Use the -var-file flag to pass the *.tfvars* file during Terraform commands. ++1. Initialize Terraform (if not already initialized): +`terraform init` +2. Validate the configuration: +`terraform validate -var-file="CreateVMwareVM.tfvars"` +3. Plan the changes: +`terraform plan -var-file="CreateVMwareVM.tfvars"` +4. Apply the changes: +`terraform apply -var-file="CreateVMwareVM.tfvars"` ++Confirm the prompt by entering yes to apply the changes. ++### Best practices ++- **Use version control**: Keep your Terraform configuration files under version control (for example, Git) to track changes over time. +- **Review plans carefully**: Always review the output of terraform plan before applying changes to ensure that you understand what changes will be made. +- **State management**: Regularly back up your Terraform state files to avoid data loss. ++By following these steps, you can effectively create and manage HCRP and Arc VMware VMs on Azure using Terraform and install guest agents on the created VMs. ++### Scenario 2 ++Create a new Arc VMware VM using templates, Resource pool, Datastore and install Arc agents. ++#### Step 1: Define variables in a variables.tf file ++Create a file named variables.tf and define all the necessary variables. ++```terraform +variable "subscription_id" { + description = "The subscription ID for the Azure account." 
+ type = string +} + +variable "resource_group_name" { + description = "The name of the resource group." + type = string +} + +variable "location" { + description = "The location/region where the resources will be created." + type = string +} + +variable "machine_name" { + description = "The name of the machine." + type = string +} + +variable "vm_username" { + description = "The admin username for the VM." + type = string +} + +variable "vm_password" { + description = "The admin password for the VM." + type = string +} + +variable "template_id" { + description = "The ID of the VM template." + type = string +} + +variable "vcenter_id" { + description = "The ID of the vCenter." + type = string +} + +variable "resource_pool_id" { + description = "The ID of the resource pool." + type = string +} + +variable "datastore_id" { + description = "The ID of the datastore." + type = string +} + +variable "custom_location_id" { + description = "The ID of the custom location." + type = string +} ++``` ++#### Step 2: Create tfvars file ++Create a file named *CreateVMwareVM.tfvars* and provide sample values for the variables. ++```terraform +subscription_id = "your-subscription-id" +resource_group_name = "your-resource-group" +location = "eastus" +machine_name = "test_machine0002" +vm_username = "Administrator" +vm_password = "*********" +template_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/virtualmachinetemplates/your-template-id" +vcenter_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/VCenters/your-vcenter-id" +resource_pool_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/resourcepools/your-resource-pool-id" +datastore_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ConnectedVMwarevSphere/datastores/your-datastore-id" +custom_location_id = "/subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.ExtendedLocation/customLocations/your-custom-location-id" ++``` ++#### Step 3: Modify the configuration to use variables ++Create a file named *main.tf* and insert the following code. 
++```terraform +terraform { + required_providers { + azurerm = { + source = "hashicorp/azurerm" + version = ">= 3.0" + } + azapi = { + source = "azure/azapi" + version = ">= 1.0.0" + } + } +} + +# Configure the AzureRM provider with the subscription ID +provider "azurerm" { + features {} + subscription_id = var.subscription_id +} + +# Configure the AzAPI provider with the subscription ID +provider "azapi" { + subscription_id = var.subscription_id +} + +# Retrieve the resource group details +data "azurerm_resource_group" "example" { + name = var.resource_group_name +} + +# Create a VMware machine resource in Azure +resource "azapi_resource" "test_machine0002" { + schema_validation_enabled = false + parent_id = data.azurerm_resource_group.example.id + type = "Microsoft.HybridCompute/machines@2023-06-20-preview" + name = var.machine_name + location = data.azurerm_resource_group.example.location + body = jsonencode({ + kind = "VMware" + identity = { + type = "SystemAssigned" + } + }) +} + +# Create a Virtual Machine instance using the VMware machine created above +resource "azapi_resource" "test_vm0002" { + schema_validation_enabled = false + type = "Microsoft.ConnectedVMwarevSphere/VirtualMachineInstances@2023-10-01" + name = "default" + parent_id = azapi_resource.test_machine0002.id + body = jsonencode({ + properties = { + infrastructureProfile = { + templateId = var.template_id + vCenterId = var.vcenter_id + } + + placementProfile = { + resourcePoolId = var.resource_pool_id + datastoreId = var.datastore_id + } + + osProfile = { + adminPassword = var.vm_password + } + } + extendedLocation = { + type = "CustomLocation" + name = var.custom_location_id + } + }) + depends_on = [azapi_resource.test_machine0002] +} + +# Create a guest agent for the VM instance +resource "azapi_resource" "guestAgent" { + type = "Microsoft.ConnectedVMwarevSphere/virtualMachineInstances/guestAgents@2023-10-01" + parent_id = azapi_resource.test_vm0002.id + name = "default" + body = jsonencode({ + properties = { + credentials = { + username = var.vm_username + password = var.vm_password + } + provisioningAction = "install" + } + }) + schema_validation_enabled = false + ignore_missing_property = false + depends_on = [azapi_resource.test_vm0002] +} ++``` ++#### Step 4: Run Terraform commands ++Use the -var-file flag to pass the *.tfvars* file during Terraform commands. ++1. Initialize Terraform (if not already initialized): +`terraform init` +2. Validate the configuration: +`terraform validate -var-file="CreateVMwareVM.tfvars"` +3. Plan the changes: +`terraform plan -var-file="CreateVMwareVM.tfvars"` +4. Apply the changes: +`terraform apply -var-file="CreateVMwareVM.tfvars"` ++Confirm the prompt by entering yes to apply the changes. ++### Best practices ++- **Use version control**: Keep your Terraform configuration files under version control (for example, Git) to track changes over time. +- **Review plans carefully**: Always review the output of terraform plan before applying changes to ensure that you understand what changes will be made. +- **State management**: Regularly back up your Terraform state files to avoid data loss. ++ ## Next steps [Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md). |
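The Bicep sketch in the row above can be deployed with a standard resource group deployment. A minimal Azure CLI example follows; the file name and resource group are placeholders, and the parameter override is optional because the template declares defaults.

```bash
# Placeholder names; save the Bicep template above as create-vmware-vm.bicep first.
RESOURCE_GROUP="contoso-rg"

# Deploy the template into the resource group, overriding the VM name parameter.
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
  --template-file create-vmware-vm.bicep \
  --parameters vmName=contoso-vm
```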
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | Title: Troubleshoot Guest Management Issues -description: Learn about how to troubleshoot the guest management issues for Arc-enabled VMware vSphere. +description: Learn how to troubleshoot the guest management issues for Arc-enabled VMware vSphere. Previously updated : 08/06/2024 Last updated : 08/29/2024 -## Troubleshoot issues while enabling Guest Management on a domain-joined Linux VM +## Troubleshoot issues while enabling Guest Management ++**Troubleshoot issues while enabling Guest Management on:** ++# [Arc agent installation fails on a domain-joined Linux VM](#tab/linux) **Error message**: Enabling Guest Management on a domain-joined Linux VM fails with the error message **InvalidGuestLogin: Failed to authenticate to the system with the credentials**. Default: The default set of PAM service names includes: #### References -- [Invoke-VMScript to an domain joined Ubuntu VM](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Invoke-VMScript-to-an-domain-joined-Ubuntu-VM/td-p/2257554).---## Troubleshoot issues while enabling Guest Management on RHEL-based Linux VMs +- [Invoke VMScript to a domain-joined Ubuntu VM](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Invoke-VMScript-to-an-domain-joined-Ubuntu-VM/td-p/2257554). -Applies to: --- RedHat Linux-- CentOS-- Rocky Linux-- Oracle Linux-- SUSE Linux-- SUSE Linux Enterprise Server-- Alma Linux-- Fedora+# [Arc agent installation fails on RHEL Linux distros](#tab/rhel) +**Applies to:**<br> +:heavy_check_mark: RedHat Linux :heavy_check_mark: CentOS :heavy_check_mark: Rocky Linux :heavy_check_mark: Oracle Linux :heavy_check_mark: SUSE Linux :heavy_check_mark: SUSE Linux Enterprise Server :heavy_check_mark: Alma Linux :heavy_check_mark: Fedora **Error message**: Provisioning of the resource failed with Code: `AZCM0143`; Message: `install_linux_azcmagent.sh: installation error`. Upon `yum` or `rpm` executing scriptlets, the context is changed to `rpm_script_ - [Executing yum/rpm commands using VMware tools facility (vmrun) fails in error when packages have scriptlets](https://access.redhat.com/solutions/5347781). ++ ## Next steps If you don't see your problem here or you can't resolve your issue, try one of the following channels for support: |
azure-maps | How To Use Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md | For a list of supported languages and regional views, see [Localization support Beginning with Azure Maps Web SDK 3.0, the Web SDK includes full compatibility with [WebGL 2], a powerful graphics technology that enables hardware-accelerated rendering in modern web browsers. By using WebGL 2, developers can harness the capabilities of modern GPUs to render complex maps and visualizations more efficiently, resulting in improved performance and visual quality. -![Map image showing WebGL 2 Compatibility.](./media/how-to-use-map-control/webgl-2-compatability.png) ```html <!DOCTYPE html> Beginning with Azure Maps Web SDK 3.0, the Web SDK includes full compatibility w <title>WebGL2 - Azure Maps Web SDK Samples</title> <link href=https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css rel="stylesheet"/> <script src=https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js></script>- <script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script> + <script src="https://unpkg.com/deck.gl@^8/dist.min.js"></script> <style> html, body { |
azure-monitor | Agent Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md | The following table highlights the packages required for [supported Linux distro |Required package |Description |Minimum version | |--||-|-|Glibc | GNU C library | 2.5-12 +|Glibc | GNU C library | 2.5-12| |Openssl | OpenSSL libraries | 1.0.x or 1.1.x | |Curl | cURL web client | 7.15.5 |-|Python | | 2.7 or 3.6+ +|Python | | 2.7 or 3.6-3.11| |Python-ctypes | | |PAM | Pluggable authentication modules | | |
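The package requirements in the row above can be spot-checked on a machine before installing the agent. A small sketch follows; command availability varies by distribution.

```bash
# Quick checks for the prerequisite packages listed above.
ldd --version | head -n 1      # glibc
openssl version                # OpenSSL 1.0.x or 1.1.x
curl --version | head -n 1     # cURL 7.15.5 or later
python3 --version 2>/dev/null || python --version   # Python 2.7 or 3.6-3.11
```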
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Migration is a complex task. Start planning your migration to Azure Monitor Agen > - **Installation:** The ability to install the legacy agents will be removed from the Azure Portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perform offline installations. > - **Customer Support:** You will not be able to get support for legacy agent issues. > - **OS Support:** Support for new Linux or Windows distros, including service packs, won't be added after the deprecation of the legacy agents.-> - Log Analytics Agent will continue to function but not be able to connect Log Analytics workspaces. > - Log Analytics Agent can coexist with Azure Monitor Agent. Expect to see duplicate data if both agents are collecting the same data.   |
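To find machines that still run the legacy Log Analytics agent before support ends, you can list VM extensions with the Azure CLI. A hedged sketch follows; the resource group is a placeholder and the extension-name filters (`OmsAgent`, `MicrosoftMonitoringAgent`) are assumptions to adjust for your environment.

```bash
# Placeholder resource group; lists VMs whose extensions look like the legacy agent.
RESOURCE_GROUP="contoso-rg"

for VM in $(az vm list --resource-group "$RESOURCE_GROUP" --query "[].name" --output tsv); do
  echo "== $VM"
  az vm extension list --resource-group "$RESOURCE_GROUP" --vm-name "$VM" \
    --query "[?contains(name, 'OmsAgent') || contains(name, 'MicrosoftMonitoringAgent')].name" \
    --output tsv
done
```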
azure-monitor | Data Collection Log Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-json.md | Title: Collect logs from a JSON file with Azure Monitor Agent description: Configure a data collection rule to collect log data from a JSON file on a virtual machine using Azure Monitor Agent. Previously updated : 08/23/2024 Last updated : 08/28/2024 Many applications and services will log information to a JSON files instead of s ## Prerequisites - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- A data collection endpoint (DCE) if you plan to use Azure Monitor Private Links. The data collection endpoint must be in the same region as the Log Analytics workspace. See [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment) for details.+- A data collection endpoint (DCE) in the same region as the Log Analytics workspace. See [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment) for details. - Either a new or existing DCR described in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). ## Basic operation Use the following ARM template to create a DCR for collecting text log files, ma | Setting | Description | |:|:| | Data collection rule name | Unique name for the DCR. |+| Data collection endpoint resource ID | Resource ID of the data collection endpoint (DCE). | | Location | Region for the DCR. Must be the same location as the Log Analytics workspace. | | File patterns | Identifies the location and name of log files on the local disk. Use a wildcard for filenames that vary, for example when a new file is created each day with a new name. You can enter multiple file patterns separated by commas (AMA version 1.26 or higher required for multiple file patterns on Linux).<br><br>Examples:<br>- C:\Logs\MyLog.json<br>- C:\Logs\MyLog*.json<br>- C:\App01\AppLog.json, C:\App02\AppLog.json<br>- /var/mylog.json<br>- /var/mylog*.json | | Table name | Name of the destination table in your Log Analytics Workspace. | Use the following ARM template to create a DCR for collecting text log files, ma "description": "Unique name for the DCR. " } },+ "dataCollectionEndpointResourceId": { + "type": "string", + "metadata": { + "description": "Resource ID of the data collection endpoint (DCE)." + } + }, "location": { "type": "string", "metadata": { Use the following ARM template to create a DCR for collecting text log files, ma "metadata": { "description": "Resource ID of the Log Analytics workspace with the target table." }- }, - "dataCollectionEndpointResourceId": { - "type": "string", - "metadata": { - "description": "Resource ID of the Data Collection Endpoint to be used with this rule." 
- } } }, "variables": { Use the following ARM template to create a DCR for collecting text log files, ma "name": "[parameters('dataCollectionRuleName')]", "location": "[parameters('location')]", "properties": {+ "dataCollectionEndpointId": "[parameters('dataCollectionEndpointResourceId')]", "streamDeclarations": { "Custom-Json-stream": { "columns": [ Use the following ARM template to create a DCR for collecting text log files, ma "transformKql": "source", "outputStream": "[variables('tableOutputStream')]" }- ], - "dataCollectionEndpointId": "[parameters('dataCollectionEndpointResourceId')]" + ] } } ] |
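The JSON file collection described in the row above assumes the application itself writes records that match the DCR's file pattern and stream declaration, with one JSON object per line. As a hedged illustration (not part of the article), the following Python sketch appends JSON Lines to the `/var/mylog.json` example path; the field names and severity values are placeholders and must match the columns you declare in the DCR.

```python
import json
from datetime import datetime, timezone

# Path chosen to match one of the article's example file patterns (/var/mylog.json).
LOG_PATH = "/var/mylog.json"

def write_log(message: str, severity: str = "Information") -> None:
    # One JSON object per line; the field names are illustrative and must match
    # the stream declaration in your data collection rule.
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "severity": severity,
        "message": message,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

write_log("Order 1234 processed")
```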
azure-monitor | Data Collection Log Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-text.md | Title: Collect logs from a text file with Azure Monitor Agent description: Configure a data collection rule to collect log data from a text file on a virtual machine using Azure Monitor Agent. Previously updated : 08/23/2024 Last updated : 08/28/2024 Many applications and services will log information to text files instead of sta ## Prerequisites - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- A data collection endpoint (DCE) if you plan to use Azure Monitor Private Links. The data collection endpoint must be in the same region as the Log Analytics workspace. See [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment) for details.+- A data collection endpoint (DCE) in the same region as the Log Analytics workspace. See [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment) for details. - Either a new or existing DCR described in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). ## Basic operation Use the following ARM template to create or modify a DCR for collecting text log "description": "Unique name for the DCR. " } },+ "dataCollectionEndpointResourceId": { + "type": "string", + "metadata": { + "description": "Resource ID of the data collection endpoint (DCE)." + } + }, "location": { "type": "string", "metadata": { Use the following ARM template to create or modify a DCR for collecting text log "location": "[parameters('location')]", "apiVersion": "2022-06-01", "properties": {+ "dataCollectionEndpointId": "[parameters('dataCollectionEndpointResourceId')]", "streamDeclarations": { "Custom-Text-stream": { "columns": [ |
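The text log row above adds a `dataCollectionEndpointResourceId` parameter to the DCR template. As a hedged sketch only, the following Python example deploys a saved copy of that template with the Azure SDK (`azure-identity`, `azure-mgmt-resource`) and passes the new parameter; the file name, resource IDs, and deployment name are placeholders, and the full parameter set must match the template you actually deploy.

```python
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Load the DCR template saved from the article (file name is a placeholder).
with open("dcr-text-log.json", encoding="utf-8") as template_file:
    template = json.load(template_file)

poller = client.deployments.begin_create_or_update(
    resource_group,
    "dcr-text-log-deployment",  # placeholder deployment name
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            # Parameter names follow the template fragments shown above; values are placeholders.
            "parameters": {
                "dataCollectionRuleName": {"value": "dcr-text-logs"},
                "dataCollectionEndpointResourceId": {
                    "value": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce>"
                },
                "location": {"value": "eastus"},
            },
        }
    },
)
print(poller.result().properties.provisioning_state)
```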
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | After you create an action group, you can view it in the portal: A phone number or email can be included in action groups in many subscriptions. Azure Monitor uses rate limiting to suspend notifications when too many notifications are sent to a particular phone number, email address or device. Rate limiting ensures that alerts are manageable and actionable. -Rate limiting applies to SMS, voice, and email notifications. All other notification actions aren't rate limited. For information about rate limits, see [Azure Monitor service limits](../service-limits.md). --Rate limiting applies across all subscriptions. Rate limiting is applied as soon as the threshold is reached, even if messages are sent from multiple subscriptions. --When an email address is rate limited, a notification is sent to communicate that rate limiting was applied and when the rate limiting expires. +Rate limiting applies to SMS, voice, and email notifications. All other notification actions aren't rate limited. Rate limiting applies across all subscriptions. Rate limiting is applied as soon as the threshold is reached, even if messages are sent from multiple subscriptions. When an email address is rate limited, a notification is sent to communicate that rate limiting was applied and when the rate limiting expires. +For information about rate limits, see [Azure Monitor service limits](../service-limits.md). + ## Email Azure Resource Manager When you use Azure Resource Manager for email notifications, you can send email to the members of a subscription's role. Email is sent to Microsoft Entra ID **user** or **group** members of the role. This includes support for roles assigned through Azure Lighthouse. When you set up the Resource Manager role: > It can take up to 24 hours for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription. ## SMS -For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). +You might have a limited number of SMS actions per action group. -For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md). +- For information about rate limits, see [Azure Monitor service limits](../service-limits.md). +- For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md). -You might have a limited number of SMS actions per action group. > [!NOTE] > You might have a limited number of Azure app actions per action group. | 1 | United States | ## Voice+You might have a limited number of voice actions per action group. For important information about rate limits, see [Azure Monitor service limits](../service-limits.md). -For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). -You might have a limited number of voice actions per action group. > [!NOTE] > |
azure-monitor | Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md | Using various authentication systems can be cumbersome and risky because it's di The following preliminary steps are required to enable Microsoft Entra authenticated ingestion. You need to: -- Be in the public cloud.-- Be familiar with:- - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md). - - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md). - - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.yml). -- Granting access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md) requires having an Owner role to the resource group.-- Understand the [unsupported scenarios](#unsupported-scenarios).+* Be in the public cloud. +* Be familiar with: + * [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md). + * [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md). + * [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.yml). +* Granting access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md) requires having an Owner role to the resource group. +* Understand the [unsupported scenarios](#unsupported-scenarios). ## Unsupported scenarios The following Software Development Kits (SDKs) and features are unsupported for use with Microsoft Entra authenticated ingestion: -- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br />+* [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br /> Microsoft Entra authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0.-- [ApplicationInsights JavaScript web SDK](javascript.md).-- [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5.-- [AutoInstrumentation for Python on Azure App Service](azure-web-apps-python.md)-- [Profiler](profiler-overview.md).+* [ApplicationInsights JavaScript web SDK](javascript.md). +* [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5. +* [AutoInstrumentation for Python on Azure App Service](azure-web-apps-python.md) +* [Profiler](profiler-overview.md). <a name='configure-and-enable-azure-ad-based-authentication'></a> The following Software Development Kits (SDKs) and features are unsupported for 1. If you don't already have an identity, create one by using either a managed identity or a service principal. - - We recommend using a managed identity: + * We recommend using a managed identity: [Set up a managed identity for your Azure service](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) (Virtual Machines or App Service). - - We don't recommend using a service principal: + * We don't recommend using a service principal: For more information on how to create a Microsoft Entra application and service principal that can access resources, see [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md). 
The following Software Development Kits (SDKs) and features are unsupported for Application Insights .NET SDK supports the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/identity/Azure.Identity#credential-classes). -- We recommend `DefaultAzureCredential` for local development.-- Authenticate on Visual Studio with the expected Azure user account. For more information, see [Authenticate via Visual Studio](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#authenticate-via-visual-studio).-- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities.- - For system-assigned, use the default constructor without parameters. - - For user-assigned, provide the client ID to the constructor. +* We recommend `DefaultAzureCredential` for local development. +* Authenticate on Visual Studio with the expected Azure user account. For more information, see [Authenticate via Visual Studio](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#authenticate-via-visual-studio). +* We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities. + * For system-assigned, use the default constructor without parameters. + * For user-assigned, provide the client ID to the constructor. The following example shows how to manually create and configure `TelemetryConfiguration` by using .NET: The following example shows how to configure `TelemetryConfiguration` by using . ```csharp services.Configure<TelemetryConfiguration>(config => {- var credential = new DefaultAzureCredential(); - config.SetAzureTokenCredential(credential); + var credential = new DefaultAzureCredential(); + config.SetAzureTokenCredential(credential); }); services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions { services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions Use the `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable to let Application Insights authenticate to Microsoft Entra ID and send telemetry when using [Azure App Services autoinstrumentation](./azure-web-apps-net-core.md). -- For system-assigned identity:+* For system-assigned identity: -| App setting | Value | -| -- | | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | -- For user-assigned identity:+* For user-assigned identity: -| App setting | Value | -| - | -- | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | ### [Node.js](#tab/nodejs) Azure Monitor OpenTelemetry and Application Insights Node.JS supports the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/identity/identity#credential-classes). -- We recommend `DefaultAzureCredential` for local development.-- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities.- - For system-assigned, use the default constructor without parameters. - - For user-assigned, provide the client ID to the constructor. -- We recommend `ClientSecretCredential` for service principals.- - Provide the tenant ID, client ID, and client secret to the constructor. 
+* We recommend `DefaultAzureCredential` for local development. +* We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities. + * For system-assigned, use the default constructor without parameters. + * For user-assigned, provide the client ID to the constructor. +* We recommend `ClientSecretCredential` for service principals. + * Provide the tenant ID, client ID, and client secret to the constructor. If using @azure/monitor-opentelemetry ```typescript useAzureMonitor(options); > Support for Microsoft Entra ID in the Application Insights Node.JS is included starting with [version 2.1.0-beta.1](https://www.npmjs.com/package/applicationinsights/v/2.1.0-beta.1). If using `applicationinsights` npm package.+ ```typescript const appInsights = require("applicationinsights"); const { DefaultAzureCredential } = require("@azure/identity"); appInsights.defaultClient.config.aadTokenCredential = credential; Use the `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable to let Application Insights authenticate to Microsoft Entra ID and send telemetry when using [Azure App Services autoinstrumentation](./azure-web-apps-nodejs.md). -- For system-assigned identity:+* For system-assigned identity: -| App setting | Value | -| -- | | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | -- For user-assigned identity:+* For user-assigned identity: -| App setting | Value | -| - | -- | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | ### [Java](#tab/java) > [!NOTE] > Support for Microsoft Entra ID in the Application Insights Java agent is included starting with [Java 3.2.0-BETA](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0-BETA). -1. [Configure your application with the Java agent.](opentelemetry-enable.md?tabs=java#get-started) +1. [Configure your application with the Java agent.](opentelemetry-enable.md?tabs=java#enable-opentelemetry-with-application-insights) > [!IMPORTANT] > Use the full connection string, which includes `IngestionEndpoint`, when you configure your app with the Java agent. For example, use `InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/`. The following example shows how to configure the Java agent to use user-assigned The `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable lets Application Insights authenticate to Microsoft Entra ID and send telemetry. -- For system-assigned identity:+* For system-assigned identity: -| App setting | Value | -| -- | | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` | -- For user-assigned identity:+* For user-assigned identity: -| App setting | Value | -| - | -- | -| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | +| App setting | Value | +|-|| +| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` | Set the `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable using this string. 
Now that your app is registered and has permissions to use the API, grant your a ### Request an authorization token Before you begin, make sure you have all the values required to make the request successfully. All requests require:-- Your Microsoft Entra tenant ID.-- Your App Insights App ID - If you're currently using API Keys, it's the same app ID.-- Your Microsoft Entra client ID for the app.-- A Microsoft Entra client secret for the app.++* Your Microsoft Entra tenant ID. +* Your App Insights App ID - If you're currently using API Keys, it's the same app ID. +* Your Microsoft Entra client ID for the app. +* A Microsoft Entra client secret for the app. The Application Insights API supports Microsoft Entra authentication with three different [Microsoft Entra ID OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:-- Client credentials-- Authorization code-- Implicit++* Client credentials +* Authorization code +* Implicit #### Client credentials flow You can disable local authentication by using the Azure portal or Azure Policy o 1. From your Application Insights resource, select **Properties** under **Configure** in the menu on the left. Select **Enabled (click to change)** if the local authentication is enabled. - :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot that shows Properties under the Configure section and the Enabled (select to change) local authentication button."::: + :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot that shows Properties under the Configure section and the Enabled (select to change) local authentication button."::: 1. Select **Disabled** and apply changes. - :::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot that shows local authentication with the Enabled/Disabled button."::: + :::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot that shows local authentication with the Enabled/Disabled button."::: 1. After disabling local authentication on your resource, you'll see the corresponding information in the **Overview** pane. - :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (select to change) local authentication button."::: + :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (select to change) local authentication button."::: ### Azure Policy The following example shows the Azure Resource Manager template you can use to c When developing a custom client to obtain an access token from Microsoft Entra ID for submitting telemetry to Application Insights, refer to the following table to determine the appropriate audience string for your particular host environment. -| Azure cloud version | Token audience value | -| | | -| Azure public cloud | `https://monitor.azure.com` | -| Microsoft Azure operated by 21Vianet cloud | `https://monitor.azure.cn` | -| Azure US Government cloud | `https://monitor.azure.us` | +| Azure cloud version | Token audience value | +|--|--| +| Azure public cloud | `https://monitor.azure.com` | +| Microsoft Azure operated by 21Vianet cloud | `https://monitor.azure.cn` | +| Azure US Government cloud | `https://monitor.azure.us` | If you're using sovereign clouds, you can find the audience information in the connection string as well. 
The connection string follows this structure: Using Fiddler, you might notice the response `HTTP/1.1 403 Forbidden - provided The issue could be due to: -- Creating the resource with a system-assigned managed identity or associating a user-assigned identity without adding the Monitoring Metrics Publisher role to it.-- Using the correct credentials for access tokens but linking them to the wrong Application Insights resource. Ensure your resource (virtual machine or app service) or user-assigned identity has Monitoring Metrics Publisher roles in your Application Insights resource.+* Creating the resource with a system-assigned managed identity or associating a user-assigned identity without adding the Monitoring Metrics Publisher role to it. +* Using the correct credentials for access tokens but linking them to the wrong Application Insights resource. Ensure your resource (virtual machine or app service) or user-assigned identity has Monitoring Metrics Publisher roles in your Application Insights resource. #### Invalid Client ID This error usually occurs when the provided credentials don't grant access to in ## Next steps -- [Monitor your telemetry in the portal](overview-dashboard.md)-- [Diagnose with Live Metrics Stream](live-stream.md)-- [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md)+* [Monitor your telemetry in the portal](overview-dashboard.md) +* [Diagnose with Live Metrics Stream](live-stream.md) +* [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md) |
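For the Python distro, which isn't shown in the fragments above, the same Microsoft Entra ID setup can be expressed in a few lines. This is a hedged sketch assuming the `azure-monitor-opentelemetry` and `azure-identity` packages and an identity that already holds the Monitoring Metrics Publisher role on the Application Insights resource; the client ID is a placeholder.

```python
from azure.identity import ManagedIdentityCredential
from azure.monitor.opentelemetry import configure_azure_monitor

# User-assigned managed identity; omit client_id to use the system-assigned identity.
credential = ManagedIdentityCredential(client_id="<client-id-of-user-assigned-identity>")  # placeholder

# The connection string is read from APPLICATIONINSIGHTS_CONNECTION_STRING unless passed
# explicitly; telemetry is then ingested with Entra ID authentication.
configure_azure_monitor(credential=credential)
```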
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 07/29/2024 Last updated : 08/27/2024 ms.devlang: csharp -# ms.devlang: csharp, javascript, typescript, python +# ms.devlang: csharp, java, javascript, typescript, python # Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications -This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the *Azure Monitor OpenTelemetry Distro*. The Azure Monitor OpenTelemetry Distro provides an [OpenTelemetry distribution](https://opentelemetry.io/docs/concepts/distributions/#what-is-a-distribution) that includes support for features specific to Azure Monitor. The Distro enables [automatic](opentelemetry-add-modify.md#automatic-data-collection) telemetry by including OpenTelemetry instrumentation libraries for collecting traces, metrics, logs, and exceptions, and allows collecting [custom](opentelemetry-add-modify.md#collect-custom-telemetry) telemetry. You can also use the [Live Metrics](live-stream.md) feature included in the Distro to monitor and collect more telemetry from live, in-production web applications. For more information about the advantages of using the Azure Monitor OpenTelemetry Distro, see [Why should I use the "Azure Monitor OpenTelemetry Distro"?](#why-should-i-use-the-azure-monitor-opentelemetry-distro). - -To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](#frequently-asked-questions). +This article describes how to enable and configure OpenTelemetry-based data collection within [Application Insights](app-insights-overview.md#application-insights-overview). The Azure Monitor OpenTelemetry Distro: -## OpenTelemetry Release Status +* Provides an [OpenTelemetry distribution](https://opentelemetry.io/docs/concepts/distributions/#what-is-a-distribution) which includes support for features specific to Azure Monitor, +* Enables [automatic](opentelemetry-add-modify.md#automatic-data-collection) telemetry by including OpenTelemetry instrumentation libraries for collecting traces, metrics, logs, and exceptions, +* Allows collecting [custom](opentelemetry-add-modify.md#collect-custom-telemetry) telemetry, and +* Supports [Live Metrics](live-stream.md) to monitor and collect more telemetry from live, in-production web applications. -OpenTelemetry offerings are available for .NET, Node.js, Python, and Java applications. +For more information about the advantages of using the Azure Monitor OpenTelemetry Distro, see [Why should I use the Azure Monitor OpenTelemetry Distro](#why-should-i-use-the-azure-monitor-opentelemetry-distro). -> [!NOTE] -> - For a feature-by-feature release status, see the [FAQ](#whats-the-current-release-state-of-features-within-the-azure-monitor-opentelemetry-distro). -> - The second tab of this article covers all .NET scenarios, including classic ASP.NET, console apps, Windows Forms (WinForms), etc. 
+To learn more about collecting data using OpenTelemetry, check out [Data Collection Basics](opentelemetry-overview.md) or the [OpenTelemetry FAQ](#frequently-asked-questions). ++### OpenTelemetry release status ++OpenTelemetry offerings are available for .NET, Node.js, Python, and Java applications. For a feature-by-feature release status, see the [FAQ](#whats-the-current-release-state-of-features-within-the-azure-monitor-opentelemetry-distro). ++## Enable OpenTelemetry with Application Insights ++Follow the steps in this section to instrument your application with OpenTelemetry. Select a tab for language-specific instructions. -## Get started +> [!NOTE] +> .NET covers multiple scenarios, including classic ASP.NET, console apps, Windows Forms (WinForms), and more. -Follow the steps in this section to instrument your application with OpenTelemetry. ### Prerequisites -- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)-- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)+> [!div class="checklist"] +> * Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/) +> * Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource) <!NOTE TO CONTRIBUTORS: PLEASE DO NOT SEPARATE OUT JAVASCRIPT AND TYPESCRIPT INTO DIFFERENT TABS.> #### [ASP.NET Core](#tab/aspnetcore) -- [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET](https://dotnet.microsoft.com/download/dotnet)+> [!div class="checklist"] +> * [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET](https://dotnet.microsoft.com/download/dotnet) > [!Tip] > If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-dotnet-migrate.md). -### [.NET](#tab/net) +#### [.NET](#tab/net) -- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2+> [!div class="checklist"] +> * Application using a [supported version](https://dotnet.microsoft.com/platform/support/policy) of [.NET](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) 4.6.2 and later. > [!Tip] > If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-dotnet-migrate.md). -### [Java](#tab/java) +#### [Java](#tab/java) -- A Java application using Java 8++> [!div class="checklist"] +> * A Java application using Java 8+ #### [Java native](#tab/java-native) -- A Java application using GraalVM 17++> [!div class="checklist"] +> * A Java application using GraalVM 17+ #### [Node.js](#tab/nodejs) -> [!NOTE] -> If you rely on any properties in the [not-supported table](https://github.com/microsoft/ApplicationInsights-node.js/blob/bet#ApplicationInsights-Shim-Unsupported-Properties), use the distro, and we'll provide a migration guide soon. If not, the App Insights shim is your easiest path forward when it's out of beta. 
+> [!div class="checklist"] +> * Application using an officially [supported version](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) of Node.js runtime:<br>• [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes)<br>• [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) -- Application using an officially [supported version](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) of Node.js runtime:- - [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) - - [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments) +> [!NOTE] +> If you don't rely on any properties listed in the [not-supported table](https://github.com/microsoft/ApplicationInsights-node.js/blob/bet#ApplicationInsights-Shim-Unsupported-Properties), the *ApplicationInsights shim* will be your easiest path forward once out of beta. +> +> If you rely on any of those properties, proceed with the Azure Monitor OpenTelemetry Distro. We'll provide a migration guide soon. > [!Tip] > If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-nodejs-migrate.md). -### [Python](#tab/python) +#### [Python](#tab/python) -- Python Application using Python 3.8++> [!div class="checklist"] +> * Python Application using Python 3.8+ > [!Tip] > If you're migrating from OpenCensus, see our [migration documentation](./opentelemetry-python-opencensus-migrate.md). Download the [applicationinsights-agent-3.5.4.jar](https://github.com/microsoft/ #### [Java native](#tab/java-native) -For Spring Boot native applications: +For *Spring Boot* native applications: + * [Import the OpenTelemetry Bills of Materials (BOM)](https://opentelemetry.io/docs/zero-code/java/spring-boot-starter/getting-started/). * Add the [Spring Cloud Azure Starter Monitor](https://central.sonatype.com/artifact/com.azure.spring/spring-cloud-azure-starter-monitor) dependency. * Follow [these instructions](/azure//developer/java/spring-framework/developer-guide-overview#configuring-spring-boot-3) for the Azure SDK JAR (Java Archive) files. -For Quarkus native applications: +For *Quarkus* native applications: + * Add the [Quarkus OpenTelemetry Exporter for Azure](https://mvnrepository.com/artifact/io.quarkiverse.opentelemetry.exporter/quarkus-opentelemetry-exporter-azure) dependency. 
#### [Node.js](#tab/nodejs) -Install these packages: --- [@azure/monitor-opentelemetry](https://www.npmjs.com/package/@azure/monitor-opentelemetry)+Install the latest [@azure/monitor-opentelemetry](https://www.npmjs.com/package/@azure/monitor-opentelemetry) package: ```sh npm install @azure/monitor-opentelemetry npm install @azure/monitor-opentelemetry The following packages are also used for some specific scenarios described later in this article: -- [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api)-- [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics)-- [@opentelemetry/resources](https://www.npmjs.com/package/@opentelemetry/resources)-- [@opentelemetry/semantic-conventions](https://www.npmjs.com/package/@opentelemetry/semantic-conventions)-- [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base)+* [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api) +* [@opentelemetry/sdk-metrics](https://www.npmjs.com/package/@opentelemetry/sdk-metrics) +* [@opentelemetry/resources](https://www.npmjs.com/package/@opentelemetry/resources) +* [@opentelemetry/semantic-conventions](https://www.npmjs.com/package/@opentelemetry/semantic-conventions) +* [@opentelemetry/sdk-trace-base](https://www.npmjs.com/package/@opentelemetry/sdk-trace-base) ```sh npm install @opentelemetry/api pip install azure-monitor-opentelemetry -### Enable Azure Monitor Application Insights --To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String." The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you. --#### Modify your Application +### Modify your application -##### [ASP.NET Core](#tab/aspnetcore) +#### [ASP.NET Core](#tab/aspnetcore) -Add `UseAzureMonitor()` to your application startup, located in your `program.cs` class. +Import the `Azure.Monitor.OpenTelemetry.AspNetCore` namespace, add OpenTelemetry, and configure it to use Azure Monitor in your `program.cs` class: ```csharp // Import the Azure.Monitor.OpenTelemetry.AspNetCore namespace. using Azure.Monitor.OpenTelemetry.AspNetCore; -// Create a new WebApplicationBuilder instance. var builder = WebApplication.CreateBuilder(args); // Add OpenTelemetry and configure it to use Azure Monitor. builder.Services.AddOpenTelemetry().UseAzureMonitor(); -// Build the application. var app = builder.Build(); -// Run the application. app.Run(); ``` -##### [.NET](#tab/net) +#### [.NET](#tab/net) -Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. Depending on your version of .NET, it is in either your `startup.cs` or `program.cs` class. +Add the Azure Monitor Exporter to each OpenTelemetry signal in the `program.cs` class: ```csharp // Create a new tracer provider builder and add an Azure Monitor trace exporter to the tracer provider builder. var loggerFactory = LoggerFactory.Create(builder => > [!NOTE] > For more information, see the [getting-started tutorial for OpenTelemetry .NET](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main#getting-started) -##### [Java](#tab/java) +#### [Java](#tab/java) -Java autoinstrumentation is enabled through configuration changes; no code changes are required. +Autoinstrumentation is enabled through configuration changes. 
*No code changes are required.* Point the Java virtual machine (JVM) to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.5.4.jar"` to your application's JVM args. -> [!TIP] +> [!NOTE] > Sampling is enabled by default at a rate of 5 requests per second, aiding in cost management. Telemetry data may be missing in scenarios exceeding this rate. For more information on modifying sampling configuration, see [sampling overrides](./java-standalone-sampling-overrides.md). > [!TIP] Point the Java virtual machine (JVM) to the jar file by adding `-javaagent:"path > [!TIP] > If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md). -##### [Java native](#tab/java-native) +#### [Java native](#tab/java-native) -Several automatic instrumentations are enabled through configuration changes; no code changes are required +Autoinstrumentation is enabled through configuration changes. *No code changes are required.* -##### [Node.js](#tab/nodejs) +#### [Node.js](#tab/nodejs) ```typescript // Import the `useAzureMonitor()` function from the `@azure/monitor-opentelemetry` package. const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); useAzureMonitor(); ``` -##### [Python](#tab/python) +#### [Python](#tab/python) ```python # Import the `configure_azure_monitor()` function from the configure_azure_monitor() -#### Copy the Connection String from your Application Insights Resource +### Copy the connection string from your Application Insights resource -> [!TIP] -> If you don't already have one, now is a great time to [Create an Application Insights Resource](create-workspace-resource.md#create-a-workspace-based-resource). Here's when we recommend you [create a new Application Insights Resource versus use an existing one](create-workspace-resource.md#when-to-use-a-single-application-insights-resource). +The connection string is unique and specifies where the Azure Monitor OpenTelemetry Distro sends the telemetry it collects. -To copy your unique Connection String: +> [!TIP] +> If you don't already have an Application Insights resource, create one following [this guide](create-workspace-resource.md#create-a-workspace-based-resource). We recommend you create a new resource rather than [using an existing one](create-workspace-resource.md#when-to-use-a-single-application-insights-resource). +To copy the connection string: 1. Go to the **Overview** pane of your Application Insights resource.-2. Find your **Connection String**. +2. Find your **connection string**. 3. Hover over the connection string and select the **Copy to clipboard** icon. -#### Paste the Connection String in your environment --To paste your Connection String, select from the following options: -- A. Set via Environment Variable (Recommended) - - Replace `<Your Connection String>` in the following command with *your* unique connection string. -- ```console - APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> - ``` - B. 
Set via Configuration File - Java Only (Recommended) - - Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.5.4.jar` with the following content: +### Paste the connection string in your environment - ```json - { - "connectionString": "<Your Connection String>" - } - ``` +To paste your connection string, select from the following options: - Replace `<Your Connection String>` in the preceding JSON with *your* unique connection string. +> [!IMPORTANT] +> We recommend setting the connection string through code only in local development and test environments. +> +> For production, use an environment variable or configuration file (Java only). ++* **Set via environment variable** - *recommended* ++ Replace `<Your connection string>` in the following command with your connection string. + + ```console + APPLICATIONINSIGHTS_CONNECTION_STRING=<Your connection string> + ``` ++* **Set via configuration file** - *Java only* + + Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.5.4.jar` with the following content: + + ```json + { + "connectionString": "<Your connection string>" + } + ``` + + Replace `<Your connection string>` in the preceding JSON with *your* unique connection string. - C. Set via Code - ASP.NET Core, Node.js, and Python Only (Not recommended) +* **Set via code** - *ASP.NET Core, Node.js, and Python only* - See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for an example of setting Connection String via code. + See [connection string configuration](opentelemetry-configuration.md#connection-string) for an example of setting connection string via code. - > [!NOTE] - > If you set the connection string in more than one place, we adhere to the following precedence: - > - > 1. Code - > 2. Environment Variable - > 3. Configuration File +> [!NOTE] +> If you set the connection string in multiple places, the following order of precedence applies: +> 1. Code +> 2. Environment variable +> 3. Configuration file -#### Confirm data is flowing +### Confirm data is flowing -Run your application and open your **Application Insights Resource** tab in the Azure portal. It might take a few minutes for data to show up in the portal. +Run your application, then open Application Insights in the Azure portal. It might take a few minutes for data to show up. :::image type="content" source="media/opentelemetry/server-requests.png" alt-text="Screenshot of the Application Insights Overview tab with server requests and server response time highlighted."::: -Application Insights is now enabled for your application. All the following steps are optional and allow for further customization. +Application Insights is now enabled for your application. The following steps are optional and allow for further customization. > [!IMPORTANT] > If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](opentelemetry-configuration.md#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map. As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). 
-## Samples +## Sample applications -Azure Monitor OpenTelemetry sample applications are available for all supported languages. --### [ASP.NET Core](#tab/aspnetcore) --- [ASP.NET Core sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo)--### [.NET](#tab/net) +Azure Monitor OpenTelemetry sample applications are available for all supported languages: -- [NET sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo)--### [Java](#tab/java) --- [Java sample apps](https://github.com/Azure-Samples/ApplicationInsights-Java-Samples)--### [Java native](#tab/java-native) --- [Java GraalVM native sample apps](https://github.com/Azure-Samples/java-native-telemetry)--### [Node.js](#tab/nodejs) --- [Node.js sample app](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js)--### [Python](#tab/python) --- [Python sample apps](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples)--+* [ASP.NET Core sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo) +* [NET sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo) +* [Java sample apps](https://github.com/Azure-Samples/ApplicationInsights-Java-Samples) +* [Java GraalVM native sample apps](https://github.com/Azure-Samples/java-native-telemetry) +* [Node.js sample app](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js) +* [Python sample apps](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples) ## Next steps ### [ASP.NET Core](#tab/aspnetcore) -- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md).-- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).-- To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page.-- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).+* For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md). +* To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md). 
+* To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore). +* To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page. +* To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo). +* To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). +* To enable usage experiences, [enable web or browser user monitoring](javascript.md). ### [.NET](#tab/net) -- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md).-- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).-- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page.-- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).+* For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md). +* To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md). +* To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter). +* To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page. +* To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo). +* To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). +* To enable usage experiences, [enable web or browser user monitoring](javascript.md). 
### [Java](#tab/java) -- See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry.-- Review [Java autoinstrumentation configuration options](java-standalone-config.md).-- Review the source code in the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).-- Learn more about OpenTelemetry and its community in the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).-- Enable usage experiences by seeing [Enable web or browser user monitoring](javascript.md).-- Review the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.+* See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry. +* Review [Java autoinstrumentation configuration options](java-standalone-config.md). +* Review the source code in the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java). +* Learn more about OpenTelemetry and its community in the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). +* Enable usage experiences by seeing [Enable web or browser user monitoring](javascript.md). +* Review the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub. ### [Java native](#tab/java-native)-- See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry.-- Review the source code in the [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor) and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure).-- Learn more about OpenTelemetry and its community in the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).-- Learn more features for Spring Boot native image applications in [OpenTelemetry SpringBoot starter](https://opentelemetry.io/docs/zero-code/java/spring-boot-starter/.)-- Learn more features for Quarkus native applications in [Quarkus OpenTelemetry Exporter for Azure](https://quarkus.io/guides/opentelemetry).-- Review the [release notes](https://github.com/Azure/azure-sdk-for-jav) on GitHub.+* See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry. +* Review the source code in the [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor) and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure). +* Learn more about OpenTelemetry and its community in the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). +* Learn more features for Spring Boot native image applications in [OpenTelemetry SpringBoot starter](https://opentelemetry.io/docs/zero-code/java/spring-boot-starter/.) 
+* Learn more features for Quarkus native applications in [Quarkus OpenTelemetry Exporter for Azure](https://quarkus.io/guides/opentelemetry). +* Review the [release notes](https://github.com/Azure/azure-sdk-for-jav) on GitHub. ### [Node.js](#tab/nodejs) -- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md).-- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).-- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).+* For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md). +* To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry). +* To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page. +* To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). +* To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). +* To enable usage experiences, [enable web or browser user monitoring](javascript.md). 
### [Python](#tab/python) -- See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry.-- Review the source code and extra documentation in the [Azure Monitor Distro GitHub repository](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md).-- See extra samples and use cases in [Azure Monitor Distro samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples).-- Review the [changelog](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/CHANGELOG.md) on GitHub.-- Install the PyPI package, check for updates, or view release notes on the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page.-- Become more familiar with Azure Monitor Application Insights and OpenTelemetry in the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python).-- Learn more about OpenTelemetry and its community in the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python).-- See available OpenTelemetry instrumentations and components in the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).-- Enable usage experiences by [enabling web or browser user monitoring](javascript.md).+* See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry. +* Review the source code and extra documentation in the [Azure Monitor Distro GitHub repository](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md). +* See extra samples and use cases in [Azure Monitor Distro samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples). +* Review the [changelog](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/CHANGELOG.md) on GitHub. +* Install the PyPI package, check for updates, or view release notes on the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page. +* Become more familiar with Azure Monitor Application Insights and OpenTelemetry in the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python). +* Learn more about OpenTelemetry and its community in the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python). +* See available OpenTelemetry instrumentations and components in the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib). +* Enable usage experiences by [enabling web or browser user monitoring](javascript.md). |
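As a quick end-to-end check of the Python path described in the row above, the following hedged sketch sets the connection string through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable, enables the distro, and emits a single test span; the connection string value is a placeholder.

```python
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Placeholder connection string; in real deployments set this variable outside the code.
os.environ.setdefault(
    "APPLICATIONINSIGHTS_CONNECTION_STRING",
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://example.applicationinsights.azure.com/",
)

configure_azure_monitor()  # reads the connection string from the environment

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("smoke-test"):
    print("Span recorded; data should appear on the Application Insights overview pane after a few minutes.")
```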
azure-monitor | Opentelemetry Python Opencensus Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md | ms.devlang: python + # Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro > [!NOTE]-> - The [OpenCensus "How to Migrate to OpenTelemetry" blog](https://opentelemetry.io/blog/2023/sunsetting-opencensus/#how-to-migrate-to-opentelemetry) is not applicable to Azure Monitor users. -> - The [OpenTelemetry OpenCensus shim](https://pypi.org/project/opentelemetry-opencensus-shim/) is not recommended or supported by Microsoft. -> - The following outlines the only migration plan for Azure Monitor customers. +> * The [OpenCensus "How to Migrate to OpenTelemetry" blog](https://opentelemetry.io/blog/2023/sunsetting-opencensus/#how-to-migrate-to-opentelemetry) is not applicable to Azure Monitor users. +> * The [OpenTelemetry OpenCensus shim](https://pypi.org/project/opentelemetry-opencensus-shim/) is not recommended or supported by Microsoft. +> * The following outlines the only migration plan for Azure Monitor customers. ## Step 1: Uninstall OpenCensus libraries from opencensus.ext.azure.log_exporter import AzureLogHandler The following documentation provides prerequisite knowledge of the OpenTelemetry Python APIs/SDKs. +* OpenTelemetry Python [documentation](https://opentelemetry-python.readthedocs.io/en/stable/) +* Azure Monitor Distro documentation on [configuration](./opentelemetry-configuration.md?tabs=python) and [telemetry](./opentelemetry-add-modify.md?tabs=python) > [!NOTE] > OpenTelemetry Python and OpenCensus Python have different API surfaces, autocollection capabilities, and onboarding instructions. ## Step 4: Set up the Azure Monitor OpenTelemetry Distro -Follow the [getting started](./opentelemetry-enable.md?tabs=python#get-started) +Follow the [getting started](./opentelemetry-enable.md?tabs=python#enable-opentelemetry-with-application-insights) page to onboard onto the Azure Monitor OpenTelemetry Distro. ## Changes and limitations OpenTelemetry's Python-based monitoring solutions only support Python 3.7 and gr OpenCensus Python provided some [configuration](https://github.com/census-instrumentation/opencensus-python#customization) options related to the collection and exporting of telemetry. You achieve the same configurations, and more, by using the [OpenTelemetry Python](https://opentelemetry-python.readthedocs.io/en/stable/) APIs and SDK. The OpenTelemetry Azure monitor Python Distro is more of a one-stop-shop for the most common monitoring needs for your Python applications. Since the Distro encapsulates the OpenTelemetry APIs/SDk, some configuration for more uncommon use cases may not currently be supported for the Distro. Instead, you can opt to onboard onto the [Azure monitor OpenTelemetry exporter](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry-exporter), which, with the OpenTelemetry APIs/SDKs, should be able to fit your monitoring needs. 
Some of these configurations include: -- Custom propagators-- Custom samplers-- Adding extra span/log processors/metrics readers+* Custom propagators +* Custom samplers +* Adding extra span/log processors/metrics readers -### Cohesion with Azure functions +### Cohesion with Azure Functions In order to provide distributed tracing capabilities for Python applications that call other Python applications within an Azure function, the package [opencensus-extension-azure-functions](https://pypi.org/project/opencensus-extension-azure-functions/) was provided to allow for a connected distributed graph. The OpenCensus SDK offered ways to collect and export telemetry through OpenCens As for the other OpenTelemetry Python [instrumentations](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation) that aren't included in this list, users may still manually instrument with them. However, it's important to note that stability and behavior aren't guaranteed or supported in those cases. Therefore, use them at your own discretion. - If you would like to suggest a community instrumentation library us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). For exporters, the Azure Monitor OpenTelemetry distro comes bundled with the [Azure Monitor OpenTelemetry exporter](https://pypi.org/project/azure-monitor-opentelemetry-exporter/). If you would like to use other exporters as well, you can use them with the distro, like in this [example](./opentelemetry-configuration.md?tabs=python#enable-the-otlp-exporter). +If you would like to suggest a community instrumentation library us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). For exporters, the Azure Monitor OpenTelemetry distro comes bundled with the [Azure Monitor OpenTelemetry exporter](https://pypi.org/project/azure-monitor-opentelemetry-exporter/). If you would like to use other exporters as well, you can use them with the distro, like in this [example](./opentelemetry-configuration.md?tabs=python#enable-the-otlp-exporter). ### TelemetryProcessors |
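As one hedged illustration of the customization points listed above (adding extra span processors, the OpenTelemetry counterpart of OpenCensus telemetry processors), the following sketch assumes the distro is already installed and configured as in Step 4; the attribute name is an arbitrary example, not part of the migration guide:

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace
from opentelemetry.sdk.trace import SpanProcessor


class EnrichingSpanProcessor(SpanProcessor):
    """Adds an attribute to every span before it is exported."""

    def on_start(self, span, parent_context=None):
        span.set_attribute("custom.environment", "staging")  # arbitrary example attribute


configure_azure_monitor(connection_string="<your-connection-string>")  # placeholder

# Register the processor on the SDK tracer provider set up by the distro.
trace.get_tracer_provider().add_span_processor(EnrichingSpanProcessor())

with trace.get_tracer(__name__).start_as_current_span("migrated-operation"):
    pass
```

Because the processor runs on span start, the attribute is present on every span the distro exports, which is roughly where OpenCensus telemetry processors would have hooked in.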
azure-monitor | Azure Monitor Operations Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md | The following table describes the different features and strategies that are ava |:|:| | Dual-homed agents | SCOM uses the Microsoft Management Agent (MMA), which is the same as [Log Analytics agent](agents/log-analytics-agent.md) used by Azure Monitor. You can configure this agent to have a single machine connect to both SCOM and Azure Monitor simultaneously. This configuration does require that your Azure VMs have a connection to your on-premises management servers.<br><br>The [Log Analytics agent](agents/log-analytics-agent.md) has been replaced with the [Azure Monitor agent](agents/agents-overview.md), which provides significant advantages including simpler management and better control over data collection. The two agents can coexist on the same machine allowing you to connect to both Azure Monitor and SCOM. This configuration is a better option than dual-homing the legacy agent because of the significant [advantages of the Azure Monitor agent](agents/agents-overview.md#benefits). | | Connected management group | [Connect your SCOM management group to Azure Monitor](agents/om-agents.md) to forward data collected from your SCOM agents to Azure Monitor. This is similar to using dual-homed agents, but doesn't require each agent to be configured to connect to Azure Monitor. This strategy requires the legacy agent, so you can't specify monitoring with data collection rules. You also can't use VM insights unless you connect each VM directly to Azure Monitor. |-| SCOM Managed instance | [SCOM managed instance (preview)](vm/scom-managed-instance-overview.md) is a full implementation of SCOM in Azure allowing you to continue running the same management packs that you run in your on-premises SCOM environment. You can continue to use the same Operations console for analyzing your health and alerts and can also view alerts in Azure Monitor and analyze SCOM data in Grafana.<br><br>SCOM MI is similar to maintaining your existing SCOM environment and dual-homing agents, although you can consolidate your monitoring configuration in Azure and retire your on-premises components such as database and management servers. Agents from Azure VMs can connect to the SCOM managed instance in Azure rather than connecting to management servers in your own data center. | +| SCOM Managed instance | [SCOM managed instance](vm/scom-managed-instance-overview.md) is a full implementation of SCOM in Azure allowing you to continue running the same management packs that you run in your on-premises SCOM environment. You can continue to use the same Operations console for analyzing your health and alerts and can also view alerts in Azure Monitor and analyze SCOM data in Grafana.<br><br>SCOM MI is similar to maintaining your existing SCOM environment and dual-homing agents, although you can consolidate your monitoring configuration in Azure and retire your on-premises components such as database and management servers. Agents from Azure VMs can connect to the SCOM managed instance in Azure rather than connecting to management servers in your own data center. | | Azure management pack | The [Azure management pack](https://www.microsoft.com/download/details.aspx?id=50013) allows Operations Manager to discover Azure resources and monitor their health based on a particular set of monitoring scenarios. 
This management pack does require you to perform extra configuration for each resource in Azure. It may be helpful, though, to provide some visibility of your Azure resources in the Operations console until you evolve your business processes to focus on Azure Monitor. | ## Monitor business applications |
azure-monitor | Best Practices Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md | This article describes built-in features for visualizing and analyzing collected This table describes Azure Monitor features that provide analysis of collected data without any configuration. -|Component |Description | Required training and/or configuration| +| Component | Description | Required training and/or configuration | |||--|-|Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. | -|[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. | -|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log search alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. | +| Overview page |Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. | This page is based on platform metrics that are collected automatically. No configuration is required. | +| [Metrics Explorer](essentials/metrics-getting-started.md)| You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. | ΓÇó Once data collection is configured, no other configuration is required.<br>ΓÇó Platform metrics for Azure resources are automatically available.<br>ΓÇó Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>ΓÇó Application metrics are available after Application Insights is configured. | +| [Log Analytics](logs/log-analytics-overview.md) | With Log Analytics, you can create log queries to interactively work with log data and create log search alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. 
| ## Built-in visualization tools ### Azure workbooks - [Azure Workbooks](./visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources. Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users. +[Azure workbooks](./visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into the most complete set of data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources. Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab in Azure Monitor, create custom workbooks, or leverage Azure GitHub community templates to meet the requirements of your different users. :::image type="content" source="media/visualizations/workbook.png" lightbox="media/visualizations/workbook.png" alt-text="Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page."::: This table describes Azure Monitor features that provide analysis of collected d Here's a video about how to create dashboards: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4AslH]+ ### Grafana [Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for: -- Detecting, isolating, and triaging operational incidents.-- Combining visualizations of Azure and non-Azure data sources. These sources include on-premises, third-party tools, and data stores in other clouds.+* Detecting, isolating, and triaging operational incidents. +* Combining visualizations of Azure and non-Azure data sources. These sources include on-premises, third-party tools, and data stores in other clouds. -Grafana has popular plug-ins and dashboard templates for application performance monitoring(APM) tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass. +Grafana has popular plug-ins and dashboard templates for application performance monitoring (APM) tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass. ++Grafana allows you to leverage the extensive flexibility included for combining data queries, query results, and performing open-ended client-side data processing, as well as using open-source community dashboards. 
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize/grafana-plugin.md) to visualize your Azure Monitor metrics and logs. [Azure Managed Grafana](../managed-grafan) to get started. - The [out-of-the-box Grafana Azure alerts dashboard](https://grafana.com/grafana/dashboards/15128-azure-alert-consumption/) allows you to view and consume Azure monitor alerts for Azure Monitor, your Azure datasources, and Azure Monitor managed service for Prometheus.++* For more information on define Azure Monitor alerts, see [Create a new alert rule](alerts/alerts-create-new-alert-rule.md). +* For Azure Monitor managed service for Prometheus, define your alerts using [Prometheus alert rules](alerts/prometheus-alerts.md) that are created as part of a [Prometheus rule group](essentials/prometheus-rule-groups.md), applied on the Azure Monitor workspace. :::image type="content" source="media/visualizations/grafana.png" lightbox="media/visualizations/grafana.png" alt-text="Screenshot that shows Grafana visualizations."::: ### Power BI -[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset. Then you can take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices. -<!-- convertborder later --> +[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI (Key Performance Indicator) trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset, which allows you to take advantage of features such as combining data from different sources and sharing reports on the web and mobile devices. + :::image type="content" source="media/visualizations/power-bi.png" lightbox="media/visualizations/power-bi.png" alt-text="Screenshot that shows an example Power BI report for IT operations." border="false"::: ## Choose the right visualization tool -|Visualization tool|Benefits|Recommended uses| -|||| -|[Azure Workbooks](./visualize/workbooks-overview.md)|Native Azure dashboarding platform |Use as a tool for engineering and technical teams to visualize and investigate scenarios. 
| -| |Autorefresh |Use as a reporting tool for App developers, Cloud engineers, and other technical personnel| -| |Out-of-the-box and public GitHub templates and reports | | -| |Parameters allow dynamic real time updates | | -| |Can provide high-level summaries that allow you to select any item for more in-depth data using the selected value in the query| | -| |Can query more sources than other visualizations| | -| |Fully customizable | | -| |Designed for collaborating and troubleshooting | | -|[Azure dashboards](../azure-portal/azure-portal-dashboards.md)|Native Azure dashboarding platform |For Azure/Arc exclusive environments | -| |No added cost | | -| |Supports at scale deployments | | -| |Can combine a metrics graph and the results of a log query with operational data for related services | | -| |Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md) | | -|[Azure Managed Grafana](../managed-grafan)|Multi-platform, multicloud single pane of glass visualizations |For users without Azure access | -| |Seamless integration with Azure |Use for external visualization experiences, especially for RAG type dashboards in SOC and NOC environments | -| |Can combine time-series and event data in a single visualization panel |Cloud Native CNCF monitoring | -| |Can create dynamic dashboards based on user selection of dynamic variables |Multicloud environments | -| |Prometheus support|Overall Statuses, Up/Down, and high level trend reports for management or executive level users | -| |Integrates with third party monitoring tools|Use to show status of environments, apps, security, and network for continuous display in Network Operations Center (NOC) dashboards | -| |Out-of-the-box plugins from most monitoring tools and platforms | | -| |Dashboard templates with focus on operations | | -| |Can create a dashboard from a community-created and community-supported template | | -| |Can create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises | | -|[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/)|Rich visualizations |Use for external visualizations aimed at management and executive levels | -| |Supports BI analytics with extensive slicing and dicing |Use to help design business centric KPI dashboards for long term trends | -| |Integrate data from multiple data sources| | -| |Results cached in a cube for better performance| | -| |Extensive interactivity, including zoom-in and cross-filtering| | -| |Share easily throughout your organization| | -+We recommend using Azure Managed Grafana for data visualizations and dashboards in cloud-native scenarios, such as Kubernetes and Azure Kubernetes Service (AKS), as well as multicloud, open source software, and third-party integrations. For other Azure scenarios, including Azure hybrid environments with Azure Arc, we recommend Azure workbooks. 
++#### When to use Azure Managed Grafana ++* Cloud native environments monitored with Prometheus and CNCF tools +* Multi-cloud and multi-platform environments +* Multi-tenancy and portability support +* Interoperability with open-source and third-party tools +* Sharing dashboards outside of the Azure portal ++#### When to use Azure workbooks ++* Azure managed hybrid and edge environments +* Integrations with Azure actions and automation +* Creating custom reports based on Azure Monitor insights ++### Benefits and use cases ++| Visualization tool | Benefits | Recommended uses | +|--|-|| +| [**Azure workbooks**](./visualize/workbooks-overview.md) | | | +| | Native Azure dashboarding platform | Use as a tool for engineering and technical teams to visualize and investigate scenarios. | +| | Autorefresh | Use as a reporting tool for App developers, Cloud engineers, and other technical personnel | +| | Out-of-the-box and public GitHub templates and reports | | +| | Parameters allow dynamic real time updates | | +| | Can provide high-level summaries that allow you to select any item for more in-depth data using the selected value in the query | | +| | Can query more sources than other visualizations | | +| | Fully customizable | | +| | Designed for collaborating and troubleshooting | | +| [**Azure dashboards**](../azure-portal/azure-portal-dashboards.md) | | | +| | Native Azure dashboarding platform | For Azure/Arc exclusive environments | +| | No added cost | | +| | Supports at scale deployments | | +| | Can combine a metrics graph and the results of a log query with operational data for related services | | +| | Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md) | | +| [**Azure Managed Grafana**](../managed-grafan) | | | +| | Multi-platform, multicloud single pane of glass visualizations | For users without Azure access | +| | Seamless integration with Azure | Use for external visualization experiences, especially for RAG type dashboards in SOC and NOC environments | +| | Can combine time-series and event data in a single visualization panel | Cloud Native CNCF monitoring | +| | Can create dynamic dashboards based on user selection of dynamic variables | Multicloud environments | +| | Prometheus support|Overall Statuses, Up/Down, and high level trend reports for management or executive level users | +| | Integrates with third party monitoring tools|Use to show status of environments, apps, security, and network for continuous display in Network Operations Center (NOC) dashboards | +| | Out-of-the-box plugins from most monitoring tools and platforms | | +| | Dashboard templates with focus on operations | | +| | Can create a dashboard from a community-created and community-supported template | | +| | Can create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises | | +| [**Power BI**](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) | | | +| | Rich visualizations | Use for external visualizations aimed at management and executive levels | +| | Supports BI analytics with extensive slicing and dicing | Use to help design business centric KPI dashboards for long term trends | +| | Integrate data from multiple data sources | | +| | Results cached in a cube for better performance | | +| | Extensive interactivity, including zoom-in and cross-filtering | | +| | Share easily throughout your organization | | ## Other options+ Some Azure 
Monitor partners provide visualization functionality. An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions might have an extra cost. You can also build your own custom websites and applications with metric and log data in Azure Monitor by using the REST API, as sketched below. The REST API gives you flexibility in UI, visualization, interactivity, and features. ## Next steps-- [Deploy Azure Monitor: Alerts and automated actions](best-practices-alerts.md)-- [Optimize costs in Azure Monitor](best-practices-cost.md)++* [Deploy Azure Monitor: Alerts and automated actions](best-practices-alerts.md) +* [Optimize costs in Azure Monitor](best-practices-cost.md) |
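The custom-application route mentioned above (building your own experiences on Azure Monitor metric and log data) can be sketched with the `azure-monitor-query` Python client; the workspace ID is a placeholder and any KQL query of your own can be substituted:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder Log Analytics workspace ID

# Run a KQL query over the last day of data.
response = client.query_workspace(
    workspace_id,
    "AzureActivity | summarize count() by OperationNameValue | top 5 by count_",
    timespan=timedelta(days=1),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
```

The same query results could then feed a custom website, a Power BI dataset, or any other downstream visualization.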
azure-monitor | Container Insights Data Collection Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-configure.md | Use the following procedure to configure and deploy your ConfigMap configuration 1. Create a ConfigMap by running the following kubectl command: ```azurecli+ kubectl config set-context <cluster-name> kubectl apply -f <configmap_yaml_file.yaml> # Example: + kubectl config set-context my-cluster kubectl apply -f container-azm-ms-agentconfig.yaml ``` Use the following procedure to configure and deploy your ConfigMap configuration To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod. ```azurecli-kubectl logs ama-logs-fdf58 -n kube-system +kubectl logs ama-logs-fdf58 -n kube-system -c ama-logs ``` If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following: |
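If you prefer to script the verification above rather than run `kubectl` by hand, a rough equivalent with the official `kubernetes` Python client is sketched below; this is an assumption-laden sketch, and the `ama-logs` pod names on your cluster will differ from the article's example:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
v1 = client.CoreV1Api()

# Find the Azure Monitor Agent pods in kube-system.
ama_pods = [
    pod.metadata.name
    for pod in v1.list_namespaced_pod("kube-system").items
    if pod.metadata.name.startswith("ama-logs")
]
print(ama_pods)

# Read the ama-logs container logs from one pod and scan for configuration errors.
logs = v1.read_namespaced_pod_log(
    name=ama_pods[0], namespace="kube-system", container="ama-logs"
)
print([line for line in logs.splitlines() if "config" in line.lower() and "error" in line.lower()])
```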
azure-monitor | Prometheus Metrics Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md | Follow the steps in this article to determine the cause of Prometheus metrics no Replica pod scrapes metrics from `kube-state-metrics`, custom scrape targets in the `ama-metrics-prometheus-config` configmap and custom scrape targets defined in the [Custom Resources](prometheus-metrics-scrape-crd.md). DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod that you want to view the logs and the Prometheus UI for it depends on which scrape target you're investigating. -## Troubleshoot using powershell script +## Troubleshoot using PowerShell script -If you encounter an error while you attempt to enable monitoring for your AKS cluster, please follow the instructions mentioned [here](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. This script is designed to do a basic diagnosis of for any configuration issues on your cluster and you can ch the generated files while creating a support request for faster resolution for your support case. --## Missing metrics +If you encounter an error while you attempt to enable monitoring for your AKS cluster, follow [these instructions](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. This script is designed to do a basic diagnosis for any configuration issues on your cluster and you can attach the generated files while creating a support request for faster resolution for your support case. ## Metrics Throttling -In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%. +In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics`, click on the `Add Metric` dropdown and then click on the `Add with builder` option to verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%. :::image type="content" source="media/prometheus-metrics-troubleshoot/throttling.png" alt-text="Screenshot showing how to navigate to the throttling metrics." lightbox="media/prometheus-metrics-troubleshoot/throttling.png"::: Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) fo ## Creation of Azure Monitor Workspace failed due to Azure Policy evaluation -If creation of Azure Monitor Workspace fails with an error saying "*Resource 'resource-name-xyz' was disallowed by policy*", there might an Azure policy that is preventing the resource to be created. If there is a policy that enforces a naming convention for your Azure resources or resource groups, you will need to create an exemption for the naming convention for creation of an Azure Monitor Workspace. +If creation of Azure Monitor Workspace fails with an error saying "*Resource 'resource-name-xyz' was disallowed by policy*", there might be an Azure policy that is preventing the resource to be created. If there is a policy that enforces a naming convention for your Azure resources or resource groups, you will need to create an exemption for the naming convention for creation of an Azure Monitor Workspace. 
When you create an Azure Monitor workspace, a data collection rule and a data collection endpoint named "*azure-monitor-workspace-name*" are automatically created by default in a resource group named "*MA_azure-monitor-workspace-name_location_managed*". Currently there is no way to change the names of these resources, so you need to create an Azure Policy exemption that excludes them from policy evaluation. See [Azure Policy exemption structure](../../governance/policy/concepts/exemption-structure.md). |
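To check the throttling metrics mentioned above programmatically instead of in the portal, a sketch with the `azure-monitor-query` package could look like the following; the Azure Monitor workspace resource ID is a placeholder, and listing the metric definitions first avoids guessing exact metric names:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID of the Azure Monitor workspace
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/microsoft.monitor/accounts/<azure-monitor-workspace>"
)

# Discover the exact metric names (for example, the utilization metrics shown in the portal).
for definition in client.list_metric_definitions(workspace_id):
    print(definition.name)

# Once you know the names, pass them to client.query_resource(workspace_id, metric_names=[...])
# and check whether the utilization values are approaching 100%.
```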
azure-monitor | Data Retention Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-configure.md | Status code: 200 To set the default interactive retention period of Analytics tables within a Log Analytics workspace, run the [az monitor log-analytics workspace update](/cli/azure/monitor/log-analytics/workspace/#az-monitor-log-analytics-workspace-update) command and pass the `--retention-time` parameter. -This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the long-term retention period is 23 months: +This example sets the workspace's default interactive retention to 30 days: ```azurecli az monitor log-analytics workspace update --resource-group myresourcegroup --retention-time 30 --workspace-name myworkspace |
azure-monitor | Logs Ingestion Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md | To send data to Azure Monitor with a REST API call, make a POST call over HTTP. ### Endpoint URI -The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. The immutable ID is generated for the DCR when it's created. You can retrieve it from the [JSON view of the DCR in the Azure portal](../essentials/data-collection-rule-view.md). `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#streamdeclarations) in the DCR that should handle the custom data. +The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. The immutable ID is generated for the DCR when it's created. You can retrieve it from the [Overview page for the DCR in the Azure portal](../essentials/data-collection-rule-view.md). +++`Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#streamdeclarations) in the DCR that should handle the custom data. ``` {Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/{Stream Name}?api-version=2023-01-01 |
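For comparison with the raw REST call, the same ingestion through the `azure-monitor-ingestion` Python client is sketched below; the endpoint, DCR immutable ID, stream name, and record shape are placeholders that must match your own DCR's stream declaration:

```python
from datetime import datetime, timezone
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = "https://<data-collection-endpoint>.ingest.monitor.azure.com"  # DCE URI (placeholder)
rule_id = "dcr-00000000000000000000000000000000"                          # DCR immutable ID (placeholder)
stream_name = "Custom-MyTable_CL"                                          # stream declared in the DCR (placeholder)

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# The field names must match the stream declaration and transformation in your DCR.
logs = [{
    "TimeGenerated": datetime.now(timezone.utc).isoformat(),
    "Computer": "web01",
    "AdditionalContext": "sample record",
}]
client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)
```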
azure-netapp-files | Cross Zone Replication Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md | -Similar to the Azure NetApp Files [cross-region replication feature](cross-region-replication-introduction.md), the cross-zone replication (CZR) capability provides data protection between volumes in different availability zones. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one availability zone to another Azure NetApp Files volume (destination) in another availability. This capability enables you to fail over your critical application if a zone-wide outage or disaster happens. +Similar to the Azure NetApp Files [cross-region replication feature](cross-region-replication-introduction.md), the cross-zone replication (CZR) capability provides data protection between volumes in different availability zones. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one availability zone to another Azure NetApp Files volume (destination) in another availability zone. This capability enables you to fail over your critical application if a zone-wide outage or disaster happens. Cross-zone replication is available in all [AZ-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true). |
azure-netapp-files | Mount Volumes Vms Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/mount-volumes-vms-smb.md | -# Mount SMB volumes for Windows VMs +# Mount SMB volumes for Windows virtual machines You can mount an SMB volume for Windows virtual machines (VMs). ## Mount SMB volumes on a Windows client -1. Select the **Volumes** menu and then the SMB volume that you want to mount. +1. Select the **Volumes** menu and then the SMB volume you want to mount. 1. To mount the SMB volume using a Windows client, select **Mount instructions** from the selected volume. Follow the displayed instructions to mount the volume. :::image type="content" source="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png" alt-text="Screenshot of Mount instructions." lightbox="./media/mount-volumes-vms-smb/azure-netapp-files-mount-instructions-smb.png"::: You can mount an SMB volume for Windows virtual machines (VMs). * [Mount NFS volumes for Windows or Linux VMs](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [SMB FAQs](faq-smb.md)-* [Network File System overview](/windows-server/storage/nfs/nfs-overview) |
azure-portal | Dashboard Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/dashboard-hub.md | + + Title: Create and manage dashboards in Dashboard hub +description: This article describes how to create and customize a shared dashboard in Dashboard hub in the Azure portal. + Last updated : 08/28/2024+++# Create and manage dashboards in Dashboard hub (preview) ++Dashboards are a focused and organized view of your cloud resources in the Azure portal. The new Dashboard hub (preview) experience offers editing features such as tabs, a rich set of tiles with support for different data sources, and dashboard access in the latest version of the [Azure mobile app](mobile-app/overview.md). ++Currently, Dashboard hub can only be used to create and manage shared dashboards. These shared dashboards are implemented as Azure resources in your subscription. They're visible in the Azure portal or the Azure mobile app, to all users who have subscription-level access. ++> [!IMPORTANT] +> Dashboard hub is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Current limitations ++Before using the new Dashboard hub experience, be aware of the following current limitations and make sure that your new dashboard meets your organization's needs. ++Private dashboards aren't currently supported in Dashboard hub. These dashboards are shared with all users in a subscription by default. To create a private dashboard, or to share it with only a limited set of users, create your dashboard [from the **Dashboard** view in the Azure portal](azure-portal-dashboards.md) rather than using the new experience. ++Some tiles aren't yet available in the Dashboard hub experience. Currently, the following tiles are available: ++- **Azure Resource Graph query** +- **Metrics** +- **Resource** +- **Resource Group** +- **Recent Resources** +- **All Resources** +- **Markdown** +- **Policy** ++If your dashboard relies on one of these tiles, we recommend that you don't use the new experience for that dashboard at this time. We'll update this page as we add more tile types to the new experience. ++## Create a new dashboard ++To create a new shared dashboard with an assigned name, follow these steps. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Search for **Dashboard hub** and then select it. ++1. Under **Dashboards (preview)**, select **Shared dashboards**. Then select **Create**. ++ :::image type="content" source="media/dashboard-hub/dashboard-hub-create.png" alt-text="Screenshot of the Create option in the Dashboard hub."::: ++ You'll see an empty dashboard with a grid where you can arrange tiles. ++1. If you want to use a template to create your dashboard, select **Select Templates**, then choose an available template to start from. Enter a name and any other applicable information. For example, if you select **SQL database health**, you'll need to specify a SQL database resource. When you're finished, select **Submit**. ++1. If you aren't using a template, or if you want to add more tiles, select **Add tile** to open the **Tile Gallery**. The **Tile Gallery** features various tiles that display different types of information. Select a tile, then select **Add**. You can also drag tiles from the **Tile Gallery** onto your grid. 
Resize or rearrange the tiles as desired. ++1. If you haven't already provided a name, or want to change what you entered, select **Rename dashboard** to enter a name that will help you easily identify your dashboard. ++ :::image type="content" source="media/dashboard-hub/dashboard-hub-rename.png" alt-text="Screenshot showing a dashboard being renamed in the Dashboard hub."::: ++1. When you're finished, select **Publish dashboardV2** in the command bar. ++1. Select the subscription and resource group to which the dashboard will be saved. +1. Enter a name for the dashboard. This name is used for the dashboard resource in Azure, and it can't be changed after publishing. However, you can edit the displayed title of the dashboard later. +1. Select **Submit**. ++You'll see a notification confirming that your dashboard has been published. You can continue to [edit your dashboard](#edit-a-dashboard) as needed. ++> [!IMPORTANT] +> Since all dashboards in the new experience are shared by default, anyone with access to the subscription will have access to the dashboard resource. For more access control options, see [Understand access control](#understand-access-control). ++## Create a dashboard based on an existing dashboard ++To create a new shared dashboard with an assigned name, based on an existing dashboard, follow these steps. ++> [!TIP] +> Review the [current limitations ](#current-limitations) before you proceed. If your dashboard includes tiles that aren't currently supported in the new experience, you can still create a new dashboard based on the original one. However, any tiles that aren't yet available won't be included. ++1. Navigate to the dashboard that you want to start with. You can do this by selecting **Dashboard** from the Azure menu, then selecting the dashboard that you wish to start with. Alternately, in the new Dashboard hub, expand **Dashboards** and then select either **Private dashboards** or **Shared dashboards** to find your dashboard. +1. From the Select **Try it now**. ++ :::image type="content" source="media/dashboard-hub/dashboard-try-it-now.png" alt-text="Screenshot showing the Try it now link for a dashboard."::: ++ The dashboard opens in the new Dashboard hub editing experience. Follow the process described in the previous section to publish the dashboard as a new shared dashboard, or read on to learn how to make edits to your dashboard before publishing. ++## Edit a dashboard ++After you create a dashboard, you can add, resize, and arrange tiles that show your Azure resources or display other helpful information. ++To open the editing page for a dashboard, select **Edit** from its command bar. Make changes as described in the sections below, then select **Publish dashboardV2** when you're finished. ++### Add tiles from the Tile Gallery ++To add tiles to a dashboard by using the Tile Gallery, follow these steps. ++1. Click **Add tile** to open the Tile Gallery. +1. Select the tile you want to add to your dashboard, then select **Add**. Alternately, you can drag the tile to the desired location in your grid. +1. To configure the tile, select **Edit** to open the tile editor. ++ :::image type="content" source="media/dashboard-hub/dashboard-hub-edit-tile.png" alt-text="Screenshot of the Edit Tile option in the Dashboard hub in the Azure portal."::: ++1. Make the desired changes to the tile, including editing its title or changing its configuration. When you're done, select **Apply changes**. 
++### Resize or rearrange tiles ++To change the size of a tile, select the arrow on the bottom right corner of the tile, then drag to resize it. If there's not enough grid space to resize the tile, it bounces back to its original size. ++To change the placement of a tile, select it and then drag it to a new location on the dashboard. ++Repeat these steps as needed until you're happy with the layout of your tiles. ++### Delete tiles ++To remove a tile from the dashboard, hover in the upper right corner of the tile and then select **Delete**. ++### Manage tabs ++The new dashboard experience lets you create multiple tabs where you can group information. To create tabs: ++1. Select **Manage tabs** from the command bar to open the **Manage tabs** pane. ++ :::image type="content" source="media/dashboard-hub/dashboard-hub-manage-tabs.png" alt-text="Screenshot of the Manage tabs page in the Dashboard hub in the Azure portal."::: ++1. Enter name for the tabs you want to create. +1. To change the tab order, drag and drop your tabs, or select the checkbox next to a tab and use the **Move up** and **Move down** buttons. +1. When you're finished, select **Apply changes**. ++You can then select each tab to make individual edits. ++### Apply dashboard filters ++To add filters to your dashboard, select **Parameters** from the command bar to open the **Manage parameters** pane ++The options you see depend on the tiles used in your dashboard. For example, you may see options to filter data for a specific subscription or location. ++If your dashboard includes the **Metrics** tile, the default parameters are **Time range** and **Time granularity.** +++To edit a parameter, select the pencil icon. ++To add a new parameter, select **Add**, then configure the parameter as desired. ++To remove a parameter, select the trash can icon. ++### Pin content from a resource page ++Another way to add tiles to your dashboard is directly from a resource page. ++Many resource pages include a pin icon in the command bar, which means that you can pin a tile representing that resource. +++In some cases, a pin icon may also appear by specific content within a page, which means you can pin a tile for that specific content, rather than the entire page. For example, you can pin some resources through the context pane. +++To pin content to your dashboard, select the **Pin to dashboard** option or the pin icon. Be sure to select the **Shared** dashboard type. You can also create a new dashboard which will include this pin by selecting **Create new**. ++## Export a dashboard ++You can export a dashboard from the Dashboard hub to view its structure programmatically. These exported templates can also be used as the basis for creating future dashboards. ++To export a dashboard, select **Export**. Select the option for the format you wish to download: ++- **ARM template**: Downloads an ARM template representation of the dashboard. +- **Dashboard**: Downloads a JSON representation of the dashboard. +- **View**: Downloads a declarative view of the dashboard. ++After you make your selection, you can view the downloaded version in the editor of your choice. ++## Understand access control ++Published dashboards are implemented as Azure resources, Each dashboard exists as a manageable item contained in a resource group within your subscription. You can manage access control through the Dashboard hub. 
++Azure role-based access control (Azure RBAC) lets you assign users to roles at different levels of scope: management group, subscription, resource group, or resource. Azure RBAC permissions are inherited from higher levels down to the individual resource. In many cases, you may already have users assigned to roles for the subscription that will give them access to the published dashboard. ++For example, users who have the **Owner** or **Contributor** role for a subscription can list, view, create, modify, or delete dashboards within the subscription. Users with a custom role that includes the `Microsoft.Portal/Dashboards/Write` permission can also perform these tasks. ++Users with the **Reader** role for the subscription (or a custom role with `Microsoft.Portal/Dashboards/Read permission`) can list and view dashboards within that subscription, but they can't modify or delete them. These users can make private copies of dashboards for themselves. They can also make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server. These users can also view these dashboards in the Azure mobile app. ++To expand access to a dashboard beyond the access granted at the subscription level, you can assign permissions to an individual dashboard, or to a resource group that contains several dashboards. For example, if a user has limited permissions across the subscription, but needs to be able to edit one particular dashboard, you can assign a different role with more permissions (such as Contributor) for that dashboard only. |
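To make that last scenario concrete, here's a sketch (assumptions: the `azure-mgmt-authorization` package, placeholder subscription, resource group, dashboard, and principal IDs, and the built-in Contributor role definition GUID) that grants Contributor on a single dashboard resource:

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to one dashboard resource instead of the whole subscription.
dashboard_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Portal/dashboards/<dashboard-name>"  # placeholders
)
contributor_role_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor role
)

auth_client.role_assignments.create(
    scope=dashboard_scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are new GUIDs
    parameters={
        "properties": {
            "roleDefinitionId": contributor_role_id,
            "principalId": "<user-or-group-object-id>",  # placeholder
        }
    },
)
```

The same pattern applies at a resource group scope if you want to grant access to every dashboard in that group at once.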
azure-portal | Home | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/home.md | Title: Azure mobile app Home description: Azure mobile app Home surfaces the most essential information and the resources you use most often. Previously updated : 05/29/2024 Last updated : 08/28/2024 Current card options include: - **Cloud Shell**: Quick access to the [Cloud Shell terminal](cloud-shell.md). - **Recent resources**: A list of your four most recently viewed resources, with the option to see all. - **Favorites**: A list of the resources you have added to your favorites, and the option to see all.+- **Dashboards (preview)**: Access to [shared dashboards](../dashboard-hub.md). :::image type="content" source="media/azure-mobile-app-home-layout.png" alt-text="Screenshot of the Azure mobile app Home screen with several display cards."::: |
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | Microsoft regularly applies important updates to the Azure VMware Solution for n All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in Azure Commercial. [Learn more](architecture-private-clouds.md#vmware-software-versions) -Azure VMware Solution was approved to be added as a service within the DoD SRG Impact Level 4 Provisional Authorization (PA) in [Microsoft Azure Government](https://azure.microsoft.com/explore/global-infrastructure/government/#why-azure). +Azure VMware Solution was approved to be added as a service within the [DoD SRG Impact Level 4 (IL4)](https://learn.microsoft.com/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-government-services-by-audit-scope) Provisional Authorization (PA) in [Microsoft Azure Government](https://azure.microsoft.com/explore/global-infrastructure/government/#why-azure). ## May 2024 |
batch | Batch Customer Managed Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md | Title: Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity description: Learn how to encrypt Batch data using customer-managed keys. Previously updated : 04/03/2023 Last updated : 08/12/2024 ms.devlang: csharp As an example using the Batch management .NET client, you can create a Batch acc and customer-managed keys. ```c#-EncryptionProperties encryptionProperties = new EncryptionProperties() +string subscriptionId = "Your SubscriptionID"; +string resourceGroupName = "Your ResourceGroup name"; + +var credential = new DefaultAzureCredential(); +ArmClient _armClient = new ArmClient(credential); ++ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName); +ResourceGroupResource resourceGroupResource = _armClient.GetResourceGroupResource(resourceGroupResourceId); ++var data = new BatchAccountCreateOrUpdateContent(AzureLocation.EastUS) {- KeySource = KeySource.MicrosoftKeyVault, - KeyVaultProperties = new KeyVaultProperties() + Encryption = new BatchAccountEncryptionConfiguration() {- KeyIdentifier = "Your Key Azure Resource Manager Resource ID" - } -}; + KeySource = BatchAccountKeySource.MicrosoftKeyVault, + KeyIdentifier = new Uri("Your Key Azure Resource Manager Resource ID"), + }, -BatchAccountIdentity identity = new BatchAccountIdentity() -{ - Type = ResourceIdentityType.UserAssigned, - UserAssignedIdentities = new Dictionary<string, BatchAccountIdentityUserAssignedIdentitiesValue> + Identity = new ManagedServiceIdentity(ManagedServiceIdentityType.UserAssigned) {- ["Your Identity Azure Resource Manager ResourceId"] = new BatchAccountIdentityUserAssignedIdentitiesValue() + UserAssignedIdentities = { + [new ResourceIdentifier("Your Identity Azure Resource Manager ResourceId")] = new UserAssignedIdentity(), + }, } };-var parameters = new BatchAccountCreateParameters(TestConfiguration.ManagementRegion, encryption:encryptionProperties, identity: identity); -var account = await batchManagementClient.Account.CreateAsync("MyResourceGroup", - "mynewaccount", parameters); +var lro = resourceGroupResource.GetBatchAccounts().CreateOrUpdate(WaitUntil.Completed, "Your BatchAccount name", data); +BatchAccountResource batchAccount = lro.Value; ``` ## Update the customer-managed key version |
batch | Batch Management Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-management-dotnet.md | Title: Use the Batch Management .NET library to manage account resources description: Create, delete, and modify Azure Batch account resources with the Batch Management .NET library. Previously updated : 06/13/2024 Last updated : 08/12/2024 ms.devlang: csharp You can lower maintenance overhead in your Azure Batch applications by using the ## Create and delete Batch accounts -One of the primary features of the Batch Management API is to create and delete [Batch accounts](accounts.md) in an Azure region. To do so, use [BatchManagementClient.Account.CreateAsync](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.createasync) and [DeleteAsync](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.deleteasync), or their synchronous counterparts. +One of the primary features of the Batch Management API is to create and delete [Batch accounts](accounts.md) in an Azure region. To do so, use [BatchAccountCollection.CreateOrUpdate](/dotnet/api/azure.resourcemanager.batch.batchaccountcollection.createorupdate) and [Delete](/dotnet/api/azure.resourcemanager.batch.batchaccountresource.delete), or their asynchronous counterparts. -The following code snippet creates an account, obtains the newly created account from the Batch service, and then deletes it. In this snippet and the others in this article, `batchManagementClient` is a fully initialized instance of [BatchManagementClient](/dotnet/api/microsoft.azure.management.batch.batchmanagementclient). +The following code snippet creates an account, obtains the newly created account from the Batch service, and then deletes it. ```csharp-// Create a new Batch account -await batchManagementClient.Account.CreateAsync("MyResourceGroup", - "mynewaccount", - new BatchAccountCreateParameters() { Location = "West US" }); --// Get the new account from the Batch service -AccountResource account = await batchManagementClient.Account.GetAsync( - "MyResourceGroup", - "mynewaccount"); --// Delete the account -await batchManagementClient.Account.DeleteAsync("MyResourceGroup", account.Name); + string subscriptionId = "Your SubscriptionID"; + string resourceGroupName = "Your ResourceGroup name"; ++ var credential = new DefaultAzureCredential(); + ArmClient _armClient = new ArmClient(credential); ++ ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName); + ResourceGroupResource resourceGroupResource = _armClient.GetResourceGroupResource(resourceGroupResourceId); ++ var data = new BatchAccountCreateOrUpdateContent(AzureLocation.EastUS); ++ // Create a new batch account + resourceGroupResource.GetBatchAccounts().CreateOrUpdate(WaitUntil.Completed, "Your BatchAccount name", data); + + // Get an existing batch account + BatchAccountResource batchAccount = resourceGroupResource.GetBatchAccount("Your BatchAccount name"); ++ // Delete the batch account + batchAccount.Delete(WaitUntil.Completed); ``` > [!NOTE]-> Applications that use the Batch Management .NET library and its BatchManagementClient class require service administrator or coadministrator access to the subscription that owns the Batch account to be managed. For more information, see the Microsoft Entra ID section and the [AccountManagement](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp/AccountManagement) code sample. 
+> Applications that use the Batch Management .NET library require service administrator or coadministrator access to the subscription that owns the Batch account to be managed. For more information, see the Microsoft Entra ID section and the [AccountManagement](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp/AccountManagement) code sample. ## Retrieve and regenerate account keys -Obtain primary and secondary account keys from any Batch account within your subscription by using [GetKeysAsync](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.getkeysasync). You can regenerate those keys by using [RegenerateKeyAsync](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.regeneratekeyasync). +Obtain primary and secondary account keys from any Batch account within your subscription by using [GetKeys](/dotnet/api/azure.resourcemanager.batch.batchaccountresource.getkeys). You can regenerate those keys by using [RegenerateKey](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.regeneratekey). ```csharp+string subscriptionId = "Your SubscriptionID"; +string resourceGroupName = "Your ResourceGroup name"; ++var credential = new DefaultAzureCredential(); +ArmClient _armClient = new ArmClient(credential); ++ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName); +ResourceGroupResource resourceGroupResource = _armClient.GetResourceGroupResource(resourceGroupResourceId); ++var data = new BatchAccountCreateOrUpdateContent(AzureLocation.EastUS); ++// Get an existing batch account +BatchAccountResource batchAccount = resourceGroupResource.GetBatchAccount("Your BatchAccount name"); + // Get and print the primary and secondary keys-BatchAccountGetKeyResult accountKeys = - await batchManagementClient.Account.GetKeysAsync( - "MyResourceGroup", - "mybatchaccount"); +BatchAccountKeys accountKeys = batchAccount.GetKeys(); + Console.WriteLine("Primary key: {0}", accountKeys.Primary); Console.WriteLine("Secondary key: {0}", accountKeys.Secondary); // Regenerate the primary key-BatchAccountRegenerateKeyResponse newKeys = - await batchManagementClient.Account.RegenerateKeyAsync( - "MyResourceGroup", - "mybatchaccount", - new BatchAccountRegenerateKeyParameters() { - KeyName = AccountKeyType.Primary - }); +BatchAccountRegenerateKeyContent regenerateKeyContent = new BatchAccountRegenerateKeyContent(BatchAccountKeyType.Primary); +batchAccount.RegenerateKey(regenerateKeyContent); ``` > [!TIP]-> You can create a streamlined connection workflow for your management applications. First, obtain an account key for the Batch account you wish to manage with [GetKeysAsync](/dotnet/api/microsoft.azure.management.batch.batchaccountoperationsextensions.getkeysasync). Then, use this key when initializing the Batch .NET library's [BatchSharedKeyCredentials](/dotnet/api/microsoft.azure.batch.auth.batchsharedkeycredentials) class, which is used when initializing [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient). +> You can create a streamlined connection workflow for your management applications. First, obtain an account key for the Batch account you wish to manage with [GetKeys](/dotnet/api/azure.resourcemanager.batch.batchaccountresource.getkeys). 
Then, use this key when initializing the Batch .NET library's [BatchSharedKeyCredentials](/dotnet/api/microsoft.azure.batch.auth.batchsharedkeycredentials) class, which is used when initializing [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient). ## Check Azure subscription and Batch account quotas Azure subscriptions and the individual Azure services like Batch all have defaul Before creating a Batch account in a region, you can check your Azure subscription to see whether you are able to add an account in that region. -In the code snippet below, we first use **ListAsync** to get a collection of all Batch accounts that are within a subscription. Once we've obtained this collection, we determine how many accounts are in the target region. Then we use **GetQuotasAsync** to obtain the Batch account quota and determine how many accounts (if any) can be created in that region. +In the code snippet below, we first use **GetBatchAccounts** to get a collection of all Batch accounts that are within a subscription. Once we've obtained this collection, we determine how many accounts are in the target region. Then we use **GetBatchQuotas** to obtain the Batch account quota and determine how many accounts (if any) can be created in that region. ```csharp+string subscriptionId = "Your SubscriptionID"; +ArmClient _armClient = new ArmClient(new DefaultAzureCredential()); ++ResourceIdentifier subscriptionResourceId = SubscriptionResource.CreateResourceIdentifier(subscriptionId); +SubscriptionResource subscriptionResource = _armClient.GetSubscriptionResource(subscriptionResourceId); + // Get a collection of all Batch accounts within the subscription-BatchAccountListResponse listResponse = - await batchManagementClient.BatchAccount.ListAsync(new AccountListParameters()); -IList<AccountResource> accounts = listResponse.Accounts; -Console.WriteLine("Total number of Batch accounts under subscription id {0}: {1}", - creds.SubscriptionId, - accounts.Count); +var batchAccounts = subscriptionResource.GetBatchAccounts(); +Console.WriteLine("Total number of Batch accounts under subscription id {0}: {1}", subscriptionId, batchAccounts.Count()); // Get a count of all accounts within the target region-string region = "westus"; -int accountsInRegion = accounts.Count(o => o.Location == region); +string region = "eastus"; +int accountsInRegion = batchAccounts.Count(o => o.Data.Location == region); // Get the account quota for the specified region-SubscriptionQuotasGetResponse quotaResponse = await batchManagementClient.Location.GetQuotasAsync(region); -Console.WriteLine("Account quota for {0} region: {1}", region, quotaResponse.AccountQuota); +BatchLocationQuota batchLocationQuota = subscriptionResource.GetBatchQuotas(AzureLocation.EastUS); +Console.WriteLine("Account quota for {0} region: {1}", region, batchLocationQuota.AccountQuota); // Determine how many accounts can be created in the target region Console.WriteLine("Accounts in {0}: {1}", region, accountsInRegion);-Console.WriteLine("You can create {0} accounts in the {1} region.", quotaResponse.AccountQuota - accountsInRegion, region); +Console.WriteLine("You can create {0} accounts in the {1} region.", batchLocationQuota.AccountQuota - accountsInRegion, region); ``` In the snippet above, `creds` is an instance of **TokenCredentials**. To see an example of creating this object, see the [AccountManagement](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp/AccountManagement) code sample on GitHub. 
In the snippet above, `creds` is an instance of **TokenCredentials**. To see an Before increasing compute resources in your Batch solution, you can check to ensure the resources you want to allocate won't exceed the account's quotas. In the code snippet below, we print the quota information for the Batch account named `mybatchaccount`. In your own application, you could use such information to determine whether the account can handle the additional resources to be created. ```csharp-// First obtain the Batch account -BatchAccountGetResponse getResponse = - await batchManagementClient.Account.GetAsync("MyResourceGroup", "mybatchaccount"); -AccountResource account = getResponse.Resource; +string subscriptionId = "Your SubscriptionID"; +string resourceGroupName = "Your ResourceGroup name"; ++var credential = new DefaultAzureCredential(); +ArmClient _armClient = new ArmClient(credential); ++ResourceIdentifier resourceGroupResourceId = ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName); +ResourceGroupResource resourceGroupResource = _armClient.GetResourceGroupResource(resourceGroupResourceId); ++// Get an existing batch account +BatchAccountResource batchAccount = resourceGroupResource.GetBatchAccount("Your BatchAccount name"); // Now print the compute resource quotas for the account-Console.WriteLine("Core quota: {0}", account.Properties.CoreQuota); -Console.WriteLine("Pool quota: {0}", account.Properties.PoolQuota); -Console.WriteLine("Active job and job schedule quota: {0}", account.Properties.ActiveJobAndJobScheduleQuota); +Console.WriteLine("Core quota: {0}", batchAccount.Data.DedicatedCoreQuota); +Console.WriteLine("Pool quota: {0}", batchAccount.Data.PoolQuota); +Console.WriteLine("Active job and job schedule quota: {0}", batchAccount.Data.ActiveJobAndJobScheduleQuota); ``` > [!IMPORTANT] To run the sample application successfully, you must first register it with your ## Next steps - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.-- Learn the basics of developing a Batch-enabled application using the [Batch .NET client library](quick-run-dotnet.md) or [Python](quick-run-python.md). These quickstarts guide you through a sample application that uses the Batch service to execute a workload on multiple compute nodes, using Azure Storage for workload file staging and retrieval.git pus+- Learn the basics of developing a Batch-enabled application using the [Batch .NET client library](quick-run-dotnet.md) or [Python](quick-run-python.md). These quickstarts guide you through a sample application that uses the Batch service to execute a workload on multiple compute nodes, using Azure Storage for workload file staging and retrieval. |
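The updated Batch management snippets recorded above use the synchronous `CreateOrUpdate`, `GetBatchAccount`, and `Delete` calls. As a rough illustration of the asynchronous counterparts the text mentions, the following sketch (not part of the original article) shows the same create-get-delete flow with `CreateOrUpdateAsync`, `GetBatchAccountAsync`, and `DeleteAsync`; the subscription ID, resource group, and account names are placeholder assumptions.

```csharp
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Batch;
using Azure.ResourceManager.Batch.Models;
using Azure.ResourceManager.Resources;

// Sketch only: run inside an async method and replace the placeholder values.
string subscriptionId = "Your SubscriptionID";
string resourceGroupName = "Your ResourceGroup name";

var armClient = new ArmClient(new DefaultAzureCredential());
ResourceIdentifier resourceGroupResourceId =
    ResourceGroupResource.CreateResourceIdentifier(subscriptionId, resourceGroupName);
ResourceGroupResource resourceGroupResource =
    armClient.GetResourceGroupResource(resourceGroupResourceId);

var data = new BatchAccountCreateOrUpdateContent(AzureLocation.EastUS);

// Create the Batch account without blocking the calling thread.
await resourceGroupResource.GetBatchAccounts()
    .CreateOrUpdateAsync(WaitUntil.Completed, "Your BatchAccount name", data);

// Retrieve the new account, then delete it asynchronously.
BatchAccountResource batchAccount =
    await resourceGroupResource.GetBatchAccountAsync("Your BatchAccount name");
await batchAccount.DeleteAsync(WaitUntil.Completed);
```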
batch | Batch User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-user-accounts.md | Title: Run tasks under user accounts description: Learn the types of user accounts and how to configure them. Previously updated : 05/16/2023 Last updated : 08/27/2024 ms.devlang: csharp # ms.devlang: csharp, java, python pool = batchClient.PoolOperations.CreatePool( // Add named user accounts. pool.UserAccounts = new List<UserAccount> {- new UserAccount("adminUser", "xyz123", ElevationLevel.Admin), - new UserAccount("nonAdminUser", "123xyz", ElevationLevel.NonAdmin), + new UserAccount("adminUser", "A1bC2d", ElevationLevel.Admin), + new UserAccount("nonAdminUser", "A1bC2d", ElevationLevel.NonAdmin), }; // Commit the pool. pool.UserAccounts = new List<UserAccount> { new UserAccount( name: "adminUser",- password: "xyz123", + password: "A1bC2d", elevationLevel: ElevationLevel.Admin, linuxUserConfiguration: new LinuxUserConfiguration( uid: 12345, pool.UserAccounts = new List<UserAccount> )), new UserAccount( name: "nonAdminUser",- password: "123xyz", + password: "A1bC2d", elevationLevel: ElevationLevel.NonAdmin, linuxUserConfiguration: new LinuxUserConfiguration( uid: 45678, batchClient.poolOperations().createPool(addParameter); users = [ batchmodels.UserAccount( name='pool-admin',- password='******', + password='A1bC2d', elevation_level=batchmodels.ElevationLevel.admin) batchmodels.UserAccount( name='pool-nonadmin',- password='******', + password='A1bC2d', elevation_level=batchmodels.ElevationLevel.non_admin) ] pool = batchmodels.PoolAddParameter( |
batch | Create Pool Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md | Title: Create a pool across availability zones description: Learn how to create a Batch pool with zonal policy to help protect against failures. Previously updated : 05/25/2023 Last updated : 08/12/2024 ms.devlang: csharp The following examples show how to create a Batch pool across Availability Zones ### Batch Management Client .NET SDK ```csharp-pool.DeploymentConfiguration.VirtualMachineConfiguration.NodePlacementConfiguration = new NodePlacementConfiguration() +var credential = new DefaultAzureCredential(); +ArmClient _armClient = new ArmClient(credential); ++var batchAccountIdentifier = ResourceIdentifier.Parse("your-batch-account-resource-id"); ++BatchAccountResource batchAccount = _armClient.GetBatchAccountResource(batchAccountIdentifier); ++var poolName = "pool2"; +var imageReference = new BatchImageReference() +{ + Publisher = "canonical", + Offer = "0001-com-ubuntu-server-jammy", + Sku = "22_04-lts", + Version = "latest" +}; +string nodeAgentSku = "batch.node.ubuntu 22.04"; ++var batchAccountPoolData = new BatchAccountPoolData() +{ + VmSize = "Standard_DS1_v2", + DeploymentConfiguration = new BatchDeploymentConfiguration() {- Policy = NodePlacementPolicyType.Zonal - }; + VmConfiguration = new BatchVmConfiguration(imageReference, nodeAgentSku) + { + NodePlacementPolicy = BatchNodePlacementPolicyType.Zonal, + }, + }, + ScaleSettings = new BatchAccountPoolScaleSettings() + { + FixedScale = new BatchAccountFixedScaleSettings() + { + TargetDedicatedNodes = 5, + ResizeTimeout = TimeSpan.FromMinutes(15), + } + }, + +}; ++ArmOperation<BatchAccountPoolResource> armOperation = batchAccount.GetBatchAccountPools().CreateOrUpdate( + WaitUntil.Completed, poolName, batchAccountPoolData); +BatchAccountPoolResource pool = armOperation.Value; ``` |
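As a quick follow-up check (not part of the recorded change), the pool can be read back through the same `Azure.ResourceManager.Batch` object model to confirm that the zonal placement policy was applied. This sketch assumes the `batchAccount` and `poolName` variables from the snippet above.

```csharp
// Sketch only: read the pool back and inspect its node placement policy.
BatchAccountPoolResource createdPool = batchAccount.GetBatchAccountPool(poolName);
BatchNodePlacementPolicyType? placementPolicy =
    createdPool.Data.DeploymentConfiguration?.VmConfiguration?.NodePlacementPolicy;
Console.WriteLine("Node placement policy for {0}: {1}", poolName, placementPolicy);
```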
batch | Managed Identity Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md | Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 06/25/2024 Last updated : 08/12/2024 ms.devlang: csharp To create a Batch pool with a user-assigned managed identity through the Azure p To create a Batch pool with a user-assigned managed identity with the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library), use the following example code: ```csharp-var poolParameters = new Pool(name: "yourPoolName") +var credential = new DefaultAzureCredential(); +ArmClient _armClient = new ArmClient(credential); + +var batchAccountIdentifier = ResourceIdentifier.Parse("your-batch-account-resource-id"); +BatchAccountResource batchAccount = _armClient.GetBatchAccountResource(batchAccountIdentifier); ++var poolName = "HelloWorldPool"; +var imageReference = new BatchImageReference() +{ + Publisher = "canonical", + Offer = "0001-com-ubuntu-server-jammy", + Sku = "22_04-lts", + Version = "latest" +}; +string nodeAgentSku = "batch.node.ubuntu 22.04"; ++var batchAccountPoolData = new BatchAccountPoolData() +{ + VmSize = "Standard_DS1_v2", + DeploymentConfiguration = new BatchDeploymentConfiguration() {- VmSize = "standard_d2_v3", - ScaleSettings = new ScaleSettings - { - FixedScale = new FixedScaleSettings - { - TargetDedicatedNodes = 1 - } - }, - DeploymentConfiguration = new DeploymentConfiguration - { - VirtualMachineConfiguration = new VirtualMachineConfiguration( - new ImageReference( - "Canonical", - "0001-com-ubuntu-server-jammy", - "22_04-lts", - "latest"), - "batch.node.ubuntu 22.04") - }, - Identity = new BatchPoolIdentity + VmConfiguration = new BatchVmConfiguration(imageReference, nodeAgentSku) + }, + ScaleSettings = new BatchAccountPoolScaleSettings() + { + FixedScale = new BatchAccountFixedScaleSettings() {- Type = PoolIdentityType.UserAssigned, - UserAssignedIdentities = new Dictionary<string, UserAssignedIdentities> - { - ["Your Identity Resource Id"] = - new UserAssignedIdentities() - } + TargetDedicatedNodes = 1 }- }; --var pool = await managementClient.Pool.CreateWithHttpMessagesAsync( - poolName:"yourPoolName", - resourceGroupName: "yourResourceGroupName", - accountName: "yourAccountName", - parameters: poolParameters, - cancellationToken: default(CancellationToken)).ConfigureAwait(false); + } +}; ++ArmOperation<BatchAccountPoolResource> armOperation = batchAccount.GetBatchAccountPools().CreateOrUpdate( + WaitUntil.Completed, poolName, batchAccountPoolData); +BatchAccountPoolResource pool = armOperation.Value; ``` ## Use user-assigned managed identities in Batch nodes |
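Note that the updated snippet recorded above builds `BatchAccountPoolData` without showing where the user-assigned identity is attached. The following sketch is one way the assignment could look with the `Azure.ResourceManager` object model; it is an assumption based on the common `ManagedServiceIdentity` pattern from `Azure.ResourceManager.Models`, not text from the original article, and the identity resource ID is a placeholder.

```csharp
using Azure.Core;
using Azure.ResourceManager.Models;

// Sketch only: attach a user-assigned managed identity to the pool definition
// before calling CreateOrUpdate. "your-identity-resource-id" is a placeholder.
batchAccountPoolData.Identity = new ManagedServiceIdentity(ManagedServiceIdentityType.UserAssigned)
{
    UserAssignedIdentities =
    {
        [new ResourceIdentifier("your-identity-resource-id")] = new UserAssignedIdentity()
    }
};
```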
communication-services | European Union Data Boundary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md | For EU communication resources, when the organizer, initiator, or guests join a ## SMS -Azure Communication Services guarantees that SMS data within the EUDB is stored in EUDB regions. As of today, we process and store data in the Netherlands, Ireland or Switzerland regions, ensuring no unauthorized data transfer outside the EEA. +Azure Communication Services guarantees that SMS data within the EUDB is stored in EUDB regions. As of today, we process and store data in the Netherlands, Ireland or Switzerland regions, ensuring no unauthorized data transfer outside the EEA (European Economic Area). Also, Azure Communication Services employs advanced security measures, including encryption, to protect SMS data both at rest and in transit. Customers can select their preferred data residency within the EUDB, making sure data remains within the designated EU regions. #### SMS EUDB FAQ |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | This article describes which capabilities Azure Communication Services SDKs supp | Video rendering | Render single video in many places (local camera or remote stream) | ✔️ | | | Set/update scaling mode | ✔️ | | | Render remote video stream | ✔️ |-| | See **Together** mode video stream | ❌ | +| | See **Together** mode video stream | ✔️ | | | See **Large gallery** view | ❌ | | | Receive video stream from Teams media bot | ❌ | | | Receive adjusted stream for **Content from camera** | ❌ | |
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | The following list presents the set of features that are currently available in | Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ | | | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ | | | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |-| | See together mode video stream | ❌ | ❌ | ❌ | ❌ | +| | See together mode video stream | ✔️ | ❌ | ❌ | ❌ | | | See Large gallery view | ❌ | ❌ | ❌ | ❌ | | | Receive video stream from Teams media bot | ❌ | ❌ | ❌ | ❌ | | | Receive adjusted stream for "content from Camera" | ❌ | ❌ | ❌ | ❌ | |
communication-services | Meeting Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md | The following list of capabilities is allowed when Microsoft 365 users participa | Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | ✔️ | ✔️ | ✔️ | | | Set / update scaling mode | ✔️ | ✔️ | ✔️ | ✔️ | | | Render remote video stream | ✔️ | ✔️ | ✔️ | ✔️ |-| | See together mode video stream | ❌ | ❌ | ❌ | ❌ | +| | See together mode video stream | ✔️ | ❌ | ❌ | ❌ | | | See Large gallery view | ❌ | ❌ | ❌ | ❌ | | | Receive video stream from Teams media bot | ❌ | ❌ | ❌ | ❌ | | | Receive adjusted stream for "content from Camera" | ❌ | ❌ | ❌ | ❌ | |
communication-services | Button Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/button-injection.md | + + Title: Customize the actions from the button bar in the UI Library ++description: Customize the actions from the button bar in the Azure Communication Services UI Library. ++++++ Last updated : 08/01/2024++zone_pivot_groups: acs-plat-ios-android ++#Customer intent: As a developer, I want to customize the button bar actions in the UI Library. +++# Customize the button bar +++To implement custom actions or modify the current button layout, you can interact with the Native UI Library's API. This API involves defining custom button configurations, specifying actions, and managing the button bar's current actions. The API provides methods for adding custom actions, and removing existing buttons, all of which are accessible via straightforward function calls. ++This functionality provides a high degree of customization, and ensures that the user interface remains cohesive and consistent with the application's overall design. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A user access token to enable the call client. [Get a user access token](../../quickstarts/access-tokens.md). +- Optional: Completion of the [quickstart for getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md). ++## Set up the feature ++++## Next steps ++- [Learn more about the UI Library](../../concepts/ui-library/ui-library-overview.md) |
container-registry | Container Registry Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md | Before configuring the Credentials, you have to create and store secrets in the ```azurecli-interactive az acr credential-set create -r MyRegistry \- -n MyRule \ + -n MyDockerHubCredSet \ -l docker.io \ -u https://MyKeyvault.vault.azure.net/secrets/usernamesecret \ -p https://MyKeyvault.vault.azure.net/secrets/passwordsecret Before configuring the Credentials, you have to create and store secrets in the - For example, to update the username or password KV secret ID on the credentials for a given `MyRegistry` Azure Container Registry. ```azurecli-interactive- az acr credential-set update -r MyRegistry -n MyRule -p https://MyKeyvault.vault.azure.net/secrets/newsecretname + az acr credential-set update -r MyRegistry -n MyDockerHubCredSet -p https://MyKeyvault.vault.azure.net/secrets/newsecretname ``` -3. Run [az-acr-credential-set-show][az-acr-credential-set-show] to show the credentials. +3. Run [az acr credential-set show][az-acr-credential-set-show] to show the credentials. - - For example, to show the credentials for a given `MyRegistry` Azure Container Registry. + - For example, to show a credential set in a given `MyRegistry` Azure Container Registry. ```azurecli-interactive- az acr credential-set show -r MyRegistry -n MyCredSet + az acr credential-set show -r MyRegistry -n MyDockerHubCredSet ``` ### Configure and create a cache rule with the credentials Before configuring the Credentials, you have to create and store secrets in the - For example, to create a cache rule with the credentials for a given `MyRegistry` Azure Container Registry. ```azurecli-interactive- az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyCredSet + az acr cache create -r MyRegistry -n MyRule -s docker.io/library/ubuntu -t ubuntu -c MyDockerHubCredSet ``` 2. Run [az acr cache update][az-acr-cache-update] command to update the credentials on a cache rule. Before configuring the Credentials, you have to create and store secrets in the ```azurecli-interactive PRINCIPAL_ID=$(az acr credential-set show - -n MyCredSet \ + -n MyDockerHubCredSet \ -r MyRegistry \ --query 'identity.principalId' \ -o tsv) Before configuring the Credentials, you have to create and store secrets in the az acr credential-set list -r MyRegistry ``` -4. Run [az-acr-credential-set-delete][az-acr-credential-set-delete] to delete the credentials. +4. Run [az acr credential-set delete][az-acr-credential-set-delete] to delete the credentials. - For example, to delete the credentials for a given `MyRegistry` Azure Container Registry. ```azurecli-interactive- az acr credential-set delete -r MyRegistry -n MyCredSet + az acr credential-set delete -r MyRegistry -n MyDockerHubCredSet ``` :::zone-end |
container-registry | Container Registry Tasks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md | For example, with triggers for updates to a base image, you can automate [OS and > [!IMPORTANT] > Azure Container Registry task runs are temporarily paused from Azure free credits. This pause might affect existing task runs. If you encounter problems, open a [support case](../azure-portal/supportability/how-to-create-azure-support-request.md) for our team to provide additional guidance. -> [!WARNING] -> Any information that you provide on the command line or as part of a URI might be logged as part of Azure Container Registry diagnostic tracing. This information includes sensitive data such as credentials and GitHub personal access tokens. Exercise caution to prevent any potential security risks. Don't include sensitive details on command lines or URIs that are subject to diagnostic logging. +> [!WARNING] +> Any information provided on the command line or as part of a URI may be logged as part of Azure Container Registry (ACR) diagnostic tracing. This includes sensitive data such as credentials, GitHub personal access tokens, and other secure information. To prevent potential security risks, avoid including sensitive details in command lines or URIs that are subject to diagnostic logging. ## Task scenarios |
copilot | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md | Title: Microsoft Copilot in Azure capabilities description: Learn about the things you can do with Microsoft Copilot in Azure. Previously updated : 07/26/2024 Last updated : 08/29/2024 Use Microsoft Copilot in Azure to perform many basic tasks in the Azure portal o - [Analyze, estimate, and optimize costs](analyze-cost-management.md) - [Query your attack surface](query-attack-surface.md) - Work smarter with Azure + - [Execute commands](execute-commands.md) - [Deploy virtual machines effectively](deploy-vms-effectively.md) - [Build infrastructure and deploy workloads](build-infrastructure-deploy-workloads.md) - [Create resources using interactive deployments](use-guided-deployments.md) |
copilot | Execute Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/execute-commands.md | + + Title: Execute commands using Microsoft Copilot in Azure (preview) +description: Learn about scenarios where Microsoft Copilot in Azure (preview) can help you perform tasks. Last updated : 08/28/2024+++++++# Execute commands using Microsoft Copilot in Azure (preview) ++Microsoft Copilot in Azure (preview) can help you execute individual or bulk commands on your resources. With Copilot in Azure, you can save time by prompting Copilot in Azure with natural language, rather than manually navigating to a resource and selecting a button in a resource's command bar. ++For example, you can restart your virtual machines by using prompts like **"Restart my VM named ContosoDemo"** or **"Stop my VMs in West US 2."** Copilot in Azure infers the relevant resources through an Azure Resource Graph query and determines the relevant command. Next, it asks you to confirm the action. Commands are never executed without your explicit confirmation. After the command is executed, you can track progress in the notification pane, just as if you manually ran the command from within the Azure portal. For faster responses, specify the resource ID of the resources that you want to run the command on. ++Copilot in Azure can execute many common commands on your behalf, as long as you have the permissions to perform them yourself. If Copilot in Azure is unable to run a command for you, it generally provides instructions to help you perform the task yourself. To learn more about the commands you can execute with natural language for a resource or service, you can ask Copilot in Azure directly. For instance, you can say **"Which commands can you help me perform on virtual machines?"** ++++## Sample prompts ++Here are a few examples of the kinds of prompts you can use to execute commands. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries. ++- "Restart my VM named ContosoDemo" +- "Stop VMs in Europe regions" +- "Restore my deleted storage account" +- "Enable backup on VM named ContosoDemo" +- "Restart my web app named ContosoWebApp" +- "Start my AKS cluster" ++## Examples ++When you say **"Restore my deleted storage account"**, Copilot in Azure launches the **Restore deleted account** experience. From here, you can select the subscription and the storage account that you want to recover. +++If you say **"Find the VMs running right now and stop them"**, Copilot in Azure first queries to find all VMs running in your selected subscriptions. It then shows you the results and asks you to confirm that the selected VMs should be stopped. You can uncheck a box to exclude a resource from the command. After you confirm, the command is run, with progress shown in your notifications. +++Similarly, if you say **"Delete my VMs in West US 2"**, Copilot in Azure runs a query and then asks you to confirm before running the delete command. +++You can also specify the resource name in your prompt. When you say things like **"Restart my VM named ContosoDemo"**, Copilot in Azure looks for that resource, then prompts you to confirm the operation. +++## Next steps ++- Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure. +- [Get tips for writing effective prompts](write-effective-prompts.md) to use with Microsoft Copilot in Azure. |
copilot | Get Information Resource Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-information-resource-graph.md | Title: Get resource information using Microsoft Copilot in Azure (preview) + Title: Get resource information using Microsoft Copilot in Azure (preview) description: Learn about scenarios where Microsoft Copilot in Azure (preview) can help with Azure Resource Graph. Last updated 05/28/2024 Here are a few examples of the kinds of prompts you can use to generate Azure Re ## Examples -You can Ask Microsoft Copilot in Azure (preview) to write queries with prompts like "**Write a query to list my virtual machines with their public interface and public IP.**" +You can ask Microsoft Copilot in Azure (preview) to write queries with prompts like "**Write a query to list my virtual machines with their public interface and public IP.**" :::image type="content" source="media/get-information-resource-graph/azure-resource-graph-explorer-list-vms.png" alt-text="Screenshot of Microsoft Copilot in Azure responding to a request to list VMs."::: |
cost-management-billing | Tutorial Improved Exports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md | Title: Tutorial - Improved exports experience - Preview description: This tutorial helps you create automatic exports for your actual and amortized costs in the Cost and Usage Specification standard (FOCUS) format. Previously updated : 08/09/2024 Last updated : 08/28/2024 Agreement types, scopes, and required roles are explained at [Understand and wor | **Data types** | **Supported agreement** | **Supported scopes** | | | | |-| Cost and usage (actual) | • EA<br> • MCA that you bought through the Azure website <br> • MCA enterprise<br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group | -| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group | +| Cost and usage (actual) | • EA<br> • MCA that you bought through the Azure website <br> • MCA enterprise<br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group | +| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group | | Cost and usage (FOCUS) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner| • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, invoice section, subscription, and resource group <br> • MPA - Customer, subscription, resource group. **NOTE**: The management group scope isn't supported for Cost and usage details (FOCUS) exports. 
| | All available prices | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile | | Reservation recommendations | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile | |
data-factory | Concepts Workflow Orchestration Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-workflow-orchestration-manager.md | Workflow Orchestration Manager in Azure Data Factory offers a range of powerful * SouthEast Asia > [!NOTE]-> By GA, all ADF regions will be supported. The Airflow environment region is defaulted to the Data Factory region and is not configurable, so ensure you use a Data Factory in the above supported region to be able to access the Workflow Orchestration Manager preview. +> The Airflow environment region defaults to the Data Factory region and isn't configurable, so use a Data Factory in one of the supported regions listed above to access the Workflow Orchestration Manager preview. ## Supported Apache Airflow versions |
deployment-environments | Ade Roadmap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/ade-roadmap.md | + + Title: Roadmap for Azure Deployment Environments +description: Learn about features coming soon and in development for Azure Deployment Environments. ++++ Last updated : 08/26/2024++#customer intent: As a customer, I want to understand upcoming features and enhancements in Azure Deployment Environments so that I can plan and optimize development and deployment strategies. ++++# Azure Deployment Environments Roadmap ++This roadmap presents a set of planned feature releases that underscores Microsoft's commitment to revolutionizing the way enterprise developers provision application infrastructure, offering a seamless and intuitive experience that also ensures robust centralized management and governance. This feature list offers a glimpse into our plans for the next six months, highlighting key features we're developing. It's not exhaustive but shows major investments. Some features might release as previews and evolve based on your feedback before becoming generally available. We always listen to your input, so the timing, design, and delivery of some features might change. ++The key deliverables focus on the following themes: ++- Self-serve app infrastructure +- Standardized deployments and customized templates +- Enterprise management ++## Self-serve app infrastructure ++Navigating complex dependencies, opaque configurations, and compatibility issues, alongside managing security risks, has long made deploying app infrastructure a challenging endeavor. Azure Deployment Environments aims to eliminate these obstacles and supercharge enterprise developer agility. By enabling developers to swiftly and effortlessly self-serve the infrastructure needed to deploy, test, and run cloud-based applications, we're transforming the development landscape. Our ongoing investment in this area underscores our commitment to optimizing and enhancing the end-to-end developer experience, empowering teams to innovate without barriers. ++- Enhanced integration with Azure Developer CLI (azd) will support ADE's extensibility model, enabling deployments using any preferred IaC framework. The extensibility model allows enterprise development teams to deploy their code onto newly provisioned or existing environments with simple commands like `azd up` and `azd deploy`. By facilitating real-time testing, rapid issue identification, and swift resolution, developers can deliver higher-quality applications faster than ever before. +- Ability to track and manage environment operations, logs, and the deployment outputs directly in the developer portal will make it easier for dev teams to troubleshoot any potential issues and fix their deployments. ++## Standardized deployments and customized templates ++Azure Deployment Environments empowers platform engineers and dev leads to securely provide curated, project-specific IaC templates directly from source control repositories. With the support for an extensibility model, organizations can now use their preferred IaC frameworks, including popular third-party options like Pulumi and Terraform, to execute deployments seamlessly. ++While the extensibility model already allows for customized deployments, we're committed to making it exceptionally easy for platform engineers and dev leads to tailor their deployments, ensuring they can securely meet the unique needs of their organization or development team. 
++- Configuring pre- and post-deployment scripts as part of environment definitions will unlock the power to integrate more logic, validations, and custom actions into deployments, leveraging internal APIs and systems for more customized and efficient workflows. +- Support for private registries will allow platform engineers to store custom container images in a private Azure Container Registry (ACR), ensuring controlled and secure access. ++## Enterprise management ++Balancing developer productivity with security, compliance, and cost management is crucial for organizations. Deployment Environments boosts productivity while upholding organizational security and compliance standards by centralizing environment management and governance for platform engineers. ++We're committed to further investing in capabilities that strengthen both security and cost controls, ensuring a secure and efficient development ecosystem. ++- Ability to configure a private virtual network for the runner executing the template deployments puts enterprises in control while accessing confidential data and resources from internal systems. +- Default autodeletion eliminates orphaned cloud resources, safeguarding enterprises from unnecessary costs and ensuring budget efficiency. ++This roadmap outlines our current priorities, and we remain flexible to adapt based on customer feedback. We invite you to [share your thoughts and suggest more capabilities you would like to see](https://developercommunity.microsoft.com/deploymentenvironments/suggest). Your insights help us refine our focus and deliver even greater value. ++## Related content ++- [What is Azure Deployment Environments?](overview-what-is-azure-deployment-environments.md) |
energy-data-services | Concepts Reference Data Values | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-reference-data-values.md | + + Title: Reference Data Value Syncing in Microsoft Azure Data Manager for Energy +description: This article describes Reference Values and syncing of Reference Values with Azure Data Manager for Energy data partitions. ++++ Last updated : 08/28/2024++++# Syncing Reference Data Values ++> [!IMPORTANT] +> The feature to sync reference values in your Azure Data Manager for Energy data partition is currently in **Limited Preview**. If you're interested in having this feature enabled for your Azure subscription, reach out to your Microsoft Sales contact or open a support ticket for assistance. +> ++This article provides an overview of reference data values in the OSDU Data Platform and explains how Azure Data Manager for Energy helps you synchronize them with OSDU community standards. ++## What are reference data values and why are they important? ++Within the OSDU Data Platform framework, reference data values play a crucial role in ensuring data consistency and standardization. Reference data refers to the set of permissible values for attributes to be used across various data fields, such as master data or work product components. For example, `degree Celsius` is a permitted `UnitofMeasure`, and `Billing Address` is a permitted `AddressType`. ++In addition to enabling data interpretation and collaboration, reference data is required for data ingestion via the OSDU manifest ingestion workflow. Manifests provide a specific container for reference data values, which are then used to validate the ingested data and generate metadata for later discovery and use. To learn more about manifest-based ingestion, see [Manifest-based ingestion concepts](concepts-manifest-ingestion.md). ++The OSDU Data Platform categorizes reference data values into the following three buckets: +* **FIXED** values, which are universally recognized and used across OSDU deployments and the energy sector. These values can't be extended or changed except by OSDU community governance updates +* **OPEN** values. The OSDU community provides an initial list of OPEN values that you can extend but not otherwise change +* **LOCAL** values. The OSDU community provides an initial list of LOCAL values that you can freely change, extend, or entirely replace ++For more information about OSDU reference data values and their different types, see [OSDU Data Definitions / Data Definitions / Reference Data](https://community.opengroup.org/osdu/dat#22-reference-data). ++## Configuring value syncing in Azure Data Manager for Energy ++To help you maintain data integrity and facilitate interoperability, new Azure Data Manager for Energy instances are automatically created with **FIXED** and **OPEN** reference data values synced per the latest set from the OSDU community for the [current milestone supported by Azure Data Manager for Energy](osdu-services-on-adme.md). You can additionally choose to have new instances created with **LOCAL** values synced as well. ++Later, if you create new data partitions in the Azure Data Manager for Energy instance, they'll also be created with FIXED and OPEN reference values synced. If you had chosen to additionally sync LOCAL values when you first created the instance, new partitions will also sync LOCAL values from the community. 
++As covered in the [Quickstart: Create an Azure Data Manager for Energy instance article](quickstart-create-microsoft-energy-data-services-instance.md), you can choose to enable LOCAL value syncing when creating a new Azure Data Manager for Energy instance. When deploying through the Azure portal, you can enable LOCAL syncing in the "Advanced Settings" tab. FIXED and OPEN reference values will always be synced when new instances are created. ++When deploying through ARM templates, you can enable LOCAL syncing by setting the `ReferenceDataProperties` property to `All`. To restrict syncing to only FIXED and OPEN values, set its value to `NonLocal`. ++## Legal Tags and Entitlements for synced reference values +Azure Data Manager for Energy automatically sets **Legal Tags** and **Entitlements** for reference data values as they're synced. ++For all synced reference data values, whether FIXED, OPEN, or LOCAL, **Legal Tags** are set to `{data-partition-id}-referencedata-legal`, where `{data-partition-id}` corresponds to the data partition name you provided when configuring new data partition creation. ++For **Entitlements**, Azure Data Manager for Energy automatically creates entitlement groups that you can then use for access controls. Groups are created for OWNERS and VIEWERS across FIXED, OPEN, and LOCAL values: ++| Governance Set | OWNERS Group | VIEWERS Group | +| | | | +| FIXED | data.referencedata.owners@{data_partition_id}.{osdu_domain} | data.referencedata.viewers@{data_partition_id}.{osdu_domain} | +| OPEN | data.referencedata.owners@{data_partition_id}.{osdu_domain} | data.referencedata.viewers@{data_partition_id}.{osdu_domain} | +| LOCAL | data.referencedata-local.owners@{data_partition_id}.{osdu_domain} | data.referencedata-local.viewers@{data_partition_id}.{osdu_domain} | ++The above LOCAL groups are only created if you chose to sync LOCAL values. ++If you extend OPEN values after instance creation, we recommend creating and using different access control lists (ACLs) to govern their access. For example, `data.referencedata-{ORG}.owners@{data_partition_id}.{osdu_domain}` and `data.referencedata-{ORG}.viewers@{data_partition_id}.{osdu_domain}`, where `{ORG}` differentiates the ACL from the one used for standard OPEN values synced at creation. ++**NameAlias updates** don't require a separate entitlement. Updates to the `NameAlias` field are governed by the same access control mechanisms as updates to any other part of a storage record. In effect, OWNER access confers the entitlement to update the `NameAlias` field. ++## Current scope of Azure Data Manager for Energy reference data value syncing +Currently, Azure Data Manager for Energy syncs reference data values at instance creation and at new partition creation. Reference values are synced to those from the OSDU community, corresponding to the OSDU milestone supported by Azure Data Manager for Energy at the time of instance or partition creation. For information on the current milestone supported by and available OSDU service in Azure Data Manager for Energy, refer [OSDU services available in Azure Data Manager for Energy](osdu-services-on-adme.md). 
++## Next steps +- [Quickstart: Create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) +- [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md) +- [OSDU Operator Data Loading Quick Start Guide](https://community.opengroup.org/groups/osdu/platform/data-flow/data-loading/-/wikis/home#osdu-operator-data-loading-quick-start-guide) |
energy-data-services | Quickstart Create Microsoft Energy Data Services Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md | In this quickstart, you create an Azure Data Manager for Energy instance by usin You use a simple interface in the Azure portal to set up your Azure Data Manager for Energy instance. The process takes about 50 minutes to complete. -Azure Data Manager for Energy is a managed platform as a service (PaaS) offering from Microsoft that builds on top of the [OSDU®](https://osduforum.org/) Data Platform. When you connect your consuming in-house or third-party applications to Azure Data Manager for Energy, you can use the service to ingest, transform, and export subsurface data. +Azure Data Manager for Energy is a managed platform as a service (PaaS) offering from Microsoft that builds on top of the [OSDU®](https://osduforum.org/) Data Platform. When you connect your consuming internal or external applications to Azure Data Manager for Energy, you can use the service to ingest, transform, and export subsurface data. OSDU® is a trademark of The Open Group. Client secret | Sometimes called an application password, a client secret is a s 1. Save your application (client) ID and client secret from Microsoft Entra ID to refer to them later in this quickstart. -1. Sign in to [Microsoft Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home). +1. Sign-in to [Microsoft Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter on the top menu to switch to the tenant in which you want to install Azure Data Manager for Energy. Client secret | Sometimes called an application password, a client secret is a s | Field | Requirements | | -- | |- **Instance details** > **Name** | Only alphanumeric characters are allowed, and the value must be 1 to 15 characters. The name is not case-sensitive. One resource group can't have two instances with the same name. + **Instance details** > **Name** | Only alphanumeric characters are allowed, and the value must be 1 to 15 characters. The name isn't case-sensitive. One resource group can't have two instances with the same name. **Instance details** > **App ID** | Enter the valid application ID that you generated and saved in the last section.- **Data Partitions** > **Name** | Each name must be 1 to 10 characters and consist of lowercase alphanumeric characters and hyphens. It must start with an alphanumeric character and not contain consecutive hyphens. Data partition names that you choose are automatically prefixed with your Azure Data Manager for Energy instance name. Application and API calls will use these compound names to refer to your data partitions. + **Data Partitions** > **Name** | Each name must be 1 to 10 characters and consist of lowercase alphanumeric characters and hyphens. It must start with an alphanumeric character and not contain consecutive hyphens. Data partition names that you choose are automatically prefixed with your Azure Data Manager for Energy instance name. Application and API calls use these compound names to refer to your data partitions. > [!NOTE] > After you create names for your Azure Data Manager for Energy instance and data partitions, you can't change them later. 
Client secret | Sometimes called an application password, a client secret is a s [![Screenshot of the tab for specifying tags in Azure Data Manager for Energy.](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png)](media/quickstart-create-microsoft-energy-data-services-instance/input-tags.png#lightbox) -1. Move to the **Resource Sharing (CORS)** tab and configure cross-origin resource sharing as needed. [Learn more about cross-origin resource sharing in Azure Data Manager for Energy](../energy-data-services/how-to-enable-cors.md). +1. Move to the **Advanced Settings** tab to configure **cross-origin resource sharing** and, if available to you as a customer in the limited preview for the feature, **reference data values settings**. To learn more about cross-origin resource sharing (CORS), see [Use CORS for resource sharing in Azure Data Manager for Energy](../energy-data-services/how-to-enable-cors.md). To learn more about reference data values, see [Syncing Reference Data Values](../energy-data-services/concepts-reference-data-values.md) - [![Screenshot of the tab for configuring cross-origin resource sharing in Azure Data Manager for Energy.](media/quickstart-create-microsoft-energy-data-services-instance/cors-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/cors-tab.png#lightbox) + [![Screenshot of the tab for configuring cross-origin resource sharing in Azure Data Manager for Energy.](media/quickstart-create-microsoft-energy-data-services-instance/advanced-settings-tab.png)](media/quickstart-create-microsoft-energy-data-services-instance/advanced-settings-tab.png#lightbox) 1. Move to the **Review + Create** tab. To delete an Azure Data Manager for Energy instance: 1. Remove any locks that you set at the resource group level. Locked resources remain active until you remove the locks and successfully delete the resources. -1. Sign in to the Azure portal and delete the resource group in which the Azure Data Manager for Energy components are installed. +1. Sign-in to the Azure portal and delete the resource group in which the Azure Data Manager for Energy components are installed. -1. This step is optional. Go to Microsoft Entra ID and delete the app registration that you linked to your Azure Data Manager for Energy instance. +1. (Optional) Go to Microsoft Entra ID and delete the app registration that you linked to your Azure Data Manager for Energy instance. ## Next steps -After you provision an Azure Data Manager for Energy instance, you can learn about user management on this instance: --> [!div class="nextstepaction"] -> [How to manage users](how-to-manage-users.md) +- [How to manage users](how-to-manage-users.md) |
expressroute | Circuit Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/circuit-migration.md | The diagram above illustrates the migration process from an existing ExpressRout ## Deploy new circuit in isolation -Follow the steps in [Create a circuit with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md), to create your new ExpressRoute circuit (Circuit B) in the desired peering location. Then, follow the steps in [Tutorial: Configure peering for ExpressRoute circuit](expressroute-howto-routing-portal-resource-manager.md) to configure the required peering types: private and Microsoft. +For a one-to-one replacement of the existing circuit, select the **Standard Resiliency** option and follow the steps outlined in the [Create a circuit with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md) guide to create your new ExpressRoute circuit (Circuit B) in the desired peering location. Then, follow the steps in [Configure peering for ExpressRoute circuit](expressroute-howto-routing-portal-resource-manager.md) to configure the required peering types: private and Microsoft. To prevent the private peering production traffic from using Circuit B before testing and validating it, don't link virtual network gateway that has production deployment to Circuit B. Similarly to avoid Microsoft peering production traffic from using Circuit B, don't associate a route filter to Circuit B. |
frontdoor | Front Door Custom Domain Https | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md | If the CNAME record entry for your endpoint no longer exists or it contains the After you enable HTTPS on your custom domain, the DigiCert CA validates ownership of your domain by contacting its registrant, according to the domain's [WHOIS](http://whois.domaintools.com/) registrant information. Contact is made via the email address (by default) or the phone number listed in the WHOIS registration. You must complete domain validation before HTTPS is active on your custom domain. You have six business days to approve the domain. Requests that aren't approved within six business days are automatically canceled. DigiCert domain validation works at the subdomain level. You need to prove ownership of each subdomain separately. -![WHOIS record](./media/front-door-custom-domain-https/whois-record.png) DigiCert also sends a verification email to other email addresses. If the WHOIS registrant information is private, verify that you can approve directly from one of the following addresses: When you select the approval link, you're directed to an online approval form. F - You can approve just the specific host name used in this request. Extra approval is required for subsequent requests. -After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and gets autorenew before it expires. +After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year. If the CNAME record for your custom domain is added or updated to map to your Azure Front Door's default hostname after verification, then it will be autorenewed before it expires. ++> [!NOTE] +> Managed certificate autorenewal requires that your custom domain be directly mapped to your Front Door's default .azurefd.net hostname by a CNAME record. ## Wait for propagation |
governance | Determine Non Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md | Title: Determine causes of non-compliance description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance with the policy. Previously updated : 11/30/2023 Last updated : 08/28/2024 Begin by following the same steps in the [Compliance details](#compliance-detail In the Compliance details pane view, select the link **Last evaluated resource**. The **Guest Assignment** page displays all available compliance details. Each row in the view represents an evaluation that was performed inside the machine. In the **Reason** column, a phrase is shown describing why the Guest Assignment is _Non-compliant_. For example, if password policies, the **Reason** column would display text including the current value for each setting. ### View configuration assignment details at scale |
hdinsight | Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-monitor-agent.md | Title: Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters description: Learn how to migrate to Azure Monitor Agent (AMA) in Azure HDInsight clusters. Previously updated : 08/28/2024 Last updated : 08/29/2024 # Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters The following sections describe how customers can use the new Azure Monitor Agen > [!NOTE] > Customers using Azure Monitor Classic will no longer work after 31 August, 2024. > Customers using New Azure Monitor experience (preview) are required to migrate to Azure Monitor Agent (AMA) before January 31, 2025.-> Clusters with image **2407260448** with the latest HDInsight API [2024-08-01-preview](/rest/api/hdinsight/extensions/enable-azure-monitor-agent?view=rest-hdinsight-2024-08-01-preview) will have ability to enable the Azure Monitor Agent integration, and this will be the default setup for customers using image **2407260448**. +> Clusters with image **2407260448** with the latest HDInsight API [2024-08-01-preview](/rest/api/hdinsight/extensions/enable-azure-monitor-agent?view=rest-hdinsight-2024-08-01-preview&preserve-view=true) will have ability to enable the Azure Monitor Agent integration, and this will be the default setup for customers using image **2407260448**. ### Activate a new Azure Monitor Agent integration The following sections describe how customers can use the new Azure Monitor Agen > > For more information about how to create a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](/azure/azure-monitor/logs/quick-create-workspace). -### Approach 1: enable Azure monitor agent using Portal +### Enable Azure monitor agent using Portal Activate the new integration by going to your cluster's portal page and scrolling down the menu on the left until you reach the Monitoring section. The following steps describe how customers can enable the new Azure Monitor Agen There are two ways you can access the new tables. -**Known Issues** -Logs related to Livy jobs are missing some columns in few tables. Reach out to customer support. - #### Approach 1: 1. The first way to access the new tables is through the Log Analytics workspace. |
healthcare-apis | Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deidentification/transparency-note.md | + + Title: The Azure Health Data Services de-identification service (preview) transparency note +description: The basics of Azure Health Data Services’ de-identification service and Responsible AI ++++ Last updated : 8/16/2024+++# The basics of Azure Health Data Services’ de-identification service ++Azure Health Data Services’ de-identification service is an API that uses natural language processing techniques to find and label, redact, or surrogate Protected Health Information (PHI) in unstructured text. The service can be used for diverse types of unstructured health documents, including discharge summaries, clinical notes, clinical trials, messages, and more. The service uses machine learning to identify PHI, including HIPAA’s 18 identifiers, using the “TAG” operation. The redaction and surrogation operations replace these PHI values with a tag of the entity type or a surrogate, or pseudonym. ++## Key terms +| Term | Definition | +| : | : | +| Surrogation | The replacement of data using a pseudonym or alternative token. | +| Tag | The action or process of detecting words and phrases mentioned in unstructured text using named entity recognition. | +| Consistent Surrogation | The process of replacing PHI values with alternative non-PHI data, such that the same PHI values are repeatedly replaced with consistent values. This may be within the same document or across documents for a given organization. | ++## Capabilities +### System behavior +To use the de-identification service, the input raw, unstructured text, can be sent synchronously one at a time or asynchronously as a batch. For the synchronous call, the API output is handled in your application. For the batch use case, the API call requires a source and target file location in Azure blob storage. Three possible operations are available through the API: "Tag," "Redact," or "Surrogate." Tag returns PHI values detected with named entity recognition. Redact returns the input text, except with the entity type replacing the PHI values. Surrogation returns the input text, except with randomly selected identifiers, or the same entity type, replacing the PHI values. Consistent surrogation is available across documents using the batch API. ++## Use cases +### Intended uses +The de-identification service was built specifically for health and life sciences organizations within the United States subject to HIPAA. We do not recommend this service for non-medical applications or for applications other than English. Some common customer motivations for using the de-identification service include: ++- Developing de-identified data for a test or research environment +- Developing de-identified datasets for data analytics without revealing confidential information +- Training machine learning models on private data, which is especially important for generative AI +- Sharing data across collaborating institutions ++## Considerations when choosing other use cases +We encourage customers to use the de-identification service in their innovative solutions or applications. However, de-identified data alone or in combination with other information may reveal patients' identities. As such, customers creating, using, and sharing de-identified data should do so responsibly. ++## Disclaimer +Results derived from the de-identification service vary based on factors such as data input and functions selected. 
Microsoft is unable to evaluate the output of the de-identification service to determine the acceptability of any use cases or compliance needs. Outputs from the de-identification service are not guaranteed to meet any specific legal, regulatory, or compliance requirements. Please see the limitations before using the de-identification service. ++## Suggested use +The de-identification service offers three operations: Tag, Redact, and Surrogation. When appropriate, we recommend users deploy surrogation over redaction. Surrogation is useful when the system fails to identify true PHI. The real value is hidden among surrogates, or stand-in-data. The data is "hiding in plain sight," unlike redaction. The service also offers consistent surrogation, or a continuous mapping of surrogate replacements across documents. Consistent surrogation is available by submitting files in batches to the API using the asynchronous endpoint. We recommend limiting the batch size as consistent surrogation over a large number of records degrades the privacy of the document. ++## Technical limitations, operational factors, and ranges +There are various cases that would impact the de-identification service’s performance. ++- Coverage: Unstructured text may contain information that reveals identifying characteristics about an individual that alone, or in combination with external information, reveals the identity of the individual. For example, a clinical record could state that a patient is the only known living person diagnosed with a particular rare disease. The unstructured text alone, or in combination with external information, may reveal that patient’s clinical records. +- Languages: Currently, the de-identification service is enabled for English text only. +- Spelling: Incorrect spelling might affect the output. If a word or the surrounding words are misspelled the system might or might not have enough information to recognize that the text is PHI. +- Data Format: The service performs best on unstructured text, such as clinical notes, transcripts, or messages. Structured text without context of surrounding words may or may not have enough information to recognize that the text is PHI. +- Performance: Potential error types are outlined in the System performance section. +- Surrogation: As stated above, the service offers consistent surrogation, or a continuous mapping of surrogate replacements across documents. Consistent surrogation is available by submitting files in batches to the API using the asynchronous endpoint. Submitting the same files in different batches or through the real-time endpoint results in different surrogates used in place of the PHI values. +- Compliance: The de-identification service's performance is dependent on the user’s data. The service does not guarantee compliance with HIPAA’s Safe Harbor method or any other privacy methods. We encourage users to obtain appropriate legal review of your solution, particularly for sensitive or high-risk applications. ++## System performance +The de-identification service might have both false positive errors and false negative errors. An example of +a false positive is tag, redaction, or surrogation of a word or token that is not PHI. An example of a false +negative is the service’s failure to tag, redact, or surrogate a word or token that is truly PHI. ++| Classification | Example | Tag Example | Explanation | +| :- | : | :- | :- | +| False Positive | Patient reports allergy to cat hair. | Patient reports allergy to DOCTOR hair. 
| This is an example of a false positive, as "cat" in this context isn't PHI. "Cat" refers to an animal, and not a name. | +| False Negative | Jane reports allergy to cat hair. | Jane reports allergy to cat hair. | The system failed to identify Jane as a name. | +| True Positive | Jane reports allergy to cat hair. | PATIENT reports allergy to cat hair. | The system correctly identified Jane as a name. | +| True Negative | Patient reports allergy to cat hair. | Patient reports allergy to cat hair. | The system correctly identified that "cat" is not PHI. | ++When evaluating candidate models for our service, we strive to reduce false negatives, the most important +metric from a privacy perspective. ++The de-identification model is trained and evaluated on diverse types of unstructured medical documents, including clinical notes and transcripts. Our training data includes synthetically generated data, open datasets, and commercially obtained datasets with patient consent. We do not retain or use customer data to improve the service. Even though internal tests demonstrate the model’s potential to generalize to different populations and locales, you should carefully evaluate your model in the context of your intended use. ++## Best practices for improving system performance +There are numerous best practices to improve the de-identification services’ performance: ++- Surrogation: When appropriate, we recommend users deploy surrogation over redaction. This is because if the system fails to identify true PHI, then the real value would be hidden among surrogates, or stand-in-data. The data is "hiding in plain sight." +- Languages: Currently, the de-identification service is enabled for English text only. Code-switching or using other languages results in worse performance. +- Spelling: Correct spelling improves performance. If a word or the surrounding words are misspelled the system might or might not have enough information to recognize that the text is PHI. +- Data Format: The service performs best on unstructured text, such as clinical notes, transcripts, or messages. Structured text without context of surrounding words may or may not have enough information to recognize that the text is PHI. ++## Evaluation of the de-identification service +### Evaluation methods +Our de-identification system is evaluated in terms of its ability to detect PHI in incoming text, and secondarily our ability to replace that PHI with synthetic data that preserves the semantics of the incoming text. ++### PHI detection +Our system focuses on its ability to successfully identify and remove all PHI in incoming text (recall). Secondary metrics include precision, which tells us how often we think something is PHI when it is not, as well as how often we identify both the type and location of PHI in text. As a service that is typically used to mitigate risk associated with PHI, the primary release criteria we use is recall. Recall is measured on a number of academic and internal datasets written in English and typically covers medical notes and conversations across various medical specialties. Our internal metrics do not include non-PHI text and are measured at an entity level with fuzzy matching such that the true text span need not match exactly to the detected one. ++Our service goal is to maintain recall greater than 95%. ++### PHI replacement +An important consideration for a system such as ours is that we produce synthetic data that looks like the original data source in terms of plausibility and readability. 
To this end, we evaluate how often our system produces replacements that can be interpreted as the same type as the original. This is an important intermediate metric that is a predictor of how well downstream applications could make sense of the de-identified data. ++Secondarily, we internally study the performance of machine learning models trained on original vs. de-identified data. We do not publish the results of these studies, however we have found that using surrogation for machine learning applications can greatly improve the downstream ML model performance. As every machine learning application is different and these results may not translate across applications depending on their sensitivity to PHI, we encourage our customers who are using machine learning to study the applicability of de-identified data for machine learning purposes. ++### Evaluation results +Our system currently meets our benchmarks for recall and precision on our academic evaluation sets. ++### Limitations +The data and measurement that we perform represents most healthcare applications involving text conducted in English. In doing so, our system is optimized to perform well on medical data, and we believe represent the typical usage including length, encoding, formatting, markup, style, and content. Our system performs well for many types of text, but may underperform if the incoming data differs with respect to any of these +metrics. Care has been taken in the system to analyze text in large chunks, such that the context of a phrase is used to infer if it is PHI or not. We do not recommend using this system in a real-time / transcription application, where the caller may only have access to the context before a PHI utterance. Our system relies on both pre- and post- text for context. ++Our training algorithm leverages large foundational models that are trained on large amounts of text from all sources, including nonmedical sources. While every reasonable effort is employed to ensure that the results of these models are in line with the domain and intended usage of the application, these systems may not perform well in all circumstances for all data. We do not recommend this system for nonmedical applications or for applications other than English. ++### Fairness considerations +The surrogation system replaces names through random selection. This may result in a distribution of names more diverse than the original dataset. The surrogation system also strives to not include offensive content in results. The surrogation list has been evaluated by a content-scanning tool designed to check for sensitive geopolitical terms, profanity, and trademark terms in Microsoft products. At this time, we do not support languages other than English but plan to support multilingual input in the future. +Our model has been augmented to provide better than average performance for all cultures. We carefully inject data into our training process that represents many ethnicities in an effort to provide equal performance in PHI removal for all data, regardless of source. The service makes no guarantees implied or explicit with respect to its interpretation of data. Any user of this service should make no inferences about associations or correlations between tagged data elements such as: gender, age, location, language, occupation, illness, income level, marital status, disease or disorder, or any other demographic information. 
++## Evaluating and integrating the de-identification service for your use +Microsoft wants to help you responsibly deploy the de-identification service. As part of our commitment +to developing responsible AI, we urge you to consider the following factors: ++- Understand what it can do: Fully assess the capabilities of the de-identification service to understand its capabilities and limitations. Understand how it will perform in your scenario, context, and on your specific data set. +- Test with real, diverse data: Understand how the de-identification service will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflect the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance. +- Respect an individual's right to privacy: Only collect or use data and information from individuals for lawful and justifiable purposes. Use only the data and information that you have consent to use or are legally permitted to use. +- Language: The de-identification service, at this time, is only built for English. Using other languages will impact the performance of the model. +- Legal review: Obtain appropriate legal review of your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and any risks that need to be mitigated prior to use. It is your responsibility to mitigate such risks and resolve any issues that might come up. +- System review: If you plan to integrate and responsibly use an AI-powered product or feature into an existing system for software or customer or organizational processes, take time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft Responsible AI principles. +- Human in the loop: Keep a human in the loop and include human oversight as a consistent pattern area to explore. This means constant human oversight of the AI-powered product or feature and +ensuring the role of humans in making any decisions that are based on the model’s output. To prevent harm and to manage how the AI model performs, ensure that humans have a way to intervene in the solution in real time. +- Security: Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access. +- Customer feedback loop: Provide a feedback channel that users and individuals can use to report issues with the service after it's deployed. After you deploy an AI-powered product or feature, it requires ongoing monitoring and improvement. Have a plan and be ready to implement feedback and suggestions for improvement. ++## Learn more about responsible AI +- [Microsoft AI principals](https://www.microsoft.com/ai/responsible-ai) +- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) ++## Learn more about the de-identification service +* Explore [Microsoft Cloud for Healthcare](https://www.microsoft.com/industry/health/microsoft-cloud-for-healthcare) +* Explore [Azure Health Data Services](https://azure.microsoft.com/products/health-data-services/) ++## About this document +© 2023 Microsoft Corporation. All rights reserved. This document is provided "as-is" and for informational purposes only. 
Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred. +This document is not intended to be, and should not be construed as providing legal advice. The jurisdiction in which you’re operating may have various regulatory or legal requirements that apply to your AI system. Consult a legal specialist if you are uncertain about laws or regulations that might apply to your system, especially if you think those might impact these recommendations. Be aware that not all of these recommendations and resources will be appropriate for every scenario, and conversely, these recommendations and resources may be insufficient for some scenarios. ++Published: September 30, 2023 ++Last updated: August 16, 2024 |
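The transparency note above describes Tag, Redact, and Surrogate operations exposed over a synchronous endpoint and a batch endpoint, but it doesn't spell out the request contract. The sketch below is therefore hypothetical: the endpoint path, token scope, and payload fields (`operation`, `inputText`, `outputText`) are assumptions for illustration, and only the operation names come from the note.

```python
# Hypothetical sketch of a synchronous de-identification call. The endpoint path,
# token scope, and JSON field names are assumptions -- consult the service's API
# reference for the real contract before using this pattern.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")  # scope is an assumption

SERVICE_URL = "https://<your-deid-service>.azure.com"  # placeholder

def deidentify(text: str, operation: str = "Surrogate") -> str:
    """Send unstructured text and return the de-identified result (Tag, Redact, or Surrogate)."""
    response = requests.post(
        f"{SERVICE_URL}/deid",  # placeholder path
        headers={"Authorization": f"Bearer {token.token}"},
        json={"operation": operation, "inputText": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("outputText", "")

print(deidentify("Jane reports allergy to cat hair."))
```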
healthcare-apis | Dicom Services Conformance Statement V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md | We support the following matching types. | Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This range is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to, and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values must be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` isn't valid. | | Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. |+| WildCard Match | `PatientID`, <br/> `ReferencedRequestSequence.AccessionNumber`, <br/> `ReferencedRequestSequence.RequestedProcedureID`, <br/> `ProcedureStepState`, <br/> `ScheduledStationNameCodeSequence.CodeValue`, <br/> `ScheduledStationClassCodeSequence.CodeValue`, <br/> `ScheduledStationGeographicLocationCodeSequence.CodeValue` | The following wildcard characters are supported: <br/> `*` - Matches zero or more characters. For example - `{attributeID}={val*}` matches "val", "valid", "value" but not "evaluate". <br/> `?` - Matches a single character. For example - `{attributeID}={valu?}` matches "value", "valu1" but not "valued" or "valu" | > [!NOTE] > Although we don't support full sequence matching, we do support exact match on the attributes listed that are contained in a sequence. |
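To make the new wildcard row concrete, here is a hedged sketch of a UPS-RS style workitem search that uses `*` and `?` as described; the base URL, API version, and authentication details are placeholders rather than values taken from the conformance statement.

```python
# A small sketch of the wildcard matching described above, using a QIDO-style
# search over workitems. Base URL, API version, and token handling are placeholders;
# only the query-parameter syntax mirrors the table.
import requests

BASE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2"  # placeholder
TOKEN = "<bearer-token>"  # obtain via your usual Microsoft Entra flow

# '*' matches zero or more characters, '?' matches exactly one character.
params = {
    "PatientID": "98*",              # matches "98", "981", "98-XYZ", ...
    "ProcedureStepState": "SCHEDULED",
}
response = requests.get(
    f"{BASE_URL}/workitems",
    params=params,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/dicom+json"},
    timeout=30,
)
response.raise_for_status()
print(f"{len(response.json())} matching workitems")
```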
logic-apps | Logic Apps Enterprise Integration Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-certificates.md | If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi > > If you're using access policies with your key vault, consider > [migrating to the Azure role-based access control permission model](/azure/key-vault/general/rbac-migration).+ > + > If you receive the error **"Please authorize logic apps to perform operations on key vault by granting access for the logic apps + > service principal '7cd684f4-8a78-49b0-91ec-6a35d38739ba' for 'list', 'get', 'decrypt' and 'sign' operations."**, your + > certificate might not have the **Key Usage** property set to **Data Encipherment**. If not, you might have to recreate the certificate + > with the **Key Usage** property set to **Data Encipherment**. To check your certificate, open the certificate, select the + > **Details** tab, and review the **Key Usage** property. * [Add the corresponding public certificate](#add-public-certificate) to your key vault. This certificate appears in your [agreement's **Send** and **Receive** settings for signing and encrypting messages](logic-apps-enterprise-integration-agreements.md). For example, review [Reference for AS2 messages settings in Azure Logic Apps](logic-apps-enterprise-integration-as2-message-settings.md). |
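The new troubleshooting note above turns on the certificate's **Key Usage** including **Data Encipherment**. As a minimal sketch (assuming the `azure-keyvault-certificates` Python SDK and placeholder vault and certificate names), this is one way to create a self-signed certificate in Key Vault with that key usage set:

```python
# Minimal sketch: create a self-signed certificate in Key Vault whose Key Usage
# includes Data Encipherment, as the note above requires. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient, CertificatePolicy, KeyUsageType

client = CertificateClient(
    vault_url="https://<your-key-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

policy = CertificatePolicy(
    issuer_name="Self",
    subject="CN=contoso-b2b-signing",
    key_usage=[KeyUsageType.data_encipherment, KeyUsageType.digital_signature],
    validity_in_months=12,
)

poller = client.begin_create_certificate(certificate_name="as2-partner-cert", policy=policy)
certificate = poller.result()
print(certificate.policy.key_usage)  # confirm Data Encipherment is present
```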
managed-grafana | How To Use Reporting And Image Rendering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-use-reporting-and-image-rendering.md | -# Use reporting and image rendering (preview) +# Use reporting and image rendering In this guide, you learn how to create reports from your dashboards in Azure Managed Grafana. You can configure these reports to be emailed to the intended recipients on a regular schedule or on demand. Generating reports in the PDF format requires Grafana's image rendering capability, which captures dashboard panels as PNG images. Azure Managed Grafana installs the image renderer for your instance automatically. -> [!IMPORTANT] -> Reporting and image rendering are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Image rendering performance Image rendering is a CPU-intensive operation. An Azure Managed Grafana instance needs about 10 seconds to render one panel, assuming the data query completes in less than 1 second. The Grafana software allows a maximum of 200 seconds to generate an entire report. Dashboards should contain no more than 20 panels each if they're used in PDF reports. You may have to reduce the number of panels further if you plan to include other artifacts (for example, CSV) in the reports. |
migrate | Migrate Support Matrix Vmware Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware-migration.md | The table summarizes VMware vSphere VM support for VMware vSphere VMs you want t ### Appliance requirements (agent-based) -When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2022 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements. +When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements. - Learn about [replication appliance requirements](../migrate-replication-appliance.md#appliance-requirements) for VMware vSphere. - Install MySQL on the appliance. Learn about [installation options](../migrate-replication-appliance.md#mysql-installation). |
nat-gateway | Nat Gateway Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-design.md | NAT Gateway supersedes any outbound configuration from a load-balancing rule or The NAT gateway supersedes any outbound configuration from a load-balancing rule or outbound rules on a load balancer and instance level public IPs on a virtual machine. All virtual machines in subnets 1 and 2 use the NAT gateway exclusively for outbound and return traffic. Instance-level public IPs take precedence over load balancer. The VM in subnet 1 uses the instance level public IP for inbound originating traffic. VMSS do not have instance-level public IPs. -## Monitor outbound network traffic with NSG flow logs +## Monitor outbound network traffic with VNet flow logs -A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from the virtual machine behind your NAT gateway, enable NSG flow logs. + [Virtual network (VNet) flow logs](../network-watcher/vnet-flow-logs-overview.md) are a feature of Azure Network Watcher that logs information about IP traffic flowing through a virtual network. To monitor outbound traffic flowing from the virtual machine behind your NAT gateway, enable VNet flow logs. -For information about NSG flow logs, see [NSG flow log overview](/azure/network-watcher/network-watcher-nsg-flow-logging-overview). +For guides on how to enable VNet flow logs, see [Manage virtual network flow logs](../network-watcher/vnet-flow-logs-portal.md). -For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](/azure/network-watcher/network-watcher-nsg-flow-logging-overview#enabling-nsg-flow-logs). +It is recommended to access the log data on [Log Analytics workspaces](../azure-monitor/logs/log-analytics-overview.md) where you can also query and filter the data for outbound traffic. To learn more about using Log Analytics, see [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md). ++For more details on the VNet flow log schema, see [Traffic analytics schema and data aggregation](../network-watcher/traffic-analytics-schema.md). > [!NOTE]-> NSG flow logs will only show the private IPs of your VM instances connecting outbound to the internet. NSG flow logs will not show you which NAT gateway public IP address the VM's private IP has SNATed to prior to connecting outbound. +> Virtual network flow logs will only show the private IPs of your VM instances connecting outbound to the internet. VNet flow logs will not show you which NAT gateway public IP address the VM's private IP has SNATed to prior to connecting outbound. ## Limitations |
nat-gateway | Troubleshoot Nat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat.md | Refer to the following table for tools to use to validate NAT gateway connectivi ### How to analyze outbound connectivity -To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines. The connection information contains the source IP and port and the destination IP and port and the state of the connection. The traffic flow direction and the size of the traffic in number of packets and bytes sent is also logged. The source IP and port specified in the NSG flow log is for the virtual machine and not the NAT gateway. +To analyze outbound traffic from NAT gateway, use virtual network (VNet) flow logs. VNet flow logs provide connection information for your virtual machines. The connection information contains the source IP and port and the destination IP and port and the state of the connection. The traffic flow direction and the size of the traffic in number of packets and bytes sent is also logged. The source IP and port specified in the VNet flow log is for the virtual machine and not the NAT gateway. -* To learn more about NSG flow logs, see [NSG flow log overview](../network-watcher/network-watcher-nsg-flow-logging-overview.md). +* To learn more about VNet flow logs, see [Virtual network flow logs overview](../network-watcher/vnet-flow-logs-overview.md). -* For guides on how to enable NSG flow logs, see [Managing NSG flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md#managing-nsg-flow-logs). +* For guides on how to enable VNet flow logs, see [Manage virtual network flow logs](../network-watcher/vnet-flow-logs-portal.md). -* For guides on how to read NSG flow logs, see [Working with NSG flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md#working-with-flow-logs). +* It is recommended to access the log data on [Log Analytics workspaces](../azure-monitor/logs/log-analytics-overview.md) where you can also query and filter the data for outbound traffic. To learn more about using Log Analytics, see [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md). ++* For more details on the VNet flow log schema, see [Traffic analytics schema and data aggregation](../network-watcher/traffic-analytics-schema.md). ## NAT gateway in a failed state |
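Both NAT gateway entries now point readers at Log Analytics for querying VNet flow log data. The sketch below (assuming the `azure-monitor-query` SDK; the `NTANetAnalytics` table and column names follow the traffic analytics schema and should be verified against the linked schema article) summarizes outbound flows per source/destination pair:

```python
# A hedged sketch of pulling outbound flow records from a Log Analytics workspace.
# The table and column names in the KQL are assumptions based on the traffic
# analytics schema -- verify them against the schema reference before relying on this.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

KQL = """
NTANetAnalytics
| where SubType == "FlowLog" and FlowDirection == "Outbound"
| summarize Flows = count() by SrcIp, DestIp, DestPort
| top 20 by Flows desc
"""

result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",   # placeholder
    query=KQL,
    timespan=timedelta(hours=24),
)
for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```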
network-watcher | Nsg Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-overview.md | Here's an example format of a version 1 NSG flow log: "records": [ { "time": "2017-02-16T22:00:32.8950000Z",- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434", + "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D" + "1487282421,192.0.2.95,10.1.0.4,51529,5358,T,I,D" ] } ] Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A", - "1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A", - "1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A", - "1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A" + "1487282370,192.0.2.17,10.1.0.4,61771,3389,T,I,A", + "1487282393,203.0.113.34,10.1.0.4,58596,3389,T,I,A", + "1487282393,192.0.2.154,10.1.0.4,61540,3389,T,I,A", + "1487282423,203.0.113.229,10.1.0.4,53163,3389,T,I,A" ] } ] Here's an example format of a version 1 NSG flow log: }, { "time": "2017-02-16T22:01:32.8960000Z",- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434", + "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D" + "1487282481,198.51.100.194,10.1.0.4,53,1732,U,I,D" ] } ] Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A", - "1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A", - "1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A" + "1487282435,198.51.100.68,10.1.0.4,57776,3389,T,I,A", + "1487282454,203.0.113.170,10.1.0.4,59085,3389,T,I,A", + "1487282477,192.0.2.50,10.1.0.4,65078,3389,T,I,A" ] } ] Here's an example format of a version 1 NSG flow log: "records": [ { "time": "2017-02-16T22:00:32.8950000Z",- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434", + "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D" + "1487282421,192.0.2.95,10.1.0.4,51529,5358,T,I,D" ] } ] Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A", - "1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A", - "1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A", - "1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A" + "1487282370,192.0.2.17,10.1.0.4,61771,3389,T,I,A", + 
"1487282393,203.0.113.34,10.1.0.4,58596,3389,T,I,A", + "1487282393,192.0.2.154,10.1.0.4,61540,3389,T,I,A", + "1487282423,203.0.113.229,10.1.0.4,53163,3389,T,I,A" ] } ] Here's an example format of a version 1 NSG flow log: }, { "time": "2017-02-16T22:01:32.8960000Z",- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434", + "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D" + "1487282481,198.51.100.194,10.1.0.4,53,1732,U,I,D" ] } ] Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A", - "1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A", - "1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A" + "1487282435,198.51.100.68,10.1.0.4,57776,3389,T,I,A", + "1487282454,203.0.113.170,10.1.0.4,59085,3389,T,I,A", + "1487282477,192.0.2.50,10.1.0.4,65078,3389,T,I,A" ] } ] Here's an example format of a version 1 NSG flow log: }, { "time": "2017-02-16T22:02:32.9040000Z",- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434", + "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282492,175.182.69.29,10.1.0.4,28918,5358,T,I,D", - "1487282505,71.6.216.55,10.1.0.4,8080,8080,T,I,D" + "1487282492,203.0.113.29,10.1.0.4,28918,5358,T,I,D", + "1487282505,192.0.2.55,10.1.0.4,8080,8080,T,I,D" ] } ] Here's an example format of a version 1 NSG flow log: { "mac": "000D3AF8801A", "flowTuples": [- "1487282512,91.224.160.154,10.1.0.4,59046,3389,T,I,A" + "1487282512,192.0.2.154,10.1.0.4,59046,3389,T,I,A" ] } ] Here's an example format of a version 2 NSG flow log: "records": [ { "time": "2018-11-13T12:00:35.3899262Z",- "systemId": "a0fca5ce-022c-47b1-9735-89943b42f2fa", + "systemId": "66aa66aa-bb77-cc88-dd99-00ee00ee00ee", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", Here's an example format of a version 2 NSG flow log: { "mac": "000D3AF87856", "flowTuples": [- "1542110402,94.102.49.190,10.5.16.4,28746,443,U,I,D,B,,,,", - "1542110424,176.119.4.10,10.5.16.4,56509,59336,T,I,D,B,,,,", - "1542110432,167.99.86.8,10.5.16.4,48495,8088,T,I,D,B,,,," + "1542110402,192.0.2.190,10.5.16.4,28746,443,U,I,D,B,,,,", + "1542110424,203.0.113.10,10.5.16.4,56509,59336,T,I,D,B,,,,", + "1542110432,198.51.100.8,10.5.16.4,48495,8088,T,I,D,B,,,," ] } ] Here's an example format of a version 2 NSG flow log: { "mac": "000D3AF87856", "flowTuples": [- "1542110377,10.5.16.4,13.67.143.118,59831,443,T,O,A,B,,,,", - "1542110379,10.5.16.4,13.67.143.117,59932,443,T,O,A,E,1,66,1,66", - "1542110379,10.5.16.4,13.67.143.115,44931,443,T,O,A,C,30,16978,24,14008", - 
"1542110406,10.5.16.4,40.71.12.225,59929,443,T,O,A,E,15,8489,12,7054" + "1542110377,10.5.16.4,203.0.113.118,59831,443,T,O,A,B,,,,", + "1542110379,10.5.16.4,203.0.113.117,59932,443,T,O,A,E,1,66,1,66", + "1542110379,10.5.16.4,203.0.113.115,44931,443,T,O,A,C,30,16978,24,14008", + "1542110406,10.5.16.4,198.51.100.225,59929,443,T,O,A,E,15,8489,12,7054" ] } ] Here's an example format of a version 2 NSG flow log: }, { "time": "2018-11-13T12:01:35.3918317Z",- "systemId": "a0fca5ce-022c-47b1-9735-89943b42f2fa", + "systemId": "66aa66aa-bb77-cc88-dd99-00ee00ee00ee", "category": "NetworkSecurityGroupFlowEvent", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", |
network-watcher | Vnet Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md | In the following example of virtual network flow logs, multiple records follow t { "time": "2022-09-14T09:00:52.5625085Z", "flowLogVersion": 4,- "flowLogGUID": "abcdef01-2345-6789-0abc-def012345678", + "flowLogGUID": "66aa66aa-bb77-cc88-dd99-00ee00ee00ee", "macAddress": "00224871C205", "category": "FlowLogFlowEvent", "flowLogResourceID": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG", In the following example of virtual network flow logs, multiple records follow t { "rule": "DefaultRule_AllowInternetOutBound", "flowTuples": [- "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0", - "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580", - "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0", - "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569", - "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0", - "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569", - "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0", - "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108", - "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0", - "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466" + "1663146003599,10.0.0.6,192.0.2.180,23956,443,6,O,B,NX,0,0,0,0", + "1663146003606,10.0.0.6,192.0.2.180,23956,443,6,O,E,NX,3,767,2,1580", + "1663146003637,10.0.0.6,203.0.113.17,22730,443,6,O,B,NX,0,0,0,0", + "1663146003640,10.0.0.6,203.0.113.17,22730,443,6,O,E,NX,3,705,4,4569", + "1663146004251,10.0.0.6,203.0.113.17,22732,443,6,O,B,NX,0,0,0,0", + "1663146004251,10.0.0.6,203.0.113.17,22732,443,6,O,E,NX,3,705,4,4569", + "1663146004622,10.0.0.6,203.0.113.17,22734,443,6,O,B,NX,0,0,0,0", + "1663146004622,10.0.0.6,203.0.113.17,22734,443,6,O,E,NX,2,134,1,108", + "1663146017343,10.0.0.6,198.51.100.84,36776,443,6,O,B,NX,0,0,0,0", + "1663146022793,10.0.0.6,198.51.100.84,36776,443,6,O,E,NX,22,2217,33,32466" ] } ] In the following example of virtual network flow logs, multiple records follow t { "rule": "Internet", "flowTuples": [- "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0", - "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0", - "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0", - "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0", - "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0", - "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0", - "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0" + "1663145989563,192.0.2.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0", + "1663145989679,203.0.113.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0", + "1663145989709,203.0.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0", + "1663145990049,198.51.100.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0", + "1663145990145,203.0.113.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0", + "1663145990175,203.0.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0", + "1663146015545,192.0.2.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0" ] } ] In the following example of virtual network flow logs, multiple records follow t :::image type="content" source="media/vnet-flow-logs-overview/vnet-flow-log-format.png" alt-text="Table that shows the format of a 
virtual network flow log."lightbox="media/vnet-flow-logs-overview/vnet-flow-log-format.png" -Here's an example bandwidth calculation for flow tuples from a TCP conversation between `185.170.185.105:35370` and `10.2.0.4:23`: +Here's an example bandwidth calculation for flow tuples from a TCP conversation between `203.0.113.105:35370` and `10.2.0.4:23`: -`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,` -`1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880` -`1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072` +`1493763938,203.0.113.105,10.2.0.4,35370,23,6,I,B,NX,,,,` +`1493695838,203.0.113.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880` +`1493696138,203.0.113.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072` For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1,021 + 52 + 8,005 + 47 = 9,125. The total number of bytes transferred is 588,096 + 29,952 + 4,610,880 + 27,072 = 5,256,000. |
operational-excellence | Overview Relocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md | The following tables provide links to each Azure service relocation document. Th [Azure Event Hubs](relocation-event-hub.md)| ✅ | ❌| ❌ | [Azure Event Hubs Cluster](relocation-event-hub-cluster.md)| ✅ | ❌ | ❌ | [Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |+[Azure Load Balancer](../load-balancer/move-across-regions-external-load-balancer-portal.md)| ✅ | ✅| ❌ | [Azure Site Recovery (Recovery Services vaults)](relocation-site-recovery.md)| ✅ | ✅| ❌ |+[Azure Virtual Machines]( ../resource-mover/tutorial-move-region-virtual-machines.md?toc=/azure/operational-excellence/toc.json)| ❌ | ❌| ✅ | +[Azure Virtual Machine Scale Sets](./relocation-virtual-machine-scale-sets.md)|❌ |✅ | ❌ | [Azure Virtual Network](./relocation-virtual-network.md)| ✅| ❌ | ✅ | [Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)|✅ |❌ | ✅ | The following tables provide links to each Azure service relocation document. Th [Azure Container Registry](relocation-container-registry.md)|✅ | ✅| ❌ | [Azure Cosmos DB](relocation-cosmos-db.md)|✅ | ✅| ❌ | [Azure Database for MariaDB Server](/azure/mariadb/howto-move-regions-portal?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |-[Azure Database for MySQL Server](/azure/mysql/howto-move-regions-portal?toc=/azure/operational-excellence/toc.json)✅ | ✅| ❌ | +[Azure Database for MySQL Server](/azure/mysql/howto-move-regions-portal?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ | [Azure Database for PostgreSQL](./relocation-postgresql-flexible-server.md)| ✅ | ✅| ❌ | [Azure Event Grid domains](relocation-event-grid-domains.md)| ✅ | ❌| ❌ | [Azure Event Grid custom topics](relocation-event-grid-custom-topics.md)| ✅ | ❌| ❌ | |
operational-excellence | Relocation Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-virtual-machine-scale-sets.md | + + Title: Relocate Azure Virtual Machine Scale Sets to another region +description: Learn how to relocate Azure Virtual Machine Scale Sets to another region +++ Last updated : 08/20/2024++++ - subject-relocation +# Customer intent: As an administrator, I want to move Azure Virtual Machine Scale Sets to another region. ++++# Relocate Azure Virtual Machine Scale Sets to another region ++This article covers the recommended approach, guidelines, and practices for relocating Virtual Machine Scale Sets to another region. ++## Prerequisites ++Before you begin, ensure that you have the following prerequisites: ++- If the source VM supports availability zones, then the target region must also support availability zones. To see which regions support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support). ++- The subscription in the destination region needs enough quota to create the resources. If you exceeded the quota, request an increase. For more information, see [Azure subscription and service limits, quotas, and constraints](..//azure-resource-manager/management/azure-subscription-service-limits.md). ++- Consolidate all the associated extensions from source Virtual Machine Scale Set, as some need to be reconfigured after relocation. ++- Confirm if the VM image is a part of VM image gallery. Gallery resources need to be replicated to the target region. ++- Capture the list of resources that are being configured, such as capturing diagnostic logs. This is important with respect to prioritization and sequencing. ++- Ensure that the following services are available and deployed in the target region: ++ - [Log Analytics Workspace](./relocation-log-analytics.md) + - Diagnostic Virtual Machine Scale Set + - [Key Vault](./relocation-key-vault.md) + - [Proximity Placement Group](/azure/virtual-machine-scale-sets/proximity-placement-groups) + - Public IP address + - [Load Balancer](../load-balancer/move-across-regions-external-load-balancer-portal.md) + - [Virtual Network](./relocation-virtual-network.md) ++- Ensure that you have a Network Contributor role or higher in order to configure and deploy a Load Balancer template in another region. ++- Identify the networking layout of the solution in the source region, such as NSGs, Public IPs, VNet address spaces, and more. ++++## Prepare ++In this section, follow the instructions to prepare for relocating a Virtual Machine Scale Set to another region. +++1. Locate the image reference used by the source Virtual Machine Scale Set and replicate it to the Image Gallery in the target region. ++ :::image type="content" source="media\relocation\virtual-machine-scale-sets\image-replication.png" alt-text="Screenshot showing how to locate image of virtual machine."::: ++1. Relocate the Load Balancer, along with the public IP by doing one of the following methods: ++ - *Resource Mover*. Associate Load Balancer with public IP in the source region to the target region. For more information, see [Move resources across regions (from resource group) with Azure Resource Mover](../resource-mover/move-region-within-resource-group.md). + - *Export Template*. Relocate the Load Balancer along with public IP to the target region using the export template option. 
For information on how to do this, see [Move an external load balancer to another region using the Azure portal](../load-balancer/move-across-regions-external-load-balancer-portal.md). ++ >[!IMPORTANT] + > Because public IPs are a regional resource, Azure Resource Mover re-creates Load Balancer at the target region with a new public IP address. ++1. Manually set the source Virtual Machine Scale Set instance count to 0. ++ :::image type="content" source="media\relocation\virtual-machine-scale-sets\set-instance-count.png" alt-text="Screenshot showing how to set Virtual Machine Scale Set instance count to 0."::: ++1. Export the source Virtual Machine Scale Set template from Azure portal: + + 1. In the [Azure portal](https://portal.azure.com), navigate to your source Virtual Machine Scale Set. + 1. In the menu, under **Automation**, select **Export template** > **Download**. + 1. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice. This zip file contains the .json files that include the template and scripts to deploy the template. + ++1. Edit the template: + + 1. Remove associated resources if they're present in the template, such as Log Analytics Workspace in the **Monitoring** section. ++ 1. Make any necessary changes to the template, such as updating all occurrences of the name and the location for the relocated source Virtual Machine Scale Set. ++ 1. Update the parameter file with these inputs: + - Source Virtual Machine Scale set `name`. + - Image Gallery `Resource id`. + - Virtual network `subnet Id`. Also, make the necessary ARM code changes to the subnet section so that it can call the Virtual Network `subnet Id`. + - Load Balancer's `resource id`, `Address id`, and `virtual network id`. Change the `value` property under `parameters`. ++## Relocate ++In this section, follow the steps below to relocate a Virtual Machine Scale Set across geographies. ++1. In the target region, recreate the Virtual Machine Scale Set with the exported template by using IAC (Infrastructure as Code) tools such as Azure Resource Manager templates, Azure CLI, or PowerShell. ++1. Associate the dependent resources to the target Virtual Machine Scale Set, such as Log Analytics Workspace in the **Monitoring** section. Also, configure all the extensions that were consolidated in the [Prerequisites section](#prerequisites). +++## Validate ++When the relocation is complete, validate the Virtual Machine Scale Set in the target region by performing the following steps: ++ - Virtual Machine Scale Set doesn't keep the same IP after relocation to the new target location. However, make sure to validate the private IP configuration. ++ - Run a scripted or manual smoke test and integration test to validate that all configurations and dependent resources have been properly linked and all configured data are accessible. ++- Validate Virtual Machine Scale Set components and integration. ++## Related content ++- To move registry resources to a new resource group either in the same subscription or a new subscription, see [Move Azure resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md). |
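The relocation step above leaves the choice of IaC tooling open. As one hedged option (file names, resource group, and deployment name are placeholders), the exported and edited template can be redeployed in the target region with the `azure-mgmt-resource` Python SDK; Azure CLI or PowerShell work just as well.

```python
# Hedged sketch: redeploy the exported, edited ARM template in the target region.
# Resource group, deployment name, and file paths are placeholders.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("template.json") as f:
    template = json.load(f)
with open("parameters.json") as f:
    parameters = json.load(f)["parameters"]   # exported parameter files nest under "parameters"

poller = client.deployments.begin_create_or_update(
    resource_group_name="target-region-rg",
    deployment_name="vmss-relocation",
    parameters=Deployment(
        properties=DeploymentProperties(
            mode="Incremental",
            template=template,
            parameters=parameters,
        )
    ),
)
print(poller.result().properties.provisioning_state)
```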
operator-nexus | Howto Baremetal Run Data Extract | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-data-extract.md | Verify you have access to the Cluster Manager's storage account The run data extract command executes one or more predefined scripts to extract data from a bare metal machine. +> [!WARNING] +> Microsoft does not provide or support any Operator Nexus API calls that expect plaintext username and/or password to be supplied. Please note any values sent will be logged and are considered exposed secrets, which should be rotated and revoked. The Microsoft documented method for securely using secrets is to store them in an Azure Key Vault. If you have specific questions or concerns, please submit a request via the Azure Portal. + The current list of supported commands is - [SupportAssist/TSR collection for Dell troubleshooting](#hardware-support-data-collection)\ |
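The new warning above recommends keeping credentials in Azure Key Vault rather than passing plaintext values to API calls. A minimal sketch with the `azure-keyvault-secrets` SDK (vault URL and secret name are placeholders) looks like this:

```python
# Minimal sketch: store a credential in Key Vault once and fetch it at run time,
# so it never appears in command arguments or logs. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<your-key-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Store the secret once (for example, from a secure provisioning pipeline) ...
client.set_secret("bmc-admin-password", "<rotated-password>")

# ... and read it back where it's needed.
bmc_password = client.get_secret("bmc-admin-password").value
```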
operator-nexus | Howto Baremetal Run Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md | Verify you have access to the Cluster Manager's storage account ## Executing a run-read command -The run-read command lets you run a command on the BMM that does not change anything. Some commands have more +The run-read command lets you run a command on the BMM that doesn't change anything. Some commands have more than one word, or need an argument to work. These commands are made like this to separate them from the ones that can change things. For example, run-read-command can use `kubectl get` but not `kubectl apply`. When you use these commands, you have to put all the words in the "command" field. For example, Some of the run-read commands require specific arguments be supplied to enforce An example of run-read commands that require specific arguments is the allowed Mellanox command `mstconfig`, which requires the `query` argument be provided to enforce read-only. +> [!WARNING] +> Microsoft does not provide or support any Operator Nexus API calls that expect plaintext username and/or password to be supplied. Please note any values sent will be logged and are considered exposed secrets, which should be rotated and revoked. The Microsoft documented method for securely using secrets is to store them in an Azure Key Vault. If you have specific questions or concerns, please submit a request via the Azure Portal. + The list below shows the commands you can use. Commands in `*italics*` cannot have `arguments`; the rest can. - `arp` |
reliability | Overview Reliability Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Container Apps|[Reliability in Azure Container Apps](reliability-azure-container-apps.md)|[Reliability in Azure Container Apps](reliability-azure-container-apps.md)| |Azure Container Instances|[Reliability in Azure Container Instances](reliability-containers.md)| [Reliability in Azure Container Instances](reliability-containers.md#disaster-recovery) | |Azure Container Registry|[Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|-|Azure Data Explorer|[Azure Data Explorer - Create cluster database](/azure/data-explorer/create-cluster-database-portal?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Data Explorer - Business continuity](/azure/data-explorer/business-continuity-overview) | +|Azure Data Explorer|| [Azure Data Explorer - Business continuity](/azure/data-explorer/business-continuity-overview) | |Azure Data Factory|[Azure Data Factory data redundancy](../data-factory/concepts-data-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| |Azure Database for MySQL|| [Azure Database for MySQL- Business continuity](/azure/mysql/single-server/concepts-business-continuity?#recover-from-an-azure-regional-data-center-outage) | |Azure Database for MySQL - Flexible Server|[Azure Database for MySQL Flexible Server High availability](/azure/mysql/flexible-server/concepts-high-availability?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Database for MySQL Flexible Server - Restore to latest restore point](/azure/mysql/flexible-server/how-to-restore-server-portal?#geo-restore-to-latest-restore-point) | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Microsoft Entra Domain Services|| [Create replica set](../active-directory-domain-services/tutorial-create-replica-set.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Event Grid|[Reliability in Azure Event Grid](./reliability-event-grid.md)| [Reliability in Azure Event Grid](./reliability-event-grid.md) | |Azure Firewall|[Deploy an Azure Firewall with Availability Zones using Azure PowerShell](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)||-|Azure Firewall Manager|[Create an Azure Firewall and a firewall policy - ARM template](../firewall-manager/quick-firewall-policy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure Functions|[Azure Functions](reliability-functions.md)| [Azure Functions](reliability-functions.md#cross-region-disaster-recovery-and-business-continuity) | |Azure Guest Configuration| |[Azure Guest Configuration Availability](../governance/machine-configuration/overview.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json?#availability) | |Azure HDInsight|[Reliability in Azure HDInsight](reliability-hdinsight.md)| [Reliability in Azure HDInsight](reliability-hdinsight.md#cross-region-disaster-recovery-and-business-continuity) | For a more detailed overview of reliability principles in Azure, see [Reliabilit |Azure Logic Apps|[Protect 
logic apps from region failures with zone redundancy and availability zones](../logic-apps/set-up-zone-redundancy-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Business continuity and disaster recovery for Azure Logic Apps](../logic-apps/business-continuity-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Media Services||[High Availability with Media Services and Video on Demand (VOD)](/azure/media-services/latest/architecture-high-availability-encoding-concept) | |Azure Migrate|| [Does Azure Migrate offer Backup and Disaster Recovery?](../migrate/resources-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#does-azure-migrate-offer-backup-and-disaster-recovery) |-|Azure Monitor - Application Insights|[Continuous export advanced storage configuration](../azure-monitor/app/export-telemetry.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#continuous-export-advanced-storage-configuration) | |Azure Monitor-Log Analytics |[Enhance data and service resilience in Azure Monitor Logs with availability zones](../azure-monitor/logs/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Enable data export](../azure-monitor/logs/logs-data-export.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#enable-data-export) | |Azure Network Watcher|[Service availability and redundancy](../network-watcher/frequently-asked-questions.yml?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json#service-availability-and-redundancy)|| |Azure Notification Hubs|[Reliability Azure Notification Hubs](reliability-notification-hubs.md)|[Reliability Azure Notification Hubs](reliability-notification-hubs.md)| |Azure Operator Nexus|[Reliability in Azure Operator Nexus](reliability-operator-nexus.md)|[Reliability in Azure Operator Nexus](reliability-operator-nexus.md)| |Azure Private Link|[Azure Private Link availability](../private-link/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| |Azure Route Server|[Azure Route Server FAQ](../route-server/route-server-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|-|Azure SQL Server Registry|| [What are Extended Security Updates for SQL Server?](/sql/sql-server/end-of-support/sql-server-extended-security-updates?preserve-view=true&view=sql-server-ver15#configure-regional-redundancy) | |Azure Storage - Blob Storage|[Choose the right redundancy option](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#choose-the-right-redundancy-option)|[Azure storage disaster recovery planning and failover](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure Stream Analytics|| [Achieve geo-redundancy for Azure Stream Analytics jobs](../stream-analytics/geo-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Virtual WAN|[How are Availability Zones and resiliency handled in Virtual WAN?](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| [Designing for 
disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |
reliability | Reliability Microsoft Purview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-microsoft-purview.md | Microsoft Purview makes commercially reasonable efforts to support zone-redundan Microsoft Purview makes commercially reasonable efforts to provide availability zone support in various regions as follows: -| Region | Data Map | Scan | Policy | Insights | -| | | | | | -|Southeast Asia||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|East US||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Australia East|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|West US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Canada Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|Central India||:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::| -|East US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|France Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Germany West Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Japan East||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Korea Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|West US 3||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|North Europe||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|South Africa North||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Sweden Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|Switzerland North||:::image type="icon" source="media/yes-icon.svg":::||| -|USGov Virginia|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|South Central US||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" 
source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|Brazil South||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|UK South|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|Qatar Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| -|China North 3|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| -|West Europe||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +| Region | Data Map | Data Governance | Scan | Policy | Insights | +| | | | | | | +|Southeast Asia|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|East US|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Australia East|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|West US 2|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Canada Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|Central India|||:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::| +|East US 2|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|France Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Germany West Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Japan East|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Korea Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|West US 3|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|North Europe|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| 
+|South Africa North|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Sweden Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|Switzerland North|||:::image type="icon" source="media/yes-icon.svg":::||| +|USGov Virginia|:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|South Central US|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|Brazil South|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|UK South|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|Qatar Central|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| +|China North 3|:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| +|West Europe|||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| ## Disaster recovery and business continuity [!INCLUDE [next step](includes/reliability-disaster-recovery-description-include.md)] >[!IMPORTANT]->Today, Microsoft Purview doesn't support automated disaster recovery. Until that support is added, you're responsible for backup and restore activities. You can manually create a secondary Microsoft Purview account as a warm standby instance in another region. +>Today, Microsoft Purview doesn't support automated disaster recovery. Until that support is added, you're responsible for backup and restore activities. You can manually create a secondary Microsoft Purview account as a warm standby instance in another region. This standby instance doesn't support the Microsoft Purview Data Governance solution; currently, it supports only the Azure Purview solution. DR support for the Microsoft Purview Data Governance solution is planned. To implement disaster recovery for Microsoft Purview, see the [Microsoft Purview disaster recovery documentation](/purview/disaster-recovery). |
role-based-access-control | Delegate Role Assignments Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-overview.md | If you want to further constrain the Key Vault Data Access Administrator role as Here are the known issues related to delegating role assignment management with conditions: -- You can't delegate role assignment management with conditions using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).+- You can't delegate role assignment management for custom roles with conditions using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md). - You can't have a role assignment with a Microsoft.Storage data action and an ABAC condition that uses a GUID comparison operator. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md#symptomauthorization-failed). ## License requirements |
sentinel | Create Codeless Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md | Finally, the CCP uses the credential objects in the data connector section. ## Create the deployment template -Manually package an Azure Resource Manager (ARM) template using the [example template](#example-arm-template) as your guide. +Manually package an Azure Resource Manager (ARM) template using the [example template code samples](#example-arm-template) as your guide. These code samples are divided by ARM template sections for you to splice together. In addition to the example template, published solutions available in the Microsoft Sentinel content hub use the CCP for their data connector. Review the following solutions as more examples of how to stitch the components together into an ARM template. |
sentinel | Geographical Availability Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geographical-availability-data-residency.md | description: In this article, you learn about geographical availability and data Previously updated : 06/09/2024 Last updated : 08/29/2024 #Customer intent: As a security operator setting up Microsoft Sentinel, I want to understand where data is stored, so I can meet compliance guidelines. Microsoft Sentinel is a [non-regional service](https://azure.microsoft.com/explo Microsoft Sentinel can run on workspaces in the following regions: -|North America |South America |Asia |Europe |Australia |Africa | -||||||| -|**US**<br><br>• Central US<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West<br><br>**Azure government**<br><br>• USGov Arizona<br>• USGov Virginia<br><br>**Canada**<br><br>• Canada Central<br>• Canada East |• Brazil South<br>• Brazil Southeast |• East Asia<br>• Southeast Asia<br>• Qatar Central<br><br>**Japan**<br><br>• Japan East<br>• Japan West<br><br>**China 21Vianet**<br><br>• China East 2<br>• China North 3<br><br>**India**<br><br>• Central India<br>• Jio India West<br>• Jio India Central<br><br>**Korea**<br><br>• Korea Central<br>• Korea South<br><br>**UAE**<br><br>• UAE Central<br>• UAE North |• North Europe<br>• West Europe<br><br>**France**<br><br>• France Central<br>• France South<br><br>**Germany**<br><br>• Germany West Central<br><br>**Italy**<br><br>• Italy North<br><br>**Norway**<br><br>• Norway East<br>• Norway West<br><br>**Sweden**<br><br>• Sweden Central <br><br>**Switzerland**<br><br>• Switzerland North<br>• Switzerland West<br><br>**UK**<br><br>• UK South<br>• UK West |• Australia Central<br>• Australia Central 2<br>• Australia East<br>• Australia Southeast |• South Africa North | +|Continent | Country | Region | +|||| +| **North America**| **Canada** | • Canada Central<br>• Canada East | +| | **United States** | • Central US<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br><br>**Azure government** <br>• USGov Arizona<br>• USGov Virginia<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West| +|**South America** | **Brazil** | • Brazil South<br>• Brazil Southeast | +|**Asia** | |• East Asia<br>• Southeast Asia | +| | **China 21Vianet**| • China East 2<br>• China North 3| +| | **India**| • Central India<br>• Jio India West<br>• Jio India Central| +| | **Israel** | • Israel | +| | **Japan** | • Japan East<br>• Japan West| +| | **Korea**| • Korea Central<br>• Korea South| +| | **Qatar** | • Qatar Central| +| | **UAE**| • UAE Central<br>• UAE North | +|**Europe**| | • North Europe<br>• West Europe| +| |**France**| • France Central<br>• France South| +| |**Germany**| • Germany West Central| +| | **Italy** |• Italy North| +| | **Norway**|• Norway East<br>• Norway West| +| |**Sweden**| • Sweden Central | +| | **Switzerland**| • Switzerland North<br>• Switzerland West| +| | **UK**| • UK South<br>• UK West | +|**Australia** | **Australia**| • Australia Central<br>• Australia Central 2<br>• Australia East<br>• Australia Southeast | +|**Africa** | **South Africa**| • South Africa North | |
spring-apps | Application Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/application-observability.md | |
spring-apps | How To Custom Persistent Storage With Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/how-to-custom-persistent-storage-with-standard-consumption.md | |
spring-apps | Quickstart Access Standard Consumption Within Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-access-standard-consumption-within-virtual-network.md | |
spring-apps | Quickstart Analyze Logs And Metrics Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-analyze-logs-and-metrics-standard-consumption.md | |
spring-apps | Quickstart Apps Autoscale Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-apps-autoscale-standard-consumption.md | |
spring-apps | Quickstart Provision Standard Consumption App Environment With Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md | |
spring-apps | Quickstart Provision Standard Consumption Service Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-provision-standard-consumption-service-instance.md | |
spring-apps | Quickstart Standard Consumption Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-config-server.md | |
spring-apps | Quickstart Standard Consumption Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-custom-domain.md | |
spring-apps | Quickstart Standard Consumption Eureka Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-eureka-server.md | |
spring-apps | Standard Consumption Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/standard-consumption-customer-responsibilities.md | |
spring-apps | Access App Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/access-app-virtual-network.md | |
spring-apps | Concept App Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-app-customer-responsibilities.md | |
spring-apps | Concept Zero Downtime Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-zero-downtime-deployment.md | description: Learn about zero downtime deployment with blue-green deployment str Previously updated : 06/01/2023 Last updated : 08/29/2024 |
spring-apps | Cost Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/cost-management.md | description: Learn about how to manage costs in Azure Spring Apps. Previously updated : 09/27/2023 Last updated : 08/28/2024 |
spring-apps | How To Configure Enterprise Spring Cloud Gateway Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-enterprise-spring-cloud-gateway-filters.md | description: Shows you how to use VMware Spring Cloud Gateway route filters with Previously updated : 07/12/2023 Last updated : 08/28/2024 |
spring-apps | How To Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-custom-domain.md | description: Learn how to map an existing custom Distributed Name Service (DNS) Previously updated : 10/20/2023 Last updated : 08/28/2024 |
spring-apps | How To Enterprise Configure Apm Integration And Ca Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-configure-apm-integration-and-ca-certificates.md | |
spring-apps | How To Enterprise Deploy App At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-app-at-scale.md | |
spring-apps | How To Enterprise Deploy Polyglot Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-polyglot-apps.md | |
spring-apps | How To Enterprise Deploy Static File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-static-file.md | |
spring-apps | How To Enterprise Large Cpu Memory Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-large-cpu-memory-applications.md | |
spring-apps | How To Map Dns Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-map-dns-virtual-network.md | |
spring-apps | How To Migrate Standard Tier To Enterprise Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-migrate-standard-tier-to-enterprise-tier.md | |
spring-apps | How To Remote Debugging App Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-remote-debugging-app-instance.md | |
spring-apps | How To Self Diagnose Running In Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-self-diagnose-running-in-vnet.md | |
spring-apps | How To Self Diagnose Solve | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-self-diagnose-solve.md | |
spring-apps | How To Use Grpc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-grpc.md | |
spring-apps | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/overview.md | description: Learn the features and benefits of Azure Spring Apps to deploy and Previously updated : 05/23/2023 Last updated : 08/29/2024 #Customer intent: As an Azure Cloud user, I want to deploy, run, and monitor Spring applications. |
spring-apps | Quickstart Automate Deployments Github Actions Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-automate-deployments-github-actions-enterprise.md | |
spring-apps | Quickstart Configure Single Sign On Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-configure-single-sign-on-enterprise.md | |
spring-apps | Quickstart Deploy Infrastructure Vnet Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-azure-cli.md | |
spring-apps | Quickstart Deploy Infrastructure Vnet Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-bicep.md | |
spring-apps | Quickstart Deploy Infrastructure Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet.md | |
spring-apps | Quickstart Deploy Java Native Image App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-java-native-image-app.md | description: Describes how to deploy a Java Native Image application to Azure Sp Previously updated : 08/29/2023 Last updated : 08/28/2024 |
spring-apps | Quickstart Integrate Azure Database And Redis Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-integrate-azure-database-and-redis-enterprise.md | |
spring-apps | Quickstart Monitor End To End Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-monitor-end-to-end-enterprise.md | |
spring-apps | Quickstart Sample App Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-sample-app-introduction.md | |
spring-apps | Quickstart Set Request Rate Limits Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-set-request-rate-limits-enterprise.md | |
spring-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quotas.md | description: Learn about service quotas and service plans for Azure Spring Apps. Previously updated : 05/15/2023 Last updated : 08/29/2024 |
spring-apps | Structured App Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/structured-app-log.md | description: This article explains how to generate and collect structured applic Previously updated : 02/05/2021 Last updated : 08/29/2024 |
spring-apps | Troubleshoot Build Exit Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshoot-build-exit-code.md | description: Learn how to troubleshoot common build issues in Azure Spring Apps. Previously updated : 10/24/2022 Last updated : 08/28/2024 |
spring-apps | Tutorial Alerts Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-alerts-action-groups.md | |
spring-apps | Tutorial Authenticate Client With Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-authenticate-client-with-gateway.md | description: Learn how to authenticate client with Spring Cloud Gateway on Azure Previously updated : 08/31/2023 Last updated : 08/28/2024 |
spring-apps | Tutorial Managed Identities Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-managed-identities-functions.md | |
spring-apps | Vnet Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/vnet-customer-responsibilities.md | |
storage | Authorize Access Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md | For more information about scope for Azure RBAC role assignments, see [Understan Azure RBAC provides several built-in roles for authorizing access to blob data using Microsoft Entra ID and OAuth. Some examples of roles that provide permissions to data resources in Azure Storage include: -- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner): Use to set ownership and manage POSIX access control for Azure Data Lake Storage Gen2. For more information, see [Access control in Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-access-control.md).+- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner): Use to set ownership and manage POSIX access control for Azure Data Lake Storage. For more information, see [Access control in Azure Data Lake Storage](../../storage/blobs/data-lake-storage-access-control.md). - [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor): Use to grant read/write/delete permissions to Blob storage resources. - [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader): Use to grant read-only permissions to Blob storage resources. - [Storage Blob Delegator](../../role-based-access-control/built-in-roles.md#storage-blob-delegator): Get a user delegation key to use to create a shared access signature that is signed with Microsoft Entra credentials for a container or blob. |
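To make the role list above concrete, here is a minimal Python sketch (not part of the tracked article) of authorizing blob data access with Microsoft Entra ID instead of an account key. The account URL and container name are placeholders, and it assumes the signed-in identity already holds a data-plane role such as Storage Blob Data Reader on the account or container.

```python
# Minimal sketch: access blob data with Microsoft Entra ID (OAuth) authorization.
# pip install azure-identity azure-storage-blob
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_url = "https://<storage-account>.blob.core.windows.net"  # placeholder
credential = DefaultAzureCredential()

service_client = BlobServiceClient(account_url=account_url, credential=credential)
container_client = service_client.get_container_client("<container-name>")  # placeholder

# Listing blobs succeeds with Storage Blob Data Reader; uploads or deletes
# would additionally require Storage Blob Data Contributor.
for blob in container_client.list_blobs():
    print(blob.name)
```

With `DefaultAzureCredential`, the same code can run locally against a developer sign-in and in Azure against a managed identity, without storing account keys.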
storage | Blob Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md | View the JSON for inventory rules by selecting the **Code view** tab in the **Bl ### Custom schema fields supported for blob inventory > [!NOTE]-> The **Data Lake Storage Gen2** column shows support in accounts that have the hierarchical namespace feature enabled. +> The **Data Lake Storage** column shows support in accounts that have the hierarchical namespace feature enabled. -| Field | Blob Storage (default support) | Data Lake Storage Gen2 | +| Field | Blob Storage (default support) | Data Lake Storage | ||-|| | Name (Required) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | Creation-Time | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | View the JSON for inventory rules by selecting the **Code view** tab in the **Bl ### Custom schema fields supported for container inventory > [!NOTE]-> The **Data Lake Storage Gen2** column shows support in accounts that have the hierarchical namespace feature enabled. +> The **Data Lake Storage** column shows support in accounts that have the hierarchical namespace feature enabled. -| Field | Blob Storage (default support) | Data Lake Storage Gen2 | +| Field | Blob Storage (default support) | Data Lake Storage | ||-|| | Name (Required) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | Last-Modified | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | |
storage | Blobfuse2 How To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md | However, you should be aware of some key [differences in functionality](blobfuse This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities: -| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | +| Storage account type | Blob Storage (default support) | Data Lake Storage <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--| | Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | Premium block blobs | ![Yes](../media/icons/yes-icon.png)|![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | -<sup>1</sup> Azure Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled. +<sup>1</sup> Azure Data Lake Storage, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled. ## See also |
storage | Blobfuse2 What Is | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md | The BlobFuse2 project is [licensed under the MIT license](https://github.com/Azu A full list of BlobFuse2 features is in the [BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#features). These are some of the key tasks you can perform by using BlobFuse2: -- Mount an Azure Blob Storage container or Azure Data Lake Storage Gen2 file system on Linux. (BlobFuse2 supports storage accounts with either flat namespaces or hierarchical namespace configured.)+- Mount an Azure Blob Storage container or Azure Data Lake Storage file system on Linux. (BlobFuse2 supports storage accounts with either flat namespaces or hierarchical namespace configured.) - Use basic file system operations like `mkdir`, `opendir`, `readdir`, `rmdir`, `open`, `read`, `create`, `write`, `close`, `unlink`, `truncate`, `stat`, and `rename`. - Use local file caching to improve subsequent access times. - Gain insights into mount activities and resource usage by using BlobFuse2 Health Monitor. BlobFuse2 is different from the Linux file system in some key ways: - **chown and chmod**: - Data Lake Storage Gen2 storage accounts support per object permissions and ACLs, but flat namespace (FNS) block blobs don't. As a result, BlobFuse2 doesn't support the `chown` and `chmod` operations for mounted block blob containers. The operations are supported for Data Lake Storage Gen2. + Data Lake Storage storage accounts support per object permissions and ACLs, but flat namespace (FNS) block blobs don't. As a result, BlobFuse2 doesn't support the `chown` and `chmod` operations for mounted block blob containers. The operations are supported for Data Lake Storage. - **Device files or pipes**: Reading the same blob from multiple simultaneous threads is supported. However, When a container is mounted with the default options, all files get 770 permissions and are accessible only by the user who does the mounting. To allow any user to access the BlobFuse2 mount, mount BlobFuse2 by using the `--allow-other` option. You also can configure this option in the YAML config file. -As stated earlier, the `chown` and `chmod` operations are supported for Data Lake Storage Gen2, but not for FNS block blobs. Running a `chmod` operation against a mounted FNS block blob container returns a success message, but the operation doesn't actually succeed. +As stated earlier, the `chown` and `chmod` operations are supported for Data Lake Storage, but not for FNS block blobs. Running a `chmod` operation against a mounted FNS block blob container returns a success message, but the operation doesn't actually succeed. ## Feature support This table shows how this feature is supported in your account and the effect on support when you enable certain capabilities. 
-| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | Network File System (NFS) 3.0 <sup>1</sup> | SSH File Transfer Protocol (SFTP) <sup>1</sup> | +| Storage account type | Blob Storage (default support) | Data Lake Storage <sup>1</sup> | Network File System (NFS) 3.0 <sup>1</sup> | SSH File Transfer Protocol (SFTP) <sup>1</sup> | |--|--|--|--|--| | Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | Premium block blobs | ![Yes](../media/icons/yes-icon.png)|![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | -<sup>1</sup> Data Lake Storage Gen2, the NFS 3.0 protocol, and SFTP support all require a storage account that has a hierarchical namespace enabled. +<sup>1</sup> Data Lake Storage, the NFS 3.0 protocol, and SFTP support all require a storage account that has a hierarchical namespace enabled. ## See also |
storage | Create Data Lake Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/create-data-lake-storage-account.md | Title: Create a storage account for Azure Data Lake Storage Gen2 + Title: Create a storage account for Azure Data Lake Storage -description: Learn how to create a storage account for use with Azure Data Lake Storage Gen2. +description: Learn how to create a storage account for use with Azure Data Lake Storage. Last updated 03/09/2023 -# Create a storage account to use with Azure Data Lake Storage Gen2 +# Create a storage account to use with Azure Data Lake Storage -To use Data Lake Storage Gen2 capabilities, create a storage account that has a hierarchical namespace. +To use Data Lake Storage capabilities, create a storage account that has a hierarchical namespace. For step-by-step guidance, see [Create a storage account](../common/storage-account-create.md?toc=/azure/storage/blobs/toc.json). The following image shows this setting in the **Create storage account** page. > [!div class="mx-imgBorder"] > ![Hierarchical namespace setting](./media/create-data-lake-storage-account/hierarchical-namespace-feature.png) -To enable Data Lake Storage capabilities on an existing account, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). +To enable Data Lake Storage capabilities on an existing account, see [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). ## Next steps - [Storage account overview](../common/storage-account-overview.md)-- [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md)-- [Access control in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md) +- [Access control in Azure Data Lake Storage](data-lake-storage-access-control.md) |
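As a companion to the portal guidance the article points to, the following hedged sketch creates a storage account with the hierarchical namespace enabled through the Python management SDK. The subscription ID, resource group, account name, and region are placeholders, not values from the article.

```python
# Hedged sketch: create a storage account with a hierarchical namespace
# (required for Data Lake Storage capabilities) via the management SDK.
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

subscription_id = "<subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

params = StorageAccountCreateParameters(
    sku=Sku(name="Standard_LRS"),
    kind="StorageV2",
    location="eastus",          # placeholder region
    is_hns_enabled=True,        # enables the hierarchical namespace at creation time
)

poller = client.storage_accounts.begin_create(
    resource_group_name="<resource-group>",   # placeholder
    account_name="<storageaccount>",          # placeholder
    parameters=params,
)
account = poller.result()
print(account.name, account.is_hns_enabled)
```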
storage | Data Lake Storage Abfs Driver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-abfs-driver.md | Title: The Azure Blob Filesystem driver for Azure Data Lake Storage Gen2 + Title: The Azure Blob Filesystem driver for Azure Data Lake Storage -description: Learn about the Azure Blob Filesystem driver (ABFS), a dedicated Azure Storage driver for Hadoop. Access data in Azure Data Lake Storage Gen2 using this driver. +description: Learn about the Azure Blob Filesystem driver (ABFS), a dedicated Azure Storage driver for Hadoop. Access data in Azure Data Lake Storage using this driver. -One of the primary access methods for data in Azure Data Lake Storage Gen2 is via the [Hadoop FileSystem](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/https://docsupdatetracker.net/index.html). Data Lake Storage Gen2 allows users of Azure Blob Storage access to a new driver, the Azure Blob File System driver or `ABFS`. ABFS is part of Apache Hadoop and is included in many of the commercial distributions of Hadoop. By the ABFS driver, many applications and frameworks can access data in Azure Blob Storage without any code explicitly referencing Data Lake Storage Gen2. +One of the primary access methods for data in Azure Data Lake Storage is via the [Hadoop FileSystem](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/https://docsupdatetracker.net/index.html). Data Lake Storage allows users of Azure Blob Storage access to a new driver, the Azure Blob File System driver or `ABFS`. ABFS is part of Apache Hadoop and is included in many of the commercial distributions of Hadoop. By the ABFS driver, many applications and frameworks can access data in Azure Blob Storage without any code explicitly referencing Data Lake Storage. ## Prior capability: The Windows Azure Storage Blob driver However, there are some functions that the driver must still perform: ### URI scheme to reference data -Consistent with other file system implementations within Hadoop, the ABFS driver defines its own URI scheme so that resources (directories and files) may be distinctly addressed. The URI scheme is documented in [Use the Azure Data Lake Storage Gen2 URI](./data-lake-storage-introduction-abfs-uri.md). The structure of the URI is: `abfs[s]://file_system@account_name.dfs.core.windows.net/<path>/<path>/<file_name>` +Consistent with other file system implementations within Hadoop, the ABFS driver defines its own URI scheme so that resources (directories and files) may be distinctly addressed. The URI scheme is documented in [Use the Azure Data Lake Storage URI](./data-lake-storage-introduction-abfs-uri.md). The structure of the URI is: `abfs[s]://file_system@account_name.dfs.core.windows.net/<path>/<path>/<file_name>` By using this URI format, standard Hadoop tools and frameworks can be used to reference these resources: Internally, the ABFS driver translates the resource(s) specified in the URI to f ### Authentication -The ABFS driver supports two forms of authentication so that the Hadoop application may securely access resources contained within a Data Lake Storage Gen2 capable account. Full details of the available authentication schemes are provided in the [Azure Storage security guide](security-recommendations.md). They are: +The ABFS driver supports two forms of authentication so that the Hadoop application may securely access resources contained within a Data Lake Storage capable account. 
Full details of the available authentication schemes are provided in the [Azure Storage security guide](security-recommendations.md). They are: - **Shared Key:** This permits users access to ALL resources in the account. The key is encrypted and stored in Hadoop configuration. - **Microsoft Entra ID OAuth Bearer Token:** Microsoft Entra bearer tokens are acquired and refreshed by the driver using either the identity of the end user or a configured Service Principal. Using this authentication model, all access is authorized on a per-call basis using the identity associated with the supplied token and evaluated against the assigned POSIX Access Control List (ACL). > [!NOTE]- > Azure Data Lake Storage Gen2 supports only Azure AD v1.0 endpoints. + > Azure Data Lake Storage supports only Azure AD v1.0 endpoints. ### Configuration The ABFS driver is fully documented in the [Official Hadoop documentation](https ## Next steps - [Create an Azure Databricks Cluster](./data-lake-storage-use-databricks-spark.md)-- [Use the Azure Data Lake Storage Gen2 URI](./data-lake-storage-introduction-abfs-uri.md)+- [Use the Azure Data Lake Storage URI](./data-lake-storage-introduction-abfs-uri.md) |
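The `abfs[s]` URI scheme described above is easiest to see in use. The snippet below is an illustrative PySpark sketch, not taken from the article: the account, container, key, and file path are placeholders, and Shared Key configuration is shown only because it needs the least setup (the article lists Microsoft Entra OAuth as the other, identity-based option).

```python
# Illustrative sketch: read data through the ABFS driver using the abfss:// URI scheme.
# Requires hadoop-azure on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("abfs-example").getOrCreate()

account = "<storage-account>"    # placeholder
container = "<file-system>"      # placeholder

# Shared Key configuration for the ABFS driver; on some Spark setups this must
# instead be supplied as a spark.hadoop.* property in the cluster configuration.
spark.conf.set(f"fs.azure.account.key.{account}.dfs.core.windows.net", "<account-key>")

path = f"abfss://{container}@{account}.dfs.core.windows.net/raw/events.csv"  # hypothetical path
df = spark.read.option("header", "true").csv(path)
df.show(5)
```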
storage | Data Lake Storage Access Control Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control-model.md | Title: Access control model for Azure Data Lake Storage Gen2 + Title: Access control model for Azure Data Lake Storage description: Learn how to configure container, directory, and file-level access in accounts that have a hierarchical namespace. -# Access control model in Azure Data Lake Storage Gen2 +# Access control model in Azure Data Lake Storage -Data Lake Storage Gen2 supports the following authorization mechanisms: +Data Lake Storage supports the following authorization mechanisms: - Shared Key authorization - Shared access signature (SAS) authorization For more information on using Azure ABAC to control access to Azure Storage, see ## Access control lists (ACLs) -ACLs give you the ability to apply a "finer grain" level of access to directories and files. An *ACL* is a permission construct that contains a series of *ACL entries*. Each ACL entry associates a security principal with an access level. To learn more, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +ACLs give you the ability to apply a "finer grain" level of access to directories and files. An *ACL* is a permission construct that contains a series of *ACL entries*. Each ACL entry associates a security principal with an access level. To learn more, see [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md). ## How permissions are evaluated By using groups, you're less likely to exceed the maximum number of role assignm ## Shared Key and Shared Access Signature (SAS) authorization -Azure Data Lake Storage Gen2 also supports [Shared Key](/rest/api/storageservices/authorize-with-shared-key) and [SAS](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json) methods for authentication. A characteristic of these authentication methods is that no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed. +Azure Data Lake Storage also supports [Shared Key](/rest/api/storageservices/authorize-with-shared-key) and [SAS](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json) methods for authentication. A characteristic of these authentication methods is that no identity is associated with the caller and therefore security principal permission-based authorization cannot be performed. In the case of Shared Key, the caller effectively gains 'super-user' access, meaning full access to all operations on all resources including data, setting owner, and changing ACLs. SAS tokens include allowed permissions as part of the token. The permissions inc ## Next steps -To learn more about access control lists, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +To learn more about access control lists, see [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md). |
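The interplay of RBAC and ACLs described in that article can also be inspected programmatically. The following is a minimal Python sketch, assuming a caller that is authorized to read and change ACLs (for example, Storage Blob Data Owner) and placeholder account, container, path, and object ID values.

```python
# Hedged sketch: read and extend the ACL of a directory with the Data Lake client library.
# pip install azure-identity azure-storage-file-datalake
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
directory = service.get_file_system_client("<container>").get_directory_client("<directory-path>")

acl_props = directory.get_access_control()
print(acl_props["owner"], acl_props["group"])
print(acl_props["acl"])  # for example: user::rwx,group::r-x,other::---

# Grant a named group read + execute through the ACL; Azure role assignments are unaffected.
directory.set_access_control(acl=acl_props["acl"] + ",group:<object-id>:r-x")  # placeholder object ID
```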
storage | Data Lake Storage Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md | Title: Access control lists in Azure Data Lake Storage Gen2 + Title: Access control lists in Azure Data Lake Storage -description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage Gen2. +description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage. ms.devlang: python -# Access control lists (ACLs) in Azure Data Lake Storage Gen2 +# Access control lists (ACLs) in Azure Data Lake Storage -Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). This article describes access control lists in Data Lake Storage Gen2. To learn about how to incorporate Azure RBAC together with ACLs, and how system evaluates them to make authorization decisions, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md). +Azure Data Lake Storage implements an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). This article describes access control lists in Data Lake Storage. To learn about how to incorporate Azure RBAC together with ACLs, and how system evaluates them to make authorization decisions, see [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md). <a id="access-control-lists-on-files-and-directories"></a> To set file and directory level permissions, see any of the following articles: | Environment | Article | |--|--|-|Azure Storage Explorer |[Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-explorer-acl.md)| -|Azure portal |[Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-azure-portal.md)| -|.NET |[Use .NET to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-dotnet.md)| -|Java|[Use Java to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-java.md)| -|Python|[Use Python to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md)| -|JavaScript (Node.js)|[Use the JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-directory-file-acl-javascript.md)| -|PowerShell|[Use PowerShell to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-powershell.md)| -|Azure CLI|[Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)| +|Azure Storage Explorer |[Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage](data-lake-storage-explorer-acl.md)| +|Azure portal |[Use the Azure portal to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-azure-portal.md)| +|.NET |[Use .NET to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-dotnet.md)| +|Java|[Use Java to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-java.md)| +|Python|[Use Python to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-python.md)| +|JavaScript (Node.js)|[Use the JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage](data-lake-storage-directory-file-acl-javascript.md)| +|PowerShell|[Use PowerShell to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-powershell.md)| +|Azure CLI|[Use Azure CLI to manage ACLs in Azure Data Lake 
Storage](data-lake-storage-acl-cli.md)| |REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)| > [!IMPORTANT] The permissions on directories and files in a container, are **Read**, **Write** ||-|-| | **Read (R)** | Can read the contents of a file | Requires **Read** and **Execute** to list the contents of the directory | | **Write (W)** | Can write or append to a file | Requires **Write** and **Execute** to create child items in a directory |-| **Execute (X)** | Does not mean anything in the context of Data Lake Storage Gen2 | Required to traverse the child items of a directory | +| **Execute (X)** | Does not mean anything in the context of Data Lake Storage | Required to traverse the child items of a directory | > [!NOTE] > If you are granting permissions by using only ACLs (no Azure RBAC), then to grant a security principal read or write access to a file, you'll need to give the security principal **Execute** permissions to the root folder of the container, and to each folder in the hierarchy of folders that lead to the file. The permissions on directories and files in a container, are **Read**, **Write** ### Permissions inheritance -In the POSIX-style model that's used by Data Lake Storage Gen2, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items if the permissions are set after the child item has already been created. Permissions are only inherited if default permissions have been set on the parent items before the child items have been created. +In the POSIX-style model that's used by Data Lake Storage, permissions for an item are stored on the item itself. In other words, permissions for an item cannot be inherited from the parent items if the permissions are set after the child item has already been created. Permissions are only inherited if default permissions have been set on the parent items before the child items have been created. ## Common scenarios related to ACL permissions Every file and directory has distinct permissions for these identities: - Named managed identities - All other users -The identities of users and groups are Microsoft Entra identities. So unless otherwise noted, a *user*, in the context of Data Lake Storage Gen2, can refer to a Microsoft Entra user, service principal, managed identity, or security group. +The identities of users and groups are Microsoft Entra identities. So unless otherwise noted, a *user*, in the context of Data Lake Storage, can refer to a Microsoft Entra user, service principal, managed identity, or security group. ### The super-user In the POSIX ACLs, every user is associated with a *primary group*. For example, #### Assigning the owning group for a new file or directory -- **Case 1:** The root directory `/`. This directory is created when a Data Lake Storage Gen2 container is created. In this case, the owning group is set to the user who created the container if it was done using OAuth. If the container is created using Shared Key, an Account SAS, or a Service SAS, then the owner and owning group are set to `$superuser`.+- **Case 1:** The root directory `/`. This directory is created when a Data Lake Storage container is created. In this case, the owning group is set to the user who created the container if it was done using OAuth. If the container is created using Shared Key, an Account SAS, or a Service SAS, then the owner and owning group are set to `$superuser`. 
- **Case 2 (every other case):** When a new item is created, the owning group is copied from the parent directory. #### Changing the owning group def access_check( user, desired_perms, path ) : As illustrated in the Access Check Algorithm, the mask limits access for named users, the owning group, and named groups. -For a new Data Lake Storage Gen2 container, the mask for the access ACL of the root directory ("/") defaults to **750** for directories and **640** for files. The following table shows the symbolic notation of these permission levels. +For a new Data Lake Storage container, the mask for the access ACL of the root directory ("/") defaults to **750** for directories and **640** for files. The following table shows the symbolic notation of these permission levels. |Entity|Directories|Files| |--|--|--| The mask may be specified on a per-call basis. This allows different consuming s ### The sticky bit -The sticky bit is a more advanced feature of a POSIX container. In the context of Data Lake Storage Gen2, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a directory, a child item can only be deleted or renamed by the child item's owning user, the directory's owner, or the Superuser ($superuser). +The sticky bit is a more advanced feature of a POSIX container. In the context of Data Lake Storage, it is unlikely that the sticky bit will be needed. In summary, if the sticky bit is enabled on a directory, a child item can only be deleted or renamed by the child item's owning user, the directory's owner, or the Superuser ($superuser). -The sticky bit isn't shown in the Azure portal. To learn more about the sticky bit and how to set it, see [What is the sticky bit Data Lake Storage Gen2?](/troubleshoot/azure/azure-storage/blobs/authentication/adls-gen2-sticky-bit-403-access-denied#what-is-the-sticky-bit-in-adls-gen2). +The sticky bit isn't shown in the Azure portal. To learn more about the sticky bit and how to set it, see [What is the sticky bit Data Lake Storage?](/troubleshoot/azure/azure-storage/blobs/authentication/adls-gen2-sticky-bit-403-access-denied#what-is-the-sticky-bit-in-adls-gen2). ## Default permissions on new files and directories When creating a default ACL, the umask is applied to the access ACL to determine The umask is a 9-bit value on parent directories that contains an RWX value for **owning user**, **owning group**, and **other**. -The umask for Azure Data Lake Storage Gen2 a constant value that is set to 007. This value translates to: +The umask for Azure Data Lake Storage a constant value that is set to 007. This value translates to: | umask component | Numeric form | Short form | Meaning | ||--||| The following table provides a summary view of the limits to consider while usin [!INCLUDE [Security groups](../../../includes/azure-storage-data-lake-rbac-acl-limits.md)] -### Does Data Lake Storage Gen2 support inheritance of Azure RBAC? +### Does Data Lake Storage support inheritance of Azure RBAC? Azure role assignments do inherit. Assignments flow from subscription, resource group, and storage account resources down to the container resource. -### Does Data Lake Storage Gen2 support inheritance of ACLs? +### Does Data Lake Storage support inheritance of ACLs? Default ACLs can be used to set ACLs for new child subdirectories and files created under the parent directory. To update ACLs for existing child items, you will need to add, update, or remove ACLs recursively for the desired directory hierarchy. 
For guidance, see the [How to set ACLs](#set-access-control-lists) section of this article. The Azure Storage REST API does contain an operation named [Set Container ACL](/ ## See also -- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md) |
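Because the article states that the umask is a constant 007, the default permissions for new items that don't inherit a default ACL follow the standard POSIX calculation (requested mode AND NOT umask). The short Python sketch below only illustrates that arithmetic; it doesn't call any Azure API.

```python
# Worked example of the constant umask (007) described in the article:
# all "other" permission bits are stripped from the requested mode.
UMASK = 0o007

requested_dir_perms = 0o777   # rwxrwxrwx requested for a new directory
requested_file_perms = 0o666  # rw-rw-rw- requested for a new file

effective_dir = requested_dir_perms & ~UMASK    # 0o770 -> rwxrwx---
effective_file = requested_file_perms & ~UMASK  # 0o660 -> rw-rw----

print(oct(effective_dir), oct(effective_file))  # 0o770 0o660
```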
storage | Data Lake Storage Acl Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-azure-portal.md | Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled. Last updated 03/09/2023 -# Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2 +# Use the Azure portal to manage ACLs in Azure Data Lake Storage This article shows you how to use the [Azure portal](https://portal.azure.com/) to manage the access control list (ACL) of a directory or blob in storage accounts that have the hierarchical namespace feature enabled on them. -For information about the structure of the ACL, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +For information about the structure of the ACL, see [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md). -To learn about how to use ACLs and Azure roles together, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md). +To learn about how to use ACLs and Azure roles together, see [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md). ## Prerequisites To learn about how to use ACLs and Azure roles together, see [Access control mod > ![Add a security principal to the ACL](./media/data-lake-storage-acl-azure-portal/get-security-principal.png) > [!NOTE]- > We recommend that you create a security group in Microsoft Entra ID, and then maintain permissions on the group rather than for individual users. For details on this recommendation, as well as other best practices, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md). + > We recommend that you create a security group in Microsoft Entra ID, and then maintain permissions on the group rather than for individual users. For details on this recommendation, as well as other best practices, see [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md). 8. To manage the *default ACL*, select the **default permissions** tab, and then select the **Configure default permissions** checkbox. You can find the complete list of guides here: [How to set ACLs](data-lake-stora ## Next steps -Learn about the Data Lake Storage Gen2 permission model. +Learn about the Data Lake Storage permission model. > [!div class="nextstepaction"]-> [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md) +> [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md) |
storage | Data Lake Storage Acl Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-cli.md | Title: Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use Azure CLI to manage ACLs in Azure Data Lake Storage description: Use the Azure CLI to manage access control lists (ACL) in storage accounts that have a hierarchical namespace. ms.devlang: azurecli -# Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2 +# Use Azure CLI to manage ACLs in Azure Data Lake Storage This article shows you how to use the [Azure CLI](/cli/azure/) to get, set, and update the access control lists of directories and files. The following image shows the output after getting the ACL of a directory. ![Get ACL output](./media/data-lake-storage-directory-file-acl-cli/get-acl.png) -In this example, the owning user has read, write, and execute permissions. The owning group has only read and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +In this example, the owning user has read, write, and execute permissions. The owning group has only read and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage](data-lake-storage-access-control.md). ## Set ACLs The following image shows the output after setting the ACL of a file. ![Get ACL output 2](./media/data-lake-storage-directory-file-acl-cli/set-acl-file.png) -In this example, the owning user and owning group have only read and write permissions. All other users have write and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +In this example, the owning user and owning group have only read and write permissions. All other users have write and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage](data-lake-storage-access-control.md). ### Set ACLs recursively az storage fs access set-recursive --acl "user::rw-,group::r-x,other::" --con - [Samples](https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/storage/docs/ADLS%20Gen2.md) - [Give feedback](https://github.com/Azure/azure-cli-extensions/issues) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
storage | Data Lake Storage Acl Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-dotnet.md | Title: Use .NET to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use .NET to manage ACLs in Azure Data Lake Storage description: Use .NET to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled. ms.devlang: csharp -# Use .NET to manage ACLs in Azure Data Lake Storage Gen2 +# Use .NET to manage ACLs in Azure Data Lake Storage This article shows you how to use .NET to get, set, and update the access control lists of directories and files. To use the snippets in this article, you'll need to create a [DataLakeServiceCli ### Connect by using Microsoft Entra ID > [!NOTE]-> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md). +> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md). You can use the [Azure identity client library for .NET](/dotnet/api/overview/azure/identity-readme) to authenticate your application with Microsoft Entra ID. This example sets ACL entries recursively. If this code encounters a permission - [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake/GEN1_GEN2_MAPPING.md) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library) - [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
storage | Data Lake Storage Acl Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-java.md | Title: Use Java to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use Java to manage ACLs in Azure Data Lake Storage description: Use Azure Storage libraries for Java to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled. -# Use Java to manage ACLs in Azure Data Lake Storage Gen2 +# Use Java to manage ACLs in Azure Data Lake Storage This article shows you how to use Java to get, set, and update the access control lists of directories and files. This example sets ACL entries recursively. If this code encounters a permission - [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-jav) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library) - [Give Feedback](https://github.com/Azure/azure-sdk-for-java/issues)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
storage | Data Lake Storage Acl Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-javascript.md | Title: Use JavaScript (Node.js) to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use JavaScript (Node.js) to manage ACLs in Azure Data Lake Storage description: Use Azure Storage Data Lake client library for JavaScript to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled. ms.devlang: javascript -# Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2 +# Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage This article shows you how to use Node.js to get, set, and update the access control lists of directories and files. To use the snippets in this article, you'll need to create a **DataLakeServiceCl ### Connect by using Microsoft Entra ID > [!NOTE]-> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md). +> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md). You can use the [Azure identity client library for JS](https://www.npmjs.com/package/@azure/identity) to authenticate your application with Microsoft Entra ID. function GetDataLakeServiceClient(accountName, accountKey) { This example gets and then sets the ACL of a directory named `my-directory`. This example gives the owning user read, write, and execute permissions, gives the owning group only read and execute permissions, and gives all others read access. > [!NOTE]-> If your application authorizes access by using Microsoft Entra ID, then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md). +> If your application authorizes access by using Microsoft Entra ID, then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage](./data-lake-storage-access-control.md). ```javascript async function ManageDirectoryACLs(fileSystemClient) { You can also get and set the ACL of the root directory of a container. To get th This example gets and then sets the ACL of a file named `upload-file.txt`. This example gives the owning user read, write, and execute permissions, gives the owning group only read and execute permissions, and gives all others read access. 
> [!NOTE]-> If your application authorizes access by using Microsoft Entra ID, then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md). +> If your application authorizes access by using Microsoft Entra ID, then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage](./data-lake-storage-access-control.md). ```javascript async function ManageFileACLs(fileSystemClient) { await fileClient.setAccessControl(acl); - [Package (Node Package Manager)](https://www.npmjs.com/package/@azure/storage-file-datalake) - [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-file-datalake/samples) - [Give Feedback](https://github.com/Azure/azure-sdk-for-java/issues)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
storage | Data Lake Storage Acl Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md | Title: Use PowerShell to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use PowerShell to manage ACLs in Azure Data Lake Storage description: Use PowerShell cmdlets to manage access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled. ms.devlang: powershell -# Use PowerShell to manage ACLs in Azure Data Lake Storage Gen2 +# Use PowerShell to manage ACLs in Azure Data Lake Storage This article shows you how to use PowerShell to get, set, and update the access control lists of directories and files. Choose how you want your commands to obtain authorization to the storage account ### Option 1: Obtain authorization by using Microsoft Entra ID > [!NOTE]-> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md). +> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md). With this approach, the system ensures that your user account has the appropriate Azure role-based access control (Azure RBAC) assignments and ACL permissions. The following image shows the output after getting the ACL of a directory. ![Get ACL output for directory](./media/data-lake-storage-directory-file-acl-powershell/get-acl.png) -In this example, the owning user has read, write, and execute permissions. The owning group has only read and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +In this example, the owning user has read, write, and execute permissions. The owning group has only read and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage](data-lake-storage-access-control.md). ## Set ACLs The following image shows the output after setting the ACL of a file. ![Get ACL output for file](./media/data-lake-storage-directory-file-acl-powershell/set-acl.png) -In this example, the owning user and owning group have only read and write permissions. All other users have write and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +In this example, the owning user and owning group have only read and write permissions. All other users have write and execute permissions. For more information about access control lists, see [Access control in Azure Data Lake Storage](data-lake-storage-access-control.md). 
### Set ACLs recursively To see an example that sets ACLs recursively in batches by specifying a batch si - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library) - [Storage PowerShell cmdlets](/powershell/module/az.storage)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
storage | Data Lake Storage Acl Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-python.md | Title: Use Python to manage ACLs in Azure Data Lake Storage Gen2 + Title: Use Python to manage ACLs in Azure Data Lake Storage description: Use Python to manage access control lists (ACL) in storage accounts that have a hierarchical namespace (HNS) enabled. ms.devlang: python -# Use Python to manage ACLs in Azure Data Lake Storage Gen2 +# Use Python to manage ACLs in Azure Data Lake Storage This article shows you how to use Python to get, set, and update the access control lists of directories and files. To use the snippets in this article, you'll need to create a **DataLakeServiceCl ### Connect by using Microsoft Entra ID > [!NOTE]-> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md). +> If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md). You can use the [Azure identity client library for Python](https://pypi.org/project/azure-identity/) to authenticate your application with Microsoft Entra ID. To see an example that processes ACLs recursively in batches by specifying a bat - [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library) - [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)-- [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control.md) +- [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md) |
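Since the preceding entry tracks the Python ACL article, here is a hedged sketch of the get-and-set flow that article describes, using the `azure-storage-file-datalake` and `azure-identity` packages. The account, file system, and directory names are placeholders, and the article itself remains the authoritative walkthrough.

```python
# Hedged sketch: read and then replace the ACL of a directory.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_system_client = service_client.get_file_system_client(file_system="my-file-system")
directory_client = file_system_client.get_directory_client("my-directory")

# Read the current access control settings (owner, group, permissions, ACL).
acl_props = directory_client.get_access_control()
print(acl_props["owner"], acl_props["group"], acl_props["acl"])

# Give the owning user rwx, the owning group r-x, and all others no access.
directory_client.set_access_control(acl="user::rwx,group::r-x,other::---")
```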
storage | Data Lake Storage Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md | Title: Best practices for using Azure Data Lake Storage Gen2 + Title: Best practices for using Azure Data Lake Storage -description: Learn how to optimize performance, reduce costs, and secure your Data Lake Storage Gen2 enabled Azure Storage account. +description: Learn how to optimize performance, reduce costs, and secure your Data Lake Storage enabled Azure Storage account. -# Best practices for using Azure Data Lake Storage Gen2 +# Best practices for using Azure Data Lake Storage -This article provides best practice guidelines that help you optimize performance, reduce costs, and secure your Data Lake Storage Gen2 enabled Azure Storage account. +This article provides best practice guidelines that help you optimize performance, reduce costs, and secure your Data Lake Storage enabled Azure Storage account. For general suggestions around structuring a data lake, see these articles: - [Overview of Azure Data Lake Storage for the data management and analytics scenario](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-overview?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)-- [Provision three Azure Data Lake Storage Gen2 accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json)+- [Provision three Azure Data Lake Storage accounts for each data landing zone](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-services?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) ## Find documentation -Azure Data Lake Storage Gen2 isn't a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage Gen2 documentation provides best practices and guidance for using these capabilities. For all other aspects of account management such as setting up network security, designing for high availability, and disaster recovery, see the [Blob storage documentation](storage-blobs-introduction.md) content. +Azure Data Lake Storage isn't a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage documentation provides best practices and guidance for using these capabilities. For all other aspects of account management such as setting up network security, designing for high availability, and disaster recovery, see the [Blob storage documentation](storage-blobs-introduction.md) content. #### Evaluate feature support and known issues Use the following pattern as you configure your account to use Blob storage features. -1. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to determine whether a feature is fully supported in your account. Some features aren't yet supported or have partial support in Data Lake Storage Gen2 enabled accounts. Feature support is always expanding so make sure to periodically review this article for updates. +1. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to determine whether a feature is fully supported in your account. 
Some features aren't yet supported or have partial support in Data Lake Storage enabled accounts. Feature support is always expanding so make sure to periodically review this article for updates. -2. Review the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article to see if there are any limitations or special guidance around the feature you intend to use. +2. Review the [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) article to see if there are any limitations or special guidance around the feature you intend to use. -3. Scan feature articles for any guidance that is specific to Data Lake Storage Gen2 enabled accounts. +3. Scan feature articles for any guidance that is specific to Data Lake Storage enabled accounts. #### Understand the terms used in documentation As you move between content sets, you notice some slight terminology differences If your workloads require a low consistent latency and/or require a high number of input output operations per second (IOP), consider using a premium block blob storage account. This type of account makes data available via high-performance hardware. Data is stored on solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. The storage costs of premium performance are higher, but transaction costs are lower. Therefore, if your workloads execute a large number of transactions, a premium performance block blob account can be economical. -If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. This combination of using premium block blob storage accounts along with a Data Lake Storage enabled account is referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md). +If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage along with a premium block blob storage account. This combination of using premium block blob storage accounts along with a Data Lake Storage enabled account is referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md). ## Optimize for data ingest When ingesting data from a source system, the source hardware, source network hardware, or the network connectivity to your storage account can be a bottleneck. -![Diagram that shows the factors to consider when ingesting data from a source system to Data Lake Storage Gen2.](./media/data-lake-storage-best-practices/bottleneck.png) +![Diagram that shows the factors to consider when ingesting data from a source system to Data Lake Storage.](./media/data-lake-storage-best-practices/bottleneck.png) ### Source hardware Whether you're using on-premises machines or Virtual Machines (VMs) in Azure, ma ### Network connectivity to the storage account -The network connectivity between your source data and your storage account can sometimes be a bottleneck. When your source data is on premise, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/). If your source data is in Azure, the performance is best when the data is in the same Azure region as your Data Lake Storage Gen2 enabled account. +The network connectivity between your source data and your storage account can sometimes be a bottleneck. 
When your source data is on-premises, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/). If your source data is in Azure, the performance is best when the data is in the same Azure region as your Data Lake Storage enabled account. ### Configure data ingestion tools for maximum parallelization To achieve the best performance, use all available throughput by performing as many reads and writes in parallel as possible. -![Data Lake Storage Gen2 performance](./media/data-lake-storage-best-practices/throughput.png) +![Data Lake Storage performance](./media/data-lake-storage-best-practices/throughput.png) The following table summarizes the key settings for several popular ingestion tools. The following table summarizes the key settings for several popular ingestion to > [!NOTE] > The overall performance of your ingest operations depends on other factors that are specific to the tool that you're using to ingest data. For the best up-to-date guidance, see the documentation for each tool that you intend to use. -Your account can scale to provide the necessary throughput for all analytics scenarios. By default, a Data Lake Storage Gen2 enabled account provides enough throughput in its default configuration to meet the needs of a broad category of use cases. If you run into the default limit, the account can be configured to provide more throughput by contacting [Azure Support](https://azure.microsoft.com/support/faq/). +Your account can scale to provide the necessary throughput for all analytics scenarios. By default, a Data Lake Storage enabled account provides enough throughput in its default configuration to meet the needs of a broad category of use cases. If you run into the default limit, the account can be configured to provide more throughput by contacting [Azure Support](https://azure.microsoft.com/support/faq/). ## Structure data sets Again, the choice you make with the folder and file organization should optimize Start by reviewing the recommendations in the [Security recommendations for Blob storage](security-recommendations.md) article. You'll find best practice guidance about how to protect your data from accidental or malicious deletion, secure data behind a firewall, and use Microsoft Entra ID as the basis of identity management. -Then, review the [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md) article for guidance that is specific to Data Lake Storage Gen2 enabled accounts. This article helps you understand how to use Azure role-based access control (Azure RBAC) roles together with access control lists (ACLs) to enforce security permissions on directories and files in your hierarchical file system. +Then, review the [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md) article for guidance that is specific to Data Lake Storage enabled accounts. This article helps you understand how to use Azure role-based access control (Azure RBAC) roles together with access control lists (ACLs) to enforce security permissions on directories and files in your hierarchical file system. ## Ingest, process, and analyze -There are many different sources of data and different ways in which that data can be ingested into a Data Lake Storage Gen2 enabled account. +There are many different sources of data and different ways in which that data can be ingested into a Data Lake Storage enabled account.
For example, you can ingest large sets of data from HDInsight and Hadoop clusters or smaller sets of *ad hoc* data for prototyping applications. You can ingest streamed data that is generated by various sources such as applications, devices, and sensors. For this type of data, you can use tools to capture and process the data on an event-by-event basis in real time, and then write the events in batches into your account. You can also ingest web server logs, which contain information such as the history of page requests. For log data, consider writing custom scripts or applications to upload them so that you'll have the flexibility to include your data uploading component as part of your larger big data application. The following table recommends tools that you can use to ingest, analyze, visual | Download data | Azure portal, [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md), [REST](/rest/api/storageservices/data-lake-storage-gen2), Azure SDKs ([.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md), and [Node.js](data-lake-storage-directory-file-acl-javascript.md)), [Azure Storage Explorer](data-lake-storage-explorer.md), [AzCopy](../common/storage-use-azcopy-v10.md#transfer-data), [Azure Data Factory](../../data-factory/copy-activity-overview.md), [Apache DistCp](./data-lake-storage-use-distcp.md) | > [!NOTE]-> This table doesn't reflect the complete list of Azure services that support Data Lake Storage Gen2. To see a list of supported Azure services, their level of support, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md). +> This table doesn't reflect the complete list of Azure services that support Data Lake Storage. To see a list of supported Azure services, their level of support, see [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md). ## Monitor telemetry Azure Storage logs in Azure Monitor can be enabled through the Azure portal, Pow ## See also - [Key considerations for Azure Data Lake Storage](/azure/cloud-adoption-framework/scenarios/data-management/best-practices/data-lake-key-considerations)-- [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md)+- [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md) - [The hitchhiker's guide to the Data Lake](https://azure.github.io/Storage/docs/analytics/hitchhikers-guide-to-the-datalake/)-- [Overview of Azure Data Lake Storage Gen2](data-lake-storage-introduction.md)+- [Overview of Azure Data Lake Storage](data-lake-storage-introduction.md) |
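The ingestion guidance in the preceding entry (use all available throughput by running reads and writes in parallel) can be sketched in a few lines. This is a minimal illustration, not the article's own sample: it assumes the `azure-storage-file-datalake` and `azure-identity` packages, a placeholder account and file system, and a local `./to-upload` folder of CSV files.

```python
# Hedged sketch: upload several local files in parallel.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs_client = service_client.get_file_system_client(file_system="raw-zone")

def upload_one(local_path: Path) -> str:
    file_client = fs_client.get_file_client(f"ingest/{local_path.name}")
    with local_path.open("rb") as data:
        file_client.upload_data(data, overwrite=True)
    return local_path.name

local_files = list(Path("./to-upload").glob("*.csv"))

# Keep several uploads in flight at once; tune max_workers to the source
# hardware and network capacity, which the article calls out as bottlenecks.
with ThreadPoolExecutor(max_workers=8) as pool:
    for name in pool.map(upload_one, local_files):
        print(f"uploaded {name}")
```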
storage | Data Lake Storage Directory File Acl Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-cli.md | Title: Use Azure CLI to manage data (Azure Data Lake Storage Gen2) + Title: Use Azure CLI to manage data (Azure Data Lake Storage) description: Use the Azure CLI to manage directories and files in storage accounts that have a hierarchical namespace. ms.devlang: azurecli -# Manage directories and files in Azure Data Lake Storage Gen2 via the Azure CLI +# Manage directories and files in Azure Data Lake Storage via the Azure CLI This article shows you how to use the [Azure CLI](/cli/azure/) to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Azure CLI to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-cli.md). [Samples](https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/storage/docs/ADLS%20Gen2.md) | [Give feedback](https://github.com/Azure/azure-cli-extensions/issues) az storage fs file delete -p my-directory/my-file.txt -f my-file-system --accou - [Samples](https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/storage/docs/ADLS%20Gen2.md) - [Give feedback](https://github.com/Azure/azure-cli-extensions/issues) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library)-- [Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)+- [Use Azure CLI to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-cli.md) |
storage | Data Lake Storage Directory File Acl Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md | Title: Use .NET to manage data in Azure Data Lake Storage Gen2 + Title: Use .NET to manage data in Azure Data Lake Storage description: Use the Azure Storage client library for .NET to manage directories and files in storage accounts that have a hierarchical namespace enabled. ms.devlang: csharp -# Use .NET to manage directories and files in Azure Data Lake Storage Gen2 +# Use .NET to manage directories and files in Azure Data Lake Storage This article shows you how to use .NET to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use .NET to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-dotnet.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use .NET to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-dotnet.md). [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Files.DataLake) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake) | [API reference](/dotnet/api/azure.storage.files.datalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Files.DataLake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues) |
storage | Data Lake Storage Directory File Acl Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-java.md | Title: Use Java to manage data in Azure Data Lake Storage Gen2 + Title: Use Java to manage data in Azure Data Lake Storage description: Use Azure Storage libraries for Java to manage directories and files in storage accounts that have a hierarchical namespace enabled. -# Use Java to manage directories and files in Azure Data Lake Storage Gen2 +# Use Java to manage directories and files in Azure Data Lake Storage This article shows you how to use Java to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use .Java to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-java.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Java to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-java.md). [Package (Maven)](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) | [Samples](https://github.com/Azure/azure-sdk-for-jav) | [Give Feedback](https://github.com/Azure/azure-sdk-for-java/issues) |
storage | Data Lake Storage Directory File Acl Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-javascript.md | Title: Use JavaScript (Node.js) to manage data in Azure Data Lake Storage Gen2 + Title: Use JavaScript (Node.js) to manage data in Azure Data Lake Storage description: Use Azure Storage Data Lake client library for JavaScript to manage directories and files in storage accounts that have a hierarchical namespace enabled. ms.devlang: javascript -# Use JavaScript SDK in Node.js to manage directories and files in Azure Data Lake Storage Gen2 +# Use JavaScript SDK in Node.js to manage directories and files in Azure Data Lake Storage This article shows you how to use Node.js to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-javascript.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use JavaScript SDK in Node.js to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-javascript.md). [Package (Node Package Manager)](https://www.npmjs.com/package/@azure/storage-file-datalake) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-file-datalake/samples) | [Give Feedback](https://github.com/Azure/azure-sdk-for-java/issues) |
storage | Data Lake Storage Directory File Acl Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-powershell.md | Title: 'Use PowerShell to manage data: Azure Data Lake Storage Gen2' + Title: 'Use PowerShell to manage data: Azure Data Lake Storage' description: Use PowerShell cmdlets to manage directories and files in storage accounts that have a hierarchical namespace enabled. ms.devlang: powershell -# Use PowerShell to manage directories and files in Azure Data Lake Storage Gen2 +# Use PowerShell to manage directories and files in Azure Data Lake Storage This article shows you how to use PowerShell to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use PowerShell to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-powershell.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use PowerShell to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-powershell.md). [Reference](/powershell/module/Az.Storage/) | [Gen1 to Gen2 mapping](#gen1-gen2-map) | [Give feedback](https://github.com/Azure/azure-powershell/issues) You can use the `-Force` parameter to remove the file without a prompt. ## Gen1 to Gen2 Mapping -The following table shows how the cmdlets used for Data Lake Storage Gen1 map to the cmdlets for Data Lake Storage Gen2. +The following table shows how the cmdlets used for Data Lake Storage Gen1 map to the cmdlets for Data Lake Storage. > [!NOTE] > Azure Data Lake Storage Gen1 is now retired. See the retirement announcement [here](https://aka.ms/data-lake-storage-gen1-retirement-announcement). Data Lake Storage Gen1 resources are no longer accessible. If you require special assistance, please [contact us](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/overview). -|Data Lake Storage Gen1 cmdlet| Data Lake Storage Gen2 cmdlet| Notes | +|Data Lake Storage Gen1 cmdlet| Data Lake Storage cmdlet| Notes | |--||--| |Get-AzDataLakeStoreChildItem|Get-AzDataLakeGen2ChildItem|By default, the Get-AzDataLakeGen2ChildItem cmdlet only lists the first level child items. The -Recurse parameter lists child items recursively. | |Get-AzDataLakeStoreItem<br>Get-AzDataLakeStoreItemAclEntry<br>Get-AzDataLakeStoreItemOwner<br>Get-AzDataLakeStoreItemPermission|Get-AzDataLakeGen2Item|The output items of the Get-AzDataLakeGen2Item cmdlet have these properties: Acl, Owner, Group, Permission.| |
storage | Data Lake Storage Directory File Acl Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-python.md | Title: Use Python to manage data in Azure Data Lake Storage Gen2 + Title: Use Python to manage data in Azure Data Lake Storage description: Use Python to manage directories and files in a storage account that has hierarchical namespace enabled. ms.devlang: python -# Use Python to manage directories and files in Azure Data Lake Storage Gen2 +# Use Python to manage directories and files in Azure Data Lake Storage This article shows you how to use Python to create and manage directories and files in storage accounts that have a hierarchical namespace. -To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Python to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md). +To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Python to manage ACLs in Azure Data Lake Storage](data-lake-storage-acl-python.md). [Package (PyPi)](https://pypi.org/project/azure-storage-file-datalake/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/samples) | [API reference](/python/api/azure-storage-file-datalake/azure.storage.filedatalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues) |
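As a companion to the Python directory-and-file article tracked in the preceding entry, here is a hedged sketch of the basic create, upload, read, and delete flow with `azure-storage-file-datalake`. All names are placeholders, and the article itself remains the authoritative walkthrough.

```python
# Hedged sketch: create a directory, write a file into it, read it back, clean up.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs_client = service_client.get_file_system_client(file_system="my-file-system")

# Create a directory, then create and populate a file inside it.
directory_client = fs_client.create_directory("my-directory")
file_client = directory_client.create_file("uploaded-file.txt")
file_client.upload_data(b"hello data lake", overwrite=True)

# Read the file back.
print(file_client.download_file().readall())

# Remove the directory and everything under it.
directory_client.delete_directory()
```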
storage | Data Lake Storage Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-events.md | Title: 'Tutorial: Implement the data lake capture pattern to update an Azure Databricks Delta table' -description: This tutorial shows you how to use an Event Grid subscription, an Azure Function, and an Azure Databricks job to insert rows of data into a table that is stored in Azure Data Lake Storage Gen2. +description: This tutorial shows you how to use an Event Grid subscription, an Azure Function, and an Azure Databricks job to insert rows of data into a table that is stored in Azure Data Lake Storage. We'll build this solution in reverse order, starting with the Azure Databricks w ## Prerequisites -- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2). This tutorial uses a storage account named `contosoorders`. +- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage). This tutorial uses a storage account named `contosoorders`. - See [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md). + See [Create a storage account to use with Azure Data Lake Storage](create-data-lake-storage-account.md). - Make sure that your user account has the [Storage Blob Data Contributor role](assign-azure-role-data-access.md) assigned to it. - Create a service principal, create a client secret, and then grant the service principal access to the storage account. - See [Tutorial: Connect to Azure Data Lake Storage Gen2](/azure/databricks/getting-started/connect-to-azure-storage) (Steps 1 through 3). After completing these steps, make sure to paste the tenant ID, app ID, and client secret values into a text file. You'll need those soon. + See [Tutorial: Connect to Azure Data Lake Storage](/azure/databricks/getting-started/connect-to-azure-storage) (Steps 1 through 3). After completing these steps, make sure to paste the tenant ID, app ID, and client secret values into a text file. You'll need those soon. - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a sales order In this section, you create an Azure Databricks workspace using the Azure portal This code creates a widget named **source_file**. Later, you'll create an Azure Function that calls this code and passes a file path to that widget. This code also authenticates your service principal with the storage account, and creates some variables that you'll use in other cells. > [!NOTE]- > In a production setting, consider storing your authentication key in Azure Databricks. Then, add a look up key to your code block instead of the authentication key. <br><br>For example, instead of using this line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")`, you would use the following line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", dbutils.secrets.get(scope = "<scope-name>", key = "<key-name-for-service-credential>"))`. <br><br>After you've completed this tutorial, see the [Azure Data Lake Storage Gen2](/azure/databricks/data/data-sources/azure/azure-datalake-gen2) article on the Azure Databricks Website to see examples of this approach. + > In a production setting, consider storing your authentication key in Azure Databricks. Then, add a look up key to your code block instead of the authentication key. 
<br><br>For example, instead of using this line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")`, you would use the following line of code: `spark.conf.set("fs.azure.account.oauth2.client.secret", dbutils.secrets.get(scope = "<scope-name>", key = "<key-name-for-service-credential>"))`. <br><br>After you've completed this tutorial, see the [Azure Data Lake Storage](/azure/databricks/data/data-sources/azure/azure-datalake-gen2) article on the Azure Databricks Website to see examples of this approach. 2. Press the **SHIFT + ENTER** keys to run the code in this block. |
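The note in the preceding entry recommends reading the client secret from a secret scope rather than hard-coding it. A minimal sketch of that pattern follows, with placeholder scope and key names; `spark` and `dbutils` are provided by the Databricks notebook runtime.

```python
# Pull the service principal's client secret from a Databricks secret scope
# instead of embedding it in the notebook. Scope and key names are placeholders.
client_secret = dbutils.secrets.get(scope="adls-scope", key="service-credential")
spark.conf.set("fs.azure.account.oauth2.client.secret", client_secret)
```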
storage | Data Lake Storage Explorer Acl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md | Title: 'Storage Explorer: Set ACLs in Azure Data Lake Storage Gen2' + Title: 'Storage Explorer: Set ACLs in Azure Data Lake Storage' description: Use the Azure Storage Explorer to manage access control lists (ACLs) in storage accounts that have hierarchical namespace (HNS) enabled. Last updated 03/09/2023 -# Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage Gen2 +# Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage This article shows you how to use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to manage access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled. This article shows you how to modify the ACL of a file or directory and how to app - You're the owning user of the target container, directory, or blob to which you plan to apply ACL settings. > [!NOTE]-> Storage Explorer makes use of both the Blob (blob) & Data Lake Storage Gen2 (dfs) [endpoints](../common/storage-private-endpoints.md#private-endpoints-for-azure-storage) when working with Azure Data Lake Storage Gen2. If access to Azure Data Lake Storage Gen2 is configured using private endpoints, ensure that two private endpoints are created for the storage account: one with the target sub-resource `blob` and the other with the target sub-resource `dfs`. +> Storage Explorer makes use of both the Blob (blob) & Data Lake Storage (dfs) [endpoints](../common/storage-private-endpoints.md#private-endpoints-for-azure-storage) when working with Azure Data Lake Storage. If access to Azure Data Lake Storage is configured using private endpoints, ensure that two private endpoints are created for the storage account: one with the target sub-resource `blob` and the other with the target sub-resource `dfs`. ## Sign in to Storage Explorer The **Manage Access** dialog box allows you to manage permissions for owner and To add a new user or group to the access control list, select the **Add** button. Then, enter the corresponding Microsoft Entra entry you wish to add to the list and then select **Add**. The user or group will now appear in the **Users and groups:** field, allowing you to begin managing their permissions. > [!NOTE]-> It is a best practice, and recommended, to create a security group in Microsoft Entra ID and maintain permissions on the group rather than individual users. For details on this recommendation, as well as other best practices, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-explorer-acl.md). +> It is a recommended best practice to create a security group in Microsoft Entra ID and maintain permissions on the group rather than on individual users. For details on this recommendation, as well as other best practices, see [Access control model in Azure Data Lake Storage](data-lake-storage-explorer-acl.md). Use the check box controls to set access and default ACLs. To learn more about the difference between these types of ACLs, see [Types of ACLs](data-lake-storage-access-control.md#types-of-acls). To apply ACL entries recursively, right-click the container or a directory, and ## Next steps -Learn about the Data Lake Storage Gen2 permission model. +Learn about the Data Lake Storage permission model.
> [!div class="nextstepaction"]-> [Access control model in Azure Data Lake Storage Gen2](./data-lake-storage-access-control-model.md) +> [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md) |
storage | Data Lake Storage Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer.md | Title: Use Azure Storage Explorer with Azure Data Lake Storage Gen2 + Title: Use Azure Storage Explorer with Azure Data Lake Storage description: Use the Azure Storage Explorer to manage directories and file and directory access control lists (ACL) in storage accounts that has hierarchical namespace (HNS) enabled. Last updated 03/09/2023 -# Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage Gen2 +# Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage This article shows you how to use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to create and manage directories and files in storage accounts that have hierarchical namespace (HNS) enabled. This article shows you how to use [Azure Storage Explorer](https://azure.microso - Azure Storage Explorer installed on your local computer. To install Azure Storage Explorer for Windows, Macintosh, or Linux, see [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). > [!NOTE]-> Storage Explorer makes use of both the Blob (blob) & Data Lake Storage Gen2 (dfs) [endpoints](../common/storage-private-endpoints.md#private-endpoints-for-azure-storage) when working with Azure Data Lake Storage Gen2. If access to Azure Data Lake Storage Gen2 is configured using private endpoints, ensure that two private endpoints are created for the storage account: one with the target sub-resource `blob` and the other with the target sub-resource `dfs`. +> Storage Explorer makes use of both the Blob (blob) & Data Lake Storage (dfs) [endpoints](../common/storage-private-endpoints.md#private-endpoints-for-azure-storage) when working with Azure Data Lake Storage. If access to Azure Data Lake Storage is configured using private endpoints, ensure that two private endpoints are created for the storage account: one with the target sub-resource `blob` and the other with the target sub-resource `dfs`. ## Sign in to Storage Explorer To download files by using **Azure Storage Explorer**, with a file selected, sel Learn how to manage file and directory permission by setting access control lists (ACLs) > [!div class="nextstepaction"]-> [Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage Gen2](./data-lake-storage-explorer-acl.md) +> [Use Azure Storage Explorer to manage ACLs in Azure Data Lake Storage](./data-lake-storage-explorer-acl.md) |
storage | Data Lake Storage Integrate With Services Tutorials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-integrate-with-services-tutorials.md | Title: 'Tutorials: Azure services with Azure Data Lake Storage Gen2' + Title: 'Tutorials: Azure services with Azure Data Lake Storage' -description: Find tutorials that help you learn how to use Azure services with Azure Data Lake Storage Gen2. +description: Find tutorials that help you learn how to use Azure services with Azure Data Lake Storage. Last updated 03/07/2023 -# Tutorials that use Azure services with Azure Data Lake Storage Gen2 +# Tutorials that use Azure services with Azure Data Lake Storage -This article contains links to tutorials that show you how to use various Azure services with Data Lake Storage Gen2. +This article contains links to tutorials that show you how to use various Azure services with Data Lake Storage. ## List of tutorials | Azure service | Step-by-step guide | ||-| | Azure Synapse Analytics | [Get Started with Azure Synapse Analytics](../../synapse-analytics/get-started.md) |-| Azure Data Factory | [Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md) | +| Azure Data Factory | [Load data into Azure Data Lake Storage with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md) | | Azure Databricks | [Use with Azure Databricks](/azure/databricks/data/data-sources/azure/adls-gen2/) | | Azure Databricks | [Extract, transform, and load data by using Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse) |-| Azure Databricks | [Access Data Lake Storage Gen2 data with Azure Databricks using Spark](data-lake-storage-use-databricks-spark.md)| +| Azure Databricks | [Access Data Lake Storage data with Azure Databricks using Spark](data-lake-storage-use-databricks-spark.md)| | Azure Event Grid | [Implement the data lake capture pattern to update a Databricks Delta table](data-lake-storage-events.md) | | Azure Machine Learning | [Access data in Azure storage services](/azure/machine-learning/how-to-access-data) | | Azure Data Box | [Use Azure Data Box to migrate data from an on-premises HDFS store to Azure Storage](data-lake-storage-migrate-on-premises-hdfs-cluster.md) |-| HDInsight | [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md) | +| HDInsight | [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md) | | HDInsight | [Extract, transform, and load data by using Apache Hive on Azure HDInsight](data-lake-storage-tutorial-extract-transform-load-hive.md) |-| Power BI | [Analyze data in Data Lake Storage Gen2 using Power BI](/power-query/connectors/datalakestorage) | +| Power BI | [Analyze data in Data Lake Storage using Power BI](/power-query/connectors/datalakestorage) | | Azure Data Explorer | [Query data in Azure Data Lake using Azure Data Explorer](/azure/data-explorer/data-lake-query-data) |-| Azure Cognitive Search | [Index and search Azure Data Lake Storage Gen2 documents (preview)](/azure/search/search-howto-index-azure-data-lake-storage) | +| Azure Cognitive Search | [Index and search Azure Data Lake Storage documents (preview)](/azure/search/search-howto-index-azure-data-lake-storage) | > [!NOTE]-> This table doesn't reflect the complete list of Azure services that support Data Lake Storage 
Gen2. To see a list of supported Azure services, their level of support, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md). To see how services organized into categories such as ingest, download, process, and visualize, see [Ingest, process, and analyze](./data-lake-storage-best-practices.md#ingest-process-and-analyze). +> This table doesn't reflect the complete list of Azure services that support Data Lake Storage. To see a list of supported Azure services and their level of support, see [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md). To see how services are organized into categories such as ingest, download, process, and visualize, see [Ingest, process, and analyze](./data-lake-storage-best-practices.md#ingest-process-and-analyze). ## See also -[Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md) +[Best practices for using Azure Data Lake Storage](data-lake-storage-best-practices.md) |
storage | Data Lake Storage Introduction Abfs Uri | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction-abfs-uri.md | Title: Use the Azure Data Lake Storage Gen2 URI + Title: Use the Azure Data Lake Storage URI -description: Learn URI syntax for the ABFS scheme identifier, which represents the Azure Blob File System driver (Hadoop Filesystem driver for Azure Data Lake Storage Gen2). +description: Learn URI syntax for the ABFS scheme identifier, which represents the Azure Blob File System driver (Hadoop Filesystem driver for Azure Data Lake Storage). -# Use the Azure Data Lake Storage Gen2 URI +# Use the Azure Data Lake Storage URI -The [Hadoop Filesystem](https://www.aosabook.org/en/hdfs.html) driver that is compatible with Azure Data Lake Storage Gen2 is known by its scheme identifier `abfs` (Azure Blob File System). Consistent with other Hadoop Filesystem drivers, the ABFS driver employs a URI format to address files and directories within a Data Lake Storage Gen2 enabled account. +The [Hadoop Filesystem](https://www.aosabook.org/en/hdfs.html) driver that is compatible with Azure Data Lake Storage is known by its scheme identifier `abfs` (Azure Blob File System). Consistent with other Hadoop Filesystem drivers, the ABFS driver employs a URI format to address files and directories within a Data Lake Storage enabled account. ## URI syntax However, if the account you want to address does have a hierarchical namespace, ## Next steps -- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)+- [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json) |
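The row above renames the ABFS URI article; the URI shape itself is unchanged. As a minimal illustration only, assuming a cluster whose Hadoop distribution ships the ABFS driver, and using hypothetical account, container, and path names:

```bash
# URI format: abfs[s]://<container>@<account>.dfs.core.windows.net/<path>
# "myfilesystem" and "mystorageaccount" are placeholder names.
hadoop fs -ls "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/"

# Address a directory inside the container; abfss negotiates TLS, abfs does not.
hadoop fs -ls "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/raw/2024"
```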
storage | Data Lake Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction.md | Title: Azure Data Lake Storage Gen2 Introduction + Title: Azure Data Lake Storage Introduction -description: Read an introduction to Azure Data Lake Storage Gen2. Learn key features. Review supported Blob storage features, Azure service integrations, and platforms. +description: Read an introduction to Azure Data Lake Storage. Learn key features. Review supported Blob storage features, Azure service integrations, and platforms. -# Introduction to Azure Data Lake Storage Gen2 +# Introduction to Azure Data Lake Storage -Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on [Azure Blob Storage](storage-blobs-introduction.md). +Azure Data Lake Storage is a set of capabilities dedicated to big data analytics, built on [Azure Blob Storage](storage-blobs-introduction.md). -Data Lake Storage Gen2 converges the capabilities of [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) with Azure Blob Storage. For example, Data Lake Storage Gen2 provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you also get low-cost, tiered storage, with high availability/disaster recovery capabilities. +Data Lake Storage converges the capabilities of [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) with Azure Blob Storage. For example, Data Lake Storage provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you also get low-cost, tiered storage, with high availability/disaster recovery capabilities. -Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput, Data Lake Storage Gen2 allows you to easily manage massive amounts of data. +Data Lake Storage makes Azure Storage the foundation for building enterprise data lakes on Azure. Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput, Data Lake Storage allows you to easily manage massive amounts of data. ## What is a Data Lake? A _data lake_ is a single, centralized repository where you can store all your d _Azure Data Lake Storage_ is a cloud-based, enterprise data lake solution. It's engineered to store massive amounts of data in any format, and to facilitate big data analytical workloads. You use it to capture data of any type and ingestion speed in a single location for easy access and analysis using various frameworks. -## Data Lake Storage Gen2 +## Data Lake Storage -_Azure Data Lake Storage Gen2_ refers to the current implementation of Azure's Data Lake Storage solution. The previous implementation, _Azure Data Lake Storage Gen1_ will be retired on February 29, 2024. +_Azure Data Lake Storage_ refers to the current implementation of Azure's Data Lake Storage solution. The previous implementation, _Azure Data Lake Storage Gen1_ will be retired on February 29, 2024. -Unlike Data Lake Storage Gen1, Data Lake Storage Gen2 isn't a dedicated service or account type. Instead, it's implemented as a set of capabilities that you use with the Blob Storage service of your Azure Storage account. You can unlock these capabilities by enabling the hierarchical namespace setting. 
+Unlike Data Lake Storage Gen1, Data Lake Storage isn't a dedicated service or account type. Instead, it's implemented as a set of capabilities that you use with the Blob Storage service of your Azure Storage account. You can unlock these capabilities by enabling the hierarchical namespace setting. -Data Lake Storage Gen2 includes the following capabilities. +Data Lake Storage includes the following capabilities. ✓ Hadoop-compatible access Data Lake Storage Gen2 includes the following capabilities. #### Hadoop-compatible access -Azure Data Lake Storage Gen2 is primarily designed to work with Hadoop and all frameworks that use the Apache [Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) as their data access layer. Hadoop distributions include the [Azure Blob File System (ABFS)](data-lake-storage-abfs-driver.md) driver, which enables many applications and frameworks to access Azure Blob Storage data directly. The ABFS driver is [optimized specifically](data-lake-storage-abfs-driver.md) for big data analytics. The corresponding REST APIs are surfaced through the endpoint `dfs.core.windows.net`. +Azure Data Lake Storage is primarily designed to work with Hadoop and all frameworks that use the Apache [Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) as their data access layer. Hadoop distributions include the [Azure Blob File System (ABFS)](data-lake-storage-abfs-driver.md) driver, which enables many applications and frameworks to access Azure Blob Storage data directly. The ABFS driver is [optimized specifically](data-lake-storage-abfs-driver.md) for big data analytics. The corresponding REST APIs are surfaced through the endpoint `dfs.core.windows.net`. -Data analysis frameworks that use HDFS as their data access layer can directly access Azure Data Lake Storage Gen2 data through ABFS. The Apache Spark analytics engine and the Presto SQL query engine are examples of such frameworks. +Data analysis frameworks that use HDFS as their data access layer can directly access Azure Data Lake Storage data through ABFS. The Apache Spark analytics engine and the Presto SQL query engine are examples of such frameworks. -For more information about supported services and platforms, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) and [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md). +For more information about supported services and platforms, see [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md) and [Open source platforms that support Azure Data Lake Storage](data-lake-storage-supported-open-source-platforms.md). #### Hierarchical directory structure -The [hierarchical namespace](data-lake-storage-namespace.md) is a key feature that enables Azure Data Lake Storage Gen2 to provide high-performance data access at object storage scale and price. You can use this feature to organize all the objects and files within your storage account into a hierarchy of directories and nested subdirectories. In other words, your Azure Data Lake Storage Gen2 data is organized in much the same way that files are organized on your computer. 
+The [hierarchical namespace](data-lake-storage-namespace.md) is a key feature that enables Azure Data Lake Storage to provide high-performance data access at object storage scale and price. You can use this feature to organize all the objects and files within your storage account into a hierarchy of directories and nested subdirectories. In other words, your Azure Data Lake Storage data is organized in much the same way that files are organized on your computer. Operations such as renaming or deleting a directory become single atomic metadata operations on the directory. There's no need to enumerate and process all objects that share the name prefix of the directory. #### Optimized cost and performance -Azure Data Lake Storage Gen2 is priced at Azure Blob Storage levels. It builds on Azure Blob Storage capabilities such as automated lifecycle policy management and object level tiering to manage big data storage costs. +Azure Data Lake Storage is priced at Azure Blob Storage levels. It builds on Azure Blob Storage capabilities such as automated lifecycle policy management and object level tiering to manage big data storage costs. Performance is optimized because you don't need to copy or transform data as a prerequisite for analysis. The hierarchical namespace capability of Azure Data Lake Storage allows for efficient access and navigation. This architecture means that data processing requires fewer computational resources, reducing both the time and cost of accessing data. #### Finer grain security model -The Azure Data Lake Storage Gen2 access control model supports both Azure role-based access control (Azure RBAC) and Portable Operating System Interface for UNIX (POSIX) access control lists (ACLs). There are also a few extra security settings that are specific to Azure Data Lake Storage Gen2. You can set permissions either at the directory level or at the file level. All stored data is encrypted at rest by using either Microsoft-managed or customer-managed encryption keys. +The Azure Data Lake Storage access control model supports both Azure role-based access control (Azure RBAC) and Portable Operating System Interface for UNIX (POSIX) access control lists (ACLs). There are also a few extra security settings that are specific to Azure Data Lake Storage. You can set permissions either at the directory level or at the file level. All stored data is encrypted at rest by using either Microsoft-managed or customer-managed encryption keys. #### Massive scalability -Azure Data Lake Storage Gen2 offers massive storage and accepts numerous data types for analytics. It doesn't impose any limits on account sizes, file sizes, or the amount of data that can be stored in the data lake. Individual files can have sizes that range from a few kilobytes (KBs) to a few petabytes (PBs). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels. +Azure Data Lake Storage offers massive storage and accepts numerous data types for analytics. It doesn't impose any limits on account sizes, file sizes, or the amount of data that can be stored in the data lake. Individual files can have sizes that range from a few kilobytes (KBs) to a few petabytes (PBs). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels. -This design means that Azure Data Lake Storage Gen2 can easily and quickly scale up to meet the most demanding workloads. 
It can also just as easily scale back down when demand drops. +This design means that Azure Data Lake Storage can easily and quickly scale up to meet the most demanding workloads. It can also just as easily scale back down when demand drops. ## Built on Azure Blob Storage -The data that you ingest persist as blobs in the storage account. The service that manages blobs is the Azure Blob Storage service. Data Lake Storage Gen2 describes the capabilities or "enhancements" to this service that caters to the demands of big data analytic workloads. +The data that you ingest persist as blobs in the storage account. The service that manages blobs is the Azure Blob Storage service. Data Lake Storage describes the capabilities or "enhancements" to this service that caters to the demands of big data analytic workloads. Because these capabilities are built on Blob Storage, features such as diagnostic logging, access tiers, and lifecycle management policies are available to your account. Most Blob Storage features are fully supported, but some features might be supported only at the preview level and there are a handful of them that aren't yet supported. For a complete list of support statements, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md). The status of each listed feature will change over time as support continues to expand. ## Documentation and terminology -The Azure Blob Storage table of contents features two sections of content. The **Data Lake Storage Gen2** section of content provides best practices and guidance for using Data Lake Storage Gen2 capabilities. The **Blob Storage** section of content provides guidance for account features not specific to Data Lake Storage Gen2. +The Azure Blob Storage table of contents features two sections of content. The **Data Lake Storage** section of content provides best practices and guidance for using Data Lake Storage capabilities. The **Blob Storage** section of content provides guidance for account features not specific to Data Lake Storage. As you move between sections, you might notice some slight terminology differences. For example, content featured in the Blob Storage documentation, will use the term _blob_ instead of _file_. Technically, the files that you ingest to your storage account become blobs in your account. Therefore, the term is correct. However, the term _blob_ can cause confusion if you're used to the term _file_. You'll also see the term _container_ used to refer to a _file system_. Consider these terms as synonymous. ## See also -- [Introduction to Azure Data Lake Storage Gen2 (Training module)](/training/modules/introduction-to-azure-data-lake-storage/)-- [Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md)-- [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)+- [Introduction to Azure Data Lake Storage (Training module)](/training/modules/introduction-to-azure-data-lake-storage/) +- [Best practices for using Azure Data Lake Storage](data-lake-storage-best-practices.md) +- [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) - [Multi-protocol access on Azure Data Lake Storage](data-lake-storage-multi-protocol-access.md) |
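The introduction above describes a security model that combines Azure RBAC with POSIX-style ACLs. A hedged sketch of inspecting and amending an ACL, assuming the ABFS driver on your cluster surfaces the standard HDFS ACL commands against a hierarchical-namespace account (the account, container, and object ID below are hypothetical):

```bash
# Show the POSIX-style ACL on a directory.
hdfs dfs -getfacl "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/raw"

# Grant a hypothetical Microsoft Entra object ID read and execute access.
hdfs dfs -setfacl -m user:aaaabbbb-0000-1111-2222-ccccddddeeee:r-x \
  "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/raw"
```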
storage | Data Lake Storage Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md | Title: Known issues with Azure Data Lake Storage Gen2 + Title: Known issues with Azure Data Lake Storage -description: Learn about limitations and known issues of Azure Data Lake Storage Gen2. +description: Learn about limitations and known issues of Azure Data Lake Storage. -# Known issues with Azure Data Lake Storage Gen2 +# Known issues with Azure Data Lake Storage This article describes limitations and known issues for accounts that have the hierarchical namespace feature enabled. This article describes limitations and known issues for accounts that have the h ## Supported Blob storage features -An increasing number of Blob storage features now work with accounts that have a hierarchical namespace. For a complete list, see [Blob Storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md). +An increasing number of Blob storage features now work with accounts that have a hierarchical namespace. For a complete list, see [Blob Storage features available in Azure Data Lake Storage](./storage-feature-support-in-storage-accounts.md). ## Supported Azure service integrations -Azure Data Lake Storage Gen2 supports several Azure services that you can use to ingest data, perform analytics, and create visual representations. For a list of supported Azure services, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md). +Azure Data Lake Storage supports several Azure services that you can use to ingest data, perform analytics, and create visual representations. For a list of supported Azure services, see [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md). -For more information, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md). +For more information, see [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md). ## Supported open source platforms -Several open source platforms support Data Lake Storage Gen2. For a complete list, see [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md). +Several open source platforms support Data Lake Storage. For a complete list, see [Open source platforms that support Azure Data Lake Storage](data-lake-storage-supported-open-source-platforms.md). -For more information, see [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md). +For more information, see [Open source platforms that support Azure Data Lake Storage](data-lake-storage-supported-open-source-platforms.md). ## Blob storage APIs -Data Lake Storage Gen2 APIs, NFS 3.0, and Blob APIs can operate on the same data. +Data Lake Storage APIs, NFS 3.0, and Blob APIs can operate on the same data. -This section describes issues and limitations with using blob APIs, NFS 3.0, and Data Lake Storage Gen2 APIs to operate on the same data. +This section describes issues and limitations with using blob APIs, NFS 3.0, and Data Lake Storage APIs to operate on the same data. -- You can't use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. 
If you write to a file by using Data Lake Storage Gen2 APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when you're overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option. +- You can't use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file. If you write to a file by using Data Lake Storage APIs or NFS 3.0, then that file's blocks won't be visible to calls to the [Get Block List](/rest/api/storageservices/get-block-list) blob API. The only exception is when you're overwriting. You can overwrite a file/blob using either API or with NFS 3.0 by using the zero-truncate option. - Blobs that are created by using a Data Lake Storage Gen2 operation such the [Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) operation, can't be overwritten by using [PutBlock](/rest/api/storageservices/put-block) or [PutBlockList](/rest/api/storageservices/put-block-list) operations, but they can be overwritten by using a [PutBlob](/rest/api/storageservices/put-block) operation subject to the maximum permitted blob size imposed by the corresponding api-version that PutBlob uses. + Blobs that are created by using a Data Lake Storage operation such as the [Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) operation can't be overwritten by using [PutBlock](/rest/api/storageservices/put-block) or [PutBlockList](/rest/api/storageservices/put-block-list) operations, but they can be overwritten by using a [PutBlob](/rest/api/storageservices/put-block) operation subject to the maximum permitted blob size imposed by the corresponding api-version that PutBlob uses. - When you use the [List Blobs](/rest/api/storageservices/list-blobs) operation without specifying a delimiter, the results include both directories and blobs. If you choose to use a delimiter, use only a forward slash (`/`). This is the only supported delimiter. In the storage browser that appears in the Azure portal, you can't access a file ## Third party applications -Third party applications that use REST APIs to work will continue to work if you use them with Data Lake Storage Gen2. Applications that call Blob APIs will likely work. +Third-party applications that use REST APIs will continue to work if you use them with Data Lake Storage. Applications that call Blob APIs will likely work. ## Windows Azure Storage Blob (WASB) driver |
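The known-issues entry above notes that a forward slash is the only supported delimiter for List Blobs on hierarchical-namespace accounts. A sketch of how that might look from the Azure CLI, assuming `az storage blob list` and its `--delimiter` parameter, with placeholder account and container names and a signed-in identity that has data-plane access:

```bash
# Without a delimiter, directories and blobs are both returned.
az storage blob list --account-name mystorageaccount --container-name myfilesystem \
  --auth-mode login --output table

# With the only supported delimiter, results are grouped by virtual directory.
az storage blob list --account-name mystorageaccount --container-name myfilesystem \
  --delimiter "/" --auth-mode login --output table
```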
storage | Data Lake Storage Migrate On Premises HDFS Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-on-premises-HDFS-cluster.md | Title: Migrate from on-premises HDFS store to Azure Storage with Azure Data Box -description: Migrate data from an on-premises HDFS store into Azure Storage (blob storage or Data Lake Storage Gen2) by using a Data Box device. +description: Migrate data from an on-premises HDFS store into Azure Storage (blob storage or Data Lake Storage) by using a Data Box device. -You can migrate data from an on-premises HDFS store of your Hadoop cluster into Azure Storage (blob storage or Data Lake Storage Gen2) by using a Data Box device. You can choose from Data Box Disk, an 80-TB Data Box or a 770-TB Data Box Heavy. +You can migrate data from an on-premises HDFS store of your Hadoop cluster into Azure Storage (blob storage or Data Lake Storage) by using a Data Box device. You can choose from Data Box Disk, an 80-TB Data Box or a 770-TB Data Box Heavy. This article helps you complete these tasks: This article helps you complete these tasks: > - Prepare to migrate your data > - Copy your data to a Data Box Disk, Data Box or a Data Box Heavy device > - Ship the device back to Microsoft-> - Apply access permissions to files and directories (Data Lake Storage Gen2 only) +> - Apply access permissions to files and directories (Data Lake Storage only) ## Prerequisites Follow these steps to prepare and ship the Data Box device to Microsoft. 5. After Microsoft receives your device, it's connected to the data center network, and the data is uploaded to the storage account you specified when you placed the device order. Verify against the BOM files that all your data is uploaded to Azure. -## Apply access permissions to files and directories (Data Lake Storage Gen2 only) +## Apply access permissions to files and directories (Data Lake Storage only) You already have the data into your Azure Storage account. Now you apply access permissions to files and directories. > [!NOTE]-> This step is needed only if you are using Azure Data Lake Storage Gen2 as your data store. If you are using just a blob storage account without hierarchical namespace as your data store, you can skip this section. +> This step is needed only if you are using Azure Data Lake Storage as your data store. If you are using just a blob storage account without hierarchical namespace as your data store, you can skip this section. -### Create a service principal for your Azure Data Lake Storage Gen2 enabled account +### Create a service principal for your Azure Data Lake Storage enabled account To create a service principal, see [How to: Use the portal to create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md). This command generates a list of copied files with their permissions. ### Apply permissions to copied files and apply identity mappings -Run this command to apply permissions to the data that you copied into the Data Lake Storage Gen2 enabled account: +Run this command to apply permissions to the data that you copied into the Data Lake Storage enabled account: ```bash ./copy-acls.py -s ./filelist.json -i ./id_map.json -A <storage-account-name> -C <container-name> --dest-spn-id <application-id> --dest-spn-secret <client-secret> Here's an example: ## Next steps -Learn how Data Lake Storage Gen2 works with HDInsight clusters. 
For more information, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md). +Learn how Data Lake Storage works with HDInsight clusters. For more information, see [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md). |
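The permission step in the row above depends on a service principal that can read and write the hierarchical-namespace account. A hedged sketch of that setup with the Azure CLI, using placeholder names, IDs, and scope (the display name and role assignment here are illustrative, not part of the original article):

```bash
# Create a service principal; note the appId and password that the command returns.
az ad sp create-for-rbac --name "hdfs-migration-sp"

# Grant it data access on the target storage account.
az role assignment create \
  --assignee "<appId-from-previous-step>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```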
storage | Data Lake Storage Multi Protocol Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-multi-protocol-access.md | Title: Multi-protocol access on Azure Data Lake Storage -description: Use Blob APIs and applications that use Blob APIs with Azure Data Lake Storage Gen2. +description: Use Blob APIs and applications that use Blob APIs with Azure Data Lake Storage. -Until recently, you might have had to maintain separate storage solutions for object storage and analytics storage. That's because Azure Data Lake Storage Gen2 had limited ecosystem support. It also had limited access to Blob service features such as diagnostic logging. A fragmented storage solution is hard to maintain because you have to move data between accounts to accomplish various scenarios. You no longer have to do that. +Until recently, you might have had to maintain separate storage solutions for object storage and analytics storage. That's because Azure Data Lake Storage had limited ecosystem support. It also had limited access to Blob service features such as diagnostic logging. A fragmented storage solution is hard to maintain because you have to move data between accounts to accomplish various scenarios. You no longer have to do that. With multi-protocol access on Data Lake Storage, you can work with your data by using the ecosystem of tools, applications, and services. This also includes third-party tools and applications. You can point them to accounts that have a hierarchical namespace without having to modify them. These applications work *as is* even if they call Blob APIs, because Blob APIs can now operate on data in accounts that have a hierarchical namespace. Blob storage features such as [diagnostic logging](../common/storage-analytics-l > > [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) >-> [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) +> [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md) ## How multi-protocol access on data lake storage works -Blob APIs and Data Lake Storage Gen2 APIs can operate on the same data in storage accounts that have a hierarchical namespace. Data Lake Storage Gen2 routes Blob APIs through the hierarchical namespace so that you can get the benefits of first class directory operations and POSIX-compliant access control lists (ACLs). +Blob APIs and Data Lake Storage APIs can operate on the same data in storage accounts that have a hierarchical namespace. Data Lake Storage routes Blob APIs through the hierarchical namespace so that you can get the benefits of first class directory operations and POSIX-compliant access control lists (ACLs). ![Multi-protocol access on Data Lake Storage conceptual](./media/data-lake-storage-interop/interop-concept.png) -Existing tools and applications that use the Blob API gain these benefits automatically. Developers won't have to modify them. Data Lake Storage Gen2 consistently applies directory and file-level ACLs regardless of the protocol that tools and applications use to access the data. +Existing tools and applications that use the Blob API gain these benefits automatically. Developers won't have to modify them. Data Lake Storage consistently applies directory and file-level ACLs regardless of the protocol that tools and applications use to access the data. 
## See also - [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md)-- [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md)-- [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md)-- [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)+- [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md) +- [Open source platforms that support Azure Data Lake Storage](data-lake-storage-supported-open-source-platforms.md) +- [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) |
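To make the multi-protocol point concrete: the same container can be read through the Blob endpoint with Blob tooling and through the DFS endpoint with the ABFS driver. A minimal sketch with hypothetical names (the AzCopy call assumes you've already run `azcopy login` or appended a SAS token):

```bash
# Blob endpoint (Blob APIs).
azcopy list "https://mystorageaccount.blob.core.windows.net/myfilesystem"

# DFS endpoint (Data Lake Storage APIs) via the ABFS driver.
hadoop fs -ls "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/"
```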
storage | Data Lake Storage Namespace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-namespace.md | Title: Azure Data Lake Storage Gen2 hierarchical namespace + Title: Azure Data Lake Storage hierarchical namespace -description: Describes the concept of a hierarchical namespace for Azure Data Lake Storage Gen2 +description: Describes the concept of a hierarchical namespace for Azure Data Lake Storage -# Azure Data Lake Storage Gen2 hierarchical namespace +# Azure Data Lake Storage hierarchical namespace -A key mechanism that allows Azure Data Lake Storage Gen2 to provide file system performance at object storage scale and prices is the addition of a **hierarchical namespace**. This allows the collection of objects/files within an account to be organized into a hierarchy of directories and nested subdirectories in the same way that the file system on your computer is organized. With a hierarchical namespace enabled, a storage account becomes capable of providing the scalability and cost-effectiveness of object storage, with file system semantics that are familiar to analytics engines and frameworks. +A key mechanism that allows Azure Data Lake Storage to provide file system performance at object storage scale and prices is the addition of a **hierarchical namespace**. This allows the collection of objects/files within an account to be organized into a hierarchy of directories and nested subdirectories in the same way that the file system on your computer is organized. With a hierarchical namespace enabled, a storage account becomes capable of providing the scalability and cost-effectiveness of object storage, with file system semantics that are familiar to analytics engines and frameworks. ## The benefits of a hierarchical namespace The following benefits are associated with file systems that implement a hierarc This dramatic optimization is especially significant for many big data analytics frameworks. Tools like Hive, Spark, etc. often write output to temporary locations and then rename the location at the conclusion of the job. Without a hierarchical namespace, this rename can often take longer than the analytics process itself. Lower job latency equals lower total cost of ownership (TCO) for analytics workloads. -- **Familiar Interface Style:** File systems are well understood by developers and users alike. There is no need to learn a new storage paradigm when you move to the cloud as the file system interface exposed by Data Lake Storage Gen2 is the same paradigm used by computers, large and small.+- **Familiar Interface Style:** File systems are well understood by developers and users alike. There is no need to learn a new storage paradigm when you move to the cloud as the file system interface exposed by Data Lake Storage is the same paradigm used by computers, large and small. -One of the reasons that object stores haven't historically supported a hierarchical namespace is that a hierarchical namespace limits scale. However, the Data Lake Storage Gen2 hierarchical namespace scales linearly and does not degrade either the data capacity or performance. +One of the reasons that object stores haven't historically supported a hierarchical namespace is that a hierarchical namespace limits scale. However, the Data Lake Storage hierarchical namespace scales linearly and does not degrade either the data capacity or performance. 
## Deciding whether to enable a hierarchical namespace -After you've enabled a hierarchical namespace on your account, you can't revert it back to a flat namespace. Therefore, consider whether it makes sense to enable a hierarchical namespace based on the nature of your object store workloads. To evaluate the impact of enabling a hierarchical namespace on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2.md). +After you've enabled a hierarchical namespace on your account, you can't revert it back to a flat namespace. Therefore, consider whether it makes sense to enable a hierarchical namespace based on the nature of your object store workloads. To evaluate the impact of enabling a hierarchical namespace on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2.md). Some workloads might not gain any benefit by enabling a hierarchical namespace. Examples include backups, image storage, and other applications where object organization is stored separately from the objects themselves (for example: in a separate database). In general, we recommend that you turn on a hierarchical namespace for storage w The reasons for enabling a hierarchical namespace are determined by a TCO analysis. Generally speaking, improvements in workload latency due to storage acceleration will require compute resources for less time. Latency for many workloads may be improved due to atomic directory manipulation that is enabled by a hierarchical namespace. In many workloads, the compute resource represents > 85% of the total cost and so even a modest reduction in workload latency equates to a significant amount of TCO savings. Even in cases where enabling a hierarchical namespace increases storage costs, the TCO is still lowered due to reduced compute costs. -To analyze differences in data storage prices, transaction prices, and storage capacity reservation pricing between accounts that have a flat hierarchical namespace versus a hierarchical namespace, see [Azure Data Lake Storage Gen2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). +To analyze differences in data storage prices, transaction prices, and storage capacity reservation pricing between accounts that have a flat namespace versus a hierarchical namespace, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). ## Next steps -- Enable a hierarchical namespace when you create a new storage account. See [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md).-- Enable a hierarchical namespace on an existing storage account. See [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md).+- Enable a hierarchical namespace when you create a new storage account. See [Create a storage account to use with Azure Data Lake Storage](create-data-lake-storage-account.md). +- Enable a hierarchical namespace on an existing storage account. See [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). |
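The rename pattern called out above (Hive and Spark jobs writing to a temporary location and renaming it at the end of the job) is where the hierarchical namespace pays off: the rename is a single metadata operation. A minimal sketch with placeholder names:

```bash
# Promote a job's temporary output with one atomic directory rename;
# no per-object copy or delete is needed on a hierarchical-namespace account.
hadoop fs -mv \
  "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/jobs/_tmp_output" \
  "abfss://myfilesystem@mystorageaccount.dfs.core.windows.net/jobs/output"
```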
storage | Data Lake Storage Supported Azure Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-azure-services.md | Title: Azure services that support Azure Data Lake Storage Gen2 + Title: Azure services that support Azure Data Lake Storage -description: Learn about which Azure services integrate with Azure Data Lake Storage Gen2 +description: Learn about which Azure services integrate with Azure Data Lake Storage Last updated 03/09/2023 -# Azure services that support Azure Data Lake Storage Gen2 +# Azure services that support Azure Data Lake Storage -You can use Azure services to ingest data, perform analytics, and create visual representations. This article provides a list of supported Azure services, discloses their level of support, and provides you with links to articles that help you to use these services with Azure Data Lake Storage Gen2. +You can use Azure services to ingest data, perform analytics, and create visual representations. This article provides a list of supported Azure services, discloses their level of support, and provides you with links to articles that help you to use these services with Azure Data Lake Storage. ## Supported Azure services -This table lists the Azure services that you can use with Azure Data Lake Storage Gen2. The items that appear in these tables will change over time as support continues to expand. +This table lists the Azure services that you can use with Azure Data Lake Storage. The items that appear in these tables will change over time as support continues to expand. > [!NOTE] > Support level refers only to how the service is supported with Data Lake Storage Gen 2. |Azure service |Support level |Microsoft Entra ID |Shared Key| Related articles | ||-||||-|Azure Data Factory|Generally available|Yes|Yes|<ul><li>[Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)</li></ul>| -|Azure Databricks|Generally available|Yes|Yes|<ul><li>[Use with Azure Databricks](/azure/databricks/dat)</li></ul>| +|Azure Data Factory|Generally available|Yes|Yes|<ul><li>[Load data into Azure Data Lake Storage with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json)</li></ul>| +|Azure Databricks|Generally available|Yes|Yes|<ul><li>[Use with Azure Databricks](/azure/databricks/dat)</li></ul>| |Azure Event Hubs|Generally available|No|Yes|<ul><li>[Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../../event-hubs/event-hubs-capture-overview.md)</li></ul>| |Azure Event Grid|Generally available|Yes|Yes|<ul><li>[Tutorial: Implement the data lake capture pattern to update a Databricks Delta table](data-lake-storage-events.md)</li></ul>| |Azure Logic Apps|Generally available|No|Yes|<ul><li>[Overview - What is Azure Logic Apps?](../../logic-apps/logic-apps-overview.md)</li></ul>| |Azure Machine Learning|Generally available|Yes|Yes|<ul><li>[Access data in Azure storage services](/azure/machine-learning/how-to-access-data)</li></ul>|-|Azure Stream Analytics|Generally available|Yes|Yes|<ul><li>[Quickstart: Create a Stream Analytics job by using the Azure portal](../../stream-analytics/stream-analytics-quick-create-portal.md)</li><br><li>[Egress to Azure Data Lake Gen2](../../stream-analytics/stream-analytics-define-outputs.md)</li></ul>| +|Azure Stream Analytics|Generally available|Yes|Yes|<ul><li>[Quickstart: 
Create a Stream Analytics job by using the Azure portal](../../stream-analytics/stream-analytics-quick-create-portal.md)</li><br><li>[Egress to Azure Data Lake](../../stream-analytics/stream-analytics-define-outputs.md)</li></ul>| |Data Box|Generally available|No|Yes|<ul><li>[Use Azure Data Box to migrate data from an on-premises HDFS store to Azure Storage](data-lake-storage-migrate-on-premises-hdfs-cluster.md)</li></ul>|-|HDInsight |Generally available|Yes|Yes|<ul><li>[Azure Storage overview in HDInsight](../../hdinsight/overview-azure-storage.md)</li><br><li>[Use Azure storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-blob-storage.md)</li><br><li>[Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md)</li><br><li>[Using the HDFS CLI with Data Lake Storage Gen2](data-lake-storage-use-hdfs-data-lake-storage.md)</li><br><li>[Tutorial: Extract, transform, and load data by using Apache Hive on Azure HDInsight](data-lake-storage-tutorial-extract-transform-load-hive.md)</li></ul>| +|HDInsight |Generally available|Yes|Yes|<ul><li>[Azure Storage overview in HDInsight](../../hdinsight/overview-azure-storage.md)</li><br><li>[Use Azure storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-blob-storage.md)</li><br><li>[Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md)</li><br><li>[Using the HDFS CLI with Data Lake Storage](data-lake-storage-use-hdfs-data-lake-storage.md)</li><br><li>[Tutorial: Extract, transform, and load data by using Apache Hive on Azure HDInsight](data-lake-storage-tutorial-extract-transform-load-hive.md)</li></ul>| |IoT Hub |Generally available|Yes|Yes|<ul><li>[Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../../iot-hub/iot-hub-devguide-messages-d2c.md)</li></ul>|-|Power BI|Generally available|Yes|Yes|<ul><li>[Analyze data in Data Lake Storage Gen2 using Power BI](/power-query/connectors/datalakestorage)</li></ul>| +|Power BI|Generally available|Yes|Yes|<ul><li>[Analyze data in Data Lake Storage using Power BI](/power-query/connectors/datalakestorage)</li></ul>| |Azure Synapse Analytics (formerly SQL Data Warehouse)|Generally available|Yes|Yes|<ul><li>[Analyze data in a storage account](../../synapse-analytics/get-started-analyze-storage.md)</li></ul>| |SQL Server Integration Services (SSIS)|Generally available|Yes|Yes|<ul><li>[Azure Storage connection manager](/sql/integration-services/connection-manager/azure-storage-connection-manager)</li></ul>| |Azure Data Explorer|Generally available|Yes|Yes|<ul><li>[Query data in Azure Data Lake using Azure Data Explorer](/azure/data-explorer/data-lake-query-data)</li></ul>|-|Azure AI Search|Generally available|Yes|Yes|<ul><li>[Index and search Azure Data Lake Storage Gen2 documents](/azure/search/search-howto-index-azure-data-lake-storage)</li></ul>| +|Azure AI Search|Generally available|Yes|Yes|<ul><li>[Index and search Azure Data Lake Storage documents](/azure/search/search-howto-index-azure-data-lake-storage)</li></ul>| |Azure SQL Managed Instance|Preview|No|Yes|<ul><li>[Data virtualization with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/data-virtualization-overview)</li></ul>| This table lists the Azure services that you can use with Azure Data Lake Storag ## See also -- [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)+- [Known issues with Azure Data 
Lake Storage](data-lake-storage-known-issues.md) - [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md)-- [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md)+- [Open source platforms that support Azure Data Lake Storage](data-lake-storage-supported-open-source-platforms.md) - [Multi-protocol access on Azure Data Lake Storage](data-lake-storage-multi-protocol-access.md)-- [Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md)+- [Best practices for using Azure Data Lake Storage](data-lake-storage-best-practices.md) |
storage | Data Lake Storage Supported Open Source Platforms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-open-source-platforms.md | Title: Open source platforms that support Azure Data Lake Storage Gen2 + Title: Open source platforms that support Azure Data Lake Storage -description: Learn about which open source platforms that support Azure Data Lake Storage Gen2 +description: Learn about the open source platforms that support Azure Data Lake Storage Last updated 03/09/2023 -# Open source platforms that support Azure Data Lake Storage Gen2 +# Open source platforms that support Azure Data Lake Storage -This article lists the open source platforms that support Data Lake Storage Gen2. +This article lists the open source platforms that support Data Lake Storage. ## Supported open source platforms -This table lists the open source platforms that support Data Lake Storage Gen2. +This table lists the open source platforms that support Data Lake Storage. > [!NOTE] > Only the versions that appear in this table are supported. This table lists the open source platforms that support Data Lake Storage Gen2. ## See also -- [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)+- [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) - [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md)-- [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md)+- [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md) - [Multi-protocol access on Azure Data Lake Storage](data-lake-storage-multi-protocol-access.md) |
storage | Data Lake Storage Tutorial Extract Transform Load Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-tutorial-extract-transform-load-hive.md | If you don't have an Azure subscription, [create a free account](https://azure.m ## Prerequisites -- A storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2) that is configured for HDInsight+- A storage account that has a hierarchical namespace (Azure Data Lake Storage) that is configured for HDInsight - See [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md). + See [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md). - A Linux-based Hadoop cluster on HDInsight If you don't have an Azure subscription, [create a free account](https://azure.m ## Download, extract and then upload the data -In this section, you download sample flight data. Then, you upload that data to your HDInsight cluster and then copy that data to your Data Lake Storage Gen2 account. +In this section, you download sample flight data. Then, you upload that data to your HDInsight cluster and then copy that data to your Data Lake Storage account. 1. Download the [On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/tutorials/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip) file. This file contains the flight data. In this section, you download sample flight data. Then, you upload that data to The command extracts a **.csv** file. -5. Use the following command to create the Data Lake Storage Gen2 container. +5. Use the following command to create the Data Lake Storage container. ```bash hadoop fs -D "fs.azure.createRemoteFileSystemDuringInitialization=true" -ls abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/ All resources used in this tutorial are preexisting. No cleanup is necessary. To learn more ways to work with data in HDInsight, see the following article: > [!div class="nextstepaction"]-> [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json) +> [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json) |
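After extracting the *.csv* file and creating the container as shown in the row above, the data still has to land in the Data Lake Storage container before Hive can query it. A hedged sketch of that upload step, reusing the article's placeholder names plus an illustrative directory and file name:

```bash
# Create a target directory in the container and upload the extracted file from local storage.
hadoop fs -mkdir -p "abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/tutorials/flightdelays/data"
hadoop fs -put "<extracted-flight-data>.csv" \
  "abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/tutorials/flightdelays/data/"
```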
storage | Data Lake Storage Use Databricks Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-databricks-spark.md | Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark' + Title: 'Tutorial: Azure Data Lake Storage, Azure Databricks & Spark' -description: This tutorial shows how to run Spark queries on an Azure Databricks cluster to access data in an Azure Data Lake Storage Gen2 storage account. +description: This tutorial shows how to run Spark queries on an Azure Databricks cluster to access data in an Azure Data Lake Storage storage account. -# Tutorial: Azure Data Lake Storage Gen2, Azure Databricks & Spark +# Tutorial: Azure Data Lake Storage, Azure Databricks & Spark -This tutorial shows you how to connect your Azure Databricks cluster to data stored in an Azure storage account that has Azure Data Lake Storage Gen2 enabled. This connection enables you to natively run queries and analytics from your cluster on your data. +This tutorial shows you how to connect your Azure Databricks cluster to data stored in an Azure storage account that has Azure Data Lake Storage enabled. This connection enables you to natively run queries and analytics from your cluster on your data. In this tutorial, you will: If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2)+- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage) - See [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md). + See [Create a storage account to use with Azure Data Lake Storage](create-data-lake-storage-account.md). - Make sure that your user account has the [Storage Blob Data Contributor role](assign-azure-role-data-access.md) assigned to it. If you don't have an Azure subscription, create a [free account](https://azure.m - Create a service principal, create a client secret, and then grant the service principal access to the storage account. - See [Tutorial: Connect to Azure Data Lake Storage Gen2](/azure/databricks/getting-started/connect-to-azure-storage) (Steps 1 through 3). After completing these steps, make sure to paste the tenant ID, app ID, and client secret values into a text file. You use them later in this tutorial. + See [Tutorial: Connect to Azure Data Lake Storage](/azure/databricks/getting-started/connect-to-azure-storage) (Steps 1 through 3). After completing these steps, make sure to paste the tenant ID, app ID, and client secret values into a text file. You use them later in this tutorial. ## Create an Azure Databricks workspace, cluster, and notebook If you want to learn about the information captured in the on-time reporting per ## Ingest data -In this section, you upload the *.csv* flight data into your Azure Data Lake Storage Gen2 account and then mount the storage account to your Databricks cluster. Finally, you use Databricks to read the *.csv* flight data and write it back to storage in Apache parquet format. +In this section, you upload the *.csv* flight data into your Azure Data Lake Storage account and then mount the storage account to your Databricks cluster. Finally, you use Databricks to read the *.csv* flight data and write it back to storage in Apache parquet format. ### Upload the flight data into your storage account -Use AzCopy to copy your *.csv* file into your Azure Data Lake Storage Gen2 account. 
You use the `azcopy make` command to create a container in your storage account. Then you use the `azcopy copy` command to copy the *csv* data you just downloaded to a directory in that container. +Use AzCopy to copy your *.csv* file into your Azure Data Lake Storage account. You use the `azcopy make` command to create a container in your storage account. Then you use the `azcopy copy` command to copy the *csv* data you just downloaded to a directory in that container. In the following steps, you need to enter names for the container you want to create, and the directory and blob that you want to upload the flight data to in the container. You can use the suggested names in each step or specify your own observing the [naming conventions for containers, directories, and blobs](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata). In the following steps, you need to enter names for the container you want to cr ### Mount your storage account to your Databricks cluster -In this section, you mount your Azure Data Lake Storage Gen2 cloud object storage to the Databricks File System (DBFS). You use the Azure AD service principle you created previously for authentication with the storage account. For more information, see [Mounting cloud object storage on Azure Databricks](/azure/databricks/dbfs/mounts). +In this section, you mount your Azure Data Lake Storage cloud object storage to the Databricks File System (DBFS). You use the Azure AD service principle you created previously for authentication with the storage account. For more information, see [Mounting cloud object storage on Azure Databricks](/azure/databricks/dbfs/mounts). 1. Attach your notebook to your cluster. In this section, you mount your Azure Data Lake Storage Gen2 cloud object storag 1. In this code block: - In `configs`, replace the `<appId>`, `<clientSecret>`, and `<tenantId>` placeholder values with the application ID, client secret, and tenant ID you copied when you created the service principal in the prerequisites. - - In the `source` URI, replace the `<storage-account-name>`, `<container-name>`, and `<directory-name>` placeholder values with the name of your Azure Data Lake Storage Gen2 storage account and the name of the container and directory you specified when you uploaded the flight data to the storage account. + - In the `source` URI, replace the `<storage-account-name>`, `<container-name>`, and `<directory-name>` placeholder values with the name of your Azure Data Lake Storage storage account and the name of the container and directory you specified when you uploaded the flight data to the storage account. > [!NOTE]- > The scheme identifier in the URI, `abfss`, tells Databricks to use the Azure Blob File System driver with Transport Layer Security (TLS). To learn more about the URI, see [Use the Azure Data Lake Storage Gen2 URI](/azure/storage/blobs/data-lake-storage-introduction-abfs-uri#uri-syntax). + > The scheme identifier in the URI, `abfss`, tells Databricks to use the Azure Blob File System driver with Transport Layer Security (TLS). To learn more about the URI, see [Use the Azure Data Lake Storage URI](/azure/storage/blobs/data-lake-storage-introduction-abfs-uri#uri-syntax). 1. Make sure your cluster has finished starting up before proceeding. 
The container and directory where you uploaded the flight data in your storage a ### Use Databricks Notebook to convert CSV to Parquet -Now that the *csv* flight data is accessible through a DBFS mount point, you can use an Apache Spark DataFrame to load it into your workspace and write it back in Apache parquet format to your Azure Data Lake Storage Gen2 object storage. +Now that the *csv* flight data is accessible through a DBFS mount point, you can use an Apache Spark DataFrame to load it into your workspace and write it back in Apache parquet format to your Azure Data Lake Storage object storage. - A Spark DataFrame is a two-dimensional labeled data structure with columns of potentially different types. You can use a DataFrame to easily read and write data in various supported formats. With a DataFrame, you can load data from cloud object storage and perform analysis and transformations on it inside your compute cluster without affecting the underlying data in cloud object storage. To learn more, see [Work with PySpark DataFrames on Azure Databricks](/azure/databricks/getting-started/dataframes-python). Before proceeding to the next section, make sure that all of the parquet data ha ## Explore data -In this section, you use the [Databricks file system utility](/azure/databricks/dev-tools/databricks-utils#--file-system-utility-dbutilsfs) to explore your Azure Data Lake Storage Gen2 object storage using the DBFS mount point you created in the previous section. +In this section, you use the [Databricks file system utility](/azure/databricks/dev-tools/databricks-utils#--file-system-utility-dbutilsfs) to explore your Azure Data Lake Storage object storage using the DBFS mount point you created in the previous section. In a new cell, paste the following code to get a list of the files at the mount point. The first command outputs a list of files and directories. The second command displays the output in tabular format for easier reading. As a convenience, you can use the help command to learn detail about other comma dbutils.fs.help("rm") ``` -With these code samples, you've explored the hierarchical nature of HDFS using data stored in a storage account with Azure Data Lake Storage Gen2 enabled. +With these code samples, you've explored the hierarchical nature of HDFS using data stored in a storage account with Azure Data Lake Storage enabled. ## Query the data percent_delayed_flights.show() In this tutorial, you: -- Created Azure resources, including an Azure Data Lake Storage Gen2 storage account and Azure AD service principal, and assigned permissions to access the storage account.+- Created Azure resources, including an Azure Data Lake Storage storage account and Azure AD service principal, and assigned permissions to access the storage account. - Created an Azure Databricks workspace, notebook, and compute cluster. -- Used AzCopy to upload unstructured *.csv* flight data to the Azure Data Lake Storage Gen2 storage account.+- Used AzCopy to upload unstructured *.csv* flight data to the Azure Data Lake Storage storage account. -- Used Databricks File System utility functions to mount your Azure Data Lake Storage Gen2 storage account and explore its hierarchical file system.+- Used Databricks File System utility functions to mount your Azure Data Lake Storage storage account and explore its hierarchical file system. 
-- Used Apache Spark DataFrames to transform your *.csv* flight data to Apache parquet format and store it back to your Azure Data Lake Storage Gen2 storage account.+- Used Apache Spark DataFrames to transform your *.csv* flight data to Apache parquet format and store it back to your Azure Data Lake Storage storage account. - Used DataFrames to explore the flight data and perform a simple query. |
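The Databricks tutorial changes above describe mounting a Data Lake Storage container with a service principal and converting the *.csv* flight data to Parquet. The following PySpark sketch is illustrative only: it assumes a Databricks notebook (where `spark`, `dbutils`, and `display` are predefined), and the `/mnt/flightdata` mount point and output path are assumptions rather than values taken from the tutorial.

```python
# Illustrative sketch only. The angle-bracket placeholders match the ones named in the
# tutorial; the /mnt/flightdata mount point and the Parquet output path are assumptions.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<appId>",
    "fs.azure.account.oauth2.client.secret": "<clientSecret>",
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenantId>/oauth2/token",
}

# Mount the Data Lake Storage container over the abfss scheme (TLS).
dbutils.fs.mount(
    source="abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<directory-name>",
    mount_point="/mnt/flightdata",
    extra_configs=configs,
)

# Load the .csv flight data into a Spark DataFrame and write it back as Parquet.
flight_df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("/mnt/flightdata/*.csv")
)
flight_df.write.mode("overwrite").parquet("/mnt/flightdata/parquet/flights")

# Explore the mount point with the Databricks file system utility.
display(dbutils.fs.ls("/mnt/flightdata"))
```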
storage | Data Lake Storage Use Distcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-distcp.md | Title: Copy data into Azure Data Lake Storage Gen2 using DistCp + Title: Copy data into Azure Data Lake Storage using DistCp -description: Copy data to and from Azure Data Lake Storage Gen2 using the Apache Hadoop distributed copy tool (DistCp). +description: Copy data to and from Azure Data Lake Storage using the Apache Hadoop distributed copy tool (DistCp). -# Use DistCp to copy data between Azure Storage Blobs and Azure Data Lake Storage Gen2 +# Use DistCp to copy data between Azure Storage Blobs and Azure Data Lake Storage You can use [DistCp](https://hadoop.apache.org/docs/stable/hadoop-distcp/DistCp.html) to copy data between a general purpose V2 storage account and a general purpose V2 storage account with hierarchical namespace enabled. This article provides instructions on how to use the DistCp tool. DistCp provides a variety of command-line parameters and we strongly encourage y ## Prerequisites - An Azure subscription. For more information, see [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).-- An existing Azure Storage account without Data Lake Storage Gen2 capabilities (hierarchical namespace) enabled.-- An Azure Storage account with Data Lake Storage Gen2 capabilities (hierarchical namespace) enabled. For instructions on how to create one, see [Create an Azure Storage account](../common/storage-account-create.md)+- An existing Azure Storage account without Data Lake Storage capabilities (hierarchical namespace) enabled. +- An Azure Storage account with Data Lake Storage capabilities (hierarchical namespace) enabled. For instructions on how to create one, see [Create an Azure Storage account](../common/storage-account-create.md) - A container that has been created in the storage account with hierarchical namespace enabled.-- An Azure HDInsight cluster with access to a storage account with the hierarchical namespace feature enabled. For more information, see [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json). Make sure you enable Remote Desktop for the cluster.+- An Azure HDInsight cluster with access to a storage account with the hierarchical namespace feature enabled. For more information, see [Use Azure Data Lake Storage with Azure HDInsight clusters](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md?toc=/azure/storage/blobs/toc.json). Make sure you enable Remote Desktop for the cluster. ## Use DistCp from an HDInsight Linux cluster |
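The DistCp article above copies data between a flat-namespace account and a hierarchical-namespace account from an HDInsight cluster. As a rough sketch only, the snippet below shells out to the `hadoop distcp` command from Python; the account, container, and path names are placeholder assumptions, and it must run on a cluster node where the Hadoop CLI is on the PATH.

```python
import subprocess

# Placeholder URIs (assumptions): a flat-namespace source over wasbs and a
# hierarchical-namespace destination over abfss.
source = "wasbs://<source-container>@<source-account>.blob.core.windows.net/example/data/"
destination = "abfss://<dest-container>@<dest-account>.dfs.core.windows.net/example/data/"

# DistCp runs as a distributed MapReduce job; -m caps the number of parallel map tasks.
subprocess.run(["hadoop", "distcp", "-m", "10", source, destination], check=True)
```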
storage | Data Lake Storage Use Hdfs Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-hdfs-data-lake-storage.md | Title: Using the HDFS CLI with Azure Data Lake Storage Gen2 + Title: Using the HDFS CLI with Azure Data Lake Storage -description: Use the Hadoop Distributed File System (HDFS) CLI for Azure Data Lake Storage Gen2. Create a container, get a list of files or directories, and more. +description: Use the Hadoop Distributed File System (HDFS) CLI for Azure Data Lake Storage. Create a container, get a list of files or directories, and more. Last updated 03/09/2023 -# Using the HDFS CLI with Data Lake Storage Gen2 +# Using the HDFS CLI with Data Lake Storage You can access and manage the data in your storage account by using a command line interface just as you would with a [Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). This article provides some examples that will help you get started. hdfs dfs -mkdir /samplefolder The connection string can be found at the "SSH + Cluster login" section of the HDInsight cluster blade in Azure portal. SSH credentials were specified at the time of the cluster creation. > [!IMPORTANT]-> HDInsight cluster billing starts after a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. To learn how to delete a cluster, see our [article on the topic](../../hdinsight/hdinsight-delete-cluster.md). However, data stored in a storage account with Data Lake Storage Gen2 enabled persists even after an HDInsight cluster is deleted. +> HDInsight cluster billing starts after a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. To learn how to delete a cluster, see our [article on the topic](../../hdinsight/hdinsight-delete-cluster.md). However, data stored in a storage account with Data Lake Storage enabled persists even after an HDInsight cluster is deleted. ## Create a container You can view the complete list of commands on the [Apache Hadoop 2.4.1 File Syst ## Next steps -- [Use an Azure Data Lake Storage Gen2 capable account in Azure Databricks](./data-lake-storage-use-databricks-spark.md)+- [Use an Azure Data Lake Storage capable account in Azure Databricks](./data-lake-storage-use-databricks-spark.md) - [Learn about access control lists on files and directories](./data-lake-storage-access-control.md) |
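The HDFS CLI article above creates directories and lists files against a hierarchical-namespace account. As a hedged alternative to the CLI itself, the sketch below performs the same kind of operations with the `azure-storage-file-datalake` SDK; the account URL, container, and directory names are placeholder assumptions.

```python
# Illustrative sketch only: an SDK-based alternative to the HDFS CLI commands in the article,
# not the CLI itself. The account URL and names are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account-name>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Roughly `hdfs dfs -mkdir`: create a container (file system) and a directory inside it.
file_system = service.create_file_system("samplecontainer")
file_system.create_directory("samplefolder")

# Roughly `hdfs dfs -ls`: list the paths in the container.
for path in file_system.get_paths():
    print(path.name, "dir" if path.is_directory else "file")
```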
storage | Data Lake Storage Use Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-sql.md | Title: 'Tutorial: Azure Data Lake Storage Gen2, Azure Synapse' + Title: 'Tutorial: Azure Data Lake Storage, Azure Synapse' -description: This tutorial shows how to run SQL queries on an Azure Synapse serverless SQL endpoint to access data in an Azure Data Lake Storage Gen2 enabled storage account. +description: This tutorial shows how to run SQL queries on an Azure Synapse serverless SQL endpoint to access data in an Azure Data Lake Storage enabled storage account. -# Tutorial: Query Azure Data Lake Storage Gen2 using SQL language in Synapse Analytics +# Tutorial: Query Azure Data Lake Storage using SQL language in Synapse Analytics -This tutorial shows you how to connect your Azure Synapse serverless SQL pool to data stored in an Azure Storage account that has Azure Data Lake Storage Gen2 enabled. +This tutorial shows you how to connect your Azure Synapse serverless SQL pool to data stored in an Azure Storage account that has Azure Data Lake Storage enabled. This connection enables you to natively run SQL queries and analytics using SQL language on your data in Azure Storage. In this tutorial, you will: If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage Gen2)+- Create a storage account that has a hierarchical namespace (Azure Data Lake Storage) - See [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md). + See [Create a storage account to use with Azure Data Lake Storage](create-data-lake-storage-account.md). - Make sure that your user account has the [Storage Blob Data Contributor role](assign-azure-role-data-access.md) assigned to it. When they're no longer needed, delete the resource group and all related resourc ## Next steps > [!div class="nextstepaction"]-> [Azure Data Lake Storage Gen2, Azure Databricks & Spark](data-lake-storage-use-databricks-spark.md) +> [Azure Data Lake Storage, Azure Databricks & Spark](data-lake-storage-use-databricks-spark.md) |
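The Synapse tutorial above runs serverless SQL queries against a Data Lake Storage enabled account. As a hedged sketch, the snippet below submits an `OPENROWSET` query to a serverless SQL endpoint from Python with `pyodbc`; the endpoint name, driver version, and storage path are assumptions, not values from the tutorial.

```python
import pyodbc

# Placeholder (assumed) serverless SQL endpoint of a Synapse workspace.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<synapse-workspace-name>-ondemand.sql.azuresynapse.net;"
    "Database=master;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Query Parquet files in the storage account directly with OPENROWSET.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account-name>.dfs.core.windows.net/<container-name>/<directory-name>/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""

for row in conn.execute(query):
    print(row)
```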
storage | Data Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md | Title: Data protection overview -description: The data protection options available to you for Blob Storage and Azure Data Lake Storage Gen2 data enable you to protect your data from being deleted or overwritten. If you should need to recover data that has been deleted or overwritten, this guide can help you to choose the recovery option that's best for your scenario. +description: The data protection options available to you for Blob Storage and Azure Data Lake Storage data enable you to protect your data from being deleted or overwritten. If you should need to recover data that has been deleted or overwritten, this guide can help you to choose the recovery option that's best for your scenario. -Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage Gen2 to help you to prepare for scenarios where you need to recover data that has been deleted or overwritten. It's important to think about how to best protect your data before an incident occurs that could compromise it. This guide can help you decide in advance which data protection features your scenario requires, and how to implement them. If you should need to recover data that has been deleted or overwritten, this overview also provides guidance on how to proceed, based on your scenario. +Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage to help you to prepare for scenarios where you need to recover data that has been deleted or overwritten. It's important to think about how to best protect your data before an incident occurs that could compromise it. This guide can help you decide in advance which data protection features your scenario requires, and how to implement them. If you should need to recover data that has been deleted or overwritten, this overview also provides guidance on how to proceed, based on your scenario. In the Azure Storage documentation, *data protection* refers to strategies for protecting the storage account and data within it from being deleted or modified, or for restoring data after it has been deleted or modified. Azure Storage also offers options for *disaster recovery*, including multiple levels of redundancy to protect your data from service outages due to hardware problems or natural disasters. Customer-managed (unplanned) failover is another disaster recovery option that allows you to fail over to a secondary region if the primary region becomes unavailable. For more information about how your data is protected from service outages, see [Disaster recovery](#disaster-recovery). |
storage | Immutable Container Level Worm Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-container-level-worm-policies.md | Append blobs are composed of blocks of data and optimized for data append operat The **allowProtectedAppendWrites** property setting allows for writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; any existing blocks can't be modified or deleted. Enabling this setting doesn't affect the immutability behavior of block blobs or page blobs. -The **AllowProtectedAppendWritesAll** property setting provides the same permissions as the **allowProtectedAppendWrites** property and adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. +The **AllowProtectedAppendWritesAll** property setting provides the same permissions as the **allowProtectedAppendWrites** property and adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. Append blobs remain in the immutable state during the effective retention period. Since new data can be appended beyond the initial creation of the append blob, there's a slight difference in how the retention period is determined. The effective retention is the difference between append blob's last modification time and the user-specified retention interval. Similarly, when the retention interval is extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period. |
storage | Immutable Policy Configure Container Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-container-scope.md | To configure a time-based retention policy on a container with the Azure portal, The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation. - The **Block and append blobs** option provides you with the same permissions as the **Append blobs** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. + The **Block and append blobs** option provides you with the same permissions as the **Append blobs** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. To learn more about these options, see [Allow protected append blobs writes](immutable-container-level-worm-policies.md#allow-protected-append-blobs-writes). To allow protected append writes, set the `-AllowProtectedAppendWrite` or `-All The **AllowProtectedAppendWrite** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation. -The **AllowProtectedAppendWriteAll** option provides you with the same permissions as the **AllowProtectedAppendWrite** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. +The **AllowProtectedAppendWriteAll** option provides you with the same permissions as the **AllowProtectedAppendWrite** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. 
To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes). To allow protected append writes, set the `--allow-protected-append-writes` or The **--allow-protected-append-writes** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation. -The **--allow-protected-append-writes-all** option provides you with the same permissions as the **--allow-protected-append-writes** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. +The **--allow-protected-append-writes-all** option provides you with the same permissions as the **--allow-protected-append-writes** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob. To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes). To configure a legal hold on a container with the Azure portal, follow these ste The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation. - This setting also adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. + This setting also adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. To learn more about these options, see [Allow protected append blobs writes](immutable-legal-hold-overview.md#allow-protected-append-blobs-writes). |
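Both the PowerShell and CLI passages above refer to the append and flush methods of the Data Lake Storage API as the way applications add blocks to a blob under a protected-append-writes policy. A minimal Python sketch of that pattern follows, assuming placeholder names, the `azure-storage-file-datalake` package, and a container whose policy already allows protected append writes.

```python
# Illustrative sketch only: container and file names are placeholder assumptions, and the
# container is presumed to have an immutability policy with protected append writes enabled.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account-name>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_client = service.get_file_system_client("<container-name>").get_file_client("logs/audit.log")

existing_size = file_client.get_file_properties().size
new_data = b"appended audit record\n"

# Append the new bytes at the current end of the file, then flush to commit them.
file_client.append_data(new_data, offset=existing_size, length=len(new_data))
file_client.flush_data(existing_size + len(new_data))
```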
storage | Immutable Policy Configure Version Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md | To configure a default version-level immutability policy for a container in the The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation. - The **Block and append blobs** option extends this support by adding the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. + The **Block and append blobs** option extends this support by adding the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs. To learn more about these options, see [Allow protected append blobs writes](immutable-container-level-worm-policies.md#allow-protected-append-blobs-writes). |
storage | Immutable Storage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md | The following table shows a breakdown of the differences between container-level | Feature dependencies | No other features are a prerequisite or requirement for this feature to function. | Versioning is a prerequisite for this feature to be used. | | Enablement for existing accounts/container | This feature can be enabled at any time for existing containers. | Depending on the level of granularity, this feature might not be enabled for all existing accounts/containers. | | Account/container deletion | Once a time-based retention policy is locked on a container, containers may only be deleted if they're empty. | Once version-level WORM is enabled on an account or container level, they may only be deleted if they're empty.|-| Support for Azure Data Lake Storage Gen2 (storage accounts that have a hierarchical namespace enabled)| Container-level WORM policies are supported in accounts that have a hierarchical namespace. | Version-level WORM policies are not yet supported in accounts that have a hierarchical namespace. | +| Support for Azure Data Lake Storage (storage accounts that have a hierarchical namespace enabled)| Container-level WORM policies are supported in accounts that have a hierarchical namespace. | Version-level WORM policies are not yet supported in accounts that have a hierarchical namespace. | To learn more about container-level WORM, see [Container-Level WORM policies](immutable-container-level-worm-policies.md). To learn more about version-level WORM, please visit [version-Level WORM policies](immutable-version-level-worm-policies.md). The following table helps you decide which type of WORM policy to use. | Organization of data | You want to set policies for specific data sets, which can be categorized by container. All the data in that container needs to be kept in a WORM state for the same amount of time. | You can't group objects by retention periods. All blobs must be stored with an individual retention time based on that blob's scenarios, or user has a mixed workload so that some groups of data can be clustered into containers while other blobs can't. You might also want to set container-level policies and blob-level policies within the same account. | | Amount of data that requires an immutable policy | You don't need to set policies on more than 10,000 containers per account. | You want to set policies on all data or large amounts of data that can be delineated by account. You know that if you use container-level WORM, you'll have to exceed the 10,000-container limit. | | Interest in enabling versioning | You don't want to deal with enabling versioning either because of the cost, or because the workload would create numerous extra versions to deal with. | You either want to use versioning, or don't mind using it. You know that if they don't enable versioning, you can't keep edits or overwrites to immutable blobs as separate versions. |-| Storage location (Blob Storage vs Data Lake Storage Gen2) | Your workload is entirely focused on Azure Data Lake Storage Gen2. You have no immediate interest or plan to switch to using an account that doesn't have the hierarchical namespace feature enabled. 
| Your workload is either on Blob Storage in an account that doesn't have the hierarchical namespace feature enabled, and can use version-level WORM now, or you're willing to wait for versioning to be available for accounts that do have a hierarchical namespace enabled (Azure Data Lake Storage Gen2).| +| Storage location (Blob Storage vs Data Lake Storage) | Your workload is entirely focused on Azure Data Lake Storage. You have no immediate interest or plan to switch to using an account that doesn't have the hierarchical namespace feature enabled. | Your workload is either on Blob Storage in an account that doesn't have the hierarchical namespace feature enabled, and can use version-level WORM now, or you're willing to wait for versioning to be available for accounts that do have a hierarchical namespace enabled (Azure Data Lake Storage).| ### Access tiers |
storage | Map Rest Apis Transaction Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/map-rest-apis-transaction-categories.md | The price of each type appears in the [Azure Blob Storage pricing](https://azure <sup>2</sup> When the source object is in a different account, the source account incurs one transaction for each read request to the source object. -## Operation type of each Data Lake Storage Gen2 REST operation +## Operation type of each Data Lake Storage REST operation -The following table maps each Data Lake Storage Gen2 REST operation to an operation type. +The following table maps each Data Lake Storage REST operation to an operation type. The price of each type appears in the [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. |
storage | Migrate Gen2 Wandisco Live Data Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/migrate-gen2-wandisco-live-data-platform.md | Title: Data Lake Storage and WANdisco LiveData Platform for Azure -description: Learn how to migrate petabytes of on-premises Hadoop data to Azure Data Lake Storage Gen2 file systems without interrupting data operations or requiring downtime. +description: Learn how to migrate petabytes of on-premises Hadoop data to Azure Data Lake Storage file systems without interrupting data operations or requiring downtime. -# Migrate on-premises Hadoop data to Azure Data Lake Storage Gen2 with WANdisco LiveData Platform for Azure +# Migrate on-premises Hadoop data to Azure Data Lake Storage with WANdisco LiveData Platform for Azure -[WANdisco LiveData Platform for Azure](https://docs.wandisco.com/live-data-platform/docs/landing/) migrates petabytes of on-premises Hadoop data to Azure Data Lake Storage Gen2 file systems without interrupting data operations or requiring downtime. The platform's continuous checks prevent data from being lost while keeping it consistent at both ends of transference even while it undergoes modification. +[WANdisco LiveData Platform for Azure](https://docs.wandisco.com/live-data-platform/docs/landing/) migrates petabytes of on-premises Hadoop data to Azure Data Lake Storage file systems without interrupting data operations or requiring downtime. The platform's continuous checks prevent data from being lost while keeping it consistent at both ends of transference even while it undergoes modification. The platform consists of two services. [LiveData Migrator for Azure](https://cirata.com/products/data-integration) migrates actively used data from on-premises environments to Azure storage, and [LiveData Plane for Azure](https://cirata.com/products/data-integration) ensures that all modified or ingested data is replicated consistently. To perform a migration: 3. Configure Kerberos details, if applicable. -4. Define the target Azure Data Lake Storage Gen2-enabled storage account. +4. Define the target Azure Data Lake Storage-enabled storage account. > [!div class="mx-imgBorder"] > ![Create a LiveData Migrator target](./media/migrate-gen2-wandisco-live-data-platform/create-target.png) |
storage | Monitor Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md | For a list of available metrics for Azure Blob Storage, see [Azure Blob Storage [!INCLUDE [horz-monitor-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-resource-logs.md)] For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md#resource-logs). > [!NOTE]-> Data Lake Storage Gen2 doesn't appear as a storage type because Data Lake Storage Gen2 is a set of capabilities available to Blob storage. +> Data Lake Storage doesn't appear as a storage type because Data Lake Storage is a set of capabilities available to Blob storage. #### Destination limitations |
storage | Network File System Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md | -> Because you must enable the hierarchical namespace feature of your account to use NFS 3.0, all of the known issues that are described in the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account. +> Because you must enable the hierarchical namespace feature of your account to use NFS 3.0, all of the known issues that are described in the [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) article also apply to your account. ## NFS 3.0 support |
storage | Network File System Protocol Support How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md | Create a directory on your Linux system and then mount the container in the stor |`EINVAL ("Invalid argument"`) |This error can appear when a client attempts to:<li>Write to a blob that was created from a blob endpoint.<li>Delete a blob that has a snapshot or is in a container that has an active WORM (write once, read many) policy.| |`EROFS ("Read-only file system"`) |This error can appear when a client attempts to:<li>Write to a blob or delete a blob that has an active lease.<li>Write to a blob or delete a blob in a container that has an active WORM policy. | |`NFS3ERR_IO/EIO ("Input/output error"`) |This error can appear when a client attempts to read, write, or set attributes on blobs that are stored in the archive access tier. |-|`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage Gen2 API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. | +|`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. | |`mount: /nfsdata: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.| |`Connection Timed Out`| Make sure that the client allows outgoing communication through ports 111 and 2048. The NFS 3.0 protocol uses these ports. Make sure to mount the storage account by using the Blob service endpoint and not the Data Lake Storage endpoint. | |
storage | Network File System Protocol Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md | The NFS 3.0 protocol feature is best suited for processing high throughput, high ## NFS 3.0 and the hierarchical namespace -NFS 3.0 protocol support requires blobs to be organized into a hierarchical namespace. You can enable a hierarchical namespace when you create a storage account. The ability to use a hierarchical namespace was introduced by Azure Data Lake Storage Gen2. It organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance. Different protocols extend from the hierarchical namespace. The NFS 3.0 protocol is one of these available protocols. +NFS 3.0 protocol support requires blobs to be organized into a hierarchical namespace. You can enable a hierarchical namespace when you create a storage account. The ability to use a hierarchical namespace was introduced by Azure Data Lake Storage. It organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance. Different protocols extend from the hierarchical namespace. The NFS 3.0 protocol is one of these available protocols. > [!div class="mx-imgBorder"] > ![hierarchical namespace](./media/network-protocol-support/hierarchical-namespace-and-nfs-support.png) |
storage | Object Replication Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md | Object replication isn't supported for blobs in the source account that are encr Customer-managed failover isn't supported for either the source or the destination account in an object replication policy. -Object replication is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs. +Object replication is not supported for blobs that are uploaded by using [Data Lake Storage](/rest/api/storageservices/data-lake-storage-gen2) APIs. ## How object replication works |
storage | Point In Time Restore Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md | Point-in-time restore for block blobs has the following limitations and known is - If a blob with an active lease is included in the range to restore, and if the current version of the leased blob is different from the previous version at the timestamp provided for PITR, the restore operation fails atomically. We recommend breaking any active leases before initiating the restore operation. - Performing a customer-managed failover on a storage account resets the earliest possible restore point for the storage account. For more details, see [Point-in-time restore](../common/storage-disaster-recovery-guidance.md#point-in-time-restore-inconsistencies). - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.-- Point-in-time restore isn't supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2.+- Point-in-time restore isn't supported for hierarchical namespaces or operations via Azure Data Lake Storage. - Point-in-time restore isn't supported when the storage account's **AllowedCopyScope** property is set to restrict copy scope to the same Microsoft Entra tenant or virtual network. For more information, see [About Permitted scope for copy operations (preview)](../common/security-restrict-copy-operations.md?toc=/azure/storage/blobs/toc.json&tabs=portal#about-permitted-scope-for-copy-operations-preview). - Point-in-time restore isn't supported when version-level immutability is enabled on a storage account or a container in an account. For more information on version-level immutability, see [Configure immutability policies for blob versions](immutable-version-level-worm-policies.md). |
storage | Premium Tier For Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/premium-tier-for-data-lake-storage.md | Title: Premium tier for Azure Data Lake Storage -description: Use the premium performance tier with Azure Data Lake Storage Gen2 +description: Use the premium performance tier with Azure Data Lake Storage -Azure Data Lake Storage Gen2 now supports [premium block blob storage accounts](storage-blob-block-blob-premium.md). Premium block blob storage accounts are ideal for big data analytics applications and workloads that require low consistent latency and have a high number of transactions. Example workloads include interactive workloads, IoT, streaming analytics, artificial intelligence, and machine learning. +Azure Data Lake Storage now supports [premium block blob storage accounts](storage-blob-block-blob-premium.md). Premium block blob storage accounts are ideal for big data analytics applications and workloads that require low consistent latency and have a high number of transactions. Example workloads include interactive workloads, IoT, streaming analytics, artificial intelligence, and machine learning. >[!TIP]-> To learn more about the performance and cost advantages of using a premium block blob storage account, and to see how other Data Lake Storage Gen2 customers have used this type of account, see [Premium block blob storage accounts](storage-blob-block-blob-premium.md). +> To learn more about the performance and cost advantages of using a premium block blob storage account, and to see how other Data Lake Storage customers have used this type of account, see [Premium block blob storage accounts](storage-blob-block-blob-premium.md). ## Getting started with premium As you create the account, choose the **Premium** performance option and the **B > [!div class="mx-imgBorder"] > ![Create block blob storage account](./media/storage-blob-block-blob-premium/create-block-blob-storage-account.png) -To unlock Azure Data Lake Storage Gen2 capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page. +To unlock Azure Data Lake Storage capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page. The following image shows this setting in the **Create storage account** page. The following image shows this setting in the **Create storage account** page. ## Next steps -Use the premium tier for Azure Data Lake Storage with your favorite analytics service such as Azure Databricks, Azure HDInsight and Azure Synapse Analytics. See [Tutorials that use Azure services with Azure Data Lake Storage Gen2](data-lake-storage-integrate-with-services-tutorials.md). +Use the premium tier for Azure Data Lake Storage with your favorite analytics service such as Azure Databricks, Azure HDInsight and Azure Synapse Analytics. See [Tutorials that use Azure services with Azure Data Lake Storage](data-lake-storage-integrate-with-services-tutorials.md). |
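The premium-tier article above creates a premium block blob account with the hierarchical namespace enabled through the portal. A rough equivalent with the Azure SDK for Python is sketched below; the subscription, resource group, account name, and region are placeholder assumptions, and it requires the `azure-mgmt-storage` and `azure-identity` packages.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A premium block blob account with the hierarchical namespace (Data Lake Storage) enabled.
params = StorageAccountCreateParameters(
    location="<region>",
    kind="BlockBlobStorage",
    sku=Sku(name="Premium_LRS"),
    is_hns_enabled=True,
)

poller = client.storage_accounts.begin_create(
    "<resource-group-name>", "<storage-account-name>", params
)
account = poller.result()
print(account.name, account.kind, account.is_hns_enabled)
```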
storage | Quickstart Storage Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-storage-explorer.md | On first launch, the **Microsoft Azure Storage Explorer - Connect to Azure Stora - Subscription - Storage account - Blob container-- ADLS Gen2 container or directory+- Azure Data Lake Storage container or directory - File share - Queue - Table |
storage | Secure File Transfer Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md | -> Because you must enable hierarchical namespace for your account to use SFTP, all of the known issues that are described in the Known issues with [Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account. +> Because you must enable hierarchical namespace for your account to use SFTP, all of the known issues that are described in the Known issues with [Azure Data Lake Storage](data-lake-storage-known-issues.md) article also apply to your account. ## Known unsupported clients To transfer files to or from Azure Blob Storage via SFTP clients, see the follow | Capacity Information | `df` - usage info for filesystem | | Extensions | Unsupported extensions include but aren't limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com | | SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of key exchange will fail. |-| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols (NFS, Blob REST, Data Lake Storage Gen2 REST) on blobs that are created by using SFTP. Full overwrites are allowed.| +| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols (NFS, Blob REST, Data Lake Storage REST) on blobs that are created by using SFTP. Full overwrites are allowed.| | Rename Operations | Rename operations where the target file name already exists is a protocol violation. Attempting such an operation returns an error. See [Removing and Renaming Files](https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filexfer-02#section-6.5) for more information.| | Cross Container Operations | Traversing between containers or performing operations on multiple containers from the same connection are unsupported. | Undelete | There is no way to restore a soft-deleted blob with SFTP. The `Undelete` REST API must be used.| To transfer files to or from Azure Blob Storage via SFTP clients, see the follow - Microsoft Entra ID isn't supported for the SFTP endpoint. -To learn more, see [SFTP permission model](secure-file-transfer-protocol-support.md#sftp-permission-model) and see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md). +To learn more, see [SFTP permission model](secure-file-transfer-protocol-support.md#sftp-permission-model) and see [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md). ## Networking |
storage | Secure File Transfer Protocol Support Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-connect.md | After the transfer is complete, you can view and manage the file in the Azure po > ![Screenshot of the uploaded file appearing in storage account.](./media/secure-file-transfer-protocol-support-connect/uploaded-file-in-storage-account.png) > [!NOTE]-> The Azure portal uses the Blob REST API and Data Lake Storage Gen2 REST API. Being able to interact with an uploaded file in the Azure portal demonstrates the interoperability between SFTP and REST. +> The Azure portal uses the Blob REST API and Data Lake Storage REST API. Being able to interact with an uploaded file in the Azure portal demonstrates the interoperability between SFTP and REST. See the documentation of your SFTP client for guidance about how to connect and transfer files. |
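The connect article above demonstrates interoperability between SFTP uploads and the Blob and Data Lake Storage REST APIs. A minimal Python sketch with `paramiko` follows; the host, the account-qualified username format, and the file and container names are assumptions based on the SFTP local-user model, not values from the article.

```python
import paramiko

host = "<storage-account-name>.blob.core.windows.net"
username = "<storage-account-name>.<local-user-name>"  # assumed account.localuser format

# Connect to the storage account's SFTP endpoint on port 22 with a local-user password.
transport = paramiko.Transport((host, 22))
transport.connect(username=username, password="<local-user-password>")
sftp = paramiko.SFTPClient.from_transport(transport)

# Upload a file into a container, then list the container's contents.
sftp.put("flightdata.csv", "<container-name>/flightdata.csv")
print(sftp.listdir("<container-name>"))

sftp.close()
transport.close()
```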
storage | Secure File Transfer Protocol Support How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md | To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer - A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as you create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md). -- The hierarchical namespace feature of the account must be enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md).+- The hierarchical namespace feature of the account must be enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). ## Enable SFTP support This section shows you how to enable SFTP support for an existing storage accoun 2. Under **Settings**, select **SFTP**. > [!NOTE]- > This option appears only if the hierarchical namespace feature of the account has been enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). + > This option appears only if the hierarchical namespace feature of the account has been enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). 3. Select **Enable SFTP**. |
storage | Secure File Transfer Protocol Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md | This article describes SFTP support for Azure Blob Storage. To learn how to enab SFTP support requires hierarchical namespace to be enabled. Hierarchical namespace organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance. -Different protocols are supported by the hierarchical namespace. SFTP is one of these available protocols. The following image shows storage access via multiple protocols and REST APIs. For easier reading, this image uses the term Gen2 REST to refer to the Azure Data Lake Storage Gen2 REST API. +Different protocols are supported by the hierarchical namespace. SFTP is one of these available protocols. The following image shows storage access via multiple protocols and REST APIs. For easier reading, this image uses the term REST to refer to the Azure Data Lake Storage REST API. > [!div class="mx-imgBorder"] > ![hierarchical namespace](./media/secure-file-transfer-protocol-support/hierarchical-namespace-and-sftp-support.png) To set up access permissions, you create a local user, and choose authentication > [!CAUTION] > Local users do not interoperate with other Azure Storage permission models such as RBAC (role based access control) and ABAC (attribute based access control). Access control lists (ACLs) are supported for local users at the preview level. >-> For example, Jeff has read only permission (can be controlled via RBAC or ABAC) via their Microsoft Entra identity for file _foo.txt_ stored in container _con1_. If Jeff is accessing the storage account via NFS (when not mounted as root/superuser), Blob REST, or Data Lake Storage Gen2 REST, these permissions will be enforced. However, if Jeff also has a local user identity with delete permission for data in container _con1_, they can delete _foo.txt_ via SFTP using the local user identity. +> For example, Jeff has read only permission (can be controlled via RBAC or ABAC) via their Microsoft Entra identity for file _foo.txt_ stored in container _con1_. If Jeff is accessing the storage account via NFS (when not mounted as root/superuser), Blob REST, or Data Lake Storage REST, these permissions will be enforced. However, if Jeff also has a local user identity with delete permission for data in container _con1_, they can delete _foo.txt_ via SFTP using the local user identity. -Enabling SFTP support doesn't prevent other types of clients from using Microsoft Entra ID. For users that access Blob Storage by using the Azure portal, Azure CLI, Azure PowerShell commands, AzCopy, as well as Azure SDKs, and Azure REST APIs, you can continue to use the full breadth of Azure Blob Storage security setting to authorize access. To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md). +Enabling SFTP support doesn't prevent other types of clients from using Microsoft Entra ID. For users that access Blob Storage by using the Azure portal, Azure CLI, Azure PowerShell commands, AzCopy, as well as Azure SDKs, and Azure REST APIs, you can continue to use the full breadth of Azure Blob Storage security setting to authorize access. 
To learn more, see [Access control model in Azure Data Lake Storage](data-lake-storage-access-control-model.md). ## Authentication methods When performing write operations on blobs in sub directories, Read permission is > This capability is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -ACLs let you grant "fine-grained" access, such as write access to a specific directory or file. To learn more about ACLs, see [Access control lists (ACLs) in Azure Data Lake Storage Gen2](data-lake-storage-access-control.md). +ACLs let you grant "fine-grained" access, such as write access to a specific directory or file. To learn more about ACLs, see [Access control lists (ACLs) in Azure Data Lake Storage](data-lake-storage-access-control.md). To authorize a local user by using ACLs, you must first enable ACL authorization for that local user. See [Give permission to containers](secure-file-transfer-protocol-support-authorize-access.md#give-permission-to-containers). |
storage | Soft Delete Blob Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md | The following table describes the expected behavior for delete and write operati [!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)] -Soft delete isn't supported for blobs that are uploaded by using Data Lake Storage Gen2 APIs on Storage accounts with no hierarchical namespace. +Soft delete isn't supported for blobs that are uploaded by using Data Lake Storage APIs on Storage accounts with no hierarchical namespace. ## Pricing and billing |
storage | Soft Delete Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-overview.md | Container soft delete is available for the following types of storage accounts: - Block blob storage accounts - Blob storage accounts -Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 are also supported. +Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage are also supported. Version 2019-12-12 or higher of the Azure Storage REST API supports container soft delete. |
storage | Storage Auth Abac Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md | This section lists the supported Azure Blob Storage actions and suboperations yo > | **Principal attributes support** | [True](../../role-based-access-control/conditions-format.md#principal-attributes) | > | **Environment attributes** | [Is private link](#is-private-link)</br>[Private endpoint](#private-endpoint)</br>[Subnet](#subnet)</br>[UTC now](#utc-now)</br> | > | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ## Azure Blob Storage attributes The following table summarizes the available attributes by source: > | **Is key case sensitive** | True | > | **Hierarchical namespace support** | False | > | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ### Blob index tags [Values in key] The following table summarizes the available attributes by source: > | **Is key case sensitive** | True | > | **Hierarchical namespace support** | False | > | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:`*keyname*`<$key_case_sensitive$>`<br/>`@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Manage and find Azure Blob data with blob index 
tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ### Blob path The following table summarizes the available attributes by source: > | **Attribute source** | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) | > | **Attribute type** | [Boolean](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) | > | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true`<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ### Is private link The following table summarizes the available attributes by source: > | **Exists support** | [True](../../role-based-access-control/conditions-format.md#exists) | > | **Hierarchical namespace support** | False | > | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]`<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |-> | **Learn more** | [Blob snapshots](snapshots-overview.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Blob snapshots](snapshots-overview.md)<br/>[Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ### Subnet The following table summarizes the available attributes by source: > | **Exists support** | [True](../../role-based-access-control/conditions-format.md#exists) | > | **Hierarchical namespace support** | False | > | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) | +> | **Learn more** | [Azure Data Lake Storage hierarchical namespace](data-lake-storage-namespace.md) | ## See also |
storage | Storage Auth Abac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md | Title: Authorize access to Azure Blob Storage using Azure role assignment conditions -description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes. +description: Authorize access to Azure Blob Storage and Azure Data Lake Storage using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes. The benefits of using role assignment conditions are: The trade-off of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect. -Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](data-lake-storage-namespace.md) (HNS) feature enabled on them (Data Lake Storage Gen2). +Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](data-lake-storage-namespace.md) (HNS) feature enabled on them (Data Lake Storage). ## Supported attributes and operations You can use conditions with custom roles as long as the role includes [actions t If you're working with conditions based on [blob index tags](storage-manage-find-blobs.md), you should use the *Storage Blob Data Owner* since permissions for tag operations are included in this role. > [!NOTE]-> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace. You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled. +> Blob index tags are not supported for Data Lake Storage storage accounts, which use a hierarchical namespace. You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled. The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows the use of `@Principal`, `@Resource`, `@Request` or `@Environment` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute or parameter included in a storage operation request. An `@Environment` attribute refers to the network environment or the date and time of a request. The [Azure role assignment condition format](../../role-based-access-control/con ## Status of condition features in Azure Storage -Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request`, `resource`, `environment`, and `principal` attributes in both the standard and premium storage account performance tiers. 
Currently, the container metadata resource attribute and the list blob include request attribute are in PREVIEW. +Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access to Azure Blob Storage, Azure Data Lake Storage, and Azure Queues using `request`, `resource`, `environment`, and `principal` attributes in both the standard and premium storage account performance tiers. Currently, the container metadata resource attribute and the list blob include request attribute are in PREVIEW. The following table shows the current status of ABAC by storage resource type and attribute type. Exceptions for specific attributes are also shown. | Resource types | Attribute types | Attributes | Availability | |||||-| Blobs<br/>Data Lake Storage Gen2<br/>Queues | Request<br/>Resource<br/>Environment<br/>Principal | All attributes except those noted in this table | GA | -| Data Lake Storage Gen2 | Resource | [Snapshot](storage-auth-abac-attributes.md#snapshot) | Preview | -| Blobs<br/>Data Lake Storage Gen2 | Resource | [Container metadata](storage-auth-abac-attributes.md#container-metadata) | Preview | +| Blobs<br/>Data Lake Storage<br/>Queues | Request<br/>Resource<br/>Environment<br/>Principal | All attributes except those noted in this table | GA | +| Data Lake Storage | Resource | [Snapshot](storage-auth-abac-attributes.md#snapshot) | Preview | +| Blobs<br/>Data Lake Storage | Resource | [Container metadata](storage-auth-abac-attributes.md#container-metadata) | Preview | | Blobs | Request | [List blob include](storage-auth-abac-attributes.md#list-blob-include) | Preview | See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!NOTE] -> Some storage features aren't supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace (HNS). To learn more, see [Blob storage feature support](storage-feature-support-in-storage-accounts.md). +> Some storage features aren't supported for Data Lake Storage storage accounts, which use a hierarchical namespace (HNS). To learn more, see [Blob storage feature support](storage-feature-support-in-storage-accounts.md). > >The following ABAC attributes aren't supported when hierarchical namespace is enabled for a storage account: > |
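As a rough, hedged illustration of the condition format described above (not taken from the article), the following Python sketch creates a role assignment that carries an ABAC condition, assuming the `azure-mgmt-authorization` package. The subscription, resource group, storage account, container name, and principal object ID are placeholders; the role GUID shown is the built-in Storage Blob Data Reader role.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

# Built-in Storage Blob Data Reader role.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

# @Resource condition: only allow blob reads inside the 'contoso-container' container.
condition = (
    "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) "
    "OR "
    "(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] "
    "StringEquals 'contoso-container'))"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<principal-object-id>",
        condition=condition,
        condition_version="2.0",
    ),
)
```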
storage | Storage Blob Block Blob Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md | In most cases, workloads executing more than 35 to 40 transactions per second pe > [!NOTE] > Prices differ per operation and per region. Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to compare pricing between standard and premium performance tiers. -The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers in this table are based on an Azure Data Lake Storage Gen2 enabled premium block blob storage account (also referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md)). Each column represents the number of transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell in the table shows the percentage of cost reduction associated with a read transaction percentage and the number of transactions executed. +The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers in this table are based on an Azure Data Lake Storage enabled premium block blob storage account (also referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md)). Each column represents the number of transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell in the table shows the percentage of cost reduction associated with a read transaction percentage and the number of transactions executed. For example, assuming that your account is in the East US 2 region, the number of transactions with your account exceeds 90M, and 70% of those transactions are read transactions, premium block blob storage accounts are more cost-effective. For example, assuming that your account is in the East US 2 region, the number o ## Premium scenarios -This section contains real-world examples of how some of our Azure Storage partners use premium block blob storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure that can further enhance transaction performance in certain scenarios. +This section contains real-world examples of how some of our Azure Storage partners use premium block blob storage. Some of them also enable Azure Data Lake Storage which introduces a hierarchical file structure that can further enhance transaction performance in certain scenarios. > [!TIP]-> If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. +> If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage along with a premium block blob storage account. This section contains the following examples: In almost every industry, there is a need for enterprises to query and analyze t Data scientists, analysts, and developers can derive time-sensitive insights faster by running queries on data that is stored in a premium block blob storage account. Executives can load their dashboards much more quickly when the data that appears in those dashboards comes from a premium block blob storage account instead of a standard general-purpose v2 account. 
-In one scenario, analysts needed to analyze telemetry data from millions of devices quickly to better understand how their products are used, and to make product release decisions. Storing data in SQL databases is expensive. To reduce cost, and to increase queryable surface area, they used an Azure Data Lake Storage Gen2 enabled premium block blob storage account and performed computation in Presto and Spark to produce insights from hive tables. This way, even rarely accessed data has all of the same power of compute as frequently accessed data. +In one scenario, analysts needed to analyze telemetry data from millions of devices quickly to better understand how their products are used, and to make product release decisions. Storing data in SQL databases is expensive. To reduce cost, and to increase queryable surface area, they used an Azure Data Lake Storage enabled premium block blob storage account and performed computation in Presto and Spark to produce insights from hive tables. This way, even rarely accessed data has all of the same power of compute as frequently accessed data. -To close the gap between SQL's subsecond performance and Presto's input output operations per second (IOPs) to external storage, consistency and speed are critical, especially when dealing with small optimized row columnar (ORC) files. A premium block blob storage account when used with Data Lake Storage Gen2, has repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this scenario. Queries executed fast enough to feel local to the compute machine. +To close the gap between SQL's subsecond performance and Presto's input output operations per second (IOPs) to external storage, consistency and speed are critical, especially when dealing with small optimized row columnar (ORC) files. A premium block blob storage account when used with Data Lake Storage, has repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this scenario. Queries executed fast enough to feel local to the compute machine. -In another case, a partner stores and queries logs that are generated from their security solution. The logs are generated by using Databricks, and then and stored in a Data Lake Storage Gen2 enabled premium block blob storage account. End users query and search this data by using Azure Data Explorer. They chose this type of account to increase stability and increase the performance of interactive queries. They also set the life cycle management `Delete Action` policy to a few days, which helps to reduce costs. This policy prevents them from keeping the data forever. Instead, data is deleted once it is no longer needed. +In another case, a partner stores and queries logs that are generated from their security solution. The logs are generated by using Databricks, and then and stored in a Data Lake Storage enabled premium block blob storage account. End users query and search this data by using Azure Data Explorer. They chose this type of account to increase stability and increase the performance of interactive queries. They also set the life cycle management `Delete Action` policy to a few days, which helps to reduce costs. This policy prevents them from keeping the data forever. Instead, data is deleted once it is no longer needed. ### Data processing pipelines In some cases, we've seen partners use multiple standard storage accounts to sto IoT has become a significant part of our daily lives. 
IoT is used to track car movements, control lights, and monitor our health. It also has industrial applications. For example, companies use IoT to enable their smart factory projects, improve agricultural output, and on oil rigs for predictive maintenance. Premium block blob storage accounts add significant value to these scenarios. -We have partners in the mining industry. They use a Data Lake Storage Gen2 enable premium block blob storage account along with HDInsight (Hbase) to ingest time series sensor data from multiple mining equipment types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for workloads that perform a large number of write transactions, and this workload generates a large number of small write transactions (in the tens of thousands per second). +We have partners in the mining industry. They use a Data Lake Storage enabled premium block blob storage account along with HDInsight (Hbase) to ingest time series sensor data from multiple mining equipment types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for workloads that perform a large number of write transactions, and this workload generates a large number of small write transactions (in the tens of thousands per second). ### Machine Learning In many cases, a lot of data has to be processed to train a machine learning model. To complete this processing, compute machines must run for a long time. Compared to storage costs, compute costs usually account for a much larger percentage of your bill, so reducing the amount of time that your compute machines run can lead to significant savings. The low latency that you get by using premium block blob storage can significantly reduce this time and your bill. -We have partners that deploy data processing pipelines to spark clusters where they run machine learning training and inference. They store spark tables (parquet files) and checkpoints to a premium block blob storage account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing operations are fast because they combined the low latency of a premium block blob storage account with the hierarchical data structure made available with Data Lake Storage Gen2. +We have partners that deploy data processing pipelines to spark clusters where they run machine learning training and inference. They store spark tables (parquet files) and checkpoints to a premium block blob storage account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing operations are fast because they combined the low latency of a premium block blob storage account with the hierarchical data structure made available with Data Lake Storage. -We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those to their account. Using deep learning inference, the system can inform the on-premises machines if there is an issue with the production and if an action needs to be taken. They mush be able to load and process images quickly and reliably. Using Data Lake Storage Gen2 enabled premium block blob storage account helps to make this possible. +We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those to their account. Using deep learning inference, the system can inform the on-premises machines if there is an issue with the production and if an action needs to be taken. They must be able to load and process images quickly and reliably. Using a Data Lake Storage enabled premium block blob storage account helps to make this possible. ### Real-time streaming analytics -To support interactive analytics in near real time, a system must ingest and process large amounts of data, and then make that data available to downstream systems. Using a Data Lake Storage Gen2 enabled premium block blob storage account is perfect for these types of scenarios. +To support interactive analytics in near real time, a system must ingest and process large amounts of data, and then make that data available to downstream systems. Using a Data Lake Storage enabled premium block blob storage account is perfect for these types of scenarios. Companies in the media and entertainment industry can generate a large number of logs and telemetry data in a short amount of time as they broadcast an event. Some of our partners rely on multiple content delivery network (CDN) partners for streaming. They must make near real-time decisions about which CDN partners to allocate traffic to. Therefore, data needs to be available for querying a few seconds after it is ingested. To facilitate this quick decision making, they use data stored within premium block blob storage, and process that data in Azure Data Explorer (ADX). All of the telemetry that is uploaded to storage is transformed in ADX, where it can be stored in a familiar format that operators and executives can query quickly and reliably. To create a premium block blob storage account, make sure to choose the **Premiu > [!NOTE] > Some Blob Storage features aren't yet supported or have partial support in premium block blob storage accounts. Before choosing premium, review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to determine whether the features that you intend to use are fully supported in your account. Feature support is always expanding so make sure to periodically review this article for updates. -If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2 capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page. +If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage along with a premium block blob storage account. To unlock Azure Data Lake Storage capabilities, enable the **Hierarchical namespace** setting in the **Advanced** tab of the **Create storage account** page. The following image shows this setting in the **Create storage account** page.
For complete guidance, see [Create a storage account](../common/storage-account- ## See also - [Storage account overview](../common/storage-account-overview.md)-- [Introduction to Azure Data Lake Storage Gen2](data-lake-storage-introduction.md)-- [Create a storage account to use with Azure Data Lake Storage Gen2](create-data-lake-storage-account.md)+- [Introduction to Azure Data Lake Storage](data-lake-storage-introduction.md) +- [Create a storage account to use with Azure Data Lake Storage](create-data-lake-storage-account.md) - [Premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md) |
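The scenarios above all start from a premium block blob account with the hierarchical namespace enabled. As a hedged sketch (assuming the `azure-mgmt-storage` package; the subscription, resource group, account name, and region are placeholders), such an account can also be created programmatically rather than through the portal page mentioned above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="<resource-group>",
    account_name="<storage-account>",      # must be globally unique
    parameters=StorageAccountCreateParameters(
        location="eastus2",
        kind="BlockBlobStorage",            # premium block blob account
        sku=Sku(name="Premium_LRS"),
        is_hns_enabled=True,                # hierarchical namespace (Data Lake Storage)
    ),
)
account = poller.result()
print(account.primary_endpoints.dfs)        # Data Lake (dfs) endpoint
```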
storage | Storage Blob Event Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-overview.md | If you want to try blob storage events, see any of these quickstart articles: To view in-depth examples of reacting to Blob storage events by using Azure functions, see these articles: -- [Tutorial: Use Azure Data Lake Storage Gen2 events to update a Databricks Delta table](data-lake-storage-events.md).+- [Tutorial: Use Azure Data Lake Storage events to update a Databricks Delta table](data-lake-storage-events.md). - [Tutorial: Automate resizing uploaded images using Event Grid](../../event-grid/resize-images-on-storage-blob-upload-event.md?tabs=dotnet) > [!NOTE] |
storage | Storage Blob Reserved Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md | Title: Optimize costs for Blob storage with reserved capacity -description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage Gen2 resources. +description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage resources. -You can save money on storage costs for blob data with Azure Storage reserved capacity. Azure Storage reserved capacity offers you a discount on capacity for block blobs and for Azure Data Lake Storage Gen2 data in standard storage accounts when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation. +You can save money on storage costs for blob data with Azure Storage reserved capacity. Azure Storage reserved capacity offers you a discount on capacity for block blobs and for Azure Data Lake Storage data in standard storage accounts when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation. -Azure Storage reserved capacity can significantly reduce your capacity costs for block blobs and Azure Data Lake Storage Gen2 data. The cost savings achieved depend on the duration of your reservation, the total capacity you choose to reserve, and the access tier and type of redundancy that you've chosen for your storage account. Reserved capacity provides a billing discount and doesn't affect the state of your Azure Storage resources. +Azure Storage reserved capacity can significantly reduce your capacity costs for block blobs and Azure Data Lake Storage data. The cost savings achieved depend on the duration of your reservation, the total capacity you choose to reserve, and the access tier and type of redundancy that you've chosen for your storage account. Reserved capacity provides a billing discount and doesn't affect the state of your Azure Storage resources. For information about Azure Storage reservation pricing, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure Data Lake Storage Gen 2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). You can purchase Azure Storage reserved capacity in units of 100 TiB and 1 PiB p Azure Storage reserved capacity is available for a single subscription, multiple subscriptions (shared scope), and management groups. When scoped to a single subscription, the reservation discount is applied to the selected subscription only. When scoped to multiple subscriptions, the reservation discount is shared across those subscriptions within the customer's billing context. When scoped to management group, the reservation discount is shared across the subscriptions that are a part of both the management group and billing scope. -When you purchase Azure Storage reserved capacity, you can use your reservation for both block blob and Azure Data Lake Storage Gen2 data. A reservation is applied to your usage within the purchased scope and cannot be limited to a specific storage account, container, or object within the subscription. +When you purchase Azure Storage reserved capacity, you can use your reservation for both block blob and Azure Data Lake Storage data. 
A reservation is applied to your usage within the purchased scope and cannot be limited to a specific storage account, container, or object within the subscription. An Azure Storage reservation covers only the amount of data that is stored in a subscription or shared resource group. Early deletion, operations, bandwidth, and data transfer charges are not included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of at the pay-as-you-go rates. For more information on Azure reservations, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md). Follow these steps to purchase reserved capacity: ![Screenshot showing how to purchase a reservation](media/storage-blob-reserved-capacity/purchase-reservations.png) -After you purchase a reservation, it is automatically applied to any existing Azure Storage block blob or Azure Data Lake Storage Gen2 resources that matches the terms of the reservation. If you haven't created any Azure Storage resources yet, the reservation will apply whenever you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase. +After you purchase a reservation, it is automatically applied to any existing Azure Storage block blob or Azure Data Lake Storage resources that match the terms of the reservation. If you haven't created any Azure Storage resources yet, the reservation will apply whenever you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase. ## Exchange or refund a reservation |
storage | Storage Blobs List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md | Name: folderA/folderB/file3.txt, Is deleted? false ``` > [!NOTE]-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-java.md#list-directory-contents). +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-java.md#list-directory-contents). ## Use a hierarchical listing |
storage | Storage Blobs List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md | Flat listing: 6: folder2/sub1/d ``` > [!NOTE]-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). ## Use a hierarchical listing |
storage | Storage Blobs List Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md | Name: folderA/folderB/file3.txt, Tags: {'tag1': 'value1', 'tag2': 'value2'} ``` > [!NOTE]-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-python.md#list-directory-contents). +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-python.md#list-directory-contents). ## Use a hierarchical listing |
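For readers comparing the flat and hierarchical listings mentioned in these notes, a small Python sketch (not from the article; account and container names are placeholders) shows both approaches with the `azure-storage-blob` package.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobPrefix, BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("sample-container")

# Flat listing: every blob, with virtual directories embedded in the names.
for blob in container.list_blobs(name_starts_with="folderA/"):
    print("Name:", blob.name)

# Hierarchical listing: one level at a time, using '/' as the delimiter.
def walk(prefix: str = "", depth: int = 0) -> None:
    for item in container.walk_blobs(name_starts_with=prefix, delimiter="/"):
        if isinstance(item, BlobPrefix):
            print("  " * depth + f"[{item.name}]")
            walk(item.name, depth + 1)
        else:
            print("  " * depth + item.name)

walk()
```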
storage | Storage Blobs List Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md | Flat listing: 6: folder2/sub1/d ``` > [!NOTE]-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). ## Use a hierarchical listing |
storage | Storage Blobs List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md | Blob name: FolderA/FolderB/FolderC/blob3.txt ``` > [!NOTE]-> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents). +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents). ## Use a hierarchical listing |
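The alternative listing for hierarchical namespace accounts referenced in these notes looks roughly like the following Python sketch, assuming the `azure-storage-file-datalake` package; the account and container names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_system = service.get_file_system_client("sample-container")

# On an HNS-enabled account, directories are real objects and are reported as such.
for path in file_system.get_paths(path="folderA", recursive=True):
    kind = "DIR " if path.is_directory else "FILE"
    print(kind, path.name)
```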
storage | Storage Blobs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-overview.md | Title: About Blob (object) storage -description: Azure Blob storage stores massive amounts of unstructured object data, such as text or binary data. Blob storage also supports Azure Data Lake Storage Gen2 for big data analytics. +description: Azure Blob storage stores massive amounts of unstructured object data, such as text or binary data. Blob storage also supports Azure Data Lake Storage for big data analytics. +- [Introduction to Azure Data Lake Storage](../blobs/data-lake-storage-introduction.md) |
storage | Storage Feature Support In Storage Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md | The following table describes whether a feature is supported in a premium block ## See also -- [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)+- [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) - [Known issues with Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-known-issues.md) |
storage | Upgrade To Data Lake Storage Gen2 How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md | Title: Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities + Title: Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities description: Shows you how to use Resource Manager templates to upgrade from Azure Blob Storage to Data Lake Storage. Last updated 01/18/2024 -# Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities +# Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities -This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. +This article helps you to enable a hierarchical namespace and unlock capabilities such as file and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage. -To learn more about these capabilities and evaluate the impact of this upgrade on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2.md). +To learn more about these capabilities and evaluate the impact of this upgrade on workloads, applications, costs, service integrations, tools, features, and documentation, see [Upgrading Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2.md). > [!IMPORTANT] > An upgrade is one-way. There's no way to revert your account once you've performed the upgrade. We recommend that you validate your upgrade in a nonproduction environment. ## Prepare to upgrade -To prepare to upgrade your storage account to Data Lake Storage Gen2: +To prepare to upgrade your storage account to Data Lake Storage: > [!div class="checklist"] > - [Review feature support](#review-feature-support) To prepare to upgrade your storage account to Data Lake Storage Gen2: ### Review feature support -Your storage account might be configured to use features that aren't yet supported in Data Lake Storage Gen2 enabled accounts. If your account is using such features, the upgrade will not pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any such features in your account, disable them before you begin the upgrade. +Your storage account might be configured to use features that aren't yet supported in Data Lake Storage enabled accounts. If your account is using such features, the upgrade will not pass the validation step. Review the [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md) article to identify unsupported features. If you're using any such features in your account, disable them before you begin the upgrade. 
-The following features are supported for Data Lake Storage Gen2 accounts, but are not supported by the upgrade process: +The following features are supported for Data Lake Storage accounts, but are not supported by the upgrade process: - Blob snapshots - Encryption scopes If your storage account has such features enabled, you must disable them before In some cases, you will have to allow time for clean-up operations after a feature is disabled before upgrading. One example is the [blob soft delete](soft-delete-blob-overview.md) feature. You must disable blob soft delete and then allow all soft-delete blobs to expire before you can upgrade the account. > [!IMPORTANT]-> You cannot upgrade a storage account to Data Lake Storage Gen2 that has **ever** had the change feed feature enabled. +> You cannot upgrade a storage account to Data Lake Storage that has **ever** had the change feed feature enabled. > Simply disabling change feed will not allow you to perform an upgrade. Instead, you must create an account with the hierarchical namespace feature enabled on it, and then transfer your data into that account. ### Ensure the segments of each blob path are named -The migration process creates a directory for each path segment of a blob. Data Lake Storage Gen2 directories must have a name so for migration to succeed, each path segment in a virtual directory must have a name. The same requirement is true for segments that are named only with a space character. If any path segments are either unnamed (`//`) or named only with a space character (`_`), then before you proceed with the migration, you must copy those blobs to a new path that is compatible with these naming requirements. +The migration process creates a directory for each path segment of a blob. Data Lake Storage directories must have a name, so for migration to succeed, each path segment in a virtual directory must have a name. The same requirement is true for segments that are named only with a space character. If any path segments are either unnamed (`//`) or named only with a space character (`_`), then before you proceed with the migration, you must copy those blobs to a new path that is compatible with these naming requirements. ### Prevent write activity to the storage account az storage account hns-migration stop -n <storage-account-name> -g <resource-gro 2. Test custom applications to ensure that they work as expected with your upgraded account. - [Multi-protocol access on Data Lake Storage](data-lake-storage-multi-protocol-access.md) enables most applications to continue using Blob APIs without modification. If you encounter issues or you want to use APIs to work with directory operations and ACLs, consider moving some of your code to use Data Lake Storage APIs.
See guides for [.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md), [Node.js](data-lake-storage-acl-javascript.md), and [REST](/rest/api/storageservices/data-lake-storage-gen2). 3. Test any custom scripts to ensure that they work as expected with your upgraded account. - As is the case with Blob APIs, many of your scripts will likely work without requiring you to modify them. However, if needed, you can upgrade script files to use Data Lake Storage Gen2 [PowerShell cmdlets](data-lake-storage-directory-file-acl-powershell.md), and [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). + As is the case with Blob APIs, many of your scripts will likely work without requiring you to modify them. However, if needed, you can upgrade script files to use Data Lake Storage [PowerShell cmdlets](data-lake-storage-directory-file-acl-powershell.md), and [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). ## See also -[Introduction to Azure Data Lake storage Gen2](data-lake-storage-introduction.md) +[Introduction to Azure Data Lake storage](data-lake-storage-introduction.md) |
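To illustrate the kind of directory and ACL code the upgraded account unlocks, here is a hedged Python sketch using the `azure-storage-file-datalake` package. The account name, container, directory names, ACL string, and object ID are placeholders rather than values from the article.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_system = service.get_file_system_client("raw")

# Directories are first-class objects after the upgrade.
directory = file_system.create_directory("telemetry/2024")

# Directory-level security: set a POSIX-style ACL, then read it back.
directory.set_access_control(
    acl="user::rwx,group::r-x,other::---,user:<object-id>:r-x"
)
print(directory.get_access_control()["acl"])

# Renames are single metadata operations on HNS accounts (no copy plus delete).
directory.rename_directory(new_name="raw/telemetry/archive-2024")
```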
storage | Upgrade To Data Lake Storage Gen2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2.md | Title: Upgrading Azure Blob Storage to Azure Data Lake Storage Gen2 + Title: Upgrading Azure Blob Storage to Azure Data Lake Storage description: Description goes here. -# Upgrading Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities +# Upgrading Azure Blob Storage with Azure Data Lake Storage capabilities -This article helps you to enable a hierarchical namespace and unlock capabilities such as file- and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage Gen2. The most popular capabilities include: +This article helps you to enable a hierarchical namespace and unlock capabilities such as file- and directory-level security and faster operations. These capabilities are widely used by big data analytics workloads and are referred to collectively as Azure Data Lake Storage. The most popular capabilities include: - Higher throughput, input/output operations per second (IOPS), and storage capacity limits. This article helps you to enable a hierarchical namespace and unlock capabilitie - Security at the container, directory, and file level. -To learn more about them, see [Introduction to Azure Data Lake Storage Gen2](data-lake-storage-introduction.md). +To learn more about them, see [Introduction to Azure Data Lake Storage](data-lake-storage-introduction.md). -This article helps you evaluate the impact on workloads, applications, costs, service integrations, tools, features, and documentation. Make sure to review these impacts carefully. When you are ready to upgrade an account, see this step-by-step guide: [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). +This article helps you evaluate the impact on workloads, applications, costs, service integrations, tools, features, and documentation. Make sure to review these impacts carefully. When you are ready to upgrade an account, see this step-by-step guide: [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). > [!IMPORTANT] > An upgrade is one-way. There's no way to revert your account once you've performed the upgrade. We recommend that you validate your upgrade in a nonproduction environment. Your upgraded account will have a Data Lake storage endpoint. You can find the U You don't have to modify your existing applications and workloads to use that endpoint. [Multiprotocol access in Data Lake Storage](data-lake-storage-multi-protocol-access.md) makes it possible for you to use either the Blob service endpoint or the Data Lake storage endpoint to interact with your data. -Azure services and tools (such as AzCopy) might use the Data Lake storage endpoint to interact with the data in your storage account. Also, you'll need to use this new endpoint for any operations that you perform by using the Data Lake Storage Gen2 [SDKs](data-lake-storage-directory-file-acl-dotnet.md), [PowerShell commands](data-lake-storage-directory-file-acl-powershell.md), or [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). +Azure services and tools (such as AzCopy) might use the Data Lake storage endpoint to interact with the data in your storage account. 
Also, you'll need to use this new endpoint for any operations that you perform by using the Data Lake Storage [SDKs](data-lake-storage-directory-file-acl-dotnet.md), [PowerShell commands](data-lake-storage-directory-file-acl-powershell.md), or [Azure CLI commands](data-lake-storage-directory-file-acl-cli.md). ### Directories When you upload a blob, and the path that you specify includes a directory that ### List operations -A [List Blobs](/rest/api/storageservices/list-blobs) operation returns both directories and files. Each is listed separately. Directories appear in the list as zero-length blobs. In a Blob storage account that does not have a hierarchical namespace, a [List Blobs](/rest/api/storageservices/list-blobs) operation returns only blobs and not directories. If you use the Data Lake Storage Gen2 [Path - List](/rest/api/storageservices/datalakestoragegen2/path/list) operation, directories will appear as directory entries and not as zero-length blobs. +A [List Blobs](/rest/api/storageservices/list-blobs) operation returns both directories and files. Each is listed separately. Directories appear in the list as zero-length blobs. In a Blob storage account that does not have a hierarchical namespace, a [List Blobs](/rest/api/storageservices/list-blobs) operation returns only blobs and not directories. If you use the Data Lake Storage [Path - List](/rest/api/storageservices/datalakestoragegen2/path/list) operation, directories will appear as directory entries and not as zero-length blobs. The list order is different as well. Directories and files appear in *depth-first search* order. A Blob storage account that does not have a hierarchical namespace lists blobs in *lexicographical* order. There is no cost to perform the upgrade. After you upgrade, the cost to store yo - [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). -- [Azure Data Lake Storage Gen2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).+- [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). You can also use the **Storage Accounts** option in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the impact of costs after an upgrade. -Aside from pricing changes, consider the cost savings associated with Data Lake Storage Gen2 capabilities. Overall total of cost of ownership typically declines because of higher throughput and optimized operations. Higher throughput enables you to transfer more data in less time. A hierarchical namespace improves the efficiency of operations. +Aside from pricing changes, consider the cost savings associated with Data Lake Storage capabilities. Overall total of cost of ownership typically declines because of higher throughput and optimized operations. Higher throughput enables you to transfer more data in less time. A hierarchical namespace improves the efficiency of operations. ## Impact on service integrations -While most Azure service integrations will continue to work after you've enabled these capabilities, some of them remain in preview or not yet supported. See [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) to understand the current support for Azure service integrations with Data Lake Storage Gen2. +While most Azure service integrations will continue to work after you've enabled these capabilities, some of them remain in preview or not yet supported. 
See [Azure services that support Azure Data Lake Storage](data-lake-storage-supported-azure-services.md) to understand the current support for Azure service integrations with Data Lake Storage. ## Impact on tools, features, and documentation After you upgrade, the way you that interact with some features will change. Thi While most of the Blob storage features will continue to work after you've enabled these capabilities, some of them remain in preview or are not yet supported. -See [Blob Storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md) to understand the current support for Blob storage features with Data Lake Storage Gen2. +See [Blob Storage features available in Azure Data Lake Storage](./storage-feature-support-in-storage-accounts.md) to understand the current support for Blob storage features with Data Lake Storage. ### Diagnostic logs The following buttons don't yet appear in the Ribbon of Azure Storage Explorer: |--|--| |Copy URL|Not yet implemented| |Manage snapshots|Not yet implemented|-|Undelete|Depends on Blob storage features not yet supported with Data Lake Storage Gen2 | +|Undelete|Depends on Blob storage features not yet supported with Data Lake Storage | The following buttons behave differently in your new account. -|Button|Blob storage behavior|Data Lake Storage Gen2 behavior| +|Button|Blob storage behavior|Data Lake Storage behavior| |||| |Folder|Folder is virtual and disappears if you don't add files to it. |Folder exists even with no files added to it.| |Rename|Results in a copy and then a delete of the source blob|Renames the same blob. Far more efficient.| ### Documentation -You can find guidance for using Data Lake Storage Gen2 capabilities here: [Introduction to Azure Data Lake Storage Gen2](data-lake-storage-introduction.md). +You can find guidance for using Data Lake Storage capabilities here: [Introduction to Azure Data Lake Storage](data-lake-storage-introduction.md). Nothing has changed with respect to where you find the guidance for all of the existing Blob storage features. That guidance is here: [Introduction to Azure Blob storage](storage-blobs-introduction.md). -As you move between content sets, you'll notice some slight terminology differences. For example, content featured in the Data Lake Storage Gen2 content might use the term *file* and *file system* instead of *blob* and *container*. The terms *file* and *file system* are deeply rooted in the world of big data analytics where Data Lake storage has had a long history. The content contains these terms to keep it relatable to these audiences. These terms don't describe separate *things*. +As you move between content sets, you'll notice some slight terminology differences. For example, content featured in the Data Lake Storage content might use the term *file* and *file system* instead of *blob* and *container*. The terms *file* and *file system* are deeply rooted in the world of big data analytics where Data Lake storage has had a long history. The content contains these terms to keep it relatable to these audiences. These terms don't describe separate *things*. ## Next steps -When you are ready to upgrade your storage account to include Data Lake Storage Gen2 capabilities, see this step-by-step guide. +When you are ready to upgrade your storage account to include Data Lake Storage capabilities, see this step-by-step guide. 
> [!div class="nextstepaction"]-> [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md) +> [Upgrade Azure Blob Storage with Azure Data Lake Storage capabilities](upgrade-to-data-lake-storage-gen2-how-to.md) |
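As a small, hedged illustration of the multi-protocol access described above (account and container names are placeholders), the same upgraded account can be reached through both its Blob endpoint and its Data Lake (dfs) endpoint.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()

# Existing Blob API code keeps working against the .blob endpoint...
blob_service = BlobServiceClient(
    "https://<storage-account>.blob.core.windows.net", credential=credential
)
for blob in blob_service.get_container_client("data").list_blobs():
    print("blob:", blob.name)

# ...while directory-aware code can use the .dfs endpoint on the same data.
dfs_service = DataLakeServiceClient(
    "https://<storage-account>.dfs.core.windows.net", credential=credential
)
for path in dfs_service.get_file_system_client("data").get_paths(recursive=False):
    print("path:", path.name, "(directory)" if path.is_directory else "(file)")
```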
storage | Versioning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md | Blob versions are immutable. You can't modify the content or metadata of an exis Having a large number of versions per blob can increase the latency for blob listing operations. Microsoft recommends maintaining fewer than 1000 versions per blob. You can use lifecycle management to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md). -Blob versioning is available for standard general-purpose v2, premium block blob, and legacy Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 aren't currently supported. +Blob versioning is available for standard general-purpose v2, premium block blob, and legacy Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage aren't currently supported. Version 2019-10-10 and higher of the Azure Storage REST API supports blob versioning. When blob soft delete is enabled, all soft-deleted entities are billed at full c [!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)] -Versioning is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs. +Versioning is not supported for blobs that are uploaded by using [Data Lake Storage](/rest/api/storageservices/data-lake-storage-gen2) APIs. ## See also |
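Where versioning is enabled (and the account does not have a hierarchical namespace, per the note above), enumerating and reading versions looks roughly like this Python sketch with `azure-storage-blob`; the account, container, blob name, and version ID are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("documents")

# Include previous versions in the listing; each entry carries a version_id.
for blob in container.list_blobs(name_starts_with="report.docx", include=["versions"]):
    marker = "current" if blob.is_current_version else "previous"
    print(blob.name, blob.version_id, marker)

# Read a specific, immutable version by its version ID.
blob_client = container.get_blob_client("report.docx")
data = blob_client.download_blob(version_id="2022-06-01T23:38:32.8883645Z").readall()
```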
storage | Network Routing Preference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md | For more information on routing preference in Azure, see [What is routing prefer For step-by-step guidance that shows you how to configure the routing preference and route-specific endpoints, see [Configure network routing preference for Azure Storage](configure-network-routing-preference.md). -You can choose between the Microsoft global network and internet routing as the default routing preference for the public endpoint of your storage account. The default routing preference applies to all traffic from clients outside Azure and affects the endpoints for Azure Data Lake Storage Gen2, Blob storage, Azure Files, and static websites. Configuring routing preference is not supported for Azure Queues or Azure Tables. +You can choose between the Microsoft global network and internet routing as the default routing preference for the public endpoint of your storage account. The default routing preference applies to all traffic from clients outside Azure and affects the endpoints for Azure Data Lake Storage, Blob storage, Azure Files, and static websites. Configuring routing preference is not supported for Azure Queues or Azure Tables. You can also publish route-specific endpoints for your storage account. When you publish route-specific endpoints, Azure Storage creates new public endpoints for your storage account that route traffic over the desired path. This flexibility enables you to direct traffic to your storage account over a specific route without changing your default routing preference. For example, publishing an internet route-specific endpoint for the 'StorageAcco | Storage service | Route-specific endpoint | | : | :- | | Blob service | `StorageAccountA-internetrouting.blob.core.windows.net` |-| Data Lake Storage Gen2 | `StorageAccountA-internetrouting.dfs.core.windows.net` | +| Data Lake Storage | `StorageAccountA-internetrouting.dfs.core.windows.net` | | File service | `StorageAccountA-internetrouting.file.core.windows.net` | | Static Websites | `StorageAccountA-internetrouting.web.core.windows.net` | If you have a read-access geo-redundant storage (RA-GRS) or a read-access geo-zo | Storage service | Route-specific read-only secondary endpoint | | : | :-- | | Blob service | `StorageAccountA-internetrouting-secondary.blob.core.windows.net` |-| Data Lake Storage Gen2 | `StorageAccountA-internetrouting-secondary.dfs.core.windows.net` | +| Data Lake Storage | `StorageAccountA-internetrouting-secondary.dfs.core.windows.net` | | File service | `StorageAccountA-internetrouting-secondary.file.core.windows.net` | | Static Websites | `StorageAccountA-internetrouting-secondary.web.core.windows.net` | |
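Once a route-specific endpoint is published, clients opt into it simply by targeting that endpoint. A hedged Python sketch follows; the account name matches the example in the table above, and the route-specific endpoint must already be published for the second client to resolve.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()

# Default public endpoint: traffic follows the account's default routing preference.
default_client = BlobServiceClient(
    "https://StorageAccountA.blob.core.windows.net", credential=credential
)

# Route-specific endpoint: directs this client's traffic over internet routing,
# regardless of the account's default preference.
internet_client = BlobServiceClient(
    "https://StorageAccountA-internetrouting.blob.core.windows.net",
    credential=credential,
)

print(default_client.get_account_information())
print(internet_client.get_account_information())
```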
storage | Nfs Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/nfs-comparison.md | For more general comparisons, see [this article](storage-introduction.md) to com |Category |Azure Blob Storage |Azure Files |Azure NetApp Files | ||||| |Use cases |Blob Storage is best suited for large scale read-heavy sequential access workloads where data is ingested once and minimally modified further.<br></br>Blob Storage offers the lowest total cost of ownership, if there is little or no maintenance.<br></br>Some example scenarios are: Large scale analytical data, throughput sensitive high-performance computing, backup and archive, autonomous driving, media rendering, or genomic sequencing. |Azure Files is a highly available service best suited for random access workloads.<br></br>For NFS shares, Azure Files provides full POSIX file system support and can easily be used from container platforms like Azure Container Instance (ACI) and Azure Kubernetes Service (AKS) with the built-in CSI driver, in addition to VM-based platforms.<br></br>Some example scenarios are: Shared files, databases, home directories, traditional applications, ERP, CMS, NAS migrations that don't require advanced management, and custom applications requiring scale-out file storage. |Fully managed file service in the cloud, powered by NetApp, with advanced management capabilities.<br></br>Azure NetApp Files is suited for workloads that require random access and provides broad protocol support and data protection capabilities.<br></br>Some example scenarios are: On-premises enterprise NAS migration that requires rich management capabilities, latency sensitive workloads like SAP HANA, latency-sensitive or IOPS intensive high performance compute, or workloads that require simultaneous multi-protocol access. |-|Available protocols |NFSv3<br></br>REST<br></br>Data Lake Storage Gen2 |SMB<br><br>NFSv4.1<br></br> (No interoperability between either protocol) |NFSv3 and NFSv4.1<br></br>SMB<br></br>Dual protocol (SMB and NFSv3, SMB and NFSv4.1) | +|Available protocols |NFSv3<br></br>REST<br></br>Data Lake Storage |SMB<br><br>NFSv4.1<br></br> (No interoperability between either protocol) |NFSv3 and NFSv4.1<br></br>SMB<br></br>Dual protocol (SMB and NFSv3, SMB and NFSv4.1) | |Key features | Integrated with HPC cache for low latency workloads. <br> </br> Integrated management, including lifecycle, immutable blobs, data failover, and metadata index. | Zonally redundant for high availability. <br></br> Consistent single-digit millisecond latency. <br></br>Predictable performance and cost that scales with capacity. |Extremely low latency (as low as sub-ms).<br></br>Rich ONTAP management capabilities such as snapshots, backup, cross-region replication, and cross-zone replication.<br></br>Consistent hybrid cloud experience. | |Performance (Per volume) |Up to 20,000 IOPS, up to 15 GiB/s throughput. |Up to 100,000 IOPS, up to 10 GiB/s throughput. |Up to 460,000 IOPS, up to 4.5 GiB/s throughput per regular volume, up to 10 GiB/s throughput per large volume. | |Scale | Up to 5 PiB for a single volume. <br></br> Up to 190.7 TiB for a single blob.<br></br>No minimum capacity requirements. |Up to 100 TiB for a single file share.<br></br>Up to 4 TiB for a single file.<br></br>50 GiB min capacity. |Up to 100 TiB for a single regular volume, up to 2 PiB for a large volume.<br></br>Up to 16 TiB for a single file.<br></br>Consistent hybrid cloud experience. | |
storage | Shared Key Authorization Prevent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md | Some Azure tools offer the option to use Microsoft Entra authorization to access |-|-| | Azure portal | Supported. For information about authorizing with your Microsoft Entra account from the Azure portal, see [Choose how to authorize access to blob data in the Azure portal](../blobs/authorize-data-operations-portal.md). | | AzCopy | Supported for Blob Storage. For information about authorizing AzCopy operations, see [Choose how you'll provide authorization credentials](storage-use-azcopy-v10.md#choose-how-youll-provide-authorization-credentials) in the AzCopy documentation. |-| Azure Storage Explorer | Supported for Blob Storage, Queue Storage, Table Storage, and Azure Data Lake Storage Gen2. Microsoft Entra ID access to File storage is not supported. Make sure to select the correct Microsoft Entra tenant. For more information, see [Get started with Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) | +| Azure Storage Explorer | Supported for Blob Storage, Queue Storage, Table Storage, and Azure Data Lake Storage. Microsoft Entra ID access to File storage is not supported. Make sure to select the correct Microsoft Entra tenant. For more information, see [Get started with Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) | | Azure PowerShell | Supported. For information about how to authorize PowerShell commands for blob or queue operations with Microsoft Entra ID, see [Run PowerShell commands with Microsoft Entra credentials to access blob data](../blobs/authorize-data-operations-powershell.md) or [Run PowerShell commands with Microsoft Entra credentials to access queue data](../queues/authorize-data-operations-powershell.md). | | Azure CLI | Supported. For information about how to authorize Azure CLI commands with Microsoft Entra ID for access to blob and queue data, see [Run Azure CLI commands with Microsoft Entra credentials to access blob or queue data](../blobs/authorize-data-operations-cli.md). | | Azure IoT Hub | Supported. For more information, see [IoT Hub support for virtual networks](../../iot-hub/virtual-network-support.md). | |
storage | Storage Account Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md | The following table describes the fields on the **Advanced** tab. | Security | Default to Microsoft Entra authorization in the Azure portal | Optional | When enabled, the Azure portal authorizes data operations with the user's Microsoft Entra credentials by default. If the user does not have the appropriate permissions assigned via Azure role-based access control (Azure RBAC) to perform data operations, then the portal will use the account access keys for data access instead. The user can also choose to switch to using the account access keys. For more information, see [Default to Microsoft Entra authorization in the Azure portal](../blobs/authorize-data-operations-portal.md#default-to-azure-ad-authorization-in-the-azure-portal). | | Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). | | Security | Permitted scope for copy operations (preview) | Required | Select the scope of storage accounts from which data can be copied to the new account. The default value is `From any storage account`. When set to the default value, users with the appropriate permissions can copy data from any storage account to the new account.<br /><br />Select `From storage accounts in the same Azure AD tenant` to only allow copy operations from storage accounts within the same Microsoft Entra tenant.<br />Select `From storage accounts that have a private endpoint to the same virtual network` to only allow copy operations from storage accounts with private endpoints on the same virtual network.<br /><br /> For more information, see [Restrict the source of copy operations to a storage account](security-restrict-copy-operations.md). |-| Data Lake Storage Gen2 | Enable hierarchical namespace | Optional | To use this storage account for Azure Data Lake Storage Gen2 workloads, configure a hierarchical namespace. For more information, see [Introduction to Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md). | +| Data Lake Storage | Enable hierarchical namespace | Optional | To use this storage account for Azure Data Lake Storage workloads, configure a hierarchical namespace. For more information, see [Introduction to Azure Data Lake Storage](../blobs/data-lake-storage-introduction.md). | | Blob storage | Enable SFTP | Optional | Enable the use of Secure File Transfer Protocol (SFTP) to securely transfer of data over the internet. For more information, see [Secure File Transfer (SFTP) protocol support in Azure Blob Storage](../blobs/secure-file-transfer-protocol-support.md). | | Blob storage | Enable network file system (NFS) v3 | Optional | NFS v3 provides Linux file system compatibility at object storage scale enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. For more information, see [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](../blobs/network-file-system-protocol-support.md). 
| | Blob storage | Allow cross-tenant replication | Required | By default, users with appropriate permissions can configure object replication across Microsoft Entra tenants. To prevent replication across tenants, deselect this option. For more information, see [Prevent replication across Microsoft Entra tenants](../blobs/object-replication-overview.md#prevent-replication-across-azure-ad-tenants). | |
storage | Storage Account Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md | The following table describes the types of storage accounts recommended by Micro | Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. | | Premium page blobs<sup>3</sup> | Page blobs only | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) | -<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) and [Create a storage account to use with Data Lake Storage Gen2](../blobs/create-data-lake-storage-account.md). +<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage. For more information, see [Introduction to Data Lake Storage](../blobs/data-lake-storage-introduction.md) and [Create a storage account to use with Data Lake Storage](../blobs/create-data-lake-storage-account.md). <sup>2</sup> ZRS, GZRS, and RA-GZRS are available only for standard general-purpose v2, premium block blobs, premium file shares, and premium page blobs accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md). The following table lists the format for the standard endpoints for each of the |--|--| | Blob Storage | `https://<storage-account>.blob.core.windows.net` | | Static website (Blob Storage) | `https://<storage-account>.web.core.windows.net` |-| Data Lake Storage Gen2 | `https://<storage-account>.dfs.core.windows.net` | +| Data Lake Storage | `https://<storage-account>.dfs.core.windows.net` | | Azure Files | `https://<storage-account>.file.core.windows.net` | | Queue Storage | `https://<storage-account>.queue.core.windows.net` | | Table Storage | `https://<storage-account>.table.core.windows.net` | The following table lists the format for Azure DNS Zone endpoints for each of th |--|--| | Blob Storage | `https://<storage-account>.z[00-50].blob.storage.azure.net` | | Static website (Blob Storage) | `https://<storage-account>.z[00-50].web.storage.azure.net` |-| Data Lake Storage Gen2 | `https://<storage-account>.z[00-50].dfs.storage.azure.net` | +| Data Lake Storage | `https://<storage-account>.z[00-50].dfs.storage.azure.net` | | Azure Files | `https://<storage-account>.z[00-50].file.storage.azure.net` | | Queue Storage | `https://<storage-account>.z[00-50].queue.storage.azure.net` | | Table Storage | `https://<storage-account>.z[00-50].table.storage.azure.net` | |
storage | Storage Analytics Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics-logging.md | The following sections show an example log entry for each supported Azure Storag `2.0;2022-01-03T20:34:54.4617505Z;PutBlob;SASSuccess;201;7;7;sas;;logsamples;blob;https://logsamples.blob.core.windows.net/container1/1.txt?se=2022-02-02T20:34:54Z&sig=XXXXX&sp=rwl&sr=c&sv=2020-04-08&timeout=901;"/logsamples/container1/1.txt";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;71.197.193.44:53371;2019-12-12;654;13;337;0;13;"xxxxxxxxxxxxxxxxxxxxx==";"xxxxxxxxxxxxxxxxxxxxx==";""0x8D9CEF88004E296"";Monday, 03-Jan-22 20:34:54 GMT;;"Microsoft Azure Storage Explorer, 1.20.1, win32, azcopy-node, 2.0.0, win32, AzCopy/10.11.0 Azure-Storage/0.13 (go1.15; Windows_NT)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";;;;;;;;` -##### Example log entry for Blob Storage (Data Lake Storage Gen2 enabled) +##### Example log entry for Blob Storage (Data Lake Storage enabled) `2.0;2022-01-04T22:50:56.0000775Z;RenamePathFile;Success;201;49;49;authenticated;logsamples;logsamples;blob;"https://logsamples.dfs.core.windows.net/my-container/myfileorig.png?mode=legacy";"/logsamples/my-container/myfilerenamed.png";xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx;0;73.157.16.8;2020-04-08;591;0;224;0;0;;;;Friday, 11-Jun-21 17:58:15 GMT;;"Microsoft Azure Storage Explorer, 1.19.1, win32 azsdk-js-storagedatalake/12.3.1 (NODE-VERSION v12.16.3; Windows_NT 10.0.22000)";;"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";;;;;;;;` |
storage | Storage Compliance Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-compliance-offerings.md | Microsoft retains independent, third-party auditing firms to conduct audits of M Azure Storage is included in many Azure compliance audits such as CSA STAR, ISO, SOC, PCI DSS, HITRUST, FedRAMP, DoD, and others. The resulting compliance assurances are applicable to: -- Blobs (including Azure Data Lake Storage Gen2)+- Blobs (including Azure Data Lake Storage) - Files - Queues - Tables |
storage | Storage Disaster Recovery Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md | Azure Storage accounts support three types of failover: <sup>1</sup> Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).<br/> <sup>2</sup> Use customer-managed failover options to develop, test, and implement your disaster recovery plans. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances. -Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover: +Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage). This table summarizes those aspects of each type of failover: | Type | Failover Scope | Use case | Expected data loss | Hierarchical Namespace (HNS) supported | |-|--|-|--|-| The new primary region is configured to be locally redundant (LRS) after the fai You also might experience file or data inconsistencies if your storage accounts have one or more of the following enabled: -- [Hierarchical namespace (Azure Data Lake Storage Gen2)](#file-consistency-for-azure-data-lake-storage-gen2)+- [Hierarchical namespace (Azure Data Lake Storage)](#file-consistency-for-azure-data-lake-storage) - [Change feed](#change-feed-and-blob-data-inconsistencies) - [Point-in-time restore for block blobs](#point-in-time-restore-inconsistencies) As a best practice, design your application so that you can use **Last Sync Time For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md). -#### File consistency for Azure Data Lake Storage Gen2 +#### File consistency for Azure Data Lake Storage -Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. Because replication occurs at this level, an outage in the primary region might prevent some of the files within a container or directory from successfully replicating to the secondary region. Consistency for all files within a container or directory after a storage account failover isn't guaranteed. +Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage)](../blobs/data-lake-storage-introduction.md) occurs at the file level. Because replication occurs at this level, an outage in the primary region might prevent some of the files within a container or directory from successfully replicating to the secondary region. Consistency for all files within a container or directory after a storage account failover isn't guaranteed. #### Change feed and blob data inconsistencies The following table can be used to reference feature support. 
| | Planned failover | Unplanned failover | |-|||-| **ADLS Gen2** | Supported (preview) | Supported (preview) | +| **Azure Data Lake Storage** | Supported (preview) | Supported (preview) | | **Change Feed** | Unsupported | Supported | | **Object Replication** | Unsupported | Unsupported | | **SFTP** | Supported (preview) | Supported (preview) | |
storage | Storage Explorer Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-security.md | Storage Explorer supports Azure RBAC access to Storage Accounts, Blobs, Queues, #### Access control lists (ACLs) -[Access control lists (ACLs)](../blobs/data-lake-storage-access-control.md) let you control file and folder level access in ADLS Gen2 blob containers. You can manage your ACLs using Storage Explorer. +[Access control lists (ACLs)](../blobs/data-lake-storage-access-control.md) let you control file and folder level access in ADLS blob containers. You can manage your ACLs using Storage Explorer. ### Shared access signatures (SAS) |
storage | Storage Explorer Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-sign-in.md | -Sign-in is the recommended way to access your Azure storage resources with Storage Explorer. By signing in you take advantage of Microsoft Entra backed permissions, such as RBAC and Gen2 POSIX ACLs. +Sign-in is the recommended way to access your Azure storage resources with Storage Explorer. By signing in you take advantage of Microsoft Entra backed permissions, such as RBAC and Azure Data Lake Storage POSIX ACLs. ## How to sign in |
storage | Storage Explorer Support Policy Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md | This table describes the release date and the end of support date for each relea | Storage Explorer version | Release date | End of support date | |:-:|::|:-:|+| v1.35.0 | August 19, 2024 | August 19, 2025 | | v1.34.0 | May 24, 2024 | May 24, 2025 | | v1.33.0 | March 1, 2024 | March 1, 2025 | | v1.32.1 | November 15, 2023 | November 1, 2024 | |
storage | Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md | Azure Storage services offer the following benefits for application developers a The Azure Storage platform includes the following data -- [Azure Blobs](../blobs/storage-blobs-introduction.md): A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2.+- [Azure Blobs](../blobs/storage-blobs-introduction.md): A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage. - [Azure Files](../files/storage-files-introduction.md): Managed file shares for cloud or on-premises deployments. - [Azure Elastic SAN](../elastic-san/elastic-san-introduction.md): A fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN in Azure. - [Azure Queues](../queues/storage-queues-introduction.md): A messaging store for reliable messaging between application components. - [Azure Tables](../tables/table-storage-overview.md): A NoSQL store for schemaless storage of structured data. - [Azure managed Disks](/azure/virtual-machines/managed-disks-overview): Block-level storage volumes for Azure VMs.-- [Azure Container Storage](/azure/container-storage/container-storage-introduction): A volume management, deployment, and orchestration service built natively for containers.+- [Azure Container Storage](/azure/storage/container-storage/container-storage-introduction): A volume management, deployment, and orchestration service built natively for containers. Each service is accessed through a storage account with a unique address. To get started, see [Create a storage account](storage-account-create.md). The following table compares Azure Storage services and shows example scenarios |--|-|-| | **Azure Files** |Offers fully managed cloud file shares that you can access from anywhere via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api).<br><br>You can mount Azure file shares from cloud or on-premises deployments of Windows, Linux, and macOS. | You want to "lift and shift" an application to the cloud that already uses the native file system APIs to share data between it and other applications running in Azure.<br/><br/>You want to replace or supplement on-premises file servers or NAS devices.<br><br> You want to store development and debugging tools that need to be accessed from many virtual machines. | | **Azure NetApp Files** | Offers a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. | You have a difficult-to-migrate workload such as POSIX-compliant Linux and Windows applications, SAP HANA, databases, high-performance compute (HPC) infrastructure and apps, and enterprise web applications. <br></br> You require support for multiple file-storage protocols in a single service, including NFSv3, NFSv4.1, and SMB3.1.x, enables a wide range of application lift-and-shift scenarios, with no need for code changes. 
|-| **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. | +| **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. | | **Azure Elastic SAN** | Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability. | You want large scale storage that is interoperable with multiple types of compute resources (such as SQL, MariaDB, Azure virtual machines, and Azure Kubernetes Services) accessed via the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.| | **Azure Disks** | Allows data to be persistently stored and accessed from an attached virtual hard disk. | You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.<br/><br/>You want to store data that isn't required to be accessed from outside the virtual machine to which the disk is attached. | | **Azure Container Storage**| Azure Container Storage is a volume management, deployment, and orchestration service that integrates with Kubernetes and is built natively for containers. | You want to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters. | |
storage | Storage Network Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md | If your account doesn't have the hierarchical namespace feature enabled on it, y You can use the same technique for an account that has the hierarchical namespace feature enabled on it. However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL) of any directory or blob that the storage account contains. In that case, the scope of access for the instance corresponds to the directory or file to which the managed identity has access. -You can also combine Azure roles and ACLs together to grant access. To learn more, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md). +You can also combine Azure roles and ACLs together to grant access. To learn more, see [Access control model in Azure Data Lake Storage](../blobs/data-lake-storage-access-control-model.md). We recommend that you [use resource instance rules to grant access to specific resources](#grant-access-from-azure-resource-instances). |
storage | Storage Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md | Requests can originate from any of these sources: - Hadoop workloads that use the [Azure Blob File System driver (ABFS)](../blobs/data-lake-storage-abfs-driver.md) driver -- Clients that use [Data Lake Storage Gen2 REST APIs](/rest/api/storageservices/data-lake-storage-gen2) or Data Lake Storage Gen2 APIs from an Azure Storage client library+- Clients that use [Data Lake Storage REST APIs](/rest/api/storageservices/data-lake-storage-gen2) or Data Lake Storage APIs from an Azure Storage client library -The correct pricing page for these requests is the [Azure Data Lake Storage Gen2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. +The correct pricing page for these requests is the [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page. If your account does not have the hierarchical namespace feature enabled, but you expect clients, workloads, or applications to make requests over the Data Lake Storage endpoint of your account, then set the **File Structure** drop-down list to **Flat Namespace**. Otherwise, make sure that it is set to **Hierarchical Namespace**. See any of these articles to itemize and analyze your existing containers and bl #### Reserve storage capacity -You can save money on storage costs for blob data with Azure Storage reserved capacity. Azure Storage reserved capacity offers you a discount on capacity for block blobs and for Azure Data Lake Storage Gen2 data in standard storage accounts when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation. Azure Storage reserved capacity can significantly reduce your capacity costs for block blobs and Azure Data Lake Storage Gen2 data. +You can save money on storage costs for blob data with Azure Storage reserved capacity. Azure Storage reserved capacity offers you a discount on capacity for block blobs and for Azure Data Lake Storage data in standard storage accounts when you commit to a reservation for either one year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation. Azure Storage reserved capacity can significantly reduce your capacity costs for block blobs and Azure Data Lake Storage data. To learn more, see [Optimize costs for Blob Storage with reserved capacity](../blobs/storage-blob-reserved-capacity.md). |
storage | Storage Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md | To create a private endpoint by using PowerShell or the Azure CLI, see either of When you create a private endpoint, you must specify the storage account and the storage service to which it connects. -You need a separate private endpoint for each storage resource that you need to access, namely [Blobs](../blobs/storage-blobs-overview.md), [Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md), [Files](../files/storage-files-introduction.md), [Queues](../queues/storage-queues-introduction.md), [Tables](../tables/table-storage-overview.md), or [Static Websites](../blobs/storage-blob-static-website.md). On the private endpoint, these storage services are defined as the **target sub-resource** of the associated storage account. +You need a separate private endpoint for each storage resource that you need to access, namely [Blobs](../blobs/storage-blobs-overview.md), [Data Lake Storage](../blobs/data-lake-storage-introduction.md), [Files](../files/storage-files-introduction.md), [Queues](../queues/storage-queues-introduction.md), [Tables](../tables/table-storage-overview.md), or [Static Websites](../blobs/storage-blob-static-website.md). On the private endpoint, these storage services are defined as the **target sub-resource** of the associated storage account. -If you create a private endpoint for the Data Lake Storage Gen2 storage resource, then you should also create one for the Blob Storage resource. That's because operations that target the Data Lake Storage Gen2 endpoint might be redirected to the Blob endpoint. Similarly, if you add a private endpoint for Blob Storage only, and not for Data Lake Storage Gen2, some operations (such as Manage ACL, Create Directory, Delete Directory, etc.) will fail since the Gen2 APIs require a DFS private endpoint. By creating a private endpoint for both resources, you ensure that all operations can complete successfully. +If you create a private endpoint for the Data Lake Storage storage resource, then you should also create one for the Blob Storage resource. That's because operations that target the Data Lake Storage endpoint might be redirected to the Blob endpoint. Similarly, if you add a private endpoint for Blob Storage only, and not for Data Lake Storage, some operations (such as Manage ACL, Create Directory, Delete Directory, etc.) will fail since the APIs require a DFS private endpoint. By creating a private endpoint for both resources, you ensure that all operations can complete successfully. > [!TIP] > Create a separate private endpoint for the secondary instance of the storage service for better read performance on RA-GRS accounts. The recommended DNS zone names for private endpoints for storage services, and t | Storage service | Target sub-resource | Zone name | | : | : | :-- | | Blob service | blob | `privatelink.blob.core.windows.net` |-| Data Lake Storage Gen2 | dfs | `privatelink.dfs.core.windows.net` | +| Data Lake Storage | dfs | `privatelink.dfs.core.windows.net` | | File service | file | `privatelink.file.core.windows.net` | | Queue service | queue | `privatelink.queue.core.windows.net` | | Table service | table | `privatelink.table.core.windows.net` | This constraint is a result of the DNS changes made when account A2 creates a pr You can copy blobs between storage accounts by using private endpoints only if you use the Azure REST API, or tools that use the REST API. 
These tools include AzCopy, Storage Explorer, Azure PowerShell, Azure CLI, and the Azure Blob Storage SDKs. -Only private endpoints that target the `blob` or `file` storage resource endpoint are supported. This includes REST API calls against Data Lake Storage Gen2 accounts in which the `blob` resource endpoint is referenced explicitly or implicitly. Private endpoints that target the Data Lake Storage Gen2 `dfs` resource endpoint are not yet supported. Copying between storage accounts by using the Network File System (NFS) protocol is not yet supported. +Only private endpoints that target the `blob` or `file` storage resource endpoint are supported. This includes REST API calls against Data Lake Storage accounts in which the `blob` resource endpoint is referenced explicitly or implicitly. Private endpoints that target the Data Lake Storage `dfs` resource endpoint are not yet supported. Copying between storage accounts by using the Network File System (NFS) protocol is not yet supported. ## Next steps |
storage | Storage Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md | Data in an Azure Storage account is always replicated three times in the primary - **Zone-redundant storage (ZRS)** copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability, Microsoft recommends using ZRS in the primary region, and also replicating to a secondary region. > [!NOTE] -> Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage Gen2 workloads. +> Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage workloads. ### Locally redundant storage |
storage | Storage Ref Azcopy Bench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-bench.md | Runs a performance benchmark by uploading or downloading test data to or from a The benchmark command runs the same process as 'copy', except that: -- Instead of requiring both source and destination parameters, benchmark takes just one. This is the blob container, Azure Files Share, or Azure Data Lake Storage Gen2 file system that you want to upload to or download from.+- Instead of requiring both source and destination parameters, benchmark takes just one. This is the blob container, Azure Files Share, or Azure Data Lake Storage file system that you want to upload to or download from. - The 'mode' parameter describes whether AzCopy should test uploads to or downloads from a given target. Valid values are 'Upload' and 'Download'. Default value is 'Upload'. |
storage | Storage Ref Azcopy Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md | Copies source data to a destination location. The supported directions are: - local <-> Azure Blob (SAS or OAuth authentication) - local <-> Azure Files (Share/directory SAS authentication or OAuth authentication)-- local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)+- local <-> Azure Data Lake Storage (SAS, OAuth, or SharedKey authentication) - Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)-- Azure Data Lake Storage Gen2 (SAS or public) -> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)+- Azure Data Lake Storage (SAS or public) -> Azure Data Lake Storage (SAS or OAuth authentication) - Azure Blob (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication) - See [Guidelines](./storage-use-azcopy-blobs-copy.md#guidelines).-- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)-- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication)+- Azure Data Lake Storage (SAS or OAuth authentication) <-> Azure Data Lake Storage (SAS or OAuth authentication) +- Azure Data Lake Storage (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication) - Azure Blob (SAS or public) -> Azure Files (SAS) - Azure File (SAS or OAuth authentication) <-> Azure File (SAS or OAuth authentication) - Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication) Copy a subset of files modified on or before the given date and time (in ISO8601 `--preserve-smb-permissions` will still preserve ACLs but Owner and Group is based on the user running AzCopy (default true) -`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2). For accounts that have a hierarchical namespace, your security principal must be the owning user of the target container or it must be assigned the Storage Blob Data Owner role, scoped to the target container, storage account, parent resource group, or subscription. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). +`--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or Azure Data Lake Storage to Azure Data Lake Storage). For accounts that have a hierarchical namespace, your security principal must be the owning user of the target container or it must be assigned the Storage Blob Data Owner role, scoped to the target container, storage account, parent resource group, or subscription. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). `--preserve-posix-properties` False by default. Preserves property info gleaned from `stat` or `statx` into object metadata. |
storage | Storage Ref Azcopy List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-list.md | -This command lists accounts, containers, and directories. Blob Storage, Azure Data Lake Storage Gen2, and File Storage are supported. OAuth for Files is currently not supported; please use SAS to authenticate for Files. +This command lists accounts, containers, and directories. Blob Storage, Azure Data Lake Storage, and File Storage are supported. OAuth for Files is currently not supported; please use SAS to authenticate for Files. ## Synopsis |
storage | Storage Ref Azcopy Set Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-set-properties.md | azcopy set-properties [resourceURL] [flags] Sets properties of Blob and File storage. The properties currently supported by this command are: - Blobs -> Tier, Metadata, Tags-- Data Lake Storage Gen2 -> Tier, Metadata, Tags+- Data Lake Storage -> Tier, Metadata, Tags - Files -> Metadata > [!NOTE]-> Data Lake Storage Gen2 endpoints will be replaced by Blob Storage endpoints. +> Data Lake Storage endpoints will be replaced by Blob Storage endpoints. Refer to the examples for more information. |
storage | Storage Samples C Plus Plus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-c-plus-plus.md | The following table provides an overview of our samples repository and the scena :::column-end::: :::row-end::: -## Data Lake Storage Gen2 samples +## Data Lake Storage samples :::row::: :::column::: |
storage | Storage Use Azcopy Authorize Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md | To learn how to verify and assign roles, see [Assign an Azure role for access to You don't need to have one of these roles assigned to your security principal if your security principal is added to the access control list (ACL) of the target container or directory. In the ACL, your security principal needs write permission on the target directory, and execute permission on container and each parent directory. -To learn more, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md). +To learn more, see [Access control model in Azure Data Lake Storage](../blobs/data-lake-storage-access-control-model.md). ## Authorize with AzCopy |
storage | Use Container Storage With Local Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md | kubectl delete sp -n acstor <storage-pool-name> ### Optimize performance when using local NVMe -Depending on your workload's performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Advanced**. Your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier. +Depending on your workload's performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Premium**. Your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier. These three tiers offer a different range of IOPS. The following table contains guidance on what you could expect with each of these tiers. We used [FIO](https://github.com/axboe/fio), a popular benchmarking tool, to achieve these numbers with the following configuration: - AKS: Node SKU - Standard_L16s_v3; These three tiers offer a different range of IOPS. The following table contains | | | | | | `Basic` | 12.5% of total VM cores | Up to 100,000 | Up to 90,000 | | `Standard` (default)| 25% of total VM cores | Up to 200,000 | Up to 180,000 |-| `Advanced` | 50% of total VM cores | Up to 400,000 | Up to 360,000 | +| `Premium` | 50% of total VM cores | Up to 400,000 | Up to 360,000 | > [!NOTE] > RAM and hugepages consumption will stay consistent across all tiers: 1 GiB of RAM and 2 GiB of hugepages. -Once you've identified the performance tier that aligns best to your needs, you can run the following command to update the performance tier of your Azure Container Storage installation. Replace `<performance tier>` with basic, standard, or advanced. +Once you've identified the performance tier that aligns best to your needs, you can run the following command to update the performance tier of your Azure Container Storage installation. Replace `<performance tier>` with basic, standard, or premium. ```azurecli-interactive az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-storage <storage-pool-type> --ephemeral-disk-nvme-perf-tier <performance-tier> |
storage | Use Container Storage With Local Nvme Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-nvme-replication.md | kubectl delete sp -n acstor <storage-pool-name> ### Optimize performance when using local NVMe -Depending on your workload's performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Advanced**. These tiers offer a different range of IOPS, and your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier. +Depending on your workload's performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Premium**. These tiers offer a different range of IOPS, and your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier. | **Tier** | **Number of vCPUs** | ||--| | `Basic` | 12.5% of total VM cores | | `Standard` (default) | 25% of total -| `Advanced` | 50% of total VM cores | +| `Premium` | 50% of total VM cores | > [!NOTE] > RAM and hugepages consumption will stay consistent across all tiers: 1 GiB of RAM and 2 GiB of hugepages. -Once you've identified the performance tier that aligns best to your needs, you can run the following command to update the performance tier of your Azure Container Storage installation. Replace `<performance tier>` with basic, standard, or advanced. +Once you've identified the performance tier that aligns best to your needs, you can run the following command to update the performance tier of your Azure Container Storage installation. Replace `<performance tier>` with basic, standard, or premium. ```azurecli-interactive az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-storage <storage-pool-type> --ephemeral-disk-nvme-perf-tier <performance-tier> |
storage | Storage How To Use Files Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md | description: Learn how to create and use SMB Azure file shares with the Azure po Previously updated : 05/13/2024 Last updated : 08/29/2024 ms.devlang: azurecli az storage account create \ To create an Azure file share: 1. Select the storage account from your dashboard.-1. On the storage account page, in the **Data storage** section, select **File shares**. +1. In the service menu, under **Data storage**, select **File shares**. - ![A screenshot of the data storage section of the storage account; select file shares.](media/storage-how-to-use-files-portal/create-file-share-1.png) + :::image type="content" source="media/storage-how-to-use-files-portal/create-file-share.png" alt-text="Screenshot showing the data storage section of the storage account; select file shares." border="true"::: 1. On the menu at the top of the **File shares** page, select **+ File share**. The **New file share** page drops down. 1. In **Name**, type *myshare*. File share names must be all lower-case letters, numbers, and single hyphens, and must begin and end with a lower-case letter or number. The name can't contain two consecutive hyphens. For details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).-1. Leave **Transaction optimized** selected for **Tier**. +1. Leave **Transaction optimized** selected for **Access tier**. 1. Select the **Backup** tab. By default, [backup is enabled](../../backup/backup-azure-files.md) when you create an Azure file share using the Azure portal. If you want to disable backup for the file share, uncheck the **Enable backup** checkbox. If you want backup enabled, you can either leave the defaults or create a new Recovery Services Vault in the same region and subscription as the storage account. To create a new backup policy, select **Create a new policy**. :::image type="content" source="media/storage-how-to-use-files-portal/create-file-share-backup.png" alt-text="Screenshot showing how to enable or disable file share backup." border="true"::: |
storage | Migration Tools Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md | The following comparison matrix shows basic functionality of different tools tha | **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | | **UID / SID remapping** | No | No | Yes | No | No | | **Protocol ACL remapping** | No | No | No | No | No |-| **Azure Data Lake Storage Gen2** | Yes | No | Yes | Yes | No | +| **Azure Data Lake Storage** | Yes | No | Yes | Yes | No | | **Throttling support** | Yes | No | Yes | No | Yes | | **File pattern exclusions** | Yes | No | Yes | Yes | Yes | | **Support for selective file attributes** | No | No | Yes | Yes | Yes | |
storage | Vs Azure Tools Storage Manage With Storage Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer.md | Storage Explorer provides several ways to connect to Azure resources: ### Attach to an individual resource -Storage Explorer lets you connect to individual resources, such as an Azure Data Lake Storage Gen2 container, using various authentication methods. Some authentication methods are only supported for certain resource types. +Storage Explorer lets you connect to individual resources, such as an Azure Data Lake Storage container, using various authentication methods. Some authentication methods are only supported for certain resource types. | Resource type | Microsoft Entra ID | Account Name and Key | Shared Access Signature (SAS) | Public (anonymous) | ||--|-|--|--| | Storage accounts | Yes | Yes | Yes (connection string or URL) | No | | Blob containers | Yes | No | Yes (URL) | Yes |-| Gen2 containers | Yes | No | Yes (URL) | Yes | -| Gen2 directories | Yes | No | Yes (URL) | Yes | +| Data Lake Storage containers | Yes | No | Yes (URL) | Yes | +| Data Lake Storage directories | Yes | No | Yes (URL) | Yes | | File shares | No | No | Yes (URL) | No | | Queues | Yes | No | Yes (URL) | No | | Tables | Yes | No | Yes (URL) | No | To connect to an individual resource, select the **Connect** button in the left- When a connection to a storage account is successfully added, a new tree node appears under **Local & Attached** > **Storage Accounts**. -For other resource types, a new node is added under **Local & Attached** > **Storage Accounts** > **(Attached Containers)**. The node appears under a group node matching its type. For example, a new connection to an Azure Data Lake Storage Gen2 container appears under **Blob Containers**. +For other resource types, a new node is added under **Local & Attached** > **Storage Accounts** > **(Attached Containers)**. The node appears under a group node matching its type. For example, a new connection to an Azure Data Lake Storage container appears under **Blob Containers**. If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](../common/storage-explorer-troubleshooting.md). The following sections describe the different authentication methods you can use Storage Explorer can use your Azure account to connect to the following resource types: * Blob containers-* Azure Data Lake Storage Gen2 containers -* Azure Data Lake Storage Gen2 directories +* Azure Data Lake Storage containers +* Azure Data Lake Storage directories * Queues Microsoft Entra ID is the preferred option if you have data layer access to your resource but no management layer access. TableEndpoint=https://contoso.table.core.windows.net/; Storage Explorer can connect to the following resource types using a SAS URI: * Blob container-* Azure Data Lake Storage Gen2 container or directory +* Azure Data Lake Storage container or directory * File share * Queue * Table |
virtual-desktop | Configure Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md | description: Learn how to configure single sign-on for an Azure Virtual Desktop Previously updated : 12/15/2023 Last updated : 08/28/2024 # Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID authentication Before you enable single sign-on, review the following information for using it ### Disconnection when the session is locked -When single sign-on is enabled, you sign in to Windows using a Microsoft Entra ID authentication token, which provides support for passwordless authentication to Windows. The Windows lock screen in the remote session doesn't support Microsoft Entra ID authentication tokens or passwordless authentication methods, like FIDO keys. The lack of support for these authentication methods means that users can't unlock their screens in a remote session. When you try to lock a remote session, either through user action or system policy, the session is instead disconnected and the service sends a message to the user explaining they were disconnected. +When single sign-on is enabled and the remote session is locked, either by the user or by policy, the session is instead disconnected and a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they are ready to connect again. This is done for security reasons and to ensure full support of passwordless authentication. Disconnecting the session provides the following benefits: -Disconnecting the session also ensures that when the connection is relaunched after a period of inactivity, Microsoft Entra ID reevaluates any applicable conditional access policies. +- Consistent sign-in experience through Microsoft Entra ID when needed. ++- Single sign-on experience and reconnection without authentication prompt when allowed by conditional access policies. ++- Supports passwordless authentication like passkeys and FIDO2 devices, contrary to the remote lock screen. ++- Conditional access policies, including multifactor authentication and sign-in frequency, are re-evaluated when the user reconnects to their session. ++- Can require multi-factor authentication to return to the session and prevent users from unlocking with a simple username and password. ++If you prefer to show the remote lock screen instead of disconnecting the session, your session hosts must use the following operating systems: ++- Windows 11 single or multi-session with the [2024-05 Cumulative Updates for Windows 11 (KB5037770)](https://support.microsoft.com/kb/KB5037770) or later installed. ++- Windows 10 single or multi-session, versions 21H2 or later with the [2024-06 Cumulative Updates for Windows 10 (KB5039211)](https://support.microsoft.com/kb/KB5039211) or later installed. ++- Windows Server 2022 with the [2024-05 Cumulative Update for Microsoft server operating system (KB5037782)](https://support.microsoft.com/kb/KB5037782) or later installed. ++You can configure the session lock behavior of your session hosts by using Intune, Group Policy, or the registry. ++# [Intune](#tab/intune) ++To configure the session lock experience using Intune, follow these steps. This process creates an Intune [settings catalog](/mem/intune/configuration/settings-catalog) policy. ++1. Sign in to the [Microsoft Intune admin center](https://intune.microsoft.com/). ++1. 
Select **Devices** > **Manage devices** > **Configuration** > **Create** > **New policy**. ++1. Enter the following properties: ++ - **Platform**: Select **Windows 10 and later**. ++ - **Profile type**: Select **Settings catalog**. ++1. Select **Create**. ++1. In **Basics**, enter the following properties: ++ - **Name**: Enter a descriptive name for the profile. Name your profile so you can easily identify it later. ++ - **Description**: Enter a description for the profile. This setting is optional, but recommended. ++1. Select **Next**. ++1. In **Configuration settings**, select **Add settings**. Then: ++ 1. In the settings picker, expand **Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Security**. ++ 1. Select the **Disconnect remote session on lock for Microsoft identity platform authentication** setting. ++ 1. Close the settings picker. ++1. Configure the setting to "Disabled" to show the remote lock screen when the session locks. ++1. Select **Next**. ++1. (Optional) Add the **Scope tags**. For more information about scope tags in Intune, see [Use RBAC roles and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags). ++1. Select **Next**. ++1. For the **Assignments** tab, select the devices, or groups to receive the profile, then select **Next**. For more information on assigning profiles, see [Assign user and device profiles](/mem/intune/configuration/device-profile-assign). ++1. On the **Review + create** tab, review the configuration information, then select **Create**. ++1. Once the policy configuration is created, the setting will take effect after the session hosts sync with Intune and users initiate a new session. ++# [Group Policy](#tab/group-policy) ++To configure the session lock experience using Group Policy, follow these steps. ++1. Open **Local Group Policy Editor** from the Start menu or by running `gpedit.msc`. ++1. Browse to the following policy section: ++ - `Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Security` ++1. Select the **Disconnect remote session on lock for Microsoft identity platform authentication** policy. ++1. Set the policy to **Disabled** to show the remote lock screen when the session locks. ++1. Select **OK** to save your changes. ++1. Once the policy is configured, it will take effect after the user initiates a new session. ++> [!TIP] +> To configure the Group Policy centrally on Active Directory Domain Controllers using Windows Server 2019 or Windows Server 2016, copy the `terminalserver.admx` and `terminalserver.adml` administrative template files from a session host to the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store) on the domain controller. ++# [Registry](#tab/registry) ++To configure the session lock experience using the registry on a session host, follow these steps. ++1. Open **Registry Editor** from the Start menu or by running `regedit.exe`. ++1. Set the following registry key and its value. ++ - **Key**: `HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services` ++ - **Type**: `REG_DWORD` ++ - **Value name**: `fdisconnectonlockmicrosoftidentity` ++ - **Value data**: Enter a value from the following table: ++ | Value Data | Description | + |--|--| + | `0` | Show the remote lock screen. | + | `1` | Disconnect the session. 
| ### Active Directory domain administrator accounts with single sign-on If you need to make changes to a session host as an administrator, sign in to th Before you can enable single sign-on, you must meet the following prerequisites: - To configure your Microsoft Entra tenant, you must be assigned one of the following [Microsoft Entra built-in roles](/entra/identity/role-based-access-control/manage-roles-portal):+ - [Application Administrator](/entra/identity/role-based-access-control/permissions-reference#application-administrator)+ - [Cloud Application Administrator](/entra/identity/role-based-access-control/permissions-reference#cloud-application-administrator)+ - [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) - Your session hosts must be running one of the following operating systems with the relevant cumulative update installed: - Windows 11 Enterprise single or multi-session with the [2022-10 Cumulative Updates for Windows 11 (KB5018418)](https://support.microsoft.com/kb/KB5018418) or later installed.+ - Windows 10 Enterprise single or multi-session with the [2022-10 Cumulative Updates for Windows 10 (KB5018410)](https://support.microsoft.com/kb/KB5018410) or later installed.+ - Windows Server 2022 with the [2022-10 Cumulative Update for Microsoft server operating system (KB5018421)](https://support.microsoft.com/kb/KB5018421) or later installed. - Your session hosts must be [Microsoft Entra joined](/entra/identity/devices/concept-directory-join) or [Microsoft Entra hybrid joined](/entra/identity/devices/concept-hybrid-join). Session hosts joined to Microsoft Entra Domain Services or to Active Directory Domain Services only aren't supported. Before you can enable single sign-on, you must meet the following prerequisites: - A supported Remote Desktop client to connect to a remote session. The following clients are supported: - [Windows Desktop client](users/connect-windows.md) on local PCs running Windows 10 or later. There's no requirement for the local PC to be joined to Microsoft Entra ID or an Active Directory domain.+ - [Web client](users/connect-web.md).+ - [macOS client](users/connect-macos.md), version 10.8.2 or later.+ - [iOS client](users/connect-ios-ipados.md), version 10.5.1 or later.+ - [Android client](users/connect-android-chrome-os.md), version 10.0.16 or later. - To allow an Active Directory domain administrator account to connect when single sign-on is enabled, you need an account that is a member of the **Domain Admins** security group. To configure the service principal, use the [Microsoft Graph PowerShell SDK](/po id True ``` -## Configure the target device groups +## Hide the consent prompt dialog -After you enable Microsoft Entra authentication for RDP, you need to configure the target device groups. By default when enabling single sign-on, users are prompted to authenticate to Microsoft Entra ID and allow the Remote Desktop connection when launching a connection to a new session host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If you see a dialogue to allow the Remote Desktop connection, select **Yes** to connect. +By default when single sign-on is enabled, users will see a dialog to allow the Remote Desktop connection when connecting to a new session host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If users see this dialog to allow the Remote Desktop connection, they can select **Yes** to connect. 
-You can hide this dialog and provide single sign-on for connections to all your session hosts by configuring a list of trusted devices. You need to create one or more groups in Microsoft Entra ID that contains your session hosts, then set a property on the service principals for the same *Microsoft Remote Desktop* and *Windows Cloud Login* applications, as used in the previous section, for the group. +You can hide this dialog by configuring a list of trusted devices. To configure the list of devices, create one or more groups in Microsoft Entra ID that contain your session hosts, then add the group IDs to a property on the SSO service principals, *Microsoft Remote Desktop* and *Windows Cloud Login*. > [!TIP]-> We recommend you use a dynamic group and configure the dynamic membership rules to includes all your Azure Virtual Desktop session hosts. You can use the device names in this group, but for a more secure option, you can set and use [device extension attributes](/graph/extensibility-overview) using [Microsoft Graph API](/graph/api/resources/device). While dynamic groups normally update within 5-10 minutes, large tenants can take up to 24 hours. +> We recommend you use a dynamic group and configure the dynamic membership rules to include all your Azure Virtual Desktop session hosts. You can use the device names in this group, but for a more secure option, you can set and use [device extension attributes](/graph/extensibility-overview) using [Microsoft Graph API](/graph/api/resources/device). While dynamic groups normally update within 5-10 minutes, large tenants can take up to 24 hours. > > Dynamic groups requires the Microsoft Entra ID P1 license or Intune for Education license. For more information, see [Dynamic membership rules for groups](/entra/identity/users/groups-dynamic-membership). To configure the service principal, use the [Microsoft Graph PowerShell SDK](/po If your session hosts meet the following criteria, you must [Create a Kerberos Server object](../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#create-a-kerberos-server-object): - Your session host is Microsoft Entra hybrid joined. You must have a Kerberos Server object to complete authentication to a domain controller.+ - Your session host is Microsoft Entra joined and your environment contains Active Directory domain controllers. You must have a Kerberos Server object for users to access on-premises resources, such as SMB shares, and Windows-integrated authentication to websites. > [!IMPORTANT] When single sign-on is enabled, a new Microsoft Entra ID app is introduced to au To enable single sign-on on your host pool, you must configure the following RDP property, which you can do using the Azure portal or PowerShell. You can find the steps to configure RDP properties in [Customize Remote Desktop Protocol (RDP) properties for a host pool](customize-rdp-properties.md). - In the Azure portal, set **Microsoft Entra single sign-on** to **Connections will use Microsoft Entra authentication to provide single sign-on**.+ - For PowerShell, set the **enablerdsaadauth** property to **1**, as shown in the sketch after this list. 
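The following is a minimal sketch of setting that property with the `Az.DesktopVirtualization` PowerShell module. The resource group name `rg-avd` and host pool name `hp-avd` are placeholders for your own values, and because `-CustomRdpProperty` replaces the entire custom RDP property string, the sketch appends to any properties that are already set.

```azurepowershell-interactive
# Sketch: enable Microsoft Entra single sign-on on a host pool by adding the
# enablerdsaadauth:i:1 RDP property. Requires the Az.DesktopVirtualization module
# and an existing sign-in with Connect-AzAccount. Names below are placeholders.
$resourceGroup = "rg-avd"
$hostPoolName  = "hp-avd"

# Read the current custom RDP properties so they aren't overwritten.
$hostPool = Get-AzWvdHostPool -ResourceGroupName $resourceGroup -Name $hostPoolName
$rdpProperties = $hostPool.CustomRdpProperty

# Append the single sign-on property, keeping the semicolon-separated format.
if ($rdpProperties -and -not $rdpProperties.EndsWith(';')) {
    $rdpProperties += ';'
}
$rdpProperties += 'enablerdsaadauth:i:1;'

Update-AzWvdHostPool -ResourceGroupName $resourceGroup -Name $hostPoolName -CustomRdpProperty $rdpProperties
```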
## Next steps - Check out [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication) to learn how to enable passwordless authentication.+ - For more information about Microsoft Entra Kerberos, see [Deep dive: How Microsoft Entra Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889)+ - If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./users/connect-windows.md).+ - If you're accessing Azure Virtual Desktop from our web client, see [Connect with the web client](./users/connect-web.md).+ - If you encounter any issues, go to [Troubleshoot connections to Microsoft Entra joined VMs](troubleshoot-azure-ad-connections.md). |
virtual-network | Virtual Network Service Endpoint Policies Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-portal.md | - Title: Create and associate service endpoint policies - Azure portal- -description: In this article, learn how to set up and associated service endpoint policies using the Azure portal. ---- Previously updated : 02/21/2020----# Create, change, or delete service endpoint policy using the Azure portal --Service endpoint policies enable you to filter virtual network traffic to specific Azure resources, over service endpoints. If you're not familiar with service endpoint policies, see [service endpoint policies overview](virtual-network-service-endpoint-policies-overview.md) to learn more. -- In this tutorial, you learn how to: --> [!div class="checklist"] -> * Create a service endpoint policy -> * Create a service endpoint policy definition -> * Create a virtual network with a subnet -> * Associate a service endpoint policy to a subnet --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). --## Create a service endpoint policy --1. Select **+ Create a resource** on the upper, left corner of the Azure portal. -2. In search pane, type "service endpoint policy" and select **Service endpoint policy** and then select **Create**. --![Create service endpoint policy](./media/virtual-network-service-endpoint-policies-portal/create-sep-resource.png) --3. Enter, or select, the following information in **Basics** -- - Subscription : Select your subscription for policy - - Resource group : Select **Create new** and enter *myResourceGroup* - - Name : myEndpointPolicy - - Location : Central US - - ![Create service endpoint policy basics](./media/virtual-network-service-endpoint-policies-portal/create-sep-basics.png) --4. Select **+ Add** under **Resources** and enter or select the following information in **Add a resource** pane -- - Service : Only **Microsoft.Storage** is available with Service Endpoint Policies - - Scope : Select one out of **Single Account**, **All accounts in subscription** and **All accounts in resource group** - - Subscription : Select your subscription for storage account. Policy and storage accounts can be in different subscriptions. - - Resource group : Select your resource group. Required, if scope is set as, "All accounts in resource group" or "Single account". - - Resource : Select your Azure Storage resource under the selected Subscription or Resource Group - - Click on **Add** button at bottom to finish adding the resource -- ![Service endpoint policy definition - resource](./media/virtual-network-service-endpoint-policies-portal/create-sep-add-resource.png) -- - Add more resources by repeating the above steps as needed --5. Optional: Enter or select, the following information in **Tags**: - - - Key : Select your key for the policy. Ex: Dept - - Value : Enter value pair for the key. Ex: Finance --6. Select **Review + Create**. Validate the information and Click **Create**. To make further edits, click **Previous**. -- ![Create service endpoint policy final validations](./media/virtual-network-service-endpoint-policies-portal/create-sep-review-create.png) - -## View endpoint policies --1. In the *All services* box in the portal, begin typing *service endpoint policies*. Select **Service Endpoint Policies**. -2. 
Under **Subscriptions**, select your subscription and resource group, as shown in the following picture -- ![Show policy](./media/virtual-network-service-endpoint-policies-portal/sep-view.png) - -3. Select the policy and click on **Policy Definitions** to view or add more policy definitions. -- ![Show policy definitions](./media/virtual-network-service-endpoint-policies-portal/sep-policy-definition.png) --4. Select **Associated subnets** to view the subnets the policy is associated. If no subnet is associated yet, follow the instructions in the next step. -- ![Associated subnets](./media/virtual-network-service-endpoint-policies-portal/sep-associated-subnets.png) - -5. Associate a policy to a subnet -->[!WARNING] -> Ensure that all the resources accessed from the subnet are added to the policy definition before associating the policy to the given subnet. Once the policy is associated, only access to the *allow listed* resources will be allowed over service endpoints. -> -> Also ensure that no managed Azure services exist in the subnet that is being associated to the service endpoint policy --- Before you can associate a policy to a subnet, you have to create a virtual network and subnet. Please refer to the [Create a Virtual Network](./quick-create-portal.md) article for help with this.--- Once you have the virtual network and subnet are setup, you need to configure Virtual Network Service Endpoints for Azure Storage. On the Virtual Network blade, select **Service endpoints**, and in the next pane select **Microsoft.Storage** and under **Subnets** select the desired VNet or Subnet--- Now, you can either choose to select the Service Endpoint Policy from the drop-down in the above pane if you have already created Service Endpoint policies before configuring Service Endpoint for the Subnet as shown below-- ![Associate subnet while creating service endpoint](./media/virtual-network-service-endpoint-policies-portal/vnet-config-service-endpoint-add-sep.png) --- OR if you are associating Service Endpoint policies after Service Endpoints are already configured, you can choose to associate the subnet from within the Service Endpoint Policy blade by navigating to the **Associated Subnets** pane as shown below-- ![Associate subnet via SEP](./media/virtual-network-service-endpoint-policies-portal/sep-edit-subnet-association.png) -->[!WARNING] ->Access to Azure Storage resources in all regions will be restricted as per Service Endpoint Policy from this subnet. --## Next steps -In this tutorial, you created a service endpoint policy and associated it to a subnet. To learn more about service endpoint policies, see [service endpoint policies overview.](virtual-network-service-endpoint-policies-overview.md) |
virtual-network | Virtual Network Service Endpoint Policies Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-powershell.md | - Title: Restrict data exfiltration to Azure Storage - Azure PowerShell -description: In this article, you learn how to limit and restrict virtual network data exfiltration to Azure Storage resources with virtual network service endpoint policies using Azure PowerShell. ----- Previously updated : 02/03/2020---# Customer intent: I want only resources in a virtual network subnet to access an Azure PaaS resource, such as an Azure Storage account. ---# Manage data exfiltration to Azure Storage accounts with Virtual network service endpoint policies using Azure PowerShell --Virtual network service endpoint policies enable you to apply access control on Azure Storage accounts from within a virtual network over service endpoints. This is a key to securing your workloads, managing what storage accounts are allowed and where data exfiltration is allowed. -In this article, you learn how to: --* Create a virtual network. -* Add a subnet and enable service endpoint for Azure Storage. -* Create two Azure Storage accounts and allow network access to it from the subnet created above. -* Create a service endpoint policy to allow access only to one of the storage accounts. -* Deploy a virtual machine (VM) to the subnet. -* Confirm access to the allowed storage account from the subnet. -* Confirm access is denied to the non-allowed storage account from the subnet. ---If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ---If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. --## Create a virtual network --Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *myResourceGroup*: --```azurepowershell-interactive -New-AzResourceGroup ` - -ResourceGroupName myResourceGroup ` - -Location EastUS -``` --Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *myVirtualNetwork* with the address prefix *10.0.0.0/16*. --```azurepowershell-interactive -$virtualNetwork = New-AzVirtualNetwork ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name myVirtualNetwork ` - -AddressPrefix 10.0.0.0/16 -``` --## Enable a service endpoint --Create a subnet in the virtual network. 
In this example, a subnet named *Private* is created with a service endpoint for *Microsoft.Storage*: --```azurepowershell-interactive -$subnetConfigPrivate = Add-AzVirtualNetworkSubnetConfig ` - -Name Private ` - -AddressPrefix 10.0.0.0/24 ` - -VirtualNetwork $virtualNetwork ` - -ServiceEndpoint Microsoft.Storage --$virtualNetwork | Set-AzVirtualNetwork -``` --## Restrict network access for the subnet --Create network security group security rules with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig). The following rule allows outbound access to the public IP addresses assigned to the Azure Storage service: --```azurepowershell-interactive -$rule1 = New-AzNetworkSecurityRuleConfig ` - -Name Allow-Storage-All ` - -Access Allow ` - -DestinationAddressPrefix Storage ` - -DestinationPortRange * ` - -Direction Outbound ` - -Priority 100 -Protocol * ` - -SourceAddressPrefix VirtualNetwork ` - -SourcePortRange * -``` --The following rule denies access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. --```azurepowershell-interactive -$rule2 = New-AzNetworkSecurityRuleConfig ` - -Name Deny-Internet-All ` - -Access Deny ` - -DestinationAddressPrefix Internet ` - -DestinationPortRange * ` - -Direction Outbound ` - -Priority 110 -Protocol * ` - -SourceAddressPrefix VirtualNetwork ` - -SourcePortRange * -``` --The following rule allows Remote Desktop Protocol (RDP) traffic inbound to the subnet from anywhere. Remote desktop connections are allowed to the subnet, so that you can confirm network access to a resource in a later step. --```azurepowershell-interactive -$rule3 = New-AzNetworkSecurityRuleConfig ` - -Name Allow-RDP-All ` - -Access Allow ` - -DestinationAddressPrefix VirtualNetwork ` - -DestinationPortRange 3389 ` - -Direction Inbound ` - -Priority 120 ` - -Protocol * ` - -SourceAddressPrefix * ` - -SourcePortRange * -``` --Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). The following example creates a network security group named *myNsgPrivate*. --```azurepowershell-interactive -$nsg = New-AzNetworkSecurityGroup ` - -ResourceGroupName myResourceGroup ` - -Location EastUS ` - -Name myNsgPrivate ` - -SecurityRules $rule1,$rule2,$rule3 -``` --Associate the network security group to the *Private* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) and then write the subnet configuration to the virtual network. The following example associates the *myNsgPrivate* network security group to the *Private* subnet: --```azurepowershell-interactive -Set-AzVirtualNetworkSubnetConfig ` - -VirtualNetwork $VirtualNetwork ` - -Name Private ` - -AddressPrefix 10.0.0.0/24 ` - -ServiceEndpoint Microsoft.Storage ` - -NetworkSecurityGroup $nsg --$virtualNetwork | Set-AzVirtualNetwork -``` --## Restrict network access to Azure Storage accounts --The steps necessary to restrict network access to resources created through Azure services enabled for service endpoints varies across services. See the documentation for individual services for specific steps for each service. The remainder of this article includes steps to restrict network access for an Azure Storage account, as an example. 
--### Create two storage accounts --Create an Azure storage account with [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount). --```azurepowershell-interactive -$storageAcctName1 = 'allowedaccount' --New-AzStorageAccount ` - -Location EastUS ` - -Name $storageAcctName1 ` - -ResourceGroupName myResourceGroup ` - -SkuName Standard_LRS ` - -Kind StorageV2 -``` --After the storage account is created, retrieve the key for the storage account into a variable with [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey): --```azurepowershell-interactive -$storageAcctKey1 = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -AccountName $storageAcctName1).Value[0] -``` --The key is used to create a file share in a later step. Enter `$storageAcctKey` and note the value, as you'll also need to manually enter it in a later step when you map the file share to a drive in a VM. --Now repeat the above steps to create a second storage account. --```azurepowershell-interactive -$storageAcctName2 = 'notallowedaccount' --New-AzStorageAccount ` - -Location EastUS ` - -Name $storageAcctName2 ` - -ResourceGroupName myResourceGroup ` - -SkuName Standard_LRS ` - -Kind StorageV2 -``` --Also retrieve the storage account key from this account for using later to create a file share. --```azurepowershell-interactive -$storageAcctKey2 = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -AccountName $storageAcctName2).Value[0] -``` --### Create a file share in each of the storage account --Create a context for your storage account and key with [New-AzStorageContext](/powershell/module/az.storage/new-AzStoragecontext). The context encapsulates the storage account name and account key: --```azurepowershell-interactive -$storageContext1 = New-AzStorageContext $storageAcctName1 $storageAcctKey1 --$storageContext2 = New-AzStorageContext $storageAcctName2 $storageAcctKey2 -``` --Create a file share with [New-AzStorageShare](/powershell/module/az.storage/new-azstorageshare): --```azurepowershell-interactive -$share1 = New-AzStorageShare my-file-share -Context $storageContext1 --$share2 = New-AzStorageShare my-file-share -Context $storageContext2 -``` --### Deny all network access to a storage accounts --By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [Update-AzStorageAccountNetworkRuleSet](/powershell/module/az.storage/update-azstorageaccountnetworkruleset). Once network access is denied, the storage account is not accessible from any network. 
--```azurepowershell-interactive -Update-AzStorageAccountNetworkRuleSet ` - -ResourceGroupName myresourcegroup ` - -Name $storageAcctName1 ` - -DefaultAction Deny --Update-AzStorageAccountNetworkRuleSet ` - -ResourceGroupName myresourcegroup ` - -Name $storageAcctName2 ` - -DefaultAction Deny -``` --### Enable network access only from the VNet subnet --Retrieve the created virtual network with [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and then retrieve the private subnet object into a variable with [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig): --```azurepowershell-interactive -$privateSubnet = Get-AzVirtualNetwork ` - -ResourceGroupName myResourceGroup ` - -Name myVirtualNetwork ` - | Get-AzVirtualNetworkSubnetConfig -Name Private -``` --Allow network access to the storage account from the *Private* subnet with [Add-AzStorageAccountNetworkRule](/powershell/module/az.network/add-aznetworksecurityruleconfig). --```azurepowershell-interactive -Add-AzStorageAccountNetworkRule ` - -ResourceGroupName myresourcegroup ` - -Name $storageAcctName1 ` - -VirtualNetworkResourceId $privateSubnet.Id --Add-AzStorageAccountNetworkRule ` - -ResourceGroupName myresourcegroup ` - -Name $storageAcctName2 ` - -VirtualNetworkResourceId $privateSubnet.Id -``` --## Apply policy to allow access to valid storage account --To make sure the users in the virtual network can only access the Azure Storage accounts that are safe and allowed, you can create a Service endpoint policy with the list of allowed storage accounts in the definition. This policy is then applied to the virtual network subnet which is connected to storage via service endpoints. --### Create a service endpoint policy --This section creates the policy definition with the list of allowed resources for access over service endpoint --Retrieve the resource ID for the first (allowed) storage account --```azurepowershell-interactive -$resourceId = (Get-AzStorageAccount -ResourceGroupName myresourcegroup -Name $storageAcctName1).id -``` --Create the policy definition to allow the above resource --```azurepowershell-interactive -$policyDefinition = New-AzServiceEndpointPolicyDefinition -Name mypolicydefinition ` - -Description "Service Endpoint Policy Definition" ` - -Service "Microsoft.Storage" ` - -ServiceResource $resourceId -``` --Create the service endpoint policy using the policy definition created above --```azurepowershell-interactive -$sepolicy = New-AzServiceEndpointPolicy -ResourceGroupName myresourcegroup ` - -Name mysepolicy -Location EastUS - -ServiceEndpointPolicyDefinition $policyDefinition -``` --### Associate the service endpoint policy to the virtual network subnet --After creating the service endpoint policy, you'll associate it with the target subnet with the service endpoint configuration for Azure Storage. --```azurepowershell-interactive -Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $VirtualNetwork ` - -Name Private ` - -AddressPrefix 10.0.0.0/24 ` - -NetworkSecurityGroup $nsg ` - -ServiceEndpoint Microsoft.Storage ` - -ServiceEndpointPolicy $sepolicy --$virtualNetwork | Set-AzVirtualNetwork -``` -## Validate access restriction to Azure Storage accounts --### Deploy the virtual machine --To test network access to a storage account, deploy a VM in the subnet. --Create a virtual machine in the *Private* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). When running the command that follows, you are prompted for credentials. 
The values that you enter are configured as the user name and password for the VM. The `-AsJob` option creates the VM in the background, so that you can continue to the next step. --```azurepowershell-interactive -New-AzVm -ResourceGroupName myresourcegroup ` - -Location "East US" ` - -VirtualNetworkName myVirtualNetwork ` - -SubnetName Private ` - -Name "myVMPrivate" -AsJob -``` --Output similar to the following example output is returned: --```powershell -Id Name PSJobTypeName State HasMoreData Location Command - - -- -- -- --1 Long Running... AzureLongRun... Running True localhost New-AzVM -``` --### Confirm access to the *allowed* storage account --Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to return the public IP address of a VM. The following example returns the public IP address of the *myVmPrivate* VM: --```azurepowershell-interactive -Get-AzPublicIpAddress ` - -Name myVmPrivate ` - -ResourceGroupName myResourceGroup ` - | Select IpAddress -``` --Replace `<publicIpAddress>` in the following command, with the public IP address returned from the previous command, and then enter the following command: --```powershell -mstsc /v:<publicIpAddress> -``` --A Remote Desktop Protocol (.rdp) file is created and downloaded to your computer. Open the downloaded rdp file. If prompted, select **Connect**. Enter the user name and password you specified when creating the VM. You may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM. Select **OK**. You may receive a certificate warning during the sign-in process. If you receive the warning, select **Yes** or **Continue**, to proceed with the connection. --On the *myVmPrivate* VM, map the Azure file share from allowed storage account to drive Z using PowerShell. --```powershell -$acctKey = ConvertTo-SecureString -String $storageAcctKey1 -AsPlainText -Force -$credential = New-Object System.Management.Automation.PSCredential -ArgumentList ("Azure\allowedaccount"), $acctKey -New-PSDrive -Name Z -PSProvider FileSystem -Root "\\allowedaccount.file.core.windows.net\my-file-share" -Credential $credential -``` --PowerShell returns output similar to the following example output: --```powershell -Name Used (GB) Free (GB) Provider Root -- -- --Z FileSystem \\allowedaccount.file.core.windows.net\my-f... -``` --The Azure file share successfully mapped to the Z drive. --Close the remote desktop session to the *myVmPrivate* VM. --### Confirm access is denied to *non-allowed* storage account --On the same *myVmPrivate* VM, attempt to map the Azure file share to drive X. --```powershell -$acctKey = ConvertTo-SecureString -String $storageAcctKey1 -AsPlainText -Force -$credential = New-Object System.Management.Automation.PSCredential -ArgumentList "Azure\notallowedaccount", $acctKey -New-PSDrive -Name X -PSProvider FileSystem -Root "\\notallowedaccount.file.core.windows.net\my-file-share" -Credential $credential -``` --Access to the share is denied, and you receive a `New-PSDrive : Access is denied` error. Access is denied because the storage account *notallowedaccount* is not in the allowed resources list in the service endpoint policy. --Close the remote desktop session to the *myVmPublic* VM. 
--## Clean up resources --When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains: --```azurepowershell-interactive -Remove-AzResourceGroup -Name myResourceGroup -Force -``` --## Next steps --In this article, you applied a service endpoint policy over an Azure virtual network service endpoint to Azure Storage. You created Azure Storage accounts and limited network access to only certain storage accounts (and thus denied others) from a virtual network subnet. To learn more about service endpoint policies, see [Service endpoints policies overview](virtual-network-service-endpoint-policies-overview.md). |
virtual-network | Virtual Network Service Endpoint Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies.md | + + Title: Create and associate service endpoint policies ++description: In this article, learn how to set up and associate service endpoint policies. +++ Last updated : 08/20/2024+++ - devx-track-azurecli + - devx-track-azurepowershell +content_well_notification: + - AI-contribution +ai-usage: ai-assisted +++# Create and associate service endpoint policies ++Service endpoint policies enable you to filter virtual network traffic to specific Azure resources, over service endpoints. If you're not familiar with service endpoint policies, see [service endpoint policies overview](virtual-network-service-endpoint-policies-overview.md) to learn more. ++ In this tutorial, you learn how to: ++> [!div class="checklist"] +* Create a virtual network. +* Add a subnet and enable service endpoint for Azure Storage. +* Create two Azure Storage accounts and allow network access to it from the subnet in the virtual network. +* Create a service endpoint policy to allow access only to one of the storage accounts. +* Deploy a virtual machine (VM) to the subnet. +* Confirm access to the allowed storage account from the subnet. +* Confirm access is denied to the nonallowed storage account from the subnet. ++## Prerequisites ++### [Portal](#tab/portal) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++### [PowerShell](#tab/powershell) ++- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +++If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ++### [CLI](#tab/cli) ++++- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ++++## Create a virtual network and enable service endpoint ++Create a virtual network to contain the resources you create in this tutorial. ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Virtual networks**. Select **Virtual networks** in the search results. ++1. Select **+ Create** to create a new virtual network. ++1. Enter or select the following information in the **Basics** tab of **Create virtual network**. ++ | Setting | Value | + | -| - | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **Create new**.</br> Enter **test-rg** in **Name**.</br> Select **OK**. | + | Name | Enter **vnet-1**. | + | Region | Select **West US 2**. | ++1. Select **Next**. ++1. Select **Next**. ++1. In the **IP addresses** tab, in **Subnets**, select the **default** subnet. ++1. Enter or select the following information in **Edit subnet**. ++ | Setting | Value | + | -| - | + | Name | Enter **subnet-1**. | + | **Service Endpoints** | | + | **Services** | | + | In the pull-down menu, select **Microsoft.Storage**. | ++1. Select **Save**. ++1. Select **Review + Create**. ++1. Select **Create**. 
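After the deployment completes, you can optionally confirm that the **Microsoft.Storage** service endpoint is enabled on the subnet. The following is a minimal sketch using Azure PowerShell, assuming you're signed in and using the `test-rg`, `vnet-1`, and `subnet-1` names from this tutorial.

```azurepowershell-interactive
# List the service endpoints configured on subnet-1.
$subnet = Get-AzVirtualNetwork -Name vnet-1 -ResourceGroupName test-rg |
    Get-AzVirtualNetworkSubnetConfig -Name subnet-1

# The Service column of the output should include Microsoft.Storage.
$subnet.ServiceEndpoints
```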
++### [PowerShell](#tab/powershell) ++Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *test-rg*: ++```azurepowershell-interactive +$rg = @{ + ResourceGroupName = "test-rg" + Location = "westus2" +} +New-AzResourceGroup @rg +``` ++Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *vnet-1* with the address prefix *10.0.0.0/16*. ++```azurepowershell-interactive +$vnet = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + Name = "vnet-1" + AddressPrefix = "10.0.0.0/16" +} +$virtualNetwork = New-AzVirtualNetwork @vnet +``` ++Create a subnet configuration with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig), and then write the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork). The following example adds a subnet named _subnet-1_ to the virtual network and creates the service endpoint for *Microsoft.Storage*. ++```azurepowershell-interactive +$subnet = @{ + Name = "subnet-1" + VirtualNetwork = $virtualNetwork + AddressPrefix = "10.0.0.0/24" + ServiceEndpoint = "Microsoft.Storage" +} +Add-AzVirtualNetworkSubnetConfig @subnet ++$virtualNetwork | Set-AzVirtualNetwork +``` ++### [CLI](#tab/cli) ++Before creating a virtual network, you have to create a resource group for the virtual network, and all other resources created in this article. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *test-rg* in the *westus2* location. ++```azurecli-interactive +az group create \ + --name test-rg \ + --location westus2 +``` ++Create a virtual network with one subnet with [az network vnet create](/cli/azure/network/vnet). ++```azurecli-interactive +az network vnet create \ + --name vnet-1 \ + --resource-group test-rg \ + --address-prefix 10.0.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefix 10.0.0.0/24 +``` ++In this example, a service endpoint for `Microsoft.Storage` is created for the subnet *subnet-1*: ++```azurecli-interactive +az network vnet subnet create \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name subnet-1 \ + --address-prefix 10.0.0.0/24 \ + --service-endpoints Microsoft.Storage +``` ++++## Restrict network access for the subnet ++Create a network security group and rules that restrict network access for the subnet. ++### Create a network security group ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Network security groups**. Select **Network security groups** in the search results. ++1. Select **+ Create** to create a new network security group. ++1. In the **Basics** tab of **Create network security group**, enter, or select the following information. ++ | Setting | Value | + | -| - | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | Name | Enter **nsg-1**. | + | Region | Select **West US 2**. | ++1. Select **Review + Create**. ++1. Select **Create**. ++### Create network security group rules ++1. In the search box in the portal, enter **Network security groups**. Select **Network security groups** in the search results. ++1. 
Select **nsg-1**. ++1. Expand **Settings**. Select **Outbound security rules**. ++1. Select **+ Add** to add a new outbound security rule. ++1. In **Add outbound security rule**, enter or select the following information. ++ | Setting | Value | + | -| - | + | Source | Select **Service Tag**. | + | Source service tag | Select **VirtualNetwork**. | + | Source port ranges | Enter **\***. | + | Destination | Select **Service Tag**. | + | Destination service tag | Select **Storage**. | + | Service | Select **Custom**. | + | Destination port ranges | Enter **\***. | + | Protocol | Select **Any**. | + | Action | Select **Allow**. | + | Priority | Enter **100**. | + | Name | Enter **allow-storage-all**. | ++1. Select **Add**. ++1. Select **+ Add** to add another outbound security rule. ++1. In **Add outbound security rule**, enter or select the following information. ++ | Setting | Value | + | -| - | + | Source | Select **Service Tag**. | + | Source service tag | Select **VirtualNetwork**. | + | Source port ranges | Enter **\***. | + | Destination | Select **Service Tag**. | + | Destination service tag | Select **Internet**. | + | Service | Select **Custom**. | + | Destination port ranges | Enter **\***. | + | Protocol | Select **Any**. | + | Action | Select **Deny**. | + | Priority | Enter **110**. | + | Name | Enter **deny-internet-all**. | ++1. Select **Add**. ++1. Expand **Settings**. Select **Subnets**. ++1. Select **Associate**. ++1. In **Associate subnet**, enter or select the following information. ++ | Setting | Value | + | -| - | + | Virtual network | Select **vnet-1 (test-rg)**. | + | Subnet | Select **subnet-1**. | ++1. Select **OK**. ++### [PowerShell](#tab/powershell) ++Create network security group security rules with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig). The following rule allows outbound access to the public IP addresses assigned to the Azure Storage service: ++```azurepowershell-interactive +$r1 = @{ + Name = "Allow-Storage-All" + Access = "Allow" + DestinationAddressPrefix = "Storage" + DestinationPortRange = "*" + Direction = "Outbound" + Priority = 100 + Protocol = "*" + SourceAddressPrefix = "VirtualNetwork" + SourcePortRange = "*" +} +$rule1 = New-AzNetworkSecurityRuleConfig @r1 +``` ++The following rule denies access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. ++```azurepowershell-interactive +$r2 = @{ + Name = "Deny-Internet-All" + Access = "Deny" + DestinationAddressPrefix = "Internet" + DestinationPortRange = "*" + Direction = "Outbound" + Priority = 110 + Protocol = "*" + SourceAddressPrefix = "VirtualNetwork" + SourcePortRange = "*" +} +$rule2 = New-AzNetworkSecurityRuleConfig @r2 +``` ++Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). The following example creates a network security group named *nsg-1*. ++```azurepowershell-interactive +$securityRules = @($rule1, $rule2) ++$nsgParams = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + Name = "nsg-1" + SecurityRules = $securityRules +} +$nsg = New-AzNetworkSecurityGroup @nsgParams +``` ++Associate the network security group to the *subnet-1* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) and then write the subnet configuration to the virtual network. 
The following example associates the *nsg-1* network security group to the *subnet-1* subnet: ++```azurepowershell-interactive +$subnetConfig = @{ + VirtualNetwork = $VirtualNetwork + Name = "subnet-1" + AddressPrefix = "10.0.0.0/24" + ServiceEndpoint = "Microsoft.Storage" + NetworkSecurityGroup = $nsg +} +Set-AzVirtualNetworkSubnetConfig @subnetConfig ++$virtualNetwork | Set-AzVirtualNetwork +``` ++### [CLI](#tab/cli) ++Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *nsg-1*. ++```azurecli-interactive +az network nsg create \ + --resource-group test-rg \ + --name nsg-1 +``` ++Associate the network security group to the *subnet-1* subnet with [az network vnet subnet update](/cli/azure/network/vnet/subnet). The following example associates the *nsg-1* network security group to the *subnet-1* subnet: ++```azurecli-interactive +az network vnet subnet update \ + --vnet-name vnet-1 \ + --name subnet-1 \ + --resource-group test-rg \ + --network-security-group nsg-1 +``` ++Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The rule that follows allows outbound access to the public IP addresses assigned to the Azure Storage service: ++```azurecli-interactive +az network nsg rule create \ + --resource-group test-rg \ + --nsg-name nsg-1 \ + --name Allow-Storage-All \ + --access Allow \ + --protocol "*" \ + --direction Outbound \ + --priority 100 \ + --source-address-prefix "VirtualNetwork" \ + --source-port-range "*" \ + --destination-address-prefix "Storage" \ + --destination-port-range "*" +``` ++Each network security group contains several [default security rules](./network-security-groups-overview.md#default-security-rules). The rule that follows overrides a default security rule that allows outbound access to all public IP addresses. The `destination-address-prefix "Internet"` option denies outbound access to all public IP addresses. The previous rule overrides this rule, due to its higher priority, which allows access to the public IP addresses of Azure Storage. ++```azurecli-interactive +az network nsg rule create \ + --resource-group test-rg \ + --nsg-name nsg-1 \ + --name Deny-Internet-All \ + --access Deny \ + --protocol "*" \ + --direction Outbound \ + --priority 110 \ + --source-address-prefix "VirtualNetwork" \ + --source-port-range "*" \ + --destination-address-prefix "Internet" \ + --destination-port-range "*" +``` ++++## Restrict network access to Azure Storage accounts ++The steps necessary to restrict network access to resources created through Azure services enabled for service endpoints varies across services. See the documentation for individual services for specific steps for each service. The remainder of this article includes steps to restrict network access for an Azure Storage account, as an example. ++### Create two storage accounts ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Storage accounts**. Select **Storage accounts** in the search results. ++1. Select **+ Create** to create a new storage account. ++1. In **Create a storage account**, enter or select the following information. ++ | Setting | Value | + | -| - | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Storage account name | Enter **allowedaccount(random-number)**.</br> **Note: The storage account name must be unique. 
Add a random number to the end of the name `allowedaccount`**. | + | Region | Select **West US 2**. | + | Performance | Select **Standard**. | + | Redundancy | Select **Locally-redundant storage (LRS)**. | ++1. Select **Next** until you reach the **Data protection** tab. ++1. In **Recovery**, deselect all of the options. ++1. Select **Review + Create**. ++1. Select **Create**. ++1. Repeat the previous steps to create another storage account with the following information. ++ | Setting | Value | + | -| - | + | Storage account name | Enter **deniedaccount(random-number)**. | ++### [PowerShell](#tab/powershell) ++Create the allowed Azure storage account with [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount). ++```azurepowershell-interactive +$storageAcctParams = @{ + Location = 'westus2' + Name = 'allowedaccount' + ResourceGroupName = 'test-rg' + SkuName = 'Standard_LRS' + Kind = 'StorageV2' +} +New-AzStorageAccount @storageAcctParams +``` ++Use the same command to create the denied Azure storage account, but change the name to `deniedaccount`. ++```azurepowershell-interactive +$storageAcctParams = @{ + Location = 'westus2' + Name = 'deniedaccount' + ResourceGroupName = 'test-rg' + SkuName = 'Standard_LRS' + Kind = 'StorageV2' +} +New-AzStorageAccount @storageAcctParams +``` ++### [CLI](#tab/cli) ++Create two Azure storage accounts with [az storage account create](/cli/azure/storage/account). ++```azurecli-interactive +storageAcctName1="allowedaccount" ++az storage account create \ + --name $storageAcctName1 \ + --resource-group test-rg \ + --sku Standard_LRS \ + --kind StorageV2 +``` ++Use the same command to create the denied Azure storage account, but change the name to `deniedaccount`. ++```azurecli-interactive +storageAcctName2="deniedaccount" ++az storage account create \ + --name $storageAcctName2 \ + --resource-group test-rg \ + --sku Standard_LRS \ + --kind StorageV2 +``` ++++### Create file shares ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Storage accounts**. Select **Storage accounts** in the search results. ++1. Select **allowedaccount(random-number)**. ++1. Expand the **Data storage** section and select **File shares**. ++1. Select **+ File share**. ++1. In **New file share**, enter or select the following information. ++ | Setting | Value | + | -| - | + | Name | Enter **file-share**. | ++1. Leave the rest of the settings as default and select **Review + create**. ++1. Select **Create**. ++1. Repeat the previous steps to create a file share in **deniedaccount(random-number)**. ++### [PowerShell](#tab/powershell) ++### Create allowed storage account file share ++Use [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) to get the storage account key for the allowed storage account. You'll use this key in the next step to create a file share in the allowed storage account. ++```azurepowershell-interactive +$storageAcctName1 = "allowedaccount" +$storageAcctParams1 = @{ + ResourceGroupName = "test-rg" + AccountName = $storageAcctName1 +} +$storageAcctKey1 = (Get-AzStorageAccountKey @storageAcctParams1).Value[0] +``` ++Create a context for your storage account and key with [New-AzStorageContext](/powershell/module/az.storage/new-AzStoragecontext). The context encapsulates the storage account name and account key. 
++```azurepowershell-interactive +$storageContext1 = New-AzStorageContext $storageAcctName1 $storageAcctKey1 +``` ++Create a file share with [New-AzStorageShare](/powershell/module/az.storage/new-azstorageshare). ++```azurepowershell-interactive +$share1 = New-AzStorageShare file-share -Context $storageContext1 +``` ++### Create denied storage account file share ++Use [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) to get the storage account key for the allowed storage account. You'll use this key in the next step to create a file share in the denied storage account. ++```azurepowershell-interactive +$storageAcctName2 = "deniedaccount" +$storageAcctParams2 = @{ + ResourceGroupName = "test-rg" + AccountName = $storageAcctName2 +} +$storageAcctKey2 = (Get-AzStorageAccountKey @storageAcctParams2).Value[0] +``` ++Create a context for your storage account and key with [New-AzStorageContext](/powershell/module/az.storage/new-AzStoragecontext). The context encapsulates the storage account name and account key. ++```azurepowershell-interactive +$storageContext2= New-AzStorageContext $storageAcctName2 $storageAcctKey2 +``` ++Create a file share with [New-AzStorageShare](/powershell/module/az.storage/new-azstorageshare). ++```azurepowershell-interactive +$share2 = New-AzStorageShare file-share -Context $storageContext2 +``` ++### [CLI](#tab/cli) ++### Create allowed storage account file share ++Retrieve the connection string for the storage accounts into a variable with [az storage account show-connection-string](/cli/azure/storage/account). The connection string is used to create a file share in a later step. ++```azurecli-interactive +saConnectionString1=$(az storage account show-connection-string \ + --name $storageAcctName1 \ + --resource-group test-rg \ + --query 'connectionString' \ + --out tsv) +``` ++Create a file share in the storage account with [az storage share create](/cli/azure/storage/share). In a later step, this file share is mounted to confirm network access to it. ++```azurecli-interactive +az storage share create \ + --name file-share \ + --quota 2048 \ + --connection-string $saConnectionString1 > +``` ++### Create denied storage account file share ++Retrieve the connection string for the storage accounts into a variable with [az storage account show-connection-string](/cli/azure/storage/account). The connection string is used to create a file share in a later step. ++```azurecli-interactive +saConnectionString2=$(az storage account show-connection-string \ + --name $storageAcctName2 \ + --resource-group test-rg \ + --query 'connectionString' \ + --out tsv) +``` ++Create a file share in the storage account with [az storage share create](/cli/azure/storage/share). In a later step, this file share is mounted to confirm network access to it. ++```azurecli-interactive +az storage share create \ + --name file-share \ + --quota 2048 \ + --connection-string $saConnectionString2 > +``` ++++### Deny all network access to storage accounts ++By default, storage accounts accept network connections from clients in any network. To restrict network access to the storage accounts, you can configure the storage account to accept connections only from specific networks. In this example, you configure the storage account to accept connections only from the virtual network subnet you created earlier. ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Storage accounts**. Select **Storage accounts** in the search results. ++1. 
Select **allowedaccount(random-number)**. ++1. Expand **Security + networking** and select **Networking**. ++1. In **Firewalls and virtual networks**, in **Public network access**, select **Enabled from selected virtual networks and IP addresses**. ++1. In **Virtual networks**, select **+ Add existing virtual network**. ++1. In **Add networks**, enter or select the following information. ++ | Setting | Value | + | -| - | + | Subscription | Select your subscription. | + | Virtual networks | Select **vnet-1**. | + | Subnets | Select **subnet-1**. | ++1. Select **Add**. ++1. Select **Save**. ++1. Repeat the previous steps to deny network access to **deniedaccount(random-number)**. ++### [PowerShell](#tab/powershell) ++Use [Update-AzStorageAccountNetworkRuleSet](/powershell/module/az.storage/update-azstorageaccountnetworkruleset) to deny access to the storage accounts except from the virtual network and subnet you created earlier. Once network access is denied, the storage account isn't accessible from any network. ++```azurepowershell-interactive +$storageAcctParams1 = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName1 + DefaultAction = "Deny" +} +Update-AzStorageAccountNetworkRuleSet @storageAcctParams1 ++$storageAcctParams2 = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName2 + DefaultAction = "Deny" +} +Update-AzStorageAccountNetworkRuleSet @storageAcctParams2 +``` ++### Enable network access only from the virtual network subnet ++Retrieve the created virtual network with [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and then retrieve the private subnet object into a variable with [Get-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/get-azvirtualnetworksubnetconfig): ++```azurepowershell-interactive +$privateSubnetParams = @{ + ResourceGroupName = "test-rg" + Name = "vnet-1" +} +$privateSubnet = Get-AzVirtualNetwork @privateSubnetParams | Get-AzVirtualNetworkSubnetConfig -Name "subnet-1" +``` ++Allow network access to the storage account from the *subnet-1* subnet with [Add-AzStorageAccountNetworkRule](/powershell/module/az.network/add-aznetworksecurityruleconfig). ++```azurepowershell-interactive +$networkRuleParams1 = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName1 + VirtualNetworkResourceId = $privateSubnet.Id +} +Add-AzStorageAccountNetworkRule @networkRuleParams1 ++$networkRuleParams2 = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName2 + VirtualNetworkResourceId = $privateSubnet.Id +} +Add-AzStorageAccountNetworkRule @networkRuleParams2 +``` ++### [CLI](#tab/cli) ++By default, storage accounts accept network connections from clients in any network. To limit access to selected networks, change the default action to *Deny* with [az storage account update](/cli/azure/storage/account). Once network access is denied, the storage account isn't accessible from any network. ++```azurecli-interactive +az storage account update \ + --name $storageAcctName1 \ + --resource-group test-rg \ + --default-action Deny ++az storage account update \ + --name $storageAcctName2 \ + --resource-group test-rg \ + --default-action Deny +``` ++### Enable network access only from the virtual network subnet ++Allow network access to the storage account from the *subnet-1* subnet with [az storage account network-rule add](/cli/azure/storage/account/network-rule). 
++```azurecli-interactive +az storage account network-rule add \ + --resource-group test-rg \ + --account-name $storageAcctName1 \ + --vnet-name vnet-1 \ + --subnet subnet-1 ++az storage account network-rule add \ + --resource-group test-rg \ + --account-name $storageAcctName2 \ + --vnet-name vnet-1 \ + --subnet subnet-1 +``` ++++## Apply policy to allow access to valid storage account ++You can create a service endpoint policy. The policy ensures users in the virtual network can only access safe and allowed Azure Storage accounts. This policy contains a list of allowed storage accounts applied to the virtual network subnet that is connected to storage via service endpoints. ++### Create a service endpoint policy ++This section creates the policy definition with the list of allowed resources for access over service endpoint. ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Service endpoint policy**. Select **Service endpoint policies** in the search results. ++1. Select **+ Create** to create a new service endpoint policy. ++1. Enter or select the following information in the **Basics** tab of **Create a service endpoint policy**. ++ | Setting | Value | + | -| - | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Name | Enter **service-endpoint-policy**. | + | Location | Select **West US 2**. | ++1. Select **Next: Policy definitions**. ++1. Select **+ Add a resource** in **Resources**. ++1. In **Add a resource**, enter or select the following information: ++ | Setting | Value | + | -| - | + | Service | Select **Microsoft.Storage**. | + | Scope | Select **Single account** | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | Resource | Select **allowedaccount(random-number)** | ++1. Select **Add**. ++1. Select **Review + Create**. ++1. Select **Create**. ++### [PowerShell](#tab/powershell) ++To retrieve the resource ID for the first (allowed) storage account, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount). ++```azurepowershell-interactive +$storageAcctParams1 = @{ + ResourceGroupName = "test-rg" + Name = $storageAcctName1 +} +$resourceId = (Get-AzStorageAccount @storageAcctParams1).id +``` ++To create the policy definition to allow the previous resource, use [New-AzServiceEndpointPolicyDefinition](/powershell/module/az.network/new-azserviceendpointpolicydefinition) . ++```azurepowershell-interactive +$policyDefinitionParams = @{ + Name = "policy-definition" + Description = "Service Endpoint Policy Definition" + Service = "Microsoft.Storage" + ServiceResource = $resourceId +} +$policyDefinition = New-AzServiceEndpointPolicyDefinition @policyDefinitionParams +``` ++Use [New-AzServiceEndpointPolicy](/powershell/module/az.network/new-azserviceendpointpolicy) to create the service endpoint policy with the policy definition. ++```azurepowershell-interactive +$sepolicyParams = @{ + ResourceGroupName = "test-rg" + Name = "service-endpoint-policy" + Location = "westus2" + ServiceEndpointPolicyDefinition = $policyDefinition +} +$sepolicy = New-AzServiceEndpointPolicy @sepolicyParams +``` ++### [CLI](#tab/cli) ++Service endpoint policies are applied over service endpoints. Start by creating a service endpoint policy. 
Then create the policy definitions under this policy for the Azure Storage accounts that are approved for this subnet. ++Use [az storage account show](/cli/azure/storage/account) to get the resource ID for the storage account that is allowed. ++```azurecli-interactive +serviceResourceId=$(az storage account show --name $storageAcctName1 --query id --output tsv) +``` ++Create a service endpoint policy. ++```azurecli-interactive +az network service-endpoint policy create \ + --resource-group test-rg \ + --name service-endpoint-policy \ + --location westus2 +``` ++Create and add a policy definition that allows the previous Azure Storage account to the service endpoint policy. ++```azurecli-interactive +az network service-endpoint policy-definition create \ + --resource-group test-rg \ + --policy-name service-endpoint-policy \ + --name policy-definition \ + --service "Microsoft.Storage" \ + --service-resources $serviceResourceId +``` ++++## Associate a service endpoint policy to a subnet ++After creating the service endpoint policy, associate it with the target subnet, along with the service endpoint configuration for Azure Storage. ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Service endpoint policy**. Select **Service endpoint policies** in the search results. ++1. Select **service-endpoint-policy**. ++1. Expand **Settings** and select **Associated subnets**. ++1. Select **+ Edit subnet association**. ++1. In **Edit subnet association**, select **vnet-1** and **subnet-1**. ++1. Select **Apply**. ++### [PowerShell](#tab/powershell) ++Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to associate the service endpoint policy to the subnet. ++```azurepowershell-interactive +$subnetConfigParams = @{ + VirtualNetwork = $VirtualNetwork + Name = "subnet-1" + AddressPrefix = "10.0.0.0/24" + NetworkSecurityGroup = $nsg + ServiceEndpoint = "Microsoft.Storage" + ServiceEndpointPolicy = $sepolicy +} +Set-AzVirtualNetworkSubnetConfig @subnetConfigParams ++$virtualNetwork | Set-AzVirtualNetwork +``` ++### [CLI](#tab/cli) ++Use [az network vnet subnet update](/cli/azure/network/vnet/subnet) to associate the service endpoint policy to the subnet. ++```azurecli-interactive +az network vnet subnet update \ + --vnet-name vnet-1 \ + --resource-group test-rg \ + --name subnet-1 \ + --service-endpoints Microsoft.Storage \ + --service-endpoint-policy service-endpoint-policy +``` ++++>[!WARNING] +> Ensure that all the resources accessed from the subnet are added to the policy definition before associating the policy with the subnet. Once the policy is associated, only access to the *allow listed* resources is allowed over service endpoints. +> +> Ensure that no managed Azure services exist in the subnet that is being associated with the service endpoint policy. +> +> Access to Azure Storage resources in all regions from this subnet is restricted according to the service endpoint policy. ++## Validate access restriction to Azure Storage accounts ++To test network access to a storage account, deploy a VM in the subnet. ++### Deploy the virtual machine ++### [Portal](#tab/portal) ++1. In the search box in the portal, enter **Virtual machines**. Select **Virtual machines** in the search results. ++1. Select **+ Create**, then select **Azure virtual machine**. ++1. In the **Basics** tab of **Create a virtual machine**, enter or select the following information: ++ | Setting | Value | + | -| - | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. 
| + | **Instance details** | | + | Virtual machine name | Enter **vm-1**. | + | Region | Select **(US) West US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | + | Security type | Select **Standard**. | + | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | + | Size | Select a size. | + | **Administrator account** | | + | Username | Enter a username. | + | Password | Enter a password. | + | Confirm password | Enter the password again. | + | **Inbound port rules** | | ++1. Select **Next: Disks**, then select **Next: Networking**. ++1. In the **Networking** tab, enter or select the following information. ++ | Setting | Value | + | -| - | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-1 (10.0.0.0/24)**. | + | Public IP | Select **None**. | + | NIC network security group | Select **None**. | ++1. Leave the rest of the settings as default and select **Review + Create**. ++1. Select **Create**. ++### [PowerShell](#tab/powershell) ++Create a virtual machine in the *subnet-1* subnet with [New-AzVM](/powershell/module/az.compute/new-azvm). When running the command that follows, you're prompted for credentials. The values that you enter are configured as the user name and password for the VM. ++```azurepowershell-interactive +$vmParams = @{ + ResourceGroupName = "test-rg" + Location = "westus2" + VirtualNetworkName = "vnet-1" + SubnetName = "subnet-1" + Name = "vm-1" +} +New-AzVm @vmParams +``` ++### [CLI](#tab/cli) ++Create a VM in the *subnet-1* subnet with [az vm create](/cli/azure/vm). ++```azurecli-interactive +az vm create \ + --resource-group test-rg \ + --name vm-1 \ + --image Win2022Datacenter \ + --admin-username azureuser \ + --vnet-name vnet-1 \ + --subnet subnet-1 +``` ++++Wait for the virtual machine to finish deploying before continuing on to the next steps. ++### Confirm access to the *allowed* storage account ++1. Sign-in to the [Azure portal](https://portal.azure.com/). ++1. In the search box in the portal, enter **Storage accounts**. Select **Storage accounts** in the search results. ++1. Select **allowedaccount(random-number)**. ++1. Expand **Security + networking** and select **Access keys**. ++1. Copy the **key1** value. You use this key to map a drive to the storage account from the virtual machine you created earlier. ++1. In the search box in the portal, enter **Virtual machines**. Select **Virtual machines** in the search results. ++1. Select **vm-1**. ++1. Expand **Operations**. Select **Run command**. ++1. Select **RunPowerShellScript**. ++1. Paste the following script in **Run Command Script**. ++ ```powershell + ## Enter the storage account key for the allowed storage account that you recorded earlier. + $storageAcctKey1 = (pasted from procedure above) + $acctKey = ConvertTo-SecureString -String $storageAcctKey1 -AsPlainText -Force + ## Replace the login account with the name of the storage account you created. + $credential = New-Object System.Management.Automation.PSCredential -ArgumentList ("Azure\allowedaccount"), $acctKey + ## Replace the storage account name with the name of the storage account you created. + New-PSDrive -Name Z -PSProvider FileSystem -Root "\\allowedaccount.file.core.windows.net\file-share" -Credential $credential + ``` ++1. Select **Run**. ++1. 
If the drive map is successful, the output in the **Output** box looks similar to the following example: ++ ```output + Name Used (GB) Free (GB) Provider Root + - -- - + Z FileSystem \\allowedaccount.file.core.windows.net\fil.. + ``` ++### Confirm access is denied to the *denied* storage account ++1. In the search box in the portal, enter **Storage accounts**. Select **Storage accounts** in the search results. ++1. Select **deniedaccount(random-number)**. ++1. Expand **Security + networking** and select **Access keys**. ++1. Copy the **key1** value. You use this key to map a drive to the storage account from the virtual machine you created earlier. ++1. In the search box in the portal, enter **Virtual machines**. Select **Virtual machines** in the search results. ++1. Select **vm-1**. ++1. Expand **Operations**. Select **Run command**. ++1. Select **RunPowerShellScript**. ++1. Paste the following script in **Run Command Script**. ++ ```powershell + ## Enter the storage account key for the denied storage account that you recorded earlier. + $storageAcctKey2 = (pasted from procedure above) + $acctKey = ConvertTo-SecureString -String $storageAcctKey2 -AsPlainText -Force + ## Replace the login account with the name of the storage account you created. + $credential = New-Object System.Management.Automation.PSCredential -ArgumentList ("Azure\deniedaccount"), $acctKey + ## Replace the storage account name with the name of the storage account you created. + New-PSDrive -Name Z -PSProvider FileSystem -Root "\\deniedaccount.file.core.windows.net\file-share" -Credential $credential + ``` ++1. Select **Run**. ++1. You receive the following error message in the **Output** box: ++ ```output + New-PSDrive : Access is denied + At line:1 char:1 + + New-PSDrive -Name Z -PSProvider FileSystem -Root "\\deniedaccount8675 ... + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + CategoryInfo : InvalidOperation: (Z:PSDriveInfo) [New-PSDrive], Win32Exception + + FullyQualifiedErrorId : CouldNotMapNetworkDrive,Microsoft.PowerShell.Commands.NewPSDriveCommand + ``` ++1. The drive map is denied because the service endpoint policy restricts access to the storage account. ++## Clean up resources ++### [Portal](#tab/portal) ++When no longer needed, delete the resource group and all of the resources it contains: ++1. In the search box in the portal, enter **test-rg**. Select **test-rg** in the search results. ++1. Select **Delete resource group**. ++1. Enter **test-rg** to confirm the deletion, and then select **Delete**. ++### [PowerShell](#tab/powershell) ++When no longer needed, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all of the resources it contains: ++```azurepowershell-interactive +$params = @{ + Name = "test-rg" + Force = $true +} +Remove-AzResourceGroup @params +``` ++### [CLI](#tab/cli) ++When no longer needed, use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains. ++```azurecli-interactive +az group delete \ + --name test-rg \ + --yes \ + --no-wait +``` ++++## Next steps ++In this tutorial, you created a service endpoint policy and associated it with a subnet. To learn more about service endpoint policies, see [service endpoint policies overview](virtual-network-service-endpoint-policies-overview.md). |
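As an optional check on the network access restriction, the following minimal sketch (an assumption layered on the tutorial, reusing the `$storageAcctName1` variable and the **test-rg** resource group from the CLI steps above) inspects the network rule set on the allowed storage account. `defaultAction` should report `Deny`, and the virtual network rules should include the resource ID of **subnet-1**.

```azurecli-interactive
# Show the network rule set of the allowed storage account.
# Expected: defaultAction is "Deny" and the rule list contains the subnet-1 resource ID.
az storage account show \
    --name $storageAcctName1 \
    --resource-group test-rg \
    --query "networkRuleSet.{defaultAction: defaultAction, virtualNetworkRules: virtualNetworkRules[].virtualNetworkResourceId}" \
    --output json
```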
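Similarly, a minimal sketch for confirming the policy association (assuming the **test-rg**, **vnet-1**, **subnet-1**, and **service-endpoint-policy** names used in this tutorial): the subnet should list the **Microsoft.Storage** service endpoint together with the policy ID, and the policy definition should list only the resource ID of the allowed storage account.

```azurecli-interactive
# Show the service endpoints and service endpoint policies applied to subnet-1.
az network vnet subnet show \
    --resource-group test-rg \
    --vnet-name vnet-1 \
    --name subnet-1 \
    --query "{serviceEndpoints: serviceEndpoints[].service, policies: serviceEndpointPolicies[].id}" \
    --output json

# List the storage account resource IDs allowed by the policy definition.
az network service-endpoint policy-definition list \
    --resource-group test-rg \
    --policy-name service-endpoint-policy \
    --query "[].serviceResources" \
    --output json
```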